AMBARI-4341. Rename 2.0.8 to 2.1.1 in the stack definition. (mahadev)

Mahadev Konar 11 years ago
parent
commit
ae534ed308
100 changed files with 2632 additions and 2 deletions
  1. ambari-server/pom.xml (+1 -1)
  2. ambari-server/set-hdp-repo-url.sh (+1 -1)
  3. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/files/changeToSecureUid.sh (+0 -0)
  4. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/scripts/hook.py (+0 -0)
  5. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/scripts/params.py (+81 -0)
  6. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/scripts/shared_initialization.py (+107 -0)
  7. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/files/checkForFormat.sh (+0 -0)
  8. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/files/task-log4j.properties (+0 -0)
  9. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/scripts/hook.py (+0 -0)
  10. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/scripts/params.py (+172 -0)
  11. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/scripts/shared_initialization.py (+322 -0)
  12. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/commons-logging.properties.j2 (+0 -0)
  13. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/exclude_hosts_list.j2 (+0 -0)
  14. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/hadoop-env.sh.j2 (+121 -0)
  15. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/hadoop-metrics2.properties.j2 (+0 -0)
  16. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/hdfs.conf.j2 (+0 -0)
  17. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/health_check-v2.j2 (+0 -0)
  18. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/health_check.j2 (+0 -0)
  19. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/include_hosts_list.j2 (+0 -0)
  20. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/log4j.properties.j2 (+200 -0)
  21. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/slaves.j2 (+0 -0)
  22. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/snmpd.conf.j2 (+0 -0)
  23. ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/taskcontroller.cfg.j2 (+0 -0)
  24. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/checkGmetad.sh (+0 -0)
  25. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/checkGmond.sh (+0 -0)
  26. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/checkRrdcached.sh (+0 -0)
  27. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmetad.init (+0 -0)
  28. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmetadLib.sh (+0 -0)
  29. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmond.init (+0 -0)
  30. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmondLib.sh (+0 -0)
  31. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/rrd.py (+0 -0)
  32. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/rrdcachedLib.sh (+0 -0)
  33. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/setupGanglia.sh (+0 -0)
  34. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/startGmetad.sh (+0 -0)
  35. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/startGmond.sh (+0 -0)
  36. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/startRrdcached.sh (+0 -0)
  37. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/stopGmetad.sh (+0 -0)
  38. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/stopGmond.sh (+0 -0)
  39. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/stopRrdcached.sh (+0 -0)
  40. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/teardownGanglia.sh (+0 -0)
  41. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia.py (+106 -0)
  42. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_monitor.py (+163 -0)
  43. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_monitor_service.py (+0 -0)
  44. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_server.py (+181 -0)
  45. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_server_service.py (+0 -0)
  46. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/params.py (+74 -0)
  47. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/status_params.py (+0 -0)
  48. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/templates/gangliaClusters.conf.j2 (+34 -0)
  49. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/templates/gangliaEnv.sh.j2 (+0 -0)
  50. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/templates/gangliaLib.sh.j2 (+0 -0)
  51. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/files/hbaseSmokeVerify.sh (+0 -0)
  52. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/__init__.py (+0 -0)
  53. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/functions.py (+0 -0)
  54. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase.py (+0 -0)
  55. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_client.py (+0 -0)
  56. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_master.py (+0 -0)
  57. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_regionserver.py (+0 -0)
  58. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_service.py (+0 -0)
  59. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/params.py (+84 -0)
  60. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/service_check.py (+0 -0)
  61. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/status_params.py (+0 -0)
  62. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hadoop-metrics.properties-GANGLIA-MASTER.j2 (+50 -0)
  63. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hadoop-metrics.properties-GANGLIA-RS.j2 (+50 -0)
  64. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hadoop-metrics.properties.j2 (+50 -0)
  65. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase-env.sh.j2 (+0 -0)
  66. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase-smoke.sh.j2 (+0 -0)
  67. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_client_jaas.conf.j2 (+23 -0)
  68. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_grant_permissions.j2 (+0 -0)
  69. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_master_jaas.conf.j2 (+25 -0)
  70. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_regionserver_jaas.conf.j2 (+25 -0)
  71. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/regionservers.j2 (+0 -0)
  72. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/files/checkForFormat.sh (+0 -0)
  73. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/files/checkWebUI.py (+0 -0)
  74. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/datanode.py (+0 -0)
  75. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_client.py (+0 -0)
  76. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_datanode.py (+59 -0)
  77. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_namenode.py (+192 -0)
  78. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_snamenode.py (+0 -0)
  79. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/namenode.py (+66 -0)
  80. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/params.py (+165 -0)
  81. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/service_check.py (+106 -0)
  82. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/snamenode.py (+0 -0)
  83. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/status_params.py (+0 -0)
  84. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/utils.py (+133 -0)
  85. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/templates/exclude_hosts_list.j2 (+0 -0)
  86. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/addMysqlUser.sh (+0 -0)
  87. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hcatSmoke.sh (+0 -0)
  88. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hiveSmoke.sh (+0 -0)
  89. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hiveserver2.sql (+0 -0)
  90. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hiveserver2Smoke.sh (+0 -0)
  91. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/pigSmoke.sh (+0 -0)
  92. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/startHiveserver2.sh (+0 -0)
  93. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/startMetastore.sh (+0 -0)
  94. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/__init__.py (+0 -0)
  95. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hcat.py (+0 -0)
  96. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hcat_client.py (+41 -0)
  97. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hcat_service_check.py (+0 -0)
  98. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hive.py (+0 -0)
  99. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hive_client.py (+0 -0)
  100. ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hive_metastore.py (+0 -0)

+ 1 - 1
ambari-server/pom.xml

@@ -29,7 +29,7 @@
   <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
     <python.ver>python &gt;= 2.6</python.ver>
-    <hdpUrlForCentos6>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.0</hdpUrlForCentos6>
+    <hdpUrlForCentos6>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.1.0</hdpUrlForCentos6>
   </properties>
   <build>
     <plugins>

+ 1 - 1
ambari-server/set-hdp-repo-url.sh

@@ -26,7 +26,7 @@ then
   #  Modify the VERSION variable in this file to match the new version
   #  Modify the previous version to store concrete public repo url
 
-  VERSION=2.0.6
+  VERSION=2.1.1
   C6URL="$1"
   C5URL="${C6URL/centos6/centos5}"
   S11URL="${C6URL/centos6/suse11}"

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-INSTALL/files/changeToSecureUid.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/files/changeToSecureUid.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-INSTALL/scripts/hook.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/scripts/hook.py


+ 81 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/scripts/params.py

@@ -0,0 +1,81 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from resource_management.core.system import System
+import os
+
+config = Script.get_config()
+
+#users and groups
+yarn_user = config['configurations']['global']['yarn_user']
+hbase_user = config['configurations']['global']['hbase_user']
+nagios_user = config['configurations']['global']['nagios_user']
+oozie_user = config['configurations']['global']['oozie_user']
+webhcat_user = config['configurations']['global']['hcat_user']
+hcat_user = config['configurations']['global']['hcat_user']
+hive_user = config['configurations']['global']['hive_user']
+smoke_user =  config['configurations']['global']['smokeuser']
+mapred_user = config['configurations']['global']['mapred_user']
+hdfs_user = config['configurations']['global']['hdfs_user']
+zk_user = config['configurations']['global']['zk_user']
+gmetad_user = config['configurations']['global']["gmetad_user"]
+gmond_user = config['configurations']['global']["gmond_user"]
+
+user_group = config['configurations']['global']['user_group']
+proxyuser_group =  config['configurations']['global']['proxyuser_group']
+nagios_group = config['configurations']['global']['nagios_group']
+smoke_user_group =  "users"
+mapred_tt_group = default("/configurations/mapred-site/mapreduce.tasktracker.group", user_group)
+
+#hosts
+hostname = config["hostname"]
+rm_host = default("/clusterHostInfo/rm_host", [])
+slave_hosts = default("/clusterHostInfo/slave_hosts", [])
+hagios_server_hosts = default("/clusterHostInfo/nagios_server_host", [])
+oozie_servers = default("/clusterHostInfo/oozie_server", [])
+hcat_server_hosts = default("/clusterHostInfo/webhcat_server_host", [])
+hive_server_host =  default("/clusterHostInfo/hive_server_host", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+hs_host = default("/clusterHostInfo/hs_host", [])
+jtnode_host = default("/clusterHostInfo/jtnode_host", [])
+namenode_host = default("/clusterHostInfo/namenode_host", [])
+zk_hosts = default("/clusterHostInfo/zookeeper_hosts", [])
+ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_host", [])
+
+has_resourcemanager = not len(rm_host) == 0
+has_slaves = not len(slave_hosts) == 0
+has_nagios = not len(hagios_server_hosts) == 0
+has_oozie_server = not len(oozie_servers)  == 0
+has_hcat_server_host = not len(hcat_server_hosts)  == 0
+has_hive_server_host = not len(hive_server_host)  == 0
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_zk_host = not len(zk_hosts) == 0
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+
+is_namenode_master = hostname in namenode_host
+is_jtnode_master = hostname in jtnode_host
+is_rmnode_master = hostname in rm_host
+is_hsnode_master = hostname in hs_host
+is_hbase_master = hostname in hbase_master_hosts
+is_slave = hostname in slave_hosts
+if has_ganglia_server:
+  ganglia_server_host = ganglia_server_hosts[0]
+
+hbase_tmp_dir = config['configurations']['hbase-site']['hbase.tmp.dir']

+ 107 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-INSTALL/scripts/shared_initialization.py

@@ -0,0 +1,107 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management import *
+
+def setup_users():
+  """
+  Creates users before cluster installation
+  """
+  import params
+
+  Group(params.user_group)
+  Group(params.smoke_user_group)
+  Group(params.proxyuser_group)
+  User(params.smoke_user,
+       gid=params.user_group,
+       groups=[params.proxyuser_group]
+  )
+  smoke_user_dirs = format(
+    "/tmp/hadoop-{smoke_user},/tmp/hsperfdata_{smoke_user},/home/{smoke_user},/tmp/{smoke_user},/tmp/sqoop-{smoke_user}")
+  set_uid(params.smoke_user, smoke_user_dirs)
+
+  if params.has_hbase_masters:
+    User(params.hbase_user,
+         gid = params.user_group,
+         groups=[params.user_group])
+    hbase_user_dirs = format(
+      "/home/{hbase_user},/tmp/{hbase_user},/usr/bin/{hbase_user},/var/log/{hbase_user},{hbase_tmp_dir}")
+    set_uid(params.hbase_user, hbase_user_dirs)
+
+  if params.has_nagios:
+    Group(params.nagios_group)
+    User(params.nagios_user,
+         gid=params.nagios_group)
+
+  if params.has_oozie_server:
+    User(params.oozie_user,
+         gid = params.user_group)
+
+  if params.has_hcat_server_host:
+    User(params.webhcat_user,
+         gid = params.user_group)
+    User(params.hcat_user,
+         gid = params.user_group)
+
+  if params.has_hive_server_host:
+    User(params.hive_user,
+         gid = params.user_group)
+
+  if params.has_resourcemanager:
+    User(params.yarn_user,
+         gid = params.user_group)
+
+  if params.has_ganglia_server:
+    Group(params.gmetad_user)
+    Group(params.gmond_user)
+    User(params.gmond_user,
+         gid=params.user_group,
+        groups=[params.gmond_user])
+    User(params.gmetad_user,
+         gid=params.user_group,
+        groups=[params.gmetad_user])
+
+  User(params.hdfs_user,
+        gid=params.user_group,
+        groups=[params.user_group]
+  )
+  User(params.mapred_user,
+       gid=params.user_group,
+       groups=[params.user_group]
+  )
+  if params.has_zk_host:
+    User(params.zk_user,
+         gid=params.user_group)
+
+def set_uid(user, user_dirs):
+  """
+  user_dirs - comma separated directories
+  """
+  File("/tmp/changeUid.sh",
+       content=StaticFile("changeToSecureUid.sh"),
+       mode=0555)
+  Execute(format("/tmp/changeUid.sh {user} {user_dirs} 2>/dev/null"),
+          not_if = format("test $(id -u {user}) -gt 1000"))
+
+def install_packages():
+  Package("unzip")
+  Package("net-snmp")
+  Package("net-snmp-utils")

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/files/checkForFormat.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/files/checkForFormat.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/files/task-log4j.properties → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/files/task-log4j.properties


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/scripts/hook.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/scripts/hook.py


+ 172 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/scripts/params.py

@@ -0,0 +1,172 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from resource_management.core.system import System
+import os
+
+config = Script.get_config()
+
+#java params
+artifact_dir = "/tmp/HDP-artifacts/"
+jdk_name = default("/hostLevelParams/jdk_name", None) # None when jdk is already installed by user
+jce_policy_zip = default("/hostLevelParams/jce_name", None) # None when jdk is already installed by user
+jce_location = config['hostLevelParams']['jdk_location']
+jdk_location = config['hostLevelParams']['jdk_location']
+#security params
+security_enabled = config['configurations']['global']['security_enabled']
+dfs_journalnode_keytab_file = config['configurations']['hdfs-site']['dfs.journalnode.keytab.file']
+dfs_web_authentication_kerberos_keytab = config['configurations']['hdfs-site']['dfs.web.authentication.kerberos.keytab']
+dfs_secondary_namenode_keytab_file =  config['configurations']['hdfs-site']['fs.secondary.namenode.keytab.file']
+dfs_datanode_keytab_file =  config['configurations']['hdfs-site']['dfs.datanode.keytab.file']
+dfs_namenode_keytab_file =  config['configurations']['hdfs-site']['dfs.namenode.keytab.file']
+
+dfs_datanode_kerberos_principal = config['configurations']['hdfs-site']['dfs.datanode.kerberos.principal']
+dfs_journalnode_kerberos_principal = config['configurations']['hdfs-site']['dfs.journalnode.kerberos.principal']
+dfs_secondary_namenode_kerberos_internal_spnego_principal = config['configurations']['hdfs-site']['dfs.secondary.namenode.kerberos.internal.spnego.principal']
+dfs_namenode_kerberos_principal = config['configurations']['hdfs-site']['dfs.namenode.kerberos.principal']
+dfs_web_authentication_kerberos_principal = config['configurations']['hdfs-site']['dfs.web.authentication.kerberos.principal']
+dfs_secondary_namenode_kerberos_principal = config['configurations']['hdfs-site']['dfs.secondary.namenode.kerberos.principal']
+dfs_journalnode_kerberos_internal_spnego_principal = config['configurations']['hdfs-site']['dfs.journalnode.kerberos.internal.spnego.principal']
+
+#users and groups
+mapred_user = config['configurations']['global']['mapred_user']
+hdfs_user = config['configurations']['global']['hdfs_user']
+yarn_user = config['configurations']['global']['yarn_user']
+
+user_group = config['configurations']['global']['user_group']
+mapred_tt_group = default("/configurations/mapred-site/mapreduce.tasktracker.group", user_group)
+
+#snmp
+snmp_conf_dir = "/etc/snmp/"
+snmp_source = "0.0.0.0/0"
+snmp_community = "hadoop"
+
+#hosts
+hostname = config["hostname"]
+rm_host = default("/clusterHostInfo/rm_host", [])
+slave_hosts = default("/clusterHostInfo/slave_hosts", [])
+hagios_server_hosts = default("/clusterHostInfo/nagios_server_host", [])
+oozie_servers = default("/clusterHostInfo/oozie_server", [])
+hcat_server_hosts = default("/clusterHostInfo/webhcat_server_host", [])
+hive_server_host =  default("/clusterHostInfo/hive_server_host", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+hs_host = default("/clusterHostInfo/hs_host", [])
+jtnode_host = default("/clusterHostInfo/jtnode_host", [])
+namenode_host = default("/clusterHostInfo/namenode_host", [])
+zk_hosts = default("/clusterHostInfo/zookeeper_hosts", [])
+ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_host", [])
+
+has_resourcemanager = not len(rm_host) == 0
+has_slaves = not len(slave_hosts) == 0
+has_nagios = not len(hagios_server_hosts) == 0
+has_oozie_server = not len(oozie_servers)  == 0
+has_hcat_server_host = not len(hcat_server_hosts)  == 0
+has_hive_server_host = not len(hive_server_host)  == 0
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_zk_host = not len(zk_hosts) == 0
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+
+is_namenode_master = hostname in namenode_host
+is_jtnode_master = hostname in jtnode_host
+is_rmnode_master = hostname in rm_host
+is_hsnode_master = hostname in hs_host
+is_hbase_master = hostname in hbase_master_hosts
+is_slave = hostname in slave_hosts
+if has_ganglia_server:
+  ganglia_server_host = ganglia_server_hosts[0]
+#hadoop params
+hadoop_tmp_dir = format("/tmp/hadoop-{hdfs_user}")
+hadoop_lib_home = "/usr/lib/hadoop/lib"
+hadoop_conf_dir = "/etc/hadoop/conf"
+hadoop_pid_dir_prefix = config['configurations']['global']['hadoop_pid_dir_prefix']
+hadoop_home = "/usr"
+hadoop_bin = "/usr/lib/hadoop/bin"
+
+task_log4j_properties_location = os.path.join(hadoop_conf_dir, "task-log4j.properties")
+limits_conf_dir = "/etc/security/limits.d"
+
+hdfs_log_dir_prefix = config['configurations']['global']['hdfs_log_dir_prefix']
+hbase_tmp_dir = config['configurations']['hbase-site']['hbase.tmp.dir']
+#db params
+server_db_name = config['hostLevelParams']['db_name']
+db_driver_filename = config['hostLevelParams']['db_driver_filename']
+oracle_driver_url = config['hostLevelParams']['oracle_jdbc_url']
+mysql_driver_url = config['hostLevelParams']['mysql_jdbc_url']
+
+ambari_db_rca_url = config['hostLevelParams']['ambari_db_rca_url']
+ambari_db_rca_driver = config['hostLevelParams']['ambari_db_rca_driver']
+ambari_db_rca_username = config['hostLevelParams']['ambari_db_rca_username']
+ambari_db_rca_password = config['hostLevelParams']['ambari_db_rca_password']
+
+rca_enabled = config['configurations']['global']['rca_enabled']
+rca_disabled_prefix = "###"
+if rca_enabled == True:
+  rca_prefix = ""
+else:
+  rca_prefix = rca_disabled_prefix
+
+#hadoop-env.sh
+java_home = config['hostLevelParams']['java_home']
+if System.get_instance().platform == "suse":
+  jsvc_path = "/usr/lib/bigtop-utils"
+else:
+  jsvc_path = "/usr/libexec/bigtop-utils"
+
+hadoop_heapsize = config['configurations']['global']['hadoop_heapsize']
+namenode_heapsize = config['configurations']['global']['namenode_heapsize']
+namenode_opt_newsize =  config['configurations']['global']['namenode_opt_newsize']
+namenode_opt_maxnewsize =  config['configurations']['global']['namenode_opt_maxnewsize']
+
+jtnode_opt_newsize = default("jtnode_opt_newsize","200m")
+jtnode_opt_maxnewsize = default("jtnode_opt_maxnewsize","200m")
+jtnode_heapsize =  default("jtnode_heapsize","1024m")
+ttnode_heapsize = "1024m"
+
+dtnode_heapsize = config['configurations']['global']['dtnode_heapsize']
+mapred_pid_dir_prefix = default("mapred_pid_dir_prefix","/var/run/hadoop-mapreduce")
+mapreduce_libs_path = "/usr/lib/hadoop-mapreduce/*"
+hadoop_libexec_dir = "/usr/lib/hadoop/libexec"
+mapred_log_dir_prefix = default("mapred_log_dir_prefix","/var/log/hadoop-mapreduce")
+
+#taskcontroller.cfg
+
+mapred_local_dir = "/tmp/hadoop-mapred/mapred/local"
+
+#log4j.properties
+
+yarn_log_dir_prefix = default("yarn_log_dir_prefix","/var/log/hadoop-yarn")
+
+#hdfs ha properties
+dfs_ha_enabled = False
+dfs_ha_nameservices = default("/configurations/hdfs-site/dfs.nameservices", None)
+dfs_ha_namenode_ids = default(format("hdfs-site/dfs.ha.namenodes.{dfs_ha_nameservices}"), None)
+if dfs_ha_namenode_ids:
+  dfs_ha_namenode_ids_array_len = len(dfs_ha_namenode_ids.split(","))
+  if dfs_ha_namenode_ids_array_len > 1:
+    dfs_ha_enabled = True
+
+namenode_id = None
+if dfs_ha_enabled:
+  for nn_id in dfs_ha_namenode_ids.split(","):
+    nn_host = config['configurations']['hdfs-site'][format('dfs.namenode.rpc-address.{dfs_ha_nameservices}.{nn_id}')]
+    if hostname in nn_host:
+      namenode_id = nn_id
+
+dfs_hosts = default('/configurations/hdfs-site/dfs.hosts', None)

+ 322 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/scripts/shared_initialization.py

@@ -0,0 +1,322 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management import *
+
+def setup_java():
+  """
+  Installs jdk using specific params, that comes from ambari-server
+  """
+  import params
+
+  jdk_curl_target = format("{artifact_dir}/{jdk_name}")
+  java_dir = os.path.dirname(params.java_home)
+  java_exec = format("{java_home}/bin/java")
+  
+  if not params.jdk_name:
+    return
+  
+  Execute(format("mkdir -p {artifact_dir} ; curl -kf --retry 10 {jdk_location}/{jdk_name} -o {jdk_curl_target}"),
+          path = ["/bin","/usr/bin/"],
+          not_if = format("test -e {java_exec}"))
+
+  if params.jdk_name.endswith(".bin"):
+    install_cmd = format("mkdir -p {java_dir} ; chmod +x {jdk_curl_target}; cd {java_dir} ; echo A | {jdk_curl_target} -noregister > /dev/null 2>&1")
+  elif params.jdk_name.endswith(".gz"):
+    install_cmd = format("mkdir -p {java_dir} ; cd {java_dir} ; tar -xf {jdk_curl_target} > /dev/null 2>&1")
+  
+  Execute(install_cmd,
+          path = ["/bin","/usr/bin/"],
+          not_if = format("test -e {java_exec}")
+  )
+  jce_curl_target = format("{artifact_dir}/{jce_policy_zip}")
+  download_jce = format("mkdir -p {artifact_dir}; curl -kf --retry 10 {jce_location}/{jce_policy_zip} -o {jce_curl_target}")
+  Execute( download_jce,
+        path = ["/bin","/usr/bin/"],
+        not_if =format("test -e {jce_curl_target}"),
+        ignore_failures = True
+  )
+  
+  if params.security_enabled:
+    security_dir = format("{java_home}/jre/lib/security")
+    extract_cmd = format("rm -f local_policy.jar; rm -f US_export_policy.jar; unzip -o -j -q {jce_curl_target}")
+    Execute(extract_cmd,
+          only_if = format("test -e {security_dir} && test -f {jce_curl_target}"),
+          cwd  = security_dir,
+          path = ['/bin/','/usr/bin']
+    )
+
+def setup_hadoop():
+  """
+  Setup hadoop files and directories
+  """
+  import params
+
+  File(os.path.join(params.snmp_conf_dir, 'snmpd.conf'),
+       content=Template("snmpd.conf.j2"))
+  Service("snmpd",
+          action = "restart")
+
+  Execute("/bin/echo 0 > /selinux/enforce",
+          only_if="test -f /selinux/enforce"
+  )
+
+  install_snappy()
+
+  #directories
+  Directory(params.hadoop_conf_dir,
+            recursive=True,
+            owner='root',
+            group='root'
+  )
+  Directory(params.hdfs_log_dir_prefix,
+            recursive=True,
+            owner='root',
+            group='root'
+  )
+  Directory(params.hadoop_pid_dir_prefix,
+            recursive=True,
+            owner='root',
+            group='root'
+  )
+
+  #files
+  File(os.path.join(params.limits_conf_dir, 'hdfs.conf'),
+       owner='root',
+       group='root',
+       mode=0644,
+       content=Template("hdfs.conf.j2")
+  )
+  if params.security_enabled:
+    File(os.path.join(params.hadoop_bin, "task-controller"),
+         owner="root",
+         group=params.mapred_tt_group,
+         mode=06050
+    )
+    tc_mode = 0644
+    tc_owner = "root"
+  else:
+    tc_mode = None
+    tc_owner = params.hdfs_user
+
+  if tc_mode:
+    File(os.path.join(params.hadoop_conf_dir, 'taskcontroller.cfg'),
+         owner = tc_owner,
+         mode = tc_mode,
+         group = params.mapred_tt_group,
+         content=Template("taskcontroller.cfg.j2")
+    )
+  else:
+    File(os.path.join(params.hadoop_conf_dir, 'taskcontroller.cfg'),
+         owner=tc_owner,
+         content=Template("taskcontroller.cfg.j2")
+    )
+  for file in ['hadoop-env.sh', 'commons-logging.properties', 'slaves']:
+    File(os.path.join(params.hadoop_conf_dir, file),
+         owner=tc_owner,
+         content=Template(file + ".j2")
+    )
+
+  health_check_template = "health_check" #for stack 1 use 'health_check'
+  File(os.path.join(params.hadoop_conf_dir, "health_check"),
+       owner=tc_owner,
+       content=Template(health_check_template + ".j2")
+  )
+
+  File(os.path.join(params.hadoop_conf_dir, "log4j.properties"),
+       owner=params.hdfs_user,
+       content=Template("log4j.properties.j2")
+  )
+
+  update_log4j_props(os.path.join(params.hadoop_conf_dir, "log4j.properties"))
+
+  File(os.path.join(params.hadoop_conf_dir, "hadoop-metrics2.properties"),
+       owner=params.hdfs_user,
+       content=Template("hadoop-metrics2.properties.j2")
+  )
+
+  db_driver_dload_cmd = ""
+  if params.server_db_name == 'oracle' and params.oracle_driver_url != "":
+    db_driver_dload_cmd = format(
+      "curl -kf --retry 5 {oracle_driver_url} -o {hadoop_lib_home}/{db_driver_filename}")
+  elif params.server_db_name == 'mysql' and params.mysql_driver_url != "":
+    db_driver_dload_cmd = format(
+      "curl -kf --retry 5 {mysql_driver_url} -o {hadoop_lib_home}/{db_driver_filename}")
+
+  if db_driver_dload_cmd:
+    Execute(db_driver_dload_cmd,
+            not_if =format("test -e {hadoop_lib_home}/{db_driver_filename}")
+    )
+
+
+def setup_configs():
+  """
+  Creates configs for the HDFS and MapReduce services
+  """
+  import params
+
+  if "mapred-queue-acls" in params.config['configurations']:
+    XmlConfig("mapred-queue-acls.xml",
+              conf_dir=params.hadoop_conf_dir,
+              configurations=params.config['configurations'][
+                'mapred-queue-acls'],
+              owner=params.mapred_user,
+              group=params.user_group
+    )
+  elif os.path.exists(
+      os.path.join(params.hadoop_conf_dir, "mapred-queue-acls.xml")):
+    File(os.path.join(params.hadoop_conf_dir, "mapred-queue-acls.xml"),
+         owner=params.mapred_user,
+         group=params.user_group
+    )
+
+  if "hadoop-policy" in params.config['configurations']:
+    XmlConfig("hadoop-policy.xml",
+              conf_dir=params.hadoop_conf_dir,
+              configurations=params.config['configurations']['hadoop-policy'],
+              owner=params.hdfs_user,
+              group=params.user_group
+    )
+
+  XmlConfig("core-site.xml",
+            conf_dir=params.hadoop_conf_dir,
+            configurations=params.config['configurations']['core-site'],
+            owner=params.hdfs_user,
+            group=params.user_group
+  )
+
+  if "mapred-site" in params.config['configurations']:
+    XmlConfig("mapred-site.xml",
+              conf_dir=params.hadoop_conf_dir,
+              configurations=params.config['configurations']['mapred-site'],
+              owner=params.mapred_user,
+              group=params.user_group
+    )
+
+  File(params.task_log4j_properties_location,
+       content=StaticFile("task-log4j.properties"),
+       mode=0755
+  )
+
+  if "capacity-scheduler" in params.config['configurations']:
+    XmlConfig("capacity-scheduler.xml",
+              conf_dir=params.hadoop_conf_dir,
+              configurations=params.config['configurations'][
+                'capacity-scheduler'],
+              owner=params.hdfs_user,
+              group=params.user_group
+    )
+
+  XmlConfig("hdfs-site.xml",
+            conf_dir=params.hadoop_conf_dir,
+            configurations=params.config['configurations']['hdfs-site'],
+            owner=params.hdfs_user,
+            group=params.user_group
+  )
+
+  # if params.stack_version[0] == "1":
+  Link('/usr/lib/hadoop/lib/hadoop-tools.jar',
+       to = '/usr/lib/hadoop/hadoop-tools.jar'
+  )
+
+  if os.path.exists(os.path.join(params.hadoop_conf_dir, 'configuration.xsl')):
+    File(os.path.join(params.hadoop_conf_dir, 'configuration.xsl'),
+         owner=params.hdfs_user,
+         group=params.user_group
+    )
+  if os.path.exists(os.path.join(params.hadoop_conf_dir, 'fair-scheduler.xml')):
+    File(os.path.join(params.hadoop_conf_dir, 'fair-scheduler.xml'),
+         owner=params.mapred_user,
+         group=params.user_group
+    )
+  if os.path.exists(os.path.join(params.hadoop_conf_dir, 'masters')):
+    File(os.path.join(params.hadoop_conf_dir, 'masters'),
+              owner=params.hdfs_user,
+              group=params.user_group
+    )
+  if os.path.exists(
+      os.path.join(params.hadoop_conf_dir, 'ssl-client.xml.example')):
+    File(os.path.join(params.hadoop_conf_dir, 'ssl-client.xml.example'),
+         owner=params.mapred_user,
+         group=params.user_group
+    )
+  if os.path.exists(
+      os.path.join(params.hadoop_conf_dir, 'ssl-server.xml.example')):
+    File(os.path.join(params.hadoop_conf_dir, 'ssl-server.xml.example'),
+         owner=params.mapred_user,
+         group=params.user_group
+    )
+
+  # generate_include_file()
+
+def update_log4j_props(file):
+  import params
+
+  property_map = {
+    'ambari.jobhistory.database': params.ambari_db_rca_url,
+    'ambari.jobhistory.driver': params.ambari_db_rca_driver,
+    'ambari.jobhistory.user': params.ambari_db_rca_username,
+    'ambari.jobhistory.password': params.ambari_db_rca_password,
+    'ambari.jobhistory.logger': 'DEBUG,JHA',
+
+    'log4j.appender.JHA': 'org.apache.ambari.log4j.hadoop.mapreduce.jobhistory.JobHistoryAppender',
+    'log4j.appender.JHA.database': '${ambari.jobhistory.database}',
+    'log4j.appender.JHA.driver': '${ambari.jobhistory.driver}',
+    'log4j.appender.JHA.user': '${ambari.jobhistory.user}',
+    'log4j.appender.JHA.password': '${ambari.jobhistory.password}',
+
+    'log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger': '${ambari.jobhistory.logger}',
+    'log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger': 'true'
+  }
+  for key in property_map:
+    value = property_map[key]
+    Execute(format(
+      "sed -i 's~\\({rca_disabled_prefix}\\)\\?{key}=.*~{rca_prefix}{key}={value}~' {file}"))
+
+
+def generate_include_file():
+  import params
+
+  if params.dfs_hosts and params.has_slaves:
+    include_hosts_list = params.slave_hosts
+    File(params.dfs_hosts,
+         content=Template("include_hosts_list.j2"),
+         owner=params.hdfs_user,
+         group=params.user_group
+    )
+
+
+def install_snappy():
+  import params
+
+  snappy_so = "libsnappy.so"
+  so_target_dir_x86 = format("{hadoop_lib_home}/native/Linux-i386-32")
+  so_target_dir_x64 = format("{hadoop_lib_home}/native/Linux-amd64-64")
+  so_target_x86 = format("{so_target_dir_x86}/{snappy_so}")
+  so_target_x64 = format("{so_target_dir_x64}/{snappy_so}")
+  so_src_dir_x86 = format("{hadoop_home}/lib")
+  so_src_dir_x64 = format("{hadoop_home}/lib64")
+  so_src_x86 = format("{so_src_dir_x86}/{snappy_so}")
+  so_src_x64 = format("{so_src_dir_x64}/{snappy_so}")
+  Execute(
+    format("mkdir -p {so_target_dir_x86}; ln -sf {so_src_x86} {so_target_x86}"))
+  Execute(
+    format("mkdir -p {so_target_dir_x64}; ln -sf {so_src_x64} {so_target_x64}"))

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/commons-logging.properties.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/commons-logging.properties.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/exclude_hosts_list.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/exclude_hosts_list.j2


+ 121 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/hadoop-env.sh.j2

@@ -0,0 +1,121 @@
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Set Hadoop-specific environment variables here.
+
+# The only required environment variable is JAVA_HOME.  All others are
+# optional.  When running a distributed configuration it is best to
+# set JAVA_HOME in this file, so that it is correctly defined on
+# remote nodes.
+
+# The java implementation to use.  Required.
+export JAVA_HOME={{java_home}}
+export HADOOP_HOME_WARN_SUPPRESS=1
+
+# Hadoop Configuration Directory
+#TODO: if env var set that can cause problems
+export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-{{hadoop_conf_dir}}}
+
+# this is different for HDP1 #
+# Path to jsvc required by secure HDP 2.0 datanode
+# export JSVC_HOME={{jsvc_path}}
+
+
+# The maximum amount of heap to use, in MB. Default is 1000.
+export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"
+
+export HADOOP_NAMENODE_INIT_HEAPSIZE="-Xms{{namenode_heapsize}}"
+
+# Extra Java runtime options.  Empty by default.
+export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"
+
+# Command specific options appended to HADOOP_OPTS when specified
+export HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}"
+HADOOP_JOBTRACKER_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{jtnode_opt_newsize}} -XX:MaxNewSize={{jtnode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx{{jtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}"
+
+HADOOP_TASKTRACKER_OPTS="-server -Xmx{{ttnode_heapsize}} -Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}"
+HADOOP_DATANODE_OPTS="-Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=ERROR,DRFAS ${HADOOP_DATANODE_OPTS}"
+HADOOP_BALANCER_OPTS="-server -Xmx{{hadoop_heapsize}}m ${HADOOP_BALANCER_OPTS}"
+
+export HADOOP_SECONDARYNAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps ${HADOOP_NAMENODE_INIT_HEAPSIZE} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_SECONDARYNAMENODE_OPTS}"
+
+# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
+export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m $HADOOP_CLIENT_OPTS"
+# On secure datanodes, user to run the datanode as after dropping privileges
+export HADOOP_SECURE_DN_USER={{hdfs_user}}
+
+# Extra ssh options.  Empty by default.
+export HADOOP_SSH_OPTS="-o ConnectTimeout=5 -o SendEnv=HADOOP_CONF_DIR"
+
+# Where log files are stored.  $HADOOP_HOME/logs by default.
+export HADOOP_LOG_DIR={{hdfs_log_dir_prefix}}/$USER
+
+# History server logs
+export HADOOP_MAPRED_LOG_DIR={{mapred_log_dir_prefix}}/$USER
+
+# Where log files are stored in the secure data environment.
+export HADOOP_SECURE_DN_LOG_DIR={{hdfs_log_dir_prefix}}/$HADOOP_SECURE_DN_USER
+
+# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
+# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
+
+# host:path where hadoop code should be rsync'd from.  Unset by default.
+# export HADOOP_MASTER=master:/home/$USER/src/hadoop
+
+# Seconds to sleep between slave commands.  Unset by default.  This
+# can be useful in large clusters, where, e.g., slave rsyncs can
+# otherwise arrive faster than the master can service them.
+# export HADOOP_SLAVE_SLEEP=0.1
+
+# The directory where pid files are stored. /tmp by default.
+export HADOOP_PID_DIR={{hadoop_pid_dir_prefix}}/$USER
+export HADOOP_SECURE_DN_PID_DIR={{hadoop_pid_dir_prefix}}/$HADOOP_SECURE_DN_USER
+
+# History server pid
+export HADOOP_MAPRED_PID_DIR={{mapred_pid_dir_prefix}}/$USER
+
+YARN_RESOURCEMANAGER_OPTS="-Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY"
+
+# A string representing this instance of hadoop. $USER by default.
+export HADOOP_IDENT_STRING=$USER
+
+# The scheduling priority for daemon processes.  See 'man nice'.
+
+# export HADOOP_NICENESS=10
+
+# Use libraries from standard classpath
+JAVA_JDBC_LIBS=""
+#Add libraries required by mysql connector
+for jarFile in `ls /usr/share/java/*mysql* 2>/dev/null`
+do
+  JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile
+done
+#Add libraries required by oracle connector
+for jarFile in `ls /usr/share/java/*ojdbc* 2>/dev/null`
+do
+  JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile
+done
+#Add libraries required by nodemanager
+MAPREDUCE_LIBS={{mapreduce_libs_path}}
+export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}${JAVA_JDBC_LIBS}:${MAPREDUCE_LIBS}
+
+# Setting path to hdfs command line
+export HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}
+
+#Mostly required for hadoop 2.0
+export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:/usr/lib/hadoop/lib/native/Linux-amd64-64

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/hadoop-metrics2.properties.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/hadoop-metrics2.properties.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/hdfs.conf.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/hdfs.conf.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/health_check-v2.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/health_check-v2.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/health_check.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/health_check.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/include_hosts_list.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/include_hosts_list.j2


+ 200 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/log4j.properties.j2

@@ -0,0 +1,200 @@
+# Copyright 2011 The Apache Software Foundation
+# 
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Define some default values that can be overridden by system properties
+hadoop.root.logger=INFO,console
+hadoop.log.dir=.
+hadoop.log.file=hadoop.log
+
+
+# Define the root logger to the system property "hadoop.root.logger".
+log4j.rootLogger=${hadoop.root.logger}, EventCounter
+
+# Logging Threshold
+log4j.threshhold=ALL
+
+#
+# Daily Rolling File Appender
+#
+
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Rollver at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this 
+#
+
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+#
+# TaskLog Appender
+#
+
+#Default values
+hadoop.tasklog.taskid=null
+hadoop.tasklog.iscleanup=false
+hadoop.tasklog.noKeepSplits=4
+hadoop.tasklog.totalLogFileSize=100
+hadoop.tasklog.purgeLogSplits=true
+hadoop.tasklog.logsRetainHours=12
+
+log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
+log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
+log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
+log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
+
+log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
+log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+#
+#Security audit appender
+#
+hadoop.security.logger=INFO,console
+hadoop.security.log.maxfilesize=256MB
+hadoop.security.log.maxbackupindex=20
+log4j.category.SecurityLogger=${hadoop.security.logger}
+hadoop.security.log.file=SecurityAuth.audit
+log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender 
+log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
+log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
+
+log4j.appender.RFAS=org.apache.log4j.RollingFileAppender 
+log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
+log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
+log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
+
+#
+# hdfs audit logging
+#
+hdfs.audit.logger=INFO,console
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
+
+#
+# mapred audit logging
+#
+mapred.audit.logger=INFO,console
+log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
+log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
+log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
+log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd
+
+#
+# Rolling File Appender
+#
+
+log4j.appender.RFA=org.apache.log4j.RollingFileAppender
+log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Logfile size and and 30-day backups
+log4j.appender.RFA.MaxFileSize=256MB
+log4j.appender.RFA.MaxBackupIndex=10
+
+log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
+log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+# Custom Logging levels
+
+hadoop.metrics.log.level=INFO
+#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
+#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}
+
+# Jets3t library
+log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
+
+#
+# Null Appender
+# Trap security logger on the hadoop client side
+#
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# Event Counter Appender
+# Sends counts of logging messages at different severity levels to Hadoop Metrics.
+#
+log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
+
+{% if is_jtnode_master or is_rmnode_master %}
+#
+# Job Summary Appender 
+#
+# Use following logger to send summary to separate file defined by 
+# hadoop.mapreduce.jobsummary.log.file rolled daily:
+# hadoop.mapreduce.jobsummary.logger=INFO,JSA
+# 
+hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
+hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
+log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
+
+log4j.appender.JSA.File={{hdfs_log_dir_prefix}}/{{mapred_user}}/${hadoop.mapreduce.jobsummary.log.file}
+
+log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
+log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+log4j.appender.JSA.DatePattern=.yyyy-MM-dd
+log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
+log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
+log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
+{% endif %}
+
+{{rca_prefix}}ambari.jobhistory.database={{ambari_db_rca_url}}
+{{rca_prefix}}ambari.jobhistory.driver={{ambari_db_rca_driver}}
+{{rca_prefix}}ambari.jobhistory.user={{ambari_db_rca_username}}
+{{rca_prefix}}ambari.jobhistory.password={{ambari_db_rca_password}}
+{{rca_prefix}}ambari.jobhistory.logger=DEBUG,JHA
+
+{{rca_prefix}}log4j.appender.JHA=org.apache.ambari.log4j.hadoop.mapreduce.jobhistory.JobHistoryAppender
+{{rca_prefix}}log4j.appender.JHA.database=${ambari.jobhistory.database}
+{{rca_prefix}}log4j.appender.JHA.driver=${ambari.jobhistory.driver}
+{{rca_prefix}}log4j.appender.JHA.user=${ambari.jobhistory.user}
+{{rca_prefix}}log4j.appender.JHA.password=${ambari.jobhistory.password}
+
+{{rca_prefix}}log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=${ambari.jobhistory.logger}
+{{rca_prefix}}log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=true

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/slaves.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/slaves.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/snmpd.conf.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/snmpd.conf.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/hooks/before-START/templates/taskcontroller.cfg.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/hooks/before-START/templates/taskcontroller.cfg.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/checkGmetad.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/checkGmetad.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/checkGmond.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/checkGmond.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/checkRrdcached.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/checkRrdcached.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/gmetad.init → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmetad.init


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/gmetadLib.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmetadLib.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/gmond.init → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmond.init


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/gmondLib.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/gmondLib.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/rrd.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/rrd.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/rrdcachedLib.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/rrdcachedLib.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/setupGanglia.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/setupGanglia.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmetad.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/startGmetad.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmond.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/startGmond.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startRrdcached.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/startRrdcached.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmetad.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/stopGmetad.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmond.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/stopGmond.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopRrdcached.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/stopRrdcached.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/teardownGanglia.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/files/teardownGanglia.sh


+ 106 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia.py

@@ -0,0 +1,106 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+from resource_management import *
+import os
+
+
+def groups_and_users():
+  import params
+
+  Group(params.user_group)
+  Group(params.gmetad_user)
+  Group(params.gmond_user)
+  User(params.gmond_user,
+       groups=[params.gmond_user])
+  User(params.gmetad_user,
+       groups=[params.gmetad_user])
+
+
+def config():
+  import params
+
+  shell_cmds_dir = params.ganglia_shell_cmds_dir
+  shell_files = ['checkGmond.sh', 'checkRrdcached.sh', 'gmetadLib.sh',
+                 'gmondLib.sh', 'rrdcachedLib.sh',
+                 'setupGanglia.sh', 'startGmetad.sh', 'startGmond.sh',
+                 'startRrdcached.sh', 'stopGmetad.sh',
+                 'stopGmond.sh', 'stopRrdcached.sh', 'teardownGanglia.sh']
+  Directory(shell_cmds_dir,
+            owner="root",
+            group="root",
+            recursive=True
+  )
+  init_file("gmetad")
+  init_file("gmond")
+  for sh_file in shell_files:
+    shell_file(sh_file)
+  for conf_file in ['gangliaClusters.conf', 'gangliaEnv.sh', 'gangliaLib.sh']:
+    ganglia_TemplateConfig(conf_file)
+
+
+def init_file(name):
+  import params
+
+  File("/etc/init.d/hdp-" + name,
+       content=StaticFile(name + ".init"),
+       mode=0755
+  )
+
+
+def shell_file(name):
+  import params
+
+  File(params.ganglia_shell_cmds_dir + os.sep + name,
+       content=StaticFile(name),
+       mode=0755
+  )
+
+
+def ganglia_TemplateConfig(name, mode=0755, tag=None):
+  import params
+
+  TemplateConfig(format("{params.ganglia_shell_cmds_dir}/{name}"),
+                 owner="root",
+                 group="root",
+                 template_tag=tag,
+                 mode=mode
+  )
+
+
+def generate_daemon(ganglia_service,
+                    name=None,
+                    role=None,
+                    owner=None,
+                    group=None):
+  import params
+
+  cmd = ""
+  if ganglia_service == "gmond":
+    if role == "server":
+      cmd = "{params.ganglia_shell_cmds_dir}/setupGanglia.sh -c {name} -m -o {owner} -g {group}"
+    else:
+      cmd = "{params.ganglia_shell_cmds_dir}/setupGanglia.sh -c {name} -o {owner} -g {group}"
+  elif ganglia_service == "gmetad":
+    cmd = "{params.ganglia_shell_cmds_dir}/setupGanglia.sh -t -o {owner} -g {group}"
+  else:
+    raise Fail("Unexpected ganglia service")
+  Execute(format(cmd),
+          path=[params.ganglia_shell_cmds_dir, "/usr/sbin",
+                "/sbin:/usr/local/bin", "/bin", "/usr/bin"]
+  )

+ 163 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_monitor.py

@@ -0,0 +1,163 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import sys
+import os
+from os import path
+from resource_management import *
+from ganglia import generate_daemon
+import ganglia
+import ganglia_monitor_service
+
+
+class GangliaMonitor(Script):
+  def install(self, env):
+    import params
+
+    self.install_packages(env)
+    env.set_params(params)
+    self.config(env)
+
+  def start(self, env):
+    ganglia_monitor_service.monitor("start")
+
+  def stop(self, env):
+    ganglia_monitor_service.monitor("stop")
+
+
+  def status(self, env):
+    import status_params
+    pid_file_name = 'gmond.pid'
+    pid_file_count = 0
+    pid_dir = status_params.pid_dir
+    # Recursively check all existing gmond pid files
+    for cur_dir, subdirs, files in os.walk(pid_dir):
+      for file_name in files:
+        if file_name == pid_file_name:
+          pid_file = os.path.join(cur_dir, file_name)
+          check_process_status(pid_file)
+          pid_file_count += 1
+    if pid_file_count == 0: # If no pid file is present
+      raise ComponentIsNotRunning()
+
+
+  def config(self, env):
+    import params
+
+    ganglia.groups_and_users()
+
+    Directory(params.ganglia_conf_dir,
+              owner="root",
+              group=params.user_group,
+              recursive=True
+    )
+
+    ganglia.config()
+
+    if params.is_namenode_master:
+      generate_daemon("gmond",
+                      name = "HDPNameNode",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_jtnode_master:
+      generate_daemon("gmond",
+                      name = "HDPJobTracker",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_rmnode_master:
+      generate_daemon("gmond",
+                      name = "HDPResourceManager",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_hsnode_master:
+      generate_daemon("gmond",
+                      name = "HDPHistoryServer",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_hbase_master:
+      generate_daemon("gmond",
+                      name = "HDPHBaseMaster",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_slave:
+      generate_daemon("gmond",
+                      name = "HDPDataNode",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_tasktracker:
+      generate_daemon("gmond",
+                      name = "HDPTaskTracker",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_hbase_rs:
+      generate_daemon("gmond",
+                      name = "HDPHBaseRegionServer",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.is_flume:
+      generate_daemon("gmond",
+                      name = "HDPFlumeServer",
+                      role = "monitor",
+                      owner = "root",
+                      group = params.user_group)
+
+
+    Directory(path.join(params.ganglia_dir, "conf.d"),
+              owner="root",
+              group=params.user_group
+    )
+
+    File(path.join(params.ganglia_dir, "conf.d/modgstatus.conf"),
+         owner="root",
+         group=params.user_group
+    )
+    File(path.join(params.ganglia_dir, "conf.d/multicpu.conf"),
+         owner="root",
+         group=params.user_group
+    )
+    File(path.join(params.ganglia_dir, "gmond.conf"),
+         owner="root",
+         group=params.user_group
+    )
+
+
+if __name__ == "__main__":
+  GangliaMonitor().execute()

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor_service.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_monitor_service.py


+ 181 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_server.py

@@ -0,0 +1,181 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import sys
+import os
+from os import path
+from resource_management import *
+from ganglia import generate_daemon
+import ganglia
+import ganglia_server_service
+
+
+class GangliaServer(Script):
+  def install(self, env):
+    import params
+
+    self.install_packages(env)
+    env.set_params(params)
+    self.config(env)
+
+  def start(self, env):
+    import params
+
+    env.set_params(params)
+    ganglia_server_service.server("start")
+
+  def stop(self, env):
+    import params
+
+    env.set_params(params)
+    ganglia_server_service.server("stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    pid_file = format("{pid_dir}/gmetad.pid")
+    # Check the gmetad pid file
+    check_process_status(pid_file)
+
+  def config(self, env):
+    import params
+
+    ganglia.groups_and_users()
+    ganglia.config()
+
+    if params.has_namenodes:
+      generate_daemon("gmond",
+                      name = "HDPNameNode",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_jobtracker:
+      generate_daemon("gmond",
+                      name = "HDPJobTracker",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_hbase_masters:
+      generate_daemon("gmond",
+                      name = "HDPHBaseMaster",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_resourcemanager:
+      generate_daemon("gmond",
+                      name = "HDPResourceManager",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+    if params.has_historyserver:
+      generate_daemon("gmond",
+                      name = "HDPHistoryServer",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_slaves:
+      generate_daemon("gmond",
+                      name = "HDPDataNode",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_tasktracker:
+      generate_daemon("gmond",
+                      name = "HDPTaskTracker",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_hbase_rs:
+      generate_daemon("gmond",
+                      name = "HDPHBaseRegionServer",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+
+    if params.has_flume:
+      generate_daemon("gmond",
+                      name = "HDPFlumeServer",
+                      role = "server",
+                      owner = "root",
+                      group = params.user_group)
+    generate_daemon("gmetad",
+                    name = "gmetad",
+                    role = "server",
+                    owner = "root",
+                    group = params.user_group)
+
+    change_permission()
+    server_files()
+    File(path.join(params.ganglia_dir, "gmetad.conf"),
+         owner="root",
+         group=params.user_group
+    )
+
+
+def change_permission():
+  import params
+
+  Directory('/var/lib/ganglia/dwoo',
+            mode=0777,
+            owner=params.gmetad_user,
+            recursive=True
+  )
+
+
+def server_files():
+  import params
+
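+  # Install the rrd.py CGI script and, when a non-default rrdcached base dir is
+  # configured, create it and symlink the default location to it.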
+  rrd_py_path = params.rrd_py_path
+  Directory(rrd_py_path,
+            recursive=True
+  )
+  rrd_py_file_path = path.join(rrd_py_path, "rrd.py")
+  File(rrd_py_file_path,
+       content=StaticFile("rrd.py"),
+       mode=0755
+  )
+  rrd_file_owner = params.gmetad_user
+  if params.rrdcached_default_base_dir != params.rrdcached_base_dir:
+    Directory(params.rrdcached_base_dir,
+              owner=rrd_file_owner,
+              group=rrd_file_owner,
+              mode=0755,
+              recursive=True
+    )
+    Directory(params.rrdcached_default_base_dir,
+              action = "delete"
+    )
+    Link(params.rrdcached_default_base_dir,
+         to=params.rrdcached_base_dir
+    )
+  elif rrd_file_owner != 'nobody':
+    Directory(params.rrdcached_default_base_dir,
+              owner=rrd_file_owner,
+              group=rrd_file_owner,
+              recursive=True
+    )
+
+
+if __name__ == "__main__":
+  GangliaServer().execute()

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server_service.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/ganglia_server_service.py


+ 74 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/params.py

@@ -0,0 +1,74 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+from resource_management import *
+from resource_management.core.system import System
+
+config = Script.get_config()
+
+user_group = config['configurations']['global']["user_group"]
+ganglia_conf_dir = config['configurations']['global']["ganglia_conf_dir"]
+ganglia_dir = "/etc/ganglia"
+ganglia_runtime_dir = config['configurations']['global']["ganglia_runtime_dir"]
+ganglia_shell_cmds_dir = "/usr/libexec/hdp/ganglia"
+
+gmetad_user = config['configurations']['global']["gmetad_user"]
+gmond_user = config['configurations']['global']["gmond_user"]
+
+webserver_group = "apache"
+rrdcached_default_base_dir = "/var/lib/ganglia/rrds"
+rrdcached_base_dir = config['configurations']['global']["rrdcached_base_dir"]
+
+ganglia_server_host = config["clusterHostInfo"]["ganglia_server_host"][0]
+
+hostname = config["hostname"]
+namenode_host = default("/clusterHostInfo/namenode_host", [])
+jtnode_host = default("/clusterHostInfo/jtnode_host", [])
+rm_host = default("/clusterHostInfo/rm_host", [])
+hs_host = default("/clusterHostInfo/hs_host", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+# datanodes are marked as slave_hosts
+slave_hosts = default("/clusterHostInfo/slave_hosts", [])
+tt_hosts = default("/clusterHostInfo/mapred_tt_hosts", [])
+hbase_rs_hosts = default("/clusterHostInfo/hbase_rs_hosts", [])
+flume_hosts = default("/clusterHostInfo/flume_hosts", [])
+
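+# Per-host role flags; ganglia_monitor.py uses these to pick which gmond
+# instances to configure on this host.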
+is_namenode_master = hostname in namenode_host
+is_jtnode_master = hostname in jtnode_host
+is_rmnode_master = hostname in rm_host
+is_hsnode_master = hostname in hs_host
+is_hbase_master = hostname in hbase_master_hosts
+is_slave = hostname in slave_hosts
+is_tasktracker = hostname in tt_hosts
+is_hbase_rs = hostname in hbase_rs_hosts
+is_flume = hostname in flume_hosts
+
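+# Cluster-wide role presence flags; ganglia_server.py uses these to decide
+# which gmond server instances to set up on the Ganglia server host.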
+has_namenodes = not len(namenode_host) == 0
+has_jobtracker = not len(jtnode_host) == 0
+has_resourcemanager = not len(rm_host) == 0
+has_historyserver = not len(hs_host) == 0
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_slaves = not len(slave_hosts) == 0
+has_tasktracker = not len(tt_hosts) == 0
+has_hbase_rs = not len(hbase_rs_hosts) == 0
+has_flume = not len(flume_hosts) == 0
+
+if System.get_instance().platform == "suse":
+  rrd_py_path = '/srv/www/cgi-bin'
+else:
+  rrd_py_path = '/var/www/cgi-bin'

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/status_params.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/scripts/status_params.py


+ 34 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/templates/gangliaClusters.conf.j2

@@ -0,0 +1,34 @@
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+#########################################################
+### ClusterName           GmondMasterHost   GmondPort ###
+#########################################################
+
+    HDPJournalNode          {{ganglia_server_host}}   8654
+    HDPFlumeServer          {{ganglia_server_host}}   8655
+    HDPHBaseRegionServer    {{ganglia_server_host}}   8656
+    HDPNodeManager          {{ganglia_server_host}}   8657
+    HDPTaskTracker          {{ganglia_server_host}}   8658
+    HDPDataNode             {{ganglia_server_host}}   8659
+    HDPSlaves               {{ganglia_server_host}}   8660
+    HDPNameNode             {{ganglia_server_host}}   8661
+    HDPJobTracker           {{ganglia_server_host}}   8662
+    HDPHBaseMaster          {{ganglia_server_host}}   8663
+    HDPResourceManager      {{ganglia_server_host}}   8664
+    HDPHistoryServer        {{ganglia_server_host}}   8666

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaEnv.sh.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/templates/gangliaEnv.sh.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaLib.sh.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/package/templates/gangliaLib.sh.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/files/hbaseSmokeVerify.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/files/hbaseSmokeVerify.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/__init__.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/__init__.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/functions.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/functions.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/hbase.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/hbase_client.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_client.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/hbase_master.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_master.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/hbase_regionserver.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_regionserver.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/hbase_service.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/hbase_service.py


+ 84 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/params.py

@@ -0,0 +1,84 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+import functions
+import status_params
+
+# server configurations
+config = Script.get_config()
+
+conf_dir = "/etc/hbase/conf"
+daemon_script = "/usr/lib/hbase/bin/hbase-daemon.sh"
+
+hbase_user = config['configurations']['global']['hbase_user']
+smokeuser = config['configurations']['global']['smokeuser']
+security_enabled = config['configurations']['global']['security_enabled']
+user_group = config['configurations']['global']['user_group']
+
+# this is "hadoop-metrics2-hbase.properties" for 2.x stacks
+metric_prop_file_name = "hadoop-metrics.properties" 
+
+# not supporting 32 bit jdk.
+java64_home = config['hostLevelParams']['java_home']
+
+log_dir = config['configurations']['global']['hbase_log_dir']
+master_heapsize = config['configurations']['global']['hbase_master_heapsize']
+
+regionserver_heapsize = config['configurations']['global']['hbase_regionserver_heapsize']
+regionserver_xmn_size = functions.calc_xmn_from_xms(regionserver_heapsize, 0.2, 512)
+
+pid_dir = status_params.pid_dir
+tmp_dir = config['configurations']['hbase-site']['hbase.tmp.dir']
+
+client_jaas_config_file = default('hbase_client_jaas_config_file', format("{conf_dir}/hbase_client_jaas.conf"))
+master_jaas_config_file = default('hbase_master_jaas_config_file', format("{conf_dir}/hbase_master_jaas.conf"))
+regionserver_jaas_config_file = default('hbase_regionserver_jaas_config_file', format("{conf_dir}/hbase_regionserver_jaas.conf"))
+
+ganglia_server_hosts = default('/clusterHostInfo/ganglia_server_host', []) # is not passed when ganglia is not present
+ganglia_server_host = '' if len(ganglia_server_hosts) == 0 else ganglia_server_hosts[0]
+
+rs_hosts = default('hbase_rs_hosts', config['clusterHostInfo']['slave_hosts']) # if hbase_rs_hosts is not given, region servers are assumed to run on the slave nodes
+
+smoke_test_user = config['configurations']['global']['smokeuser']
+smokeuser_permissions = default('smokeuser_permissions', "RWXCA")
+service_check_data = get_unique_id_and_date()
+
+if security_enabled:
+  
+  _use_hostname_in_principal = default('instance_name', True)
+  _master_primary_name = config['configurations']['global']['hbase_master_primary_name']
+  _hostname = config['hostname']
+  _kerberos_domain = config['configurations']['global']['kerberos_domain']
+  _master_principal_name = config['configurations']['global']['hbase_master_principal_name']
+  _regionserver_primary_name = config['configurations']['global']['hbase_regionserver_primary_name']
+  
+  if _use_hostname_in_principal:
+    master_jaas_princ = format("{_master_primary_name}/{_hostname}@{_kerberos_domain}")
+    regionserver_jaas_princ = format("{_regionserver_primary_name}/{_hostname}@{_kerberos_domain}")
+  else:
+    master_jaas_princ = format("{_master_principal_name}@{_kerberos_domain}")
+    regionserver_jaas_princ = format("{_regionserver_primary_name}@{_kerberos_domain}")
+    
+master_keytab_path = config['configurations']['hbase-site']['hbase.master.keytab.file']
+regionserver_keytab_path = config['configurations']['hbase-site']['hbase.regionserver.keytab.file']
+smoke_user_keytab = config['configurations']['global']['smokeuser_keytab']
+hbase_user_keytab = config['configurations']['global']['hbase_user_keytab']
+kinit_path_local = get_kinit_path([default("kinit_path_local",None), "/usr/bin", "/usr/kerberos/bin", "/usr/sbin"])

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/service_check.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/service_check.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/scripts/status_params.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/scripts/status_params.py


+ 50 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hadoop-metrics.properties-GANGLIA-MASTER.j2

@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+#
+# Make sure you know whether you are using ganglia 3.0 or 3.1.
+# If 3.1, you will have to patch your hadoop instance with HADOOP-4675
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+#
+# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
+
+# HBase-specific configuration to reset long-running stats (e.g. compactions)
+# If this variable is left out, then the default is no expiration.
+hbase.extendedperiod = 3600
+
+# Configuration of the "hbase" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+hbase.period=10
+hbase.servers={{ganglia_server_host}}:8663
+
+# Configuration of the "jvm" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+jvm.period=10
+jvm.servers={{ganglia_server_host}}:8663
+
+# Configuration of the "rpc" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+rpc.period=10
+rpc.servers={{ganglia_server_host}}:8663

+ 50 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hadoop-metrics.properties-GANGLIA-RS.j2

@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+#
+# Make sure you know whether you are using ganglia 3.0 or 3.1.
+# If 3.1, you will have to patch your hadoop instance with HADOOP-4675
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+#
+# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
+
+# HBase-specific configuration to reset long-running stats (e.g. compactions)
+# If this variable is left out, then the default is no expiration.
+hbase.extendedperiod = 3600
+
+# Configuration of the "hbase" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+hbase.period=10
+hbase.servers={{ganglia_server_host}}:8656
+
+# Configuration of the "jvm" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+jvm.period=10
+jvm.servers={{ganglia_server_host}}:8656
+
+# Configuration of the "rpc" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+rpc.period=10
+rpc.servers={{ganglia_server_host}}:8656

+ 50 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hadoop-metrics.properties.j2

@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+#
+# Make sure you know whether you are using ganglia 3.0 or 3.1.
+# If 3.1, you will have to patch your hadoop instance with HADOOP-4675
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+#
+# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
+
+# HBase-specific configuration to reset long-running stats (e.g. compactions)
+# If this variable is left out, then the default is no expiration.
+hbase.extendedperiod = 3600
+
+# Configuration of the "hbase" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+hbase.period=10
+hbase.servers={{ganglia_server_host}}:8663
+
+# Configuration of the "jvm" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+jvm.period=10
+jvm.servers={{ganglia_server_host}}:8663
+
+# Configuration of the "rpc" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+rpc.period=10
+rpc.servers={{ganglia_server_host}}:8663

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/templates/hbase-env.sh.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase-env.sh.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/templates/hbase-smoke.sh.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase-smoke.sh.j2


+ 23 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_client_jaas.conf.j2

@@ -0,0 +1,23 @@
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=false
+useTicketCache=true;
+};

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/templates/hbase_grant_permissions.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_grant_permissions.j2


+ 25 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_master_jaas.conf.j2

@@ -0,0 +1,25 @@
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{master_keytab_path}}"
+principal="{{master_jaas_princ}}";
+};

+ 25 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/hbase_regionserver_jaas.conf.j2

@@ -0,0 +1,25 @@
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{regionserver_keytab_path}}"
+principal="{{regionserver_jaas_princ}}";
+};

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/package/templates/regionservers.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/package/templates/regionservers.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/files/checkForFormat.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/files/checkForFormat.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/files/checkWebUI.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/files/checkWebUI.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/scripts/datanode.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/datanode.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/scripts/hdfs_client.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_client.py


+ 59 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_datanode.py

@@ -0,0 +1,59 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from utils import service
+import os
+
+def datanode(action=None):
+  import params
+
+  if action == "configure":
+    Directory(params.dfs_domain_socket_dir,
+              recursive=True,
+              mode=0750,
+              owner=params.hdfs_user,
+              group=params.user_group)
+    Directory(os.path.dirname(params.dfs_data_dir),
+              recursive=True,
+              mode=0755)
+    Directory(params.dfs_data_dir,
+              recursive=False,
+              mode=0750,
+              owner=params.hdfs_user,
+              group=params.user_group)
+
+  if action == "start":
+    service(
+      action=action, name="datanode",
+      user=params.hdfs_user,
+      create_pid_dir=True,
+      create_log_dir=True,
+      keytab=params.dfs_datanode_keytab_file,
+      principal=params.dfs_datanode_kerberos_principal
+    )
+  if action == "stop":
+    service(
+      action=action, name="datanode",
+      user=params.hdfs_user,
+      create_pid_dir=True,
+      create_log_dir=True,
+      keytab=params.dfs_datanode_keytab_file,
+      principal=params.dfs_datanode_kerberos_principal
+    )

+ 192 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_namenode.py

@@ -0,0 +1,192 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from utils import service
+from utils import hdfs_directory
+import urlparse
+
+
+def namenode(action=None, format=True):
+  import params
+
+  if action == "configure":
+    create_name_dirs(params.dfs_name_dir)
+
+  if action == "start":
+    if format:
+      format_namenode()
+      pass
+    service(
+      action="start", name="namenode", user=params.hdfs_user,
+      keytab=params.dfs_namenode_keytab_file,
+      create_pid_dir=True,
+      create_log_dir=True,
+      principal=params.dfs_namenode_kerberos_principal
+    )
+
+    # TODO: extract creating of dirs to different services
+    create_app_directories()
+    create_user_directories()
+
+  if action == "stop":
+    service(
+      action="stop", name="namenode", user=params.hdfs_user,
+      keytab=params.dfs_namenode_keytab_file,
+      principal=params.dfs_namenode_kerberos_principal
+    )
+
+  if action == "decommission":
+    decommission()
+
+def create_name_dirs(directories):
+  import params
+
+  dirs = directories.split(",")
+  Directory(dirs,
+            mode=0755,
+            owner=params.hdfs_user,
+            group=params.user_group,
+            recursive=True
+  )
+
+
+def create_app_directories():
+  import params
+
+  hdfs_directory(name="/tmp",
+                 owner=params.hdfs_user,
+                 mode="777"
+  )
+  #mapred directories
+  if params.has_jobtracker:
+    hdfs_directory(name="/mapred",
+                   owner=params.mapred_user
+    )
+    hdfs_directory(name="/mapred/system",
+                   owner=params.mapred_user
+    )
+    #hbase directories
+  if len(params.hbase_master_hosts) != 0:
+    hdfs_directory(name=params.hbase_hdfs_root_dir,
+                   owner=params.hbase_user
+    )
+    hdfs_directory(name=params.hbase_staging_dir,
+                   owner=params.hbase_user,
+                   mode="711"
+    )
+    #hive directories
+  if len(params.hive_server_host) != 0:
+    hdfs_directory(name=params.hive_apps_whs_dir,
+                   owner=params.hive_user,
+                   mode="777"
+    )
+  if len(params.hcat_server_hosts) != 0:
+    hdfs_directory(name=params.webhcat_apps_dir,
+                   owner=params.webhcat_user,
+                   mode="755"
+    )
+  if len(params.hs_host) != 0:
+    hdfs_directory(name=params.mapreduce_jobhistory_intermediate_done_dir,
+                   owner=params.mapred_user,
+                   group=params.user_group,
+                   mode="777"
+    )
+
+    hdfs_directory(name=params.mapreduce_jobhistory_done_dir,
+                   owner=params.mapred_user,
+                   group=params.user_group,
+                   mode="777"
+    )
+
+  pass
+
+
+def create_user_directories():
+  import params
+
+  hdfs_directory(name=params.smoke_hdfs_user_dir,
+                 owner=params.smoke_user,
+                 mode=params.smoke_hdfs_user_mode
+  )
+
+  if params.has_hive_server_host:
+    hdfs_directory(name=params.hive_hdfs_user_dir,
+                   owner=params.hive_user,
+                   mode=params.hive_hdfs_user_mode
+    )
+
+  if params.has_hcat_server_host:
+    if params.hcat_hdfs_user_dir != params.webhcat_hdfs_user_dir:
+      hdfs_directory(name=params.hcat_hdfs_user_dir,
+                     owner=params.hcat_user,
+                     mode=params.hcat_hdfs_user_mode
+      )
+    hdfs_directory(name=params.webhcat_hdfs_user_dir,
+                   owner=params.webhcat_user,
+                   mode=params.webhcat_hdfs_user_mode
+    )
+
+  if params.has_oozie_server:
+    hdfs_directory(name=params.oozie_hdfs_user_dir,
+                   owner=params.oozie_user,
+                   mode=params.oozie_hdfs_user_mode
+    )
+
+
+def format_namenode(force=None):
+  import params
+
+  mark_dir = params.namenode_formatted_mark_dir
+  dfs_name_dir = params.dfs_name_dir
+  hdfs_user = params.hdfs_user
+  hadoop_conf_dir = params.hadoop_conf_dir
+
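+  # Format only once: checkForFormat.sh is skipped (not_if) as soon as the
+  # {mark_dir} marker directory exists, and the marker is created right after.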
+  if True:
+    if force:
+      ExecuteHadoop('namenode -format',
+                    kinit_override=True)
+    else:
+      File('/tmp/checkForFormat.sh',
+           content=StaticFile("checkForFormat.sh"),
+           mode=0755)
+      Execute(format(
+        "sh /tmp/checkForFormat.sh {hdfs_user} {hadoop_conf_dir} {mark_dir} "
+        "{dfs_name_dir}"),
+              not_if=format("test -d {mark_dir}"),
+              path="/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin")
+    Execute(format("mkdir -p {mark_dir}"))
+
+
+def decommission():
+  import params
+
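+  # Regenerate the exclude-hosts file and ask the NameNode to re-read it.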
+  hdfs_user = params.hdfs_user
+  conf_dir = params.hadoop_conf_dir
+
+  File(params.exclude_file_path,
+       content=Template("exclude_hosts_list.j2"),
+       owner=hdfs_user,
+       group=params.user_group
+  )
+
+  ExecuteHadoop('dfsadmin -refreshNodes',
+                user=hdfs_user,
+                conf_dir=conf_dir,
+                kinit_override=True)

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/scripts/hdfs_snamenode.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/hdfs_snamenode.py


+ 66 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/namenode.py

@@ -0,0 +1,66 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from hdfs_namenode import namenode
+
+
+class NameNode(Script):
+  def install(self, env):
+    import params
+
+    self.install_packages(env)
+    env.set_params(params)
+
+  def start(self, env):
+    import params
+
+    env.set_params(params)
+    self.config(env)
+    namenode(action="start")
+
+  def stop(self, env):
+    import params
+
+    env.set_params(params)
+    namenode(action="stop")
+
+  def config(self, env):
+    import params
+
+    env.set_params(params)
+    namenode(action="configure")
+    pass
+
+  def status(self, env):
+    import status_params
+
+    env.set_params(status_params)
+    check_process_status(status_params.namenode_pid_file)
+    pass
+
+  def decommission(self, env):
+    import params
+
+    env.set_params(params)
+    namenode(action="decommission")
+    pass
+
+if __name__ == "__main__":
+  NameNode().execute()

+ 165 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/params.py

@@ -0,0 +1,165 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+import status_params
+import os
+
+config = Script.get_config()
+
+#security params
+security_enabled = config['configurations']['global']['security_enabled']
+dfs_journalnode_keytab_file = config['configurations']['hdfs-site']['dfs.journalnode.keytab.file']
+dfs_web_authentication_kerberos_keytab = config['configurations']['hdfs-site']['dfs.journalnode.keytab.file']
+dfs_secondary_namenode_keytab_file =  config['configurations']['hdfs-site']['dfs.secondary.namenode.keytab.file']
+dfs_datanode_keytab_file =  config['configurations']['hdfs-site']['dfs.datanode.keytab.file']
+dfs_namenode_keytab_file =  config['configurations']['hdfs-site']['dfs.namenode.keytab.file']
+smoke_user_keytab = config['configurations']['global']['smokeuser_keytab']
+hdfs_user_keytab = config['configurations']['global']['hdfs_user_keytab']
+
+dfs_datanode_kerberos_principal = config['configurations']['hdfs-site']['dfs.datanode.kerberos.principal']
+dfs_journalnode_kerberos_principal = config['configurations']['hdfs-site']['dfs.journalnode.kerberos.principal']
+dfs_secondary_namenode_kerberos_internal_spnego_principal = config['configurations']['hdfs-site']['dfs.secondary.namenode.kerberos.internal.spnego.principal']
+dfs_namenode_kerberos_principal = config['configurations']['hdfs-site']['dfs.namenode.kerberos.principal']
+dfs_web_authentication_kerberos_principal = config['configurations']['hdfs-site']['dfs.web.authentication.kerberos.principal']
+dfs_secondary_namenode_kerberos_principal = config['configurations']['hdfs-site']['dfs.secondary.namenode.kerberos.principal']
+dfs_journalnode_kerberos_internal_spnego_principal = config['configurations']['hdfs-site']['dfs.journalnode.kerberos.internal.spnego.principal']
+
+#exclude file
+hdfs_exclude_file = default("/clusterHostInfo/decom_dn_hosts", [])
+exclude_file_path = config['configurations']['hdfs-site']['dfs.hosts.exclude']
+
+kinit_path_local = get_kinit_path([default("kinit_path_local",None), "/usr/bin", "/usr/kerberos/bin", "/usr/sbin"])
+#hosts
+hostname = config["hostname"]
+rm_host = default("/clusterHostInfo/rm_host", [])
+slave_hosts = default("/clusterHostInfo/slave_hosts", [])
+hagios_server_hosts = default("/clusterHostInfo/nagios_server_host", [])
+oozie_servers = default("/clusterHostInfo/oozie_server", [])
+hcat_server_hosts = default("/clusterHostInfo/webhcat_server_host", [])
+hive_server_host =  default("/clusterHostInfo/hive_server_host", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+hs_host = default("/clusterHostInfo/hs_host", [])
+jtnode_host = default("/clusterHostInfo/jtnode_host", [])
+namenode_host = default("/clusterHostInfo/namenode_host", [])
+nm_host = default("/clusterHostInfo/nm_host", [])
+ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_host", [])
+journalnode_hosts = default("/clusterHostInfo/journalnode_hosts", [])
+zkfc_hosts = default("/clusterHostInfo/zkfc_hosts", [])
+
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+has_namenodes = not len(namenode_host) == 0
+has_jobtracker = not len(jtnode_host) == 0
+has_resourcemanager = not len(rm_host) == 0
+has_histroryserver = not len(hs_host) == 0
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_slaves = not len(slave_hosts) == 0
+has_nagios = not len(hagios_server_hosts) == 0
+has_oozie_server = not len(oozie_servers)  == 0
+has_hcat_server_host = not len(hcat_server_hosts)  == 0
+has_hive_server_host = not len(hive_server_host)  == 0
+has_journalnode_hosts = not len(journalnode_hosts)  == 0
+has_zkfc_hosts = not len(zkfc_hosts)  == 0
+
+
+is_namenode_master = hostname in namenode_host
+is_jtnode_master = hostname in jtnode_host
+is_rmnode_master = hostname in rm_host
+is_hsnode_master = hostname in hs_host
+is_hbase_master = hostname in hbase_master_hosts
+is_slave = hostname in slave_hosts
+
+if has_ganglia_server:
+  ganglia_server_host = ganglia_server_hosts[0]
+
+#users and groups
+yarn_user = config['configurations']['global']['yarn_user']
+hbase_user = config['configurations']['global']['hbase_user']
+nagios_user = config['configurations']['global']['nagios_user']
+oozie_user = config['configurations']['global']['oozie_user']
+webhcat_user = config['configurations']['global']['hcat_user']
+hcat_user = config['configurations']['global']['hcat_user']
+hive_user = config['configurations']['global']['hive_user']
+smoke_user =  config['configurations']['global']['smokeuser']
+mapred_user = config['configurations']['global']['mapred_user']
+hdfs_user = status_params.hdfs_user
+
+user_group = config['configurations']['global']['user_group']
+proxyuser_group =  config['configurations']['global']['proxyuser_group']
+nagios_group = config['configurations']['global']['nagios_group']
+smoke_user_group = "users"
+
+#hadoop params
+hadoop_conf_dir = "/etc/hadoop/conf"
+hadoop_pid_dir_prefix = status_params.hadoop_pid_dir_prefix
+hadoop_bin = "/usr/lib/hadoop/bin"
+
+hdfs_log_dir_prefix = config['configurations']['global']['hdfs_log_dir_prefix']
+
+dfs_domain_socket_path = "/var/lib/hadoop-hdfs/dn_socket"
+dfs_domain_socket_dir = os.path.dirname(dfs_domain_socket_path)
+
+hadoop_libexec_dir = "/usr/lib/hadoop/libexec"
+
+jn_edits_dir = config['configurations']['hdfs-site']['dfs.journalnode.edits.dir']  # e.g. /grid/0/hdfs/journal
+
+# On HDP 2 stacks the equivalent property is dfs.namenode.name.dir:
+#   dfs_name_dir = config['configurations']['hdfs-site']['dfs.namenode.name.dir']
+# HDP 1.3 still uses the older property name:
+dfs_name_dir = config['configurations']['hdfs-site']['dfs.name.dir']
+
+namenode_dirs_created_stub_dir = format("{hdfs_log_dir_prefix}/{hdfs_user}")
+namenode_dirs_stub_filename = "namenode_dirs_created"
+
+hbase_hdfs_root_dir = config['configurations']['hbase-site']['hbase.rootdir']  # e.g. /apps/hbase/data
+hbase_staging_dir = "/apps/hbase/staging"
+hive_apps_whs_dir = config['configurations']['hive-site']["hive.metastore.warehouse.dir"]  # e.g. /apps/hive/warehouse
+webhcat_apps_dir = "/apps/webhcat"
+mapreduce_jobhistory_intermediate_done_dir = config['configurations']['mapred-site']['mapreduce.jobhistory.intermediate-done-dir']  # e.g. /app-logs
+mapreduce_jobhistory_done_dir = config['configurations']['mapred-site']['mapreduce.jobhistory.done-dir']  # e.g. /mr-history/done
+
+if has_oozie_server:
+  oozie_hdfs_user_dir = format("/user/{oozie_user}")
+  oozie_hdfs_user_mode = 775
+if has_hcat_server_host:
+  hcat_hdfs_user_dir = format("/user/{hcat_user}")
+  hcat_hdfs_user_mode = 755
+  webhcat_hdfs_user_dir = format("/user/{webhcat_user}")
+  webhcat_hdfs_user_mode = 755
+if has_hive_server_host:
+  hive_hdfs_user_dir = format("/user/{hive_user}")
+  hive_hdfs_user_mode = 700
+smoke_hdfs_user_dir = format("/user/{smoke_user}")
+smoke_hdfs_user_mode = 770
+
+namenode_formatted_mark_dir = format("{hadoop_pid_dir_prefix}/hdfs/namenode/formatted/")
+
+# On HDP 2 stacks the checkpoint dir moves to hdfs-site as dfs.namenode.checkpoint.dir:
+#   fs_checkpoint_dir = config['configurations']['hdfs-site']['dfs.namenode.checkpoint.dir']
+# HDP 1.3 still reads it from core-site:
+fs_checkpoint_dir = config['configurations']['core-site']['fs.checkpoint.dir']
+
+# On HDP 2 stacks the equivalent property is dfs.datanode.data.dir:
+#   dfs_data_dir = config['configurations']['hdfs-site']['dfs.datanode.data.dir']
+# HDP 1.3 still uses the older property name:
+dfs_data_dir = config['configurations']['hdfs-site']['dfs.data.dir']
+
+
+
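For readers unfamiliar with how these module-level values are consumed: each component script in the package imports params and registers it with the execution environment, after which resources and format() strings can refer to the values directly. A minimal sketch of that pattern follows; the class name and the specific Directory arguments are illustrative only and not part of this patch.

from resource_management import *

class ExampleComponent(Script):
  def configure(self, env):
    import params              # the params.py module shown above
    env.set_params(params)     # exposes the values to format() and templates
    # Any resource can now use the shared values, e.g. ensure the conf dir exists:
    Directory(params.hadoop_conf_dir,
              owner=params.hdfs_user,
              group=params.user_group,
              recursive=True)

if __name__ == "__main__":
  ExampleComponent().execute()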
+

+ 106 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/service_check.py

@@ -0,0 +1,106 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+
+
+class HdfsServiceCheck(Script):
+  def service_check(self, env):
+    import params
+
+    env.set_params(params)
+    unique = get_unique_id_and_date()
+    dir = '/tmp'
+    tmp_file = format("{dir}/{unique}")
+
+    safemode_command = "dfsadmin -safemode get | grep OFF"
+
+    create_dir_cmd = format("fs -mkdir {dir} ; hadoop fs -chmod -R 777 {dir}")
+    test_dir_exists = format("hadoop fs -test -e {dir}")
+    cleanup_cmd = format("fs -rm {tmp_file}")
+    # cleanup is put below to handle retries; if retrying, there will be a stale file
+    # that needs cleanup; the exit code is a function of the second command
+    create_file_cmd = format(
+      "{cleanup_cmd}; hadoop fs -put /etc/passwd {tmp_file}")
+    test_cmd = format("fs -test -e {tmp_file}")
+    if params.security_enabled:
+      Execute(format(
+        "su - {smoke_user} -c '{kinit_path_local} -kt {smoke_user_keytab} "
+        "{smoke_user}'"))
+    ExecuteHadoop(safemode_command,
+                  user=params.smoke_user,
+                  logoutput=True,
+                  conf_dir=params.hadoop_conf_dir,
+                  try_sleep=15,
+                  tries=20
+    )
+    ExecuteHadoop(create_dir_cmd,
+                  user=params.smoke_user,
+                  logoutput=True,
+                  not_if=test_dir_exists,
+                  conf_dir=params.hadoop_conf_dir,
+                  try_sleep=3,
+                  tries=5
+    )
+    ExecuteHadoop(create_file_cmd,
+                  user=params.smoke_user,
+                  logoutput=True,
+                  conf_dir=params.hadoop_conf_dir,
+                  try_sleep=3,
+                  tries=5
+    )
+    ExecuteHadoop(test_cmd,
+                  user=params.smoke_user,
+                  logoutput=True,
+                  conf_dir=params.hadoop_conf_dir,
+                  try_sleep=3,
+                  tries=5
+    )
+    if params.has_journalnode_hosts:
+      journalnode_port = params.journalnode_port
+      smoke_test_user = params.smoke_user
+      checkWebUIFileName = "checkWebUI.py"
+      checkWebUIFilePath = format("/tmp/{checkWebUIFileName}")
+      comma_sep_jn_hosts = ",".join(params.journalnode_hosts)
+      checkWebUICmd = format(
+        "su - {smoke_test_user} -c 'python {checkWebUIFilePath} -m "
+        "{comma_sep_jn_hosts} -p {journalnode_port}'")
+      File(checkWebUIFilePath,
+           content=StaticFile(checkWebUIFileName))
+
+      Execute(checkWebUICmd,
+              logoutput=True,
+              try_sleep=3,
+              tries=5
+      )
+
+    if params.has_zkfc_hosts:
+      pid_dir = format("{hadoop_pid_dir_prefix}/{hdfs_user}")
+      pid_file = format("{pid_dir}/hadoop-{hdfs_user}-zkfc.pid")
+      check_zkfc_process_cmd = format(
+        "ls {pid_file} >/dev/null 2>&1 && ps `cat {pid_file}` >/dev/null 2>&1")
+      Execute(check_zkfc_process_cmd,
+              logoutput=True,
+              try_sleep=3,
+              tries=5
+      )
+
+
+if __name__ == "__main__":
+  HdfsServiceCheck().execute()
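The check above drives every step through ExecuteHadoop, which simply retries each hadoop command a fixed number of times with a fixed sleep. A rough standalone equivalent of the same sequence is sketched below; it is illustrative only, assumes a local hadoop client on the PATH and that it is run as the smoke user, and reduces error handling to a bare retry loop.

import subprocess
import time

def run(cmd, tries=5, try_sleep=3):
    # Retry the shell command the way ExecuteHadoop does: fixed tries, fixed sleep.
    for attempt in range(1, tries + 1):
        if subprocess.call(cmd, shell=True) == 0:
            return
        if attempt < tries:
            time.sleep(try_sleep)
    raise RuntimeError("failed after %d tries: %s" % (tries, cmd))

tmp_file = "/tmp/hdfs_smoke_%d" % int(time.time())

run("hadoop dfsadmin -safemode get | grep OFF", tries=20, try_sleep=15)       # wait for the NN to leave safemode
run("hadoop fs -test -e /tmp || hadoop fs -mkdir /tmp")                       # make sure /tmp exists
run("hadoop fs -rm %s; hadoop fs -put /etc/passwd %s" % (tmp_file, tmp_file)) # write a small file
run("hadoop fs -test -e %s" % tmp_file)                                       # confirm it is visible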

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/scripts/snamenode.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/snamenode.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/scripts/status_params.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/status_params.py


+ 133 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/scripts/utils.py

@@ -0,0 +1,133 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+
+
+def service(action=None, name=None, user=None, create_pid_dir=False,
+            create_log_dir=False, keytab=None, principal=None):
+  import params
+
+  kinit_cmd = "true"
+  pid_dir = format("{hadoop_pid_dir_prefix}/{user}")
+  pid_file = format("{pid_dir}/hadoop-{user}-{name}.pid")
+  log_dir = format("{hdfs_log_dir_prefix}/{user}")
+  hadoop_daemon = format(
+    "export HADOOP_LIBEXEC_DIR={hadoop_libexec_dir} && "
+    "{hadoop_bin}/hadoop-daemon.sh")
+  cmd = format("{hadoop_daemon} --config {hadoop_conf_dir}")
+
+  if create_pid_dir:
+    Directory(pid_dir,
+              owner=user,
+              recursive=True)
+  if create_log_dir:
+    Directory(log_dir,
+              owner=user,
+              recursive=True)
+
+  if params.security_enabled:
+    principal_replaced = principal.replace("_HOST", params.hostname)
+    kinit_cmd = format("kinit -kt {keytab} {principal_replaced}")
+
+    if name == "datanode":
+      user = "root"
+      pid_file = format(
+        "{hadoop_pid_dir_prefix}/{hdfs_user}/hadoop-{hdfs_user}-{name}.pid")
+
+  daemon_cmd = format("{cmd} {action} {name}")
+
+  service_is_up = format(
+    "ls {pid_file} >/dev/null 2>&1 &&"
+    " ps `cat {pid_file}` >/dev/null 2>&1") if action == "start" else None
+
+  Execute(kinit_cmd)
+  Execute(daemon_cmd,
+          user = user,
+          not_if=service_is_up
+  )
+  if action == "stop":
+    File(pid_file,
+         action="delete",
+         ignore_failures=True
+    )
+
+
+def hdfs_directory(name=None, owner=None, group=None,
+                   mode=None, recursive_chown=False, recursive_chmod=False):
+  import params
+
+  dir_exists = format("hadoop fs -ls {name} >/dev/null 2>&1")
+  namenode_safe_mode_off = "hadoop dfsadmin -safemode get|grep 'Safe mode is OFF'"
+
+  stub_dir = params.namenode_dirs_created_stub_dir
+  stub_filename = params.namenode_dirs_stub_filename
+  dir_absent_in_stub = format(
+    "grep -q '^{name}$' {stub_dir}/{stub_filename} > /dev/null 2>&1; test $? -ne 0")
+  record_dir_in_stub = format("echo '{name}' >> {stub_dir}/{stub_filename}")
+  tries = 3
+  try_sleep = 10
+  dfs_check_nn_status_cmd = "true"
+
+  # On HDP 2 stacks this would be:
+  #   mkdir_cmd = format("fs -mkdir -p {name}")
+  # HDP 1.3's hadoop fs does not support -p, so use a plain mkdir:
+  mkdir_cmd = format("fs -mkdir {name}")
+
+  if params.security_enabled:
+    Execute(format("kinit -kt {hdfs_user_keytab} {hdfs_user}"),
+            user = params.hdfs_user)
+  ExecuteHadoop(mkdir_cmd,
+                try_sleep=try_sleep,
+                tries=tries,
+                not_if=format(
+                  "{dir_absent_in_stub} && {dfs_check_nn_status_cmd} && "
+                  "{dir_exists} && ! {namenode_safe_mode_off}"),
+                only_if=format(
+                  "su - hdfs -c '{dir_absent_in_stub} && {dfs_check_nn_status_cmd} && "
+                  "! {dir_exists}'"),
+                conf_dir=params.hadoop_conf_dir,
+                user=params.hdfs_user
+  )
+  Execute(record_dir_in_stub,
+          user=params.hdfs_user,
+          only_if=format("! {dir_absent_in_stub}")
+  )
+
+  recursive = "-R" if recursive_chown else ""
+  perm_cmds = []
+
+  if owner:
+    chown = owner
+    if group:
+      chown = format("{owner}:{group}")
+    perm_cmds.append(format("fs -chown {recursive} {chown} {name}"))
+  if mode:
+    perm_cmds.append(format("fs -chmod {recursive} {mode} {name}"))
+  for cmd in perm_cmds:
+    ExecuteHadoop(cmd,
+                  user=params.hdfs_user,
+                  only_if=format("su - hdfs -c '{dir_absent_in_stub} && {dfs_check_nn_status_cmd} && {namenode_safe_mode_off} && {dir_exists}'"),
+                  try_sleep=try_sleep,
+                  tries=tries,
+                  conf_dir=params.hadoop_conf_dir
+    )
+
+
+
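For context on how these two helpers are meant to be used, here is a sketch of a typical caller, run from inside a Script method after env.set_params(params). The parameter values below are illustrative; the real namenode/datanode scripts in this package are the authoritative callers.

from utils import service, hdfs_directory
import params

# Start the NameNode daemon, creating its pid and log directories on first run.
service(action="start",
        name="namenode",
        user=params.hdfs_user,
        create_pid_dir=True,
        create_log_dir=True,
        keytab=params.dfs_namenode_keytab_file,
        principal=params.dfs_namenode_kerberos_principal)

# Once the NameNode is up, lay down an application directory in HDFS.
hdfs_directory(name=params.hbase_hdfs_root_dir,
               owner=params.hbase_user,
               mode="711",
               recursive_chmod=True)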

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HDFS/package/templates/exclude_hosts_list.j2 → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/package/templates/exclude_hosts_list.j2


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/addMysqlUser.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/addMysqlUser.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/hcatSmoke.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hcatSmoke.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/hiveSmoke.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hiveSmoke.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/hiveserver2.sql → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hiveserver2.sql


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/hiveserver2Smoke.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/hiveserver2Smoke.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/pigSmoke.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/pigSmoke.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/startHiveserver2.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/startHiveserver2.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/files/startMetastore.sh → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/files/startMetastore.sh


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/scripts/__init__.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/__init__.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/scripts/hcat.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hcat.py


+ 41 - 0
ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hcat_client.py

@@ -0,0 +1,41 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from hcat import hcat
+
+class HCatClient(Script):
+  def install(self, env):
+    self.install_packages(env)
+    self.configure(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+    hcat()
+
+
+  def status(self, env):
+    raise ClientComponentHasNoStatus()
+
+
+if __name__ == "__main__":
+  HCatClient().execute()

+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/scripts/hcat_service_check.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hcat_service_check.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/scripts/hive.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hive.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/scripts/hive_client.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hive_client.py


+ 0 - 0
ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HIVE/package/scripts/hive_metastore.py → ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/package/scripts/hive_metastore.py


Some files were not shown because too many files changed in this diff