AMBARI-25732: Introduce BIGTOP stack (#3366)

* AMBARI-25732: Introduce BIGTOP stack

* Fix errors and update dev-support docker files for centos7

* sync code from bigtop-pr-1014

* disable fail raise when stack packages is not defined

* add license header and add exclude dir for rat check

* add rebuild-ambari.sh

* add license header

* change name

* update scripts

* update dev-support readme file
Zhiguo Wu, 3 years ago
Parent commit: 281af01fb4
100 changed files with 19296 additions and 4 deletions
  1. + 1 - 0  .gitignore
  2. + 4 - 1  ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py
  3. + 3 - 3  ambari-server/src/main/resources/stack-hooks/before-INSTALL/scripts/shared_initialization.py
  4. + 108 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/blueprints/multinode-default.json
  5. + 65 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/blueprints/singlenode-default.json
  6. + 250 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/configuration/cluster-env.xml
  7. + 39 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/after-INSTALL/scripts/hook.py
  8. + 125 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/after-INSTALL/scripts/params.py
  9. + 148 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/after-INSTALL/scripts/shared_initialization.py
  10. + 64 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/files/changeToSecureUid.sh
  11. + 39 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/scripts/hook.py
  12. + 290 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/scripts/params.py
  13. + 273 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/scripts/shared_initialization.py
  14. + 37 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/hook.py
  15. + 114 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/params.py
  16. + 76 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/repo_initialization.py
  17. + 37 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/shared_initialization.py
  18. + 30 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-RESTART/scripts/hook.py
  19. + 39 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-SET_KEYTAB/scripts/hook.py
  20. + 65 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/checkForFormat.sh
  21. BIN  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/fast-hdfs-resource.jar
  22. + 134 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/task-log4j.properties
  23. + 66 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/topology_script.py
  24. + 173 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/custom_extensions.py
  25. + 43 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/hook.py
  26. + 382 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/params.py
  27. + 48 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/rack_awareness.py
  28. + 262 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/shared_initialization.py
  29. + 43 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/commons-logging.properties.j2
  30. + 21 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/exclude_hosts_list.j2
  31. + 114 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
  32. + 81 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/health_check.j2
  33. + 21 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/include_hosts_list.j2
  34. + 24 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/topology_mappings.data.j2
  35. + 60 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/kerberos.json
  36. + 22 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/metainfo.xml
  37. + 58 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/properties/stack_features.json
  38. + 14 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/properties/stack_tools.json
  39. + 26 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/repos/repoinfo.xml
  40. + 75 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/role_command_order.json
  41. + 32 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/alerts.json
  42. + 397 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-conf.xml
  43. + 75 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-env.xml
  44. + 100 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-cli-properties.xml
  45. + 101 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-console-properties.xml
  46. + 90 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-properties.xml
  47. + 75 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-session-properties.xml
  48. + 50 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/kerberos.json
  49. + 144 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/metainfo.xml
  50. + 47 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/flink_client.py
  51. + 88 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/flink_history_server.py
  52. + 77 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/flink_service.py
  53. + 115 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/params.py
  54. + 46 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/service_check.py
  55. + 94 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/setup_flink.py
  56. + 33 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/status_params.py
  57. + 28 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/quicklinks/quicklinks.json
  58. + 7 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/role_command_order.json
  59. + 127 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/alerts.json
  60. + 311 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-env.xml
  61. + 188 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-log4j.xml
  62. + 53 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-policy.xml
  63. + 808 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-site.xml
  64. + 132 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-audit.xml
  65. + 135 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-plugin-properties.xml
  66. + 72 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-policymgr-ssl.xml
  67. + 74 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-security.xml
  68. + 150 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/kerberos.json
  69. + 192 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/metainfo.xml
  70. + 9394 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/metrics.json
  71. + 23 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/files/hbase-smoke-cleanup.sh
  72. + 34 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/files/hbaseSmokeVerify.sh
  73. + 19 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/__init__.py
  74. + 54 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/functions.py
  75. + 252 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase.py
  76. + 69 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_client.py
  77. + 88 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_decommission.py
  78. + 170 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_master.py
  79. + 171 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_regionserver.py
  80. + 66 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_service.py
  81. + 42 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_upgrade.py
  82. + 28 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/params.py
  83. + 463 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/params_linux.py
  84. + 43 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/params_windows.py
  85. + 99 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/service_check.py
  86. + 89 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/setup_ranger_hbase.py
  87. + 67 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/status_params.py
  88. + 105 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/upgrade.py
  89. + 133 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
  90. + 131 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
  91. + 44 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase-smoke.sh.j2
  92. + 35 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase.conf.j2
  93. + 23 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_client_jaas.conf.j2
  94. + 39 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_grant_permissions.j2
  95. + 36 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_master_jaas.conf.j2
  96. + 26 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_queryserver_jaas.conf.j2
  97. + 36 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_regionserver_jaas.conf.j2
  98. + 79 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/input.config-hbase.json.j2
  99. + 20 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/regionservers.j2
  100. + 103 - 0  ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/quicklinks/quicklinks.json

+ 1 - 0
.gitignore

@@ -2,6 +2,7 @@
 .project
 .settings
 .idea/
+.vscode/
 .iml/
 .DS_Store
 **/target/

+ 4 - 1
ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py

@@ -182,7 +182,10 @@ def get_packages(scope, service_name = None, component_name = None):
 
   stack_packages_config = default("/configurations/cluster-env/stack_packages", None)
   if stack_packages_config is None:
-    raise Fail("The stack packages are not defined on the command. Unable to load packages for the stack-select tool")
+    # TODO temporary disabled, we need to re-enable the error after bigtop-select is provided.
+    Logger.error("Temporary disable: The stack packages are not defined on the command. Unable to load packages for the stack-select tool")
+    return None
+    # raise Fail("The stack packages are not defined on the command. Unable to load packages for the stack-select tool")
 
   data = json.loads(stack_packages_config)
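
Note on the change above: get_packages() now logs an error and returns None
instead of raising Fail when cluster-env/stack_packages is undefined, which is
the case until a bigtop-select tool ships with this stack. Callers therefore
have to treat None as "nothing to select". A minimal sketch of the resulting
calling pattern (the wrapper below is an illustration, not code from this patch):

    from resource_management.libraries.functions import stack_select

    def select_all(json_version):
      # get_packages() returns None when cluster-env/stack_packages is undefined
      stack_packages = stack_select.get_packages(stack_select.PACKAGE_SCOPE_INSTALL)
      if stack_packages is None:
        return  # no stack-select tool for this stack yet; skip symlinking
      for package in stack_packages:
        stack_select.select(package, json_version)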
 

+ 3 - 3
ambari-server/src/main/resources/stack-hooks/before-INSTALL/scripts/shared_initialization.py

@@ -29,9 +29,9 @@ def install_packages():
     return
 
   packages = ['unzip', 'curl']
-  if params.stack_version_formatted != "" and compare_versions(params.stack_version_formatted, '2.2') >= 0:
-    stack_selector_package = stack_tools.get_stack_tool_package(stack_tools.STACK_SELECTOR_NAME)
-    packages.append(stack_selector_package)
+  # if params.stack_version_formatted != "" and compare_versions(params.stack_version_formatted, '2.2') >= 0:
+  #   stack_selector_package = stack_tools.get_stack_tool_package(stack_tools.STACK_SELECTOR_NAME)
+  #   packages.append(stack_selector_package)
   Package(packages,
           retry_on_repo_unavailability=params.agent_stack_retry_on_unavailability,
           retry_count=params.agent_stack_retry_count)

+ 108 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/blueprints/multinode-default.json

@@ -0,0 +1,108 @@
+{
+    "configurations" : [
+    ],
+    "host_groups" : [
+        {
+            "name" : "master_1",
+            "components" : [
+                {
+                    "name" : "NAMENODE"
+                },
+                {
+                    "name" : "ZOOKEEPER_SERVER"
+                },
+                {
+                    "name" : "HDFS_CLIENT"
+                },
+                {
+                    "name" : "YARN_CLIENT"
+                }
+            ],
+            "cardinality" : "1"
+        },
+        {
+            "name" : "master_2",
+            "components" : [
+
+                {
+                    "name" : "ZOOKEEPER_CLIENT"
+                },
+                {
+                    "name" : "HISTORYSERVER"
+                },
+                {
+                    "name" : "SECONDARY_NAMENODE"
+                },
+                {
+                    "name" : "HDFS_CLIENT"
+                },
+                {
+                    "name" : "YARN_CLIENT"
+                },
+                {
+                    "name" : "POSTGRESQL_SERVER"
+                }
+            ],
+            "cardinality" : "1"
+        },
+        {
+            "name" : "master_3",
+            "components" : [
+                {
+                    "name" : "RESOURCEMANAGER"
+                },
+                {
+                    "name" : "ZOOKEEPER_SERVER"
+                }
+            ],
+            "cardinality" : "1"
+        },
+        {
+            "name" : "master_4",
+            "components" : [
+                {
+                    "name" : "ZOOKEEPER_SERVER"
+                }
+            ],
+            "cardinality" : "1"
+        },
+        {
+            "name" : "slave",
+            "components" : [
+                {
+                    "name" : "NODEMANAGER"
+                },
+                {
+                    "name" : "DATANODE"
+                }
+            ],
+            "cardinality" : "${slavesCount}"
+        },
+        {
+            "name" : "gateway",
+            "components" : [
+                {
+                    "name" : "AMBARI_SERVER"
+                },
+                {
+                    "name" : "ZOOKEEPER_CLIENT"
+                },
+                {
+                    "name" : "HDFS_CLIENT"
+                },
+                {
+                    "name" : "YARN_CLIENT"
+                },
+                {
+                    "name" : "MAPREDUCE2_CLIENT"
+                }
+            ],
+            "cardinality" : "1"
+        }
+    ],
+    "Blueprints" : {
+        "blueprint_name" : "blueprint-multinode-default",
+        "stack_name" : "BIGTOP",
+        "stack_version" : "3.2.0"
+    }
+}
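
For context, blueprints like this are registered and instantiated through the
Ambari REST API. A hedged sketch of that flow (server URL, credentials, and
host names below are illustrative assumptions; the blueprint name and stack
coordinates come from the file above, and the X-Requested-By header is
required by the Ambari API):

    import json
    import requests

    AMBARI = "http://ambari-server:8080/api/v1"   # hypothetical server
    AUTH = ("admin", "admin")                     # hypothetical credentials
    HEADERS = {"X-Requested-By": "ambari"}

    # 1) register the blueprint shipped with the stack
    with open("multinode-default.json") as fp:
        blueprint = json.load(fp)
    requests.post(AMBARI + "/blueprints/blueprint-multinode-default",
                  auth=AUTH, headers=HEADERS, data=json.dumps(blueprint))

    # 2) map concrete hosts onto the host groups and create the cluster;
    #    the "${slavesCount}" cardinality above is effectively resolved by
    #    however many hosts are mapped onto the "slave" group here
    template = {
        "blueprint": "blueprint-multinode-default",
        "host_groups": [
            {"name": "master_1", "hosts": [{"fqdn": "m1.example.com"}]},
            {"name": "slave", "hosts": [{"fqdn": "s1.example.com"},
                                        {"fqdn": "s2.example.com"}]},
            # ... one entry per remaining host group ...
        ],
    }
    requests.post(AMBARI + "/clusters/bigtop-test", auth=AUTH,
                  headers=HEADERS, data=json.dumps(template))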

+ 65 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/blueprints/singlenode-default.json

@@ -0,0 +1,65 @@
+{
+    "configurations" : [
+    ],
+    "host_groups" : [
+        {
+            "name" : "host_group_1",
+            "components" : [
+                {
+                    "name" : "HISTORYSERVER"
+                },
+                {
+                    "name" : "NAMENODE"
+                },
+                {
+                    "name" : "SUPERVISOR"
+                },
+                {
+                    "name" : "AMBARI_SERVER"
+                },
+                {
+                    "name" : "APP_TIMELINE_SERVER"
+                },
+                {
+                    "name" : "HDFS_CLIENT"
+                },
+                {
+                    "name" : "NODEMANAGER"
+                },
+                {
+                    "name" : "DATANODE"
+                },
+                {
+                    "name" : "RESOURCEMANAGER"
+                },
+                {
+                    "name" : "ZOOKEEPER_SERVER"
+                },
+                {
+                    "name" : "ZOOKEEPER_CLIENT"
+                },
+                {
+                    "name" : "SECONDARY_NAMENODE"
+                },
+                {
+                    "name" : "YARN_CLIENT"
+                },
+                {
+                    "name" : "MAPREDUCE2_CLIENT"
+                },
+                {
+                    "name" : "POSTGRESQL_SERVER"
+                },
+                {
+                    "name" : "DRPC_SERVER"
+                }
+            ],
+            "cardinality" : "1"
+        }
+    ],
+    "Blueprints" : {
+        "blueprint_name" : "blueprint-singlenode-default",
+        "stack_name" : "BIGTOP",
+        "stack_version" : "3.2.0"
+    }
+}

+ 250 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/configuration/cluster-env.xml

@@ -0,0 +1,250 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>recovery_enabled</name>
+    <value>true</value>
+    <description>Auto start enabled or not for this cluster.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>recovery_type</name>
+    <value>AUTO_START</value>
+    <description>Auto start type.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>recovery_lifetime_max_count</name>
+    <value>1024</value>
+    <description>Auto start lifetime maximum count of recovery attempt allowed per host component. This is reset when agent is restarted.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>recovery_max_count</name>
+    <value>6</value>
+    <description>Auto start maximum count of recovery attempt allowed per host component in a window. This is reset when agent is restarted.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>recovery_window_in_minutes</name>
+    <value>60</value>
+    <description>Auto start recovery window size in minutes.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>recovery_retry_interval</name>
+    <value>5</value>
+    <description>Auto start recovery retry gap between tries per host component.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>security_enabled</name>
+    <value>false</value>
+    <description>Hadoop Security</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>kerberos_domain</name>
+    <value>EXAMPLE.COM</value>
+    <description>Kerberos realm.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>ignore_groupsusers_create</name>
+    <display-name>Skip group modifications during install</display-name>
+    <value>false</value>
+    <property-type>ADDITIONAL_USER_PROPERTY</property-type>
+    <description>Whether to ignore failures on users and group creation</description>
+    <value-attributes>
+      <overridable>false</overridable>
+      <type>boolean</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>smokeuser</name>
+    <display-name>Smoke User</display-name>
+    <value>ambari-qa</value>
+    <property-type>USER</property-type>
+    <description>User executing service checks</description>
+    <value-attributes>
+      <type>user</type>
+      <overridable>false</overridable>
+      <user-groups>
+        <property>
+          <type>cluster-env</type>
+          <name>user_group</name>
+        </property>
+      </user-groups>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>smokeuser_keytab</name>
+    <value>/etc/security/keytabs/smokeuser.headless.keytab</value>
+    <description>Path to smoke test user keytab file</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>user_group</name>
+    <display-name>Hadoop Group</display-name>
+    <value>hadoop</value>
+    <property-type>GROUP</property-type>
+    <description>Hadoop user group.</description>
+    <value-attributes>
+      <type>user</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>repo_suse_rhel_template</name>
+    <value>[{{repo_id}}]
+name={{repo_id}}
+{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}
+
+path=/
+enabled=1
+gpgcheck=0</value>
+    <description>Template of repositories for rhel and suse.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>repo_ubuntu_template</name>
+    <value>{{package_type}} {{base_url}} {{components}}</value>
+    <description>Template of repositories for ubuntu.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>override_uid</name>
+    <value>true</value>
+    <property-type>ADDITIONAL_USER_PROPERTY</property-type>
+    <display-name>Have Ambari manage UIDs</display-name>
+    <description>Have Ambari manage UIDs</description>
+    <value-attributes>
+      <overridable>false</overridable>
+      <type>boolean</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>fetch_nonlocal_groups</name>
+    <value>true</value>
+    <display-name>Ambari fetch nonlocal groups</display-name>
+    <description>Ambari requires fetching all the groups. This can be slow
+        on envs with enabled ldap. Setting this option to false will enable Ambari,
+        to skip user/group management connected with ldap groups.</description>
+    <value-attributes>
+      <overridable>false</overridable>
+      <type>boolean</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>managed_hdfs_resource_property_names</name>
+    <value/>
+    <description>Comma separated list of property names with HDFS resource paths.
+        Resource from this list will be managed even if it is marked as not managed in the stack</description>
+    <value-attributes>
+      <overridable>false</overridable>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <!-- Define stack_tools property in the base stack. DO NOT override this property for each stack version -->
+  <property>
+    <name>stack_tools</name>
+    <value/>
+    <description>Stack specific tools</description>
+    <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+    <value-attributes>
+      <property-file-name>stack_tools.json</property-file-name>
+      <property-file-type>json</property-file-type>
+      <read-only>true</read-only>
+      <overridable>false</overridable>
+      <visible>false</visible>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <!-- Define stack_features property in the base stack. DO NOT override this property for each stack version -->
+  <property>
+    <name>stack_features</name>
+    <value/>
+    <description>List of features supported by the stack</description>
+    <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+    <value-attributes>
+      <property-file-name>stack_features.json</property-file-name>
+      <property-file-type>json</property-file-type>
+      <read-only>true</read-only>
+      <overridable>false</overridable>
+      <visible>false</visible>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>stack_root</name>
+    <value>{"BIGTOP":"/usr/bigtop"}</value>
+    <description>Stack root folder</description>
+    <value-attributes>
+      <read-only>true</read-only>
+      <overridable>false</overridable>
+      <visible>false</visible>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>alerts_repeat_tolerance</name>
+    <value>1</value>
+    <description>The number of consecutive alerts required to transition an alert from the SOFT to the HARD state.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>ignore_bad_mounts</name>
+    <value>false</value>
+    <description>For properties handled by handle_mounted_dirs this will make Ambari not to create any directories.</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>manage_dirs_on_root</name>
+    <value>true</value>
+    <description>For properties handled by handle_mounted_dirs this will make Ambari to manage (create and set permissions) unknown directories on / partition</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>one_dir_per_partition</name>
+    <value>false</value>
+    <description>For properties handled by handle_mounted_dirs this will make Ambari </description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>sysprep_skip_create_users_and_groups</name>
+    <display-name>Use Ambari to Manage Service Accounts and Groups</display-name>
+    <value>false</value>
+    <property-type>ADDITIONAL_USER_PROPERTY</property-type>
+    <description>Ambari will create the service accounts and groups that are required for each service if they do not exist in the /etc/password, and /etc/group of the Ambari Managed hosts.</description>
+    <value-attributes>
+      <overridable>true</overridable>
+      <type>boolean-inverted</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>
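
Stack scripts read these values back through the command's configurations
dictionary. Note that stack_root is a serialized JSON map keyed by stack name,
so consumers must parse it rather than use the raw string (Script.get_stack_root()
does this internally). A minimal sketch of the read path, assuming it runs
inside an agent command context:

    import ambari_simplejson as json
    from resource_management.libraries.script import Script

    config = Script.get_config()
    cluster_env = config['configurations']['cluster-env']

    user_group = cluster_env['user_group']    # "hadoop" by default above
    smoke_user = cluster_env['smokeuser']     # "ambari-qa" by default above

    # stack_root is JSON: {"BIGTOP": "/usr/bigtop"} per the property above
    stack_root = json.loads(cluster_env['stack_root'])['BIGTOP']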

+ 39 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/after-INSTALL/scripts/hook.py

@@ -0,0 +1,39 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management.libraries.script.hook import Hook
+from shared_initialization import link_configs
+from shared_initialization import setup_config
+from shared_initialization import setup_stack_symlinks
+
+
+class AfterInstallHook(Hook):
+
+  def hook(self, env):
+    import params
+
+    env.set_params(params)
+    setup_stack_symlinks(self.stroutfile)
+    setup_config()
+
+    link_configs(self.stroutfile)
+
+
+if __name__ == "__main__":
+  AfterInstallHook().execute()

+ 125 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/after-INSTALL/scripts/params.py

@@ -0,0 +1,125 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from ambari_commons.constants import AMBARI_SUDO_BINARY
+from ambari_commons.constants import LOGFEEDER_CONF_DIR
+from resource_management.libraries.script import Script
+from resource_management.libraries.script.script import get_config_lock_file
+from resource_management.libraries.functions import default
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import format_jvm_option
+from resource_management.libraries.functions.version import format_stack_version, get_major_version
+from resource_management.libraries.functions.format import format
+from string import lower
+
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+
+dfs_type = default("/clusterLevelParams/dfs_type", "")
+
+is_parallel_execution_enabled = int(default("/agentLevelParams/agentConfigParams/agent/parallel_execution", 0)) == 1
+host_sys_prepped = default("/ambariLevelParams/host_sys_prepped", False)
+
+sudo = AMBARI_SUDO_BINARY
+
+stack_version_unformatted = config['clusterLevelParams']['stack_version']
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+major_stack_version = get_major_version(stack_version_formatted)
+
+# service name
+service_name = config['serviceName']
+
+# logsearch configuration
+logsearch_logfeeder_conf = LOGFEEDER_CONF_DIR
+
+agent_cache_dir = config['agentLevelParams']['agentCacheDir']
+service_package_folder = config['serviceLevelParams']['service_package_folder']
+logsearch_service_name = service_name.lower().replace("_", "-")
+logsearch_config_file_name = 'input.config-' + logsearch_service_name + ".json"
+logsearch_config_file_path = agent_cache_dir + "/" + service_package_folder + "/templates/" + logsearch_config_file_name + ".j2"
+logsearch_config_file_exists = os.path.isfile(logsearch_config_file_path)
+
+# default hadoop params
+hadoop_libexec_dir = stack_select.get_hadoop_dir("libexec")
+
+mapreduce_libs_path = "/usr/hdp/current/hadoop-mapreduce-client/*"
+
+versioned_stack_root = '/usr/hdp/current'
+
+#security params
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+
+#java params
+java_home = config['ambariLevelParams']['java_home']
+
+#hadoop params
+hdfs_log_dir_prefix = config['configurations']['hadoop-env']['hdfs_log_dir_prefix']
+hadoop_pid_dir_prefix = config['configurations']['hadoop-env']['hadoop_pid_dir_prefix']
+hadoop_root_logger = config['configurations']['hadoop-env']['hadoop_root_logger']
+
+jsvc_path = "/usr/lib/bigtop-utils"
+
+hadoop_heapsize = config['configurations']['hadoop-env']['hadoop_heapsize']
+namenode_heapsize = config['configurations']['hadoop-env']['namenode_heapsize']
+namenode_opt_newsize = config['configurations']['hadoop-env']['namenode_opt_newsize']
+namenode_opt_maxnewsize = config['configurations']['hadoop-env']['namenode_opt_maxnewsize']
+namenode_opt_permsize = format_jvm_option("/configurations/hadoop-env/namenode_opt_permsize","128m")
+namenode_opt_maxpermsize = format_jvm_option("/configurations/hadoop-env/namenode_opt_maxpermsize","256m")
+
+jtnode_opt_newsize = "200m"
+jtnode_opt_maxnewsize = "200m"
+jtnode_heapsize =  "1024m"
+ttnode_heapsize = "1024m"
+
+dtnode_heapsize = config['configurations']['hadoop-env']['dtnode_heapsize']
+mapred_pid_dir_prefix = default("/configurations/mapred-env/mapred_pid_dir_prefix","/var/run/hadoop-mapreduce")
+mapred_log_dir_prefix = default("/configurations/mapred-env/mapred_log_dir_prefix","/var/log/hadoop-mapreduce")
+
+#users and groups
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user']
+user_group = config['configurations']['cluster-env']['user_group']
+
+namenode_hosts = default("/clusterHostInfo/namenode_hosts", [])
+hdfs_client_hosts = default("/clusterHostInfo/hdfs_client_hosts", [])
+has_hdfs_clients = len(hdfs_client_hosts) > 0
+has_namenode = len(namenode_hosts) > 0
+has_hdfs = has_hdfs_clients or has_namenode
+
+if has_hdfs or dfs_type == 'HCFS':
+  hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+
+  mount_table_xml_inclusion_file_full_path = None
+  mount_table_content = None
+  if 'viewfs-mount-table' in config['configurations']:
+    xml_inclusion_file_name = 'viewfs-mount-table.xml'
+    mount_table = config['configurations']['viewfs-mount-table']
+
+    if 'content' in mount_table and mount_table['content'].strip():
+      mount_table_xml_inclusion_file_full_path = os.path.join(hadoop_conf_dir, xml_inclusion_file_name)
+      mount_table_content = mount_table['content']
+
+link_configs_lock_file = get_config_lock_file()
+stack_select_lock_file = os.path.join(tmp_dir, "stack_select_lock_file")
+
+upgrade_suspended = default("/roleParams/upgrade_suspended", False)
+sysprep_skip_conf_select = default("/configurations/cluster-env/sysprep_skip_conf_select", False)
+conf_select_marker_file = format("{tmp_dir}/conf_select_done_marker")
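
Everything in this module is derived from the command JSON the agent hands to
Script.get_config(); default() walks a /-separated path into that structure and
returns the fallback when any segment is missing, while direct indexing (e.g.
config['serviceName']) fails fast. An illustrative sketch (the layout below is
inferred from the keys this module reads; the values are placeholders):

    # Rough shape of the command JSON behind Script.get_config():
    #   {
    #     "clusterLevelParams": {"stack_version": "3.2", ...},
    #     "agentLevelParams":   {"agentCacheDir": "/var/lib/ambari-agent/cache", ...},
    #     "serviceLevelParams": {"service_package_folder": "...", ...},
    #     "configurations":     {"hadoop-env": {...}, "cluster-env": {...}, ...}
    #   }
    from resource_management.libraries.functions import default

    # returns "" instead of raising when the command carries no dfs_type
    dfs_type = default("/clusterLevelParams/dfs_type", "")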

+ 148 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/after-INSTALL/scripts/shared_initialization.py

@@ -0,0 +1,148 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+
+import ambari_simplejson as json
+from ambari_jinja2 import Environment as JinjaEnvironment
+from resource_management.core.logger import Logger
+from resource_management.core.resources.system import Directory, File
+from resource_management.core.source import InlineTemplate, Template
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.fcntl_based_process_lock import FcntlBasedProcessLock
+from resource_management.libraries.resources.xml_config import XmlConfig
+from resource_management.libraries.script import Script
+
+
+def setup_stack_symlinks(struct_out_file):
+  """
+  Invokes <stack-selector-tool> set all against a calculated fully-qualified, "normalized" version based on a
+  stack version, such as "2.3". This should always be called after a component has been
+  installed to ensure that all HDP pointers are correct. The stack upgrade logic does not
+  interact with this since it's done via a custom command and will not trigger this hook.
+  :return:
+  """
+  import params
+  if params.upgrade_suspended:
+    Logger.warning("Skipping running stack-selector-tool because there is a suspended upgrade")
+    return
+
+  if params.host_sys_prepped:
+    Logger.warning("Skipping running stack-selector-tool because this is a sys_prepped host. This may cause symlink pointers not to be created for HDP components installed later on top of an already sys_prepped host")
+    return
+
+  # get the packages which the stack-select tool should be used on;
+  # get_packages() now returns None (rather than raising Fail) when the
+  # stack does not define stack_packages, so bail out in that case
+  stack_packages = stack_select.get_packages(stack_select.PACKAGE_SCOPE_INSTALL)
+  if stack_packages is None:
+    return
+
+  json_version = load_version(struct_out_file)
+
+  if not json_version:
+    Logger.info("There is no advertised version for this component stored in {0}".format(struct_out_file))
+    return
+
+  # On parallel command execution this should be executed by a single process at a time.
+  with FcntlBasedProcessLock(params.stack_select_lock_file, enabled = params.is_parallel_execution_enabled, skip_fcntl_failures = True):
+    for package in stack_packages:
+      stack_select.select(package, json_version)
+
+
+def setup_config():
+  import params
+  stackversion = params.stack_version_unformatted
+  Logger.info("FS Type: {0}".format(params.dfs_type))
+
+  is_hadoop_conf_dir_present = False
+  if hasattr(params, "hadoop_conf_dir") and params.hadoop_conf_dir is not None and os.path.exists(params.hadoop_conf_dir):
+    is_hadoop_conf_dir_present = True
+  else:
+    Logger.warning("Parameter hadoop_conf_dir is missing or directory does not exist. This is expected if this host does not have any Hadoop components.")
+
+  if is_hadoop_conf_dir_present and (params.has_hdfs or stackversion.find('Gluster') >= 0 or params.dfs_type == 'HCFS'):
+    # create core-site only if the hadoop config directory exists
+    XmlConfig("core-site.xml",
+              conf_dir=params.hadoop_conf_dir,
+              configurations=params.config['configurations']['core-site'],
+              configuration_attributes=params.config['configurationAttributes']['core-site'],
+              owner=params.hdfs_user,
+              group=params.user_group,
+              only_if=format("ls {hadoop_conf_dir}"),
+              xml_include_file=params.mount_table_xml_inclusion_file_full_path
+              )
+
+    if params.mount_table_content:
+      File(os.path.join(params.hadoop_conf_dir, params.xml_inclusion_file_name),
+           owner=params.hdfs_user,
+           group=params.user_group,
+           content=params.mount_table_content
+           )
+
+  Directory(params.logsearch_logfeeder_conf,
+            mode=0755,
+            cd_access='a',
+            create_parents=True
+            )
+
+  if params.logsearch_config_file_exists:
+    File(format("{logsearch_logfeeder_conf}/" + params.logsearch_config_file_name),
+         content=Template(params.logsearch_config_file_path,extra_imports=[default])
+         )
+  else:
+    Logger.warning('No logsearch configuration exists at ' + params.logsearch_config_file_path)
+
+
+def load_version(struct_out_file):
+  """
+  Load version from file.  Made a separate method for testing
+  """
+  try:
+    with open(struct_out_file, 'r') as fp:
+      json_info = json.load(fp)
+
+    return json_info['version']
+  except (IOError, KeyError, TypeError):
+    return None
+
+
+def link_configs(struct_out_file):
+  """
+  Use the conf_select module to link configuration directories correctly.
+  """
+  import params
+
+  json_version = load_version(struct_out_file)
+
+  if not json_version:
+    Logger.info("Could not load 'version' from {0}".format(struct_out_file))
+    return
+
+  if not params.sysprep_skip_conf_select or not os.path.exists(params.conf_select_marker_file):
+    # On parallel command execution this should be executed by a single process at a time.
+    with FcntlBasedProcessLock(params.link_configs_lock_file, enabled = params.is_parallel_execution_enabled, skip_fcntl_failures = True):
+      for package_name, directories in conf_select.get_package_dirs().iteritems():
+        conf_select.convert_conf_directories_to_symlinks(package_name, json_version, directories)
+
+    # create a file to mark that conf-selects were already done
+    with open(params.conf_select_marker_file, "wb") as fp:
+      pass
+  else:
+    Logger.info(format("Skipping conf-select stage, since cluster-env/sysprep_skip_conf_select is set and mark file {conf_select_marker_file} exists"))

+ 64 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/files/changeToSecureUid.sh

@@ -0,0 +1,64 @@
+#!/usr/bin/env bash
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+username=$1
+directories=$2
+newUid=$3
+
+function find_available_uid() {
+ for ((i=1001; i<=2000; i++))
+ do
+   grep -q $i /etc/passwd
+   if [ "$?" -ne 0 ]
+   then
+    newUid=$i
+    break
+   fi
+ done
+}
+
+if [ -z $2 ]; then
+  test $(id -u ${username} 2>/dev/null)
+  if [ $? -ne 1 ]; then
+   newUid=`id -u ${username}`
+  else
+   find_available_uid
+  fi
+  echo $newUid
+  exit 0
+else
+  find_available_uid
+fi
+
+if [ $newUid -eq 0 ]
+then
+  echo "Failed to find Uid between 1000 and 2000"
+  exit 1
+fi
+
+set -e
+dir_array=($(echo $directories | sed 's/,/\n/g'))
+old_uid=$(id -u $username)
+sudo_prefix="/var/lib/ambari-agent/ambari-sudo.sh -H -E"
+echo "Changing uid of $username from $old_uid to $newUid"
+echo "Changing directory permisions for ${dir_array[@]}"
+$sudo_prefix usermod -u $newUid $username && for dir in ${dir_array[@]} ; do ls $dir 2> /dev/null && echo "Changing permission for $dir" && $sudo_prefix chown -Rh $newUid $dir ; done
+exit 0
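
The before-ANY hook stages this script onto each host and runs it when Ambari
manages UIDs (override_uid in cluster-env). A minimal sketch of that invocation
with the resource_management primitives (the staged file name, user, and
directory list are illustrative; the real wiring lives in the before-ANY
shared_initialization.py):

    from resource_management.core.resources.system import Execute, File
    from resource_management.core.source import StaticFile
    from resource_management.libraries.functions.format import format

    tmp_dir = "/var/lib/ambari-agent/tmp"   # illustrative
    user = "hbase"                          # illustrative
    user_dirs = "/home/hbase,/tmp/hbase"    # comma-separated, as $2 expects

    script = format("{tmp_dir}/changeUid.sh")
    File(script, content=StaticFile("changeToSecureUid.sh"), mode=0555)

    # with only a username the script prints the uid it would assign;
    # with directories it remaps the uid and chowns the listed paths
    Execute(format("{script} {user} {user_dirs}"))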

+ 39 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/scripts/hook.py

@@ -0,0 +1,39 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+
+from shared_initialization import setup_users, setup_hadoop_env, setup_java
+from resource_management import Hook
+
+
+class BeforeAnyHook(Hook):
+
+  def hook(self, env):
+    import params
+    env.set_params(params)
+
+    setup_users()
+    if params.has_hdfs or params.dfs_type == 'HCFS':
+      setup_hadoop_env()
+    setup_java()
+
+
+if __name__ == "__main__":
+  BeforeAnyHook().execute()
+

+ 290 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/scripts/params.py

@@ -0,0 +1,290 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import collections
+import re
+import os
+import ast
+
+import ambari_simplejson as json # simplejson is much faster comparing to Python 2.6 json module and has the same functions set.
+
+from resource_management.libraries.script import Script
+from resource_management.libraries.functions import default
+from resource_management.libraries.functions import format
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import format_jvm_option
+from resource_management.libraries.functions.is_empty import is_empty
+from resource_management.libraries.functions.version import format_stack_version
+from resource_management.libraries.functions.expect import expect
+from resource_management.libraries.functions import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.stack_features import get_stack_feature_version
+from resource_management.libraries.functions.get_architecture import get_architecture
+from ambari_commons.constants import AMBARI_SUDO_BINARY
+from resource_management.libraries.functions.namenode_ha_utils import get_properties_for_all_nameservices, namenode_federation_enabled
+
+
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+
+stack_root = Script.get_stack_root()
+
+architecture = get_architecture()
+
+dfs_type = default("/clusterLevelParams/dfs_type", "")
+
+artifact_dir = format("{tmp_dir}/AMBARI-artifacts/")
+jdk_name = default("/ambariLevelParams/jdk_name", None)
+java_home = config['ambariLevelParams']['java_home']
+java_version = expect("/ambariLevelParams/java_version", int)
+jdk_location = config['ambariLevelParams']['jdk_location']
+
+hadoop_custom_extensions_enabled = default("/configurations/core-site/hadoop.custom-extensions.enabled", False)
+
+sudo = AMBARI_SUDO_BINARY
+
+ambari_server_hostname = config['ambariLevelParams']['ambari_server_host']
+
+stack_version_unformatted = config['clusterLevelParams']['stack_version']
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+
+upgrade_type = Script.get_upgrade_type(default("/commandParams/upgrade_type", ""))
+version = default("/commandParams/version", None)
+# Handle upgrade and downgrade
+if (upgrade_type is not None) and version:
+  stack_version_formatted = format_stack_version(version)
+ambari_java_home = default("/commandParams/ambari_java_home", None)
+ambari_jdk_name = default("/commandParams/ambari_jdk_name", None)
+
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user']
+
+# Some datanode settings
+dfs_dn_addr = default('/configurations/hdfs-site/dfs.datanode.address', None)
+dfs_dn_http_addr = default('/configurations/hdfs-site/dfs.datanode.http.address', None)
+dfs_dn_https_addr = default('/configurations/hdfs-site/dfs.datanode.https.address', None)
+dfs_http_policy = default('/configurations/hdfs-site/dfs.http.policy', None)
+secure_dn_ports_are_in_use = False
+
+def get_port(address):
+  """
+  Extracts port from the address like 0.0.0.0:1019
+  """
+  if address is None:
+    return None
+  m = re.search(r'(?:http(?:s)?://)?([\w\d.]*):(\d{1,5})', address)
+  if m is not None:
+    return int(m.group(2))
+  else:
+    return None
+
+def is_secure_port(port):
+  """
+  Returns True if port is root-owned at *nix systems
+  """
+  if port is not None:
+    return port < 1024
+  else:
+    return False
+
+# upgrades would cause these directories to have a version instead of "current"
+# which would cause a lot of problems when writing out hadoop-env.sh; instead
+# force the use of "current" in the hook
+hdfs_user_nofile_limit = default("/configurations/hadoop-env/hdfs_user_nofile_limit", "128000")
+hadoop_home = stack_select.get_hadoop_dir("home")
+hadoop_libexec_dir = stack_select.get_hadoop_dir("libexec")
+hadoop_lib_home = stack_select.get_hadoop_dir("lib")
+
+hadoop_dir = "/etc/hadoop"
+hadoop_java_io_tmpdir = os.path.join(tmp_dir, "hadoop_java_io_tmpdir")
+datanode_max_locked_memory = config['configurations']['hdfs-site']['dfs.datanode.max.locked.memory']
+is_datanode_max_locked_memory_set = not is_empty(config['configurations']['hdfs-site']['dfs.datanode.max.locked.memory'])
+
+mapreduce_libs_path = "/usr/hdp/current/hadoop-mapreduce-client/*"
+
+if not security_enabled:
+  hadoop_secure_dn_user = '""'
+else:
+  dfs_dn_port = get_port(dfs_dn_addr)
+  dfs_dn_http_port = get_port(dfs_dn_http_addr)
+  dfs_dn_https_port = get_port(dfs_dn_https_addr)
+  # We try to avoid inability to start datanode as a plain user due to usage of root-owned ports
+  if dfs_http_policy == "HTTPS_ONLY":
+    secure_dn_ports_are_in_use = is_secure_port(dfs_dn_port) or is_secure_port(dfs_dn_https_port)
+  elif dfs_http_policy == "HTTP_AND_HTTPS":
+    secure_dn_ports_are_in_use = is_secure_port(dfs_dn_port) or is_secure_port(dfs_dn_http_port) or is_secure_port(dfs_dn_https_port)
+  else:   # params.dfs_http_policy == "HTTP_ONLY" or not defined:
+    secure_dn_ports_are_in_use = is_secure_port(dfs_dn_port) or is_secure_port(dfs_dn_http_port)
+  if secure_dn_ports_are_in_use:
+    hadoop_secure_dn_user = hdfs_user
+  else:
+    hadoop_secure_dn_user = '""'
+
+#hadoop params
+hdfs_log_dir_prefix = config['configurations']['hadoop-env']['hdfs_log_dir_prefix']
+hadoop_pid_dir_prefix = config['configurations']['hadoop-env']['hadoop_pid_dir_prefix']
+hadoop_root_logger = config['configurations']['hadoop-env']['hadoop_root_logger']
+
+jsvc_path = "/usr/lib/bigtop-utils"
+
+hadoop_heapsize = config['configurations']['hadoop-env']['hadoop_heapsize']
+namenode_heapsize = config['configurations']['hadoop-env']['namenode_heapsize']
+namenode_opt_newsize = config['configurations']['hadoop-env']['namenode_opt_newsize']
+namenode_opt_maxnewsize = config['configurations']['hadoop-env']['namenode_opt_maxnewsize']
+namenode_opt_permsize = format_jvm_option("/configurations/hadoop-env/namenode_opt_permsize","128m")
+namenode_opt_maxpermsize = format_jvm_option("/configurations/hadoop-env/namenode_opt_maxpermsize","256m")
+
+jtnode_opt_newsize = "200m"
+jtnode_opt_maxnewsize = "200m"
+jtnode_heapsize =  "1024m"
+ttnode_heapsize = "1024m"
+
+dtnode_heapsize = config['configurations']['hadoop-env']['dtnode_heapsize']
+nfsgateway_heapsize = config['configurations']['hadoop-env']['nfsgateway_heapsize']
+mapred_pid_dir_prefix = default("/configurations/mapred-env/mapred_pid_dir_prefix","/var/run/hadoop-mapreduce")
+mapred_log_dir_prefix = default("/configurations/mapred-env/mapred_log_dir_prefix","/var/log/hadoop-mapreduce")
+hadoop_env_sh_template = config['configurations']['hadoop-env']['content']
+
+#users and groups
+hbase_user = config['configurations']['hbase-env']['hbase_user']
+smoke_user =  config['configurations']['cluster-env']['smokeuser']
+gmetad_user = config['configurations']['ganglia-env']["gmetad_user"]
+gmond_user = config['configurations']['ganglia-env']["gmond_user"]
+tez_user = config['configurations']['tez-env']["tez_user"]
+oozie_user = config['configurations']['oozie-env']["oozie_user"]
+falcon_user = config['configurations']['falcon-env']["falcon_user"]
+ranger_user = config['configurations']['ranger-env']["ranger_user"]
+zeppelin_user = config['configurations']['zeppelin-env']["zeppelin_user"]
+zeppelin_group = config['configurations']['zeppelin-env']["zeppelin_group"]
+
+user_group = config['configurations']['cluster-env']['user_group']
+
+ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_hosts", [])
+namenode_hosts = default("/clusterHostInfo/namenode_hosts", [])
+hdfs_client_hosts = default("/clusterHostInfo/hdfs_client_hosts", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+oozie_servers = default("/clusterHostInfo/oozie_server", [])
+falcon_server_hosts = default("/clusterHostInfo/falcon_server_hosts", [])
+ranger_admin_hosts = default("/clusterHostInfo/ranger_admin_hosts", [])
+zeppelin_master_hosts = default("/clusterHostInfo/zeppelin_master_hosts", [])
+
+# get the correct version to use for checking stack features
+version_for_stack_feature_checks = get_stack_feature_version(config)
+
+
+has_namenode = len(namenode_hosts) > 0
+has_hdfs_clients = len(hdfs_client_hosts) > 0
+has_hdfs = has_hdfs_clients or has_namenode
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+has_tez = 'tez-site' in config['configurations']
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_oozie_server = not len(oozie_servers) == 0
+has_falcon_server_hosts = not len(falcon_server_hosts) == 0
+has_ranger_admin = not len(ranger_admin_hosts) == 0
+has_zeppelin_master = not len(zeppelin_master_hosts) == 0
+stack_supports_zk_security = check_stack_feature(StackFeature.SECURE_ZOOKEEPER, version_for_stack_feature_checks)
+
+hostname = config['agentLevelParams']['hostname']
+hdfs_site = config['configurations']['hdfs-site']
+
+# HDFS High Availability properties
+dfs_ha_enabled = False
+dfs_ha_nameservices = default('/configurations/hdfs-site/dfs.internal.nameservices', None)
+if dfs_ha_nameservices is None:
+  dfs_ha_nameservices = default('/configurations/hdfs-site/dfs.nameservices', None)
+
+# on stacks without any filesystem there is no hdfs-site
+dfs_ha_namenode_ids_all_ns = get_properties_for_all_nameservices(hdfs_site, 'dfs.ha.namenodes') if 'hdfs-site' in config['configurations'] else {}
+dfs_ha_automatic_failover_enabled = default("/configurations/hdfs-site/dfs.ha.automatic-failover.enabled", False)
+
+# Values for the current Host
+namenode_id = None
+namenode_rpc = None
+
+dfs_ha_namenodes_ids_list = []
+other_namenode_id = None
+
+for ns, dfs_ha_namenode_ids in dfs_ha_namenode_ids_all_ns.iteritems():
+  found = False
+  if not is_empty(dfs_ha_namenode_ids):
+    dfs_ha_namenodes_ids_list = dfs_ha_namenode_ids.split(",")
+    dfs_ha_namenode_ids_array_len = len(dfs_ha_namenodes_ids_list)
+    if dfs_ha_namenode_ids_array_len > 1:
+      dfs_ha_enabled = True
+  if dfs_ha_enabled:
+    for nn_id in dfs_ha_namenodes_ids_list:
+      nn_host = config['configurations']['hdfs-site'][format('dfs.namenode.rpc-address.{ns}.{nn_id}')]
+      if hostname in nn_host:
+        namenode_id = nn_id
+        namenode_rpc = nn_host
+        found = True
+    # With HA enabled namenode_address is recomputed
+    namenode_address = format('hdfs://{ns}')
+
+    # Calculate the namenode id of the other namenode. This is needed during RU to initiate an HA failover using ZKFC.
+    if namenode_id is not None and len(dfs_ha_namenodes_ids_list) == 2:
+      other_namenode_id = list(set(dfs_ha_namenodes_ids_list) - set([namenode_id]))[0]
+
+  if found:
+    break
+
+if has_hdfs or dfs_type == 'HCFS':
+    hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+    hadoop_conf_secure_dir = os.path.join(hadoop_conf_dir, "secure")
+
+hbase_tmp_dir = "/tmp/hbase-hbase"
+
+proxyuser_group = default("/configurations/hadoop-env/proxyuser_group","users")
+ranger_group = config['configurations']['ranger-env']['ranger_group']
+dfs_cluster_administrators_group = config['configurations']['hdfs-site']["dfs.cluster.administrators"]
+
+sysprep_skip_create_users_and_groups = default("/configurations/cluster-env/sysprep_skip_create_users_and_groups", False)
+ignore_groupsusers_create = default("/configurations/cluster-env/ignore_groupsusers_create", False)
+fetch_nonlocal_groups = config['configurations']['cluster-env']["fetch_nonlocal_groups"]
+
+smoke_user_dirs = format("/tmp/hadoop-{smoke_user},/tmp/hsperfdata_{smoke_user},/home/{smoke_user},/tmp/{smoke_user}")
+if has_hbase_masters:
+  hbase_user_dirs = format("/home/{hbase_user},/tmp/{hbase_user},/usr/bin/{hbase_user},/var/log/{hbase_user},{hbase_tmp_dir}")
+#repo params
+repo_info = config['hostLevelParams']['repoInfo']
+service_repo_info = default("/hostLevelParams/service_repo_info",None)
+
+user_to_groups_dict = {}
+
+# Append new user-group mappings to the dict
+try:
+  user_group_map = ast.literal_eval(config['clusterLevelParams']['user_groups'])
+  for key in user_group_map.iterkeys():
+    user_to_groups_dict[key] = user_group_map[key]
+except ValueError:
+  print('User Group mapping (user_groups) is missing in clusterLevelParams')
+
+user_to_gid_dict = collections.defaultdict(lambda:user_group)
+
+user_list = json.loads(config['clusterLevelParams']['user_list'])
+group_list = json.loads(config['clusterLevelParams']['group_list'])
+host_sys_prepped = default("/ambariLevelParams/host_sys_prepped", False)
+
+tez_am_view_acls = config['configurations']['tez-site']["tez.am.view-acls"]
+override_uid = str(default("/configurations/cluster-env/override_uid", "true")).lower()
+
+# if NN HA is enabled on a secure cluster, access ZooKeeper securely
+if stack_supports_zk_security and dfs_ha_enabled and security_enabled:
+    hadoop_zkfc_opts=format("-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config={hadoop_conf_secure_dir}/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client")
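A minimal sketch of what the HA loop above computes, assuming a hypothetical two-NameNode nameservice (the `mycluster` name, hosts, and ports are invented for illustration):

    # hypothetical hdfs-site values
    hdfs_site = {
      'dfs.ha.namenodes.mycluster': 'nn1,nn2',
      'dfs.namenode.rpc-address.mycluster.nn1': 'host1.example.com:8020',
      'dfs.namenode.rpc-address.mycluster.nn2': 'host2.example.com:8020',
    }
    hostname = 'host2.example.com'

    ids = hdfs_site['dfs.ha.namenodes.mycluster'].split(',')
    # the id whose rpc-address matches this host becomes namenode_id
    namenode_id = next((nn_id for nn_id in ids
                        if hostname in hdfs_site['dfs.namenode.rpc-address.mycluster.' + nn_id]), None)
    # the remaining id is the failover peer used by ZKFC during a rolling upgrade
    other_namenode_id = [nn_id for nn_id in ids if nn_id != namenode_id][0]
    # namenode_id == 'nn2', other_namenode_id == 'nn1', namenode_address == 'hdfs://mycluster'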

+ 273 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-ANY/scripts/shared_initialization.py

@@ -0,0 +1,273 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+import re
+import getpass
+import tempfile
+from copy import copy
+from resource_management.libraries.functions.version import compare_versions
+from resource_management import *
+from resource_management.core import shell
+
+def setup_users():
+  """
+  Creates users before cluster installation
+  """
+  import params
+
+  should_create_users_and_groups = False
+  if params.host_sys_prepped:
+    should_create_users_and_groups = not params.sysprep_skip_create_users_and_groups
+  else:
+    should_create_users_and_groups = not params.ignore_groupsusers_create
+
+  if should_create_users_and_groups:
+    for group in params.group_list:
+      Group(group,
+      )
+
+    for user in params.user_list:
+      User(user,
+           uid = get_uid(user) if params.override_uid == "true" else None,
+           gid = params.user_to_gid_dict[user],
+           groups = params.user_to_groups_dict[user],
+           fetch_nonlocal_groups = params.fetch_nonlocal_groups,
+           )
+
+    if params.override_uid == "true":
+      set_uid(params.smoke_user, params.smoke_user_dirs)
+    else:
+      Logger.info('Skipping setting uid for smoke user as override_uid is not enabled')
+  else:
+    Logger.info('Skipping creation of User and Group as host is sys prepped or ignore_groupsusers_create flag is on')
+    pass
+
+
+  if params.has_hbase_masters:
+    Directory (params.hbase_tmp_dir,
+               owner = params.hbase_user,
+               mode=0775,
+               create_parents = True,
+               cd_access="a",
+    )
+
+    if params.override_uid == "true":
+      set_uid(params.hbase_user, params.hbase_user_dirs)
+    else:
+      Logger.info('Skipping setting uid for hbase user as override_uid is not enabled')
+
+  if should_create_users_and_groups:
+    if params.has_hdfs:
+      create_dfs_cluster_admins()
+    if params.has_tez and params.stack_version_formatted != "" and compare_versions(params.stack_version_formatted, '2.3') >= 0:
+      create_tez_am_view_acls()
+  else:
+    Logger.info('Skipping setting dfs cluster admin and tez view acls as user and group creation is disabled')
+
+def create_dfs_cluster_admins():
+  """
+  dfs.cluster.administrators supports the format <comma-delimited list of usernames><space><comma-delimited list of group names>
+  """
+  import params
+
+  groups_list = create_users_and_groups(params.dfs_cluster_administrators_group)
+
+  User(params.hdfs_user,
+    groups = params.user_to_groups_dict[params.hdfs_user] + groups_list,
+    fetch_nonlocal_groups = params.fetch_nonlocal_groups
+  )
+
+def create_tez_am_view_acls():
+
+  """
+  tez.am.view-acls supports the format <comma-delimited list of usernames><space><comma-delimited list of group names>
+  """
+  import params
+
+  if not params.tez_am_view_acls.startswith("*"):
+    create_users_and_groups(params.tez_am_view_acls)
+
+def create_users_and_groups(user_and_groups):
+
+  import params
+
+  parts = re.split(r'\s+', user_and_groups)
+  if len(parts) == 1:
+    parts.append("")
+
+  users_list = parts[0].strip(",").split(",") if parts[0] else []
+  groups_list = parts[1].strip(",").split(",") if parts[1] else []
+
+  # skip creating groups and users if * is provided as value.
+  users_list = filter(lambda x: x != '*' , users_list)
+  groups_list = filter(lambda x: x != '*' , groups_list)
+
+  if users_list:
+    User(users_list,
+          fetch_nonlocal_groups = params.fetch_nonlocal_groups
+    )
+
+  if groups_list:
+    Group(copy(groups_list),
+    )
+  return groups_list
+
+def set_uid(user, user_dirs):
+  """
+  user_dirs - comma separated directories
+  """
+  import params
+
+  File(format("{tmp_dir}/changeUid.sh"),
+       content=StaticFile("changeToSecureUid.sh"),
+       mode=0555)
+  ignore_groupsusers_create_str = str(params.ignore_groupsusers_create).lower()
+  uid = get_uid(user, return_existing=True)
+  Execute(format("{tmp_dir}/changeUid.sh {user} {user_dirs} {new_uid}", new_uid=0 if uid is None else uid),
+          not_if = format("(test $(id -u {user}) -gt 1000) || ({ignore_groupsusers_create_str})"))
+
+def get_uid(user, return_existing=False):
+  """
+  Tries to get the UID for a username. It looks for a "<user>_uid" property in the service *-env
+  configurations and, if *return_existing=True*, falls back to the UID of the existing *user*.
+
+  :param user: username to get UID for
+  :param return_existing: return UID for existing user
+  :return:
+  """
+  import params
+  user_str = str(user) + "_uid"
+  service_env = [ serviceEnv for serviceEnv in params.config['configurations'] if user_str in params.config['configurations'][serviceEnv]]
+
+  if service_env and params.config['configurations'][service_env[0]][user_str]:
+    service_env_str = str(service_env[0])
+    uid = params.config['configurations'][service_env_str][user_str]
+    if len(service_env) > 1:
+      Logger.warning("Multiple values found for %s, using %s" % (user_str, uid))
+    return uid
+  else:
+    if return_existing:
+      # pick up existing UID or try to find available UID in /etc/passwd, see changeToSecureUid.sh for more info
+      if user == params.smoke_user:
+        return None
+      File(format("{tmp_dir}/changeUid.sh"),
+           content=StaticFile("changeToSecureUid.sh"),
+           mode=0555)
+      code, newUid = shell.call(format("{tmp_dir}/changeUid.sh {user}"))
+      return int(newUid)
+    else:
+      # do not return UID for existing user, used in User resource call to let OS to choose UID for us
+      return None
+
+def setup_hadoop_env():
+  import params
+  stackversion = params.stack_version_unformatted
+  Logger.info("FS Type: {0}".format(params.dfs_type))
+  if params.has_hdfs or stackversion.find('Gluster') >= 0 or params.dfs_type == 'HCFS':
+    if params.security_enabled:
+      tc_owner = "root"
+    else:
+      tc_owner = params.hdfs_user
+
+    # create /etc/hadoop
+    Directory(params.hadoop_dir, mode=0755)
+
+    # write out hadoop-env.sh, but only if the directory exists
+    if os.path.exists(params.hadoop_conf_dir):
+      File(os.path.join(params.hadoop_conf_dir, 'hadoop-env.sh'), owner=tc_owner,
+        group=params.user_group,
+        content=InlineTemplate(params.hadoop_env_sh_template))
+
+    # Create tmp dir for java.io.tmpdir
+    # Handle a situation when /tmp is set to noexec
+    Directory(params.hadoop_java_io_tmpdir,
+              owner=params.hdfs_user,
+              group=params.user_group,
+              mode=01777
+    )
+
+def setup_java():
+  """
+  Install jdk using specific params.
+  Install ambari jdk as well if the stack and ambari jdk are different.
+  """
+  import params
+  __setup_java(custom_java_home=params.java_home, custom_jdk_name=params.jdk_name)
+  if params.ambari_java_home and params.ambari_java_home != params.java_home:
+    __setup_java(custom_java_home=params.ambari_java_home, custom_jdk_name=params.ambari_jdk_name)
+
+def __setup_java(custom_java_home, custom_jdk_name):
+  """
+  Installs jdk using specific params, that comes from ambari-server
+  """
+  import params
+  java_exec = format("{custom_java_home}/bin/java")
+
+  if not os.path.isfile(java_exec):
+    if not custom_jdk_name: # a custom, pre-installed jdk is used
+      raise Fail(format("Unable to access {java_exec}. Confirm you have copied jdk to this host."))
+
+    jdk_curl_target = format("{tmp_dir}/{custom_jdk_name}")
+    java_dir = os.path.dirname(custom_java_home)
+
+    Directory(params.artifact_dir,
+              create_parents = True,
+              )
+
+    File(jdk_curl_target,
+         content = DownloadSource(format("{jdk_location}/{custom_jdk_name}")),
+         not_if = format("test -f {jdk_curl_target}")
+         )
+
+    File(jdk_curl_target,
+         mode = 0755,
+         )
+
+    tmp_java_dir = tempfile.mkdtemp(prefix="jdk_tmp_", dir=params.tmp_dir)
+
+    try:
+      if custom_jdk_name.endswith(".bin"):
+        chmod_cmd = ("chmod", "+x", jdk_curl_target)
+        install_cmd = format("cd {tmp_java_dir} && echo A | {jdk_curl_target} -noregister && {sudo} cp -rp {tmp_java_dir}/* {java_dir}")
+      elif custom_jdk_name.endswith(".gz"):
+        chmod_cmd = ("chmod", "a+x", java_dir)
+        install_cmd = format("cd {tmp_java_dir} && tar -xf {jdk_curl_target} && {sudo} cp -rp {tmp_java_dir}/* {java_dir}")
+      else:
+        raise Fail(format("Unrecognized JDK archive extension for {custom_jdk_name}"))
+
+      Directory(java_dir
+                )
+
+      Execute(chmod_cmd,
+              sudo = True,
+              )
+
+      Execute(install_cmd,
+              )
+
+    finally:
+      Directory(tmp_java_dir, action="delete")
+
+    File(format("{custom_java_home}/bin/java"),
+         mode=0755,
+         cd_access="a",
+         )
+    Execute(('chmod', '-R', '755', custom_java_home),
+            sudo = True,
+            )
+
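The `<users><space><groups>` convention handled by create_users_and_groups above can be traced with plain Python; the property value here is hypothetical:

    import re

    value = "hdfs,admin hadoop,supergroup"   # hypothetical dfs.cluster.administrators value
    parts = re.split(r'\s+', value)
    if len(parts) == 1:
      parts.append("")
    users_list = [u for u in parts[0].strip(",").split(",") if u and u != '*']
    groups_list = [g for g in parts[1].strip(",").split(",") if g and g != '*']
    # users_list == ['hdfs', 'admin'], groups_list == ['hadoop', 'supergroup']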

+ 37 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/hook.py

@@ -0,0 +1,37 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management import Hook
+from shared_initialization import install_packages
+from repo_initialization import install_repos
+
+
+class BeforeInstallHook(Hook):
+
+  def hook(self, env):
+    import params
+
+    self.run_custom_hook('before-ANY')
+    env.set_params(params)
+    
+    install_repos()
+    install_packages()
+
+
+if __name__ == "__main__":
+  BeforeInstallHook().execute()

+ 114 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/params.py

@@ -0,0 +1,114 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from ambari_commons.constants import AMBARI_SUDO_BINARY
+from resource_management.libraries.functions.version import format_stack_version, compare_versions
+from resource_management.core.system import System
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions import default, format
+from resource_management.libraries.functions.expect import expect
+
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+sudo = AMBARI_SUDO_BINARY
+
+stack_version_unformatted = config['clusterLevelParams']['stack_version']
+agent_stack_retry_on_unavailability = config['ambariLevelParams']['agent_stack_retry_on_unavailability']
+agent_stack_retry_count = expect("/ambariLevelParams/agent_stack_retry_count", int)
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+
+#users and groups
+hbase_user = config['configurations']['hbase-env']['hbase_user']
+smoke_user =  config['configurations']['cluster-env']['smokeuser']
+gmetad_user = config['configurations']['ganglia-env']["gmetad_user"]
+gmond_user = config['configurations']['ganglia-env']["gmond_user"]
+tez_user = config['configurations']['tez-env']["tez_user"]
+
+user_group = config['configurations']['cluster-env']['user_group']
+proxyuser_group = default("/configurations/hadoop-env/proxyuser_group","users")
+
+hdfs_log_dir_prefix = config['configurations']['hadoop-env']['hdfs_log_dir_prefix']
+
+# repo templates
+repo_rhel_suse =  config['configurations']['cluster-env']['repo_suse_rhel_template']
+repo_ubuntu =  config['configurations']['cluster-env']['repo_ubuntu_template']
+
+#hosts
+hostname = config['agentLevelParams']['hostname']
+ambari_server_hostname = config['ambariLevelParams']['ambari_server_host']
+rm_host = default("/clusterHostInfo/resourcemanager_hosts", [])
+slave_hosts = default("/clusterHostInfo/datanode_hosts", [])
+oozie_servers = default("/clusterHostInfo/oozie_server", [])
+hcat_server_hosts = default("/clusterHostInfo/webhcat_server_hosts", [])
+hive_server_host =  default("/clusterHostInfo/hive_server_hosts", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+hs_host = default("/clusterHostInfo/historyserver_hosts", [])
+jtnode_host = default("/clusterHostInfo/jtnode_hosts", [])
+namenode_hosts = default("/clusterHostInfo/namenode_hosts", [])
+zk_hosts = default("/clusterHostInfo/zookeeper_server_hosts", [])
+ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_hosts", [])
+storm_server_hosts = default("/clusterHostInfo/nimbus_hosts", [])
+falcon_host =  default('/clusterHostInfo/falcon_server_hosts', [])
+
+has_namenode = len(namenode_hosts) > 0
+has_hs = not len(hs_host) == 0
+has_resourcemanager = not len(rm_host) == 0
+has_slaves = not len(slave_hosts) == 0
+has_oozie_server = not len(oozie_servers)  == 0
+has_hcat_server_host = not len(hcat_server_hosts)  == 0
+has_hive_server_host = not len(hive_server_host)  == 0
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_zk_host = not len(zk_hosts) == 0
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+has_storm_server = not len(storm_server_hosts) == 0
+has_falcon_server = not len(falcon_host) == 0
+has_tez = 'tez-site' in config['configurations']
+
+is_namenode_master = hostname in namenode_hosts
+is_jtnode_master = hostname in jtnode_host
+is_rmnode_master = hostname in rm_host
+is_hsnode_master = hostname in hs_host
+is_hbase_master = hostname in hbase_master_hosts
+is_slave = hostname in slave_hosts
+if has_ganglia_server:
+  ganglia_server_host = ganglia_server_hosts[0]
+
+hbase_tmp_dir = "/tmp/hbase-hbase"
+
+#security params
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+
+#java params
+java_home = config['ambariLevelParams']['java_home']
+artifact_dir = format("{tmp_dir}/AMBARI-artifacts/")
+jdk_name = default("/ambariLevelParams/jdk_name", None) # None when jdk is already installed by user
+jce_policy_zip = default("/ambariLevelParams/jce_name", None) # None when jdk is already installed by user
+jce_location = config['ambariLevelParams']['jdk_location']
+jdk_location = config['ambariLevelParams']['jdk_location']
+ignore_groupsusers_create = default("/configurations/cluster-env/ignore_groupsusers_create", False)
+host_sys_prepped = default("/ambariLevelParams/host_sys_prepped", False)
+
+smoke_user_dirs = format("/tmp/hadoop-{smoke_user},/tmp/hsperfdata_{smoke_user},/home/{smoke_user},/tmp/{smoke_user}")
+if has_hbase_masters:
+  hbase_user_dirs = format("/home/{hbase_user},/tmp/{hbase_user},/usr/bin/{hbase_user},/var/log/{hbase_user},{hbase_tmp_dir}")
+#repo params
+repo_info = config['hostLevelParams']['repoInfo']
+service_repo_info = default("/hostLevelParams/service_repo_info",None)
+
+repo_file = default("/repositoryFile", None)
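These params modules lean on `default()` for optional config paths; a rough sketch of the pattern (not the library's actual implementation, and the config dict is hypothetical):

    config = {'clusterHostInfo': {'namenode_hosts': ['nn1.example.com']}}

    def default(path, fallback):
      node = config
      for key in path.strip('/').split('/'):
        if not isinstance(node, dict) or key not in node:
          return fallback
        node = node[key]
      return node

    print default("/clusterHostInfo/namenode_hosts", [])       # ['nn1.example.com']
    print default("/clusterHostInfo/falcon_server_hosts", [])  # []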

+ 76 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/repo_initialization.py

@@ -0,0 +1,76 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from ambari_commons.os_check import OSCheck
+from resource_management.libraries.resources.repository import Repository
+from resource_management.libraries.functions.repository_util import CommandRepository, UBUNTU_REPO_COMPONENTS_POSTFIX
+from resource_management.libraries.script.script import Script
+from resource_management.core.logger import Logger
+import ambari_simplejson as json
+
+
+def _alter_repo(action, repo_dicts, repo_template):
+  """
+  @param action: "delete" or "create"
+  @param repo_dicts: e.g. "[{\"baseUrl\":\"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.0\",\"osType\":\"centos6\",\"repoId\":\"HDP-2.0._\",\"repoName\":\"HDP\",\"defaultBaseUrl\":\"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.0\"}]"
+  """
+  if not isinstance(repo_dicts, list):
+    repo_dicts = [repo_dicts]
+
+  if 0 == len(repo_dicts):
+    Logger.info("Repository list is empty. Ambari may not be managing the repositories.")
+  else:
+    Logger.info("Initializing {0} repositories".format(str(len(repo_dicts))))
+
+  for repo in repo_dicts:
+    if 'baseUrl' not in repo:
+      repo['baseUrl'] = None
+    if 'mirrorsList' not in repo:
+      repo['mirrorsList'] = None
+
+    ubuntu_components = [ repo['distribution'] if 'distribution' in repo and repo['distribution'] else repo['repoName'] ] \
+                        + [repo['components'].replace(",", " ") if 'components' in repo and repo['components'] else UBUNTU_REPO_COMPONENTS_POSTFIX]
+
+    Repository(repo['repoId'],
+               action = "prepare",
+               base_url = repo['baseUrl'],
+               mirror_list = repo['mirrorsList'],
+               repo_file_name = repo['repoName'],
+               repo_template = repo_template,
+               components = ubuntu_components) # ubuntu specific
+
+  Repository(None, action = "create")
+
+
+def install_repos():
+  import params
+  if params.host_sys_prepped:
+    return
+
+  # use this newer way of specifying repositories, if available
+  if params.repo_file is not None:
+    Script.repository_util.create_repo_files()
+    return
+
+  template = params.repo_rhel_suse if OSCheck.is_suse_family() or OSCheck.is_redhat_family() else params.repo_ubuntu
+
+  _alter_repo("create", params.repo_info, template)
+
+  if params.service_repo_info:
+    _alter_repo("create", params.service_repo_info, template)

+ 37 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-INSTALL/scripts/shared_initialization.py

@@ -0,0 +1,37 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management.libraries.functions import stack_tools
+from resource_management.libraries.functions.version import compare_versions
+from resource_management.core.resources.packaging import Package
+
+def install_packages():
+  import params
+  if params.host_sys_prepped:
+    return
+
+  packages = ['unzip', 'curl']
+  # if params.stack_version_formatted != "" and compare_versions(params.stack_version_formatted, '2.2') >= 0:
+  #   stack_selector_package = stack_tools.get_stack_tool_package(stack_tools.STACK_SELECTOR_NAME)
+  #   packages.append(stack_selector_package)
+  Package(packages,
+          retry_on_repo_unavailability=params.agent_stack_retry_on_unavailability,
+          retry_count=params.agent_stack_retry_count)

+ 30 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-RESTART/scripts/hook.py

@@ -0,0 +1,30 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management import Hook
+
+
+class BeforeRestartHook(Hook):
+
+  def hook(self, env):
+    self.run_custom_hook('before-START')
+
+
+if __name__ == "__main__":
+  BeforeRestartHook().execute()
+

+ 39 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-SET_KEYTAB/scripts/hook.py

@@ -0,0 +1,39 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management import Hook
+
+
+class BeforeSetKeytabHook(Hook):
+
+  def hook(self, env):
+    """
+    This will invoke the before-ANY hook which contains all of the user and group creation logic.
+    Keytab regeneration requires all users are already created, which is usually done by the
+    before-INSTALL hook. However, if the keytab regeneration is executed as part of an upgrade,
+    then the before-INSTALL hook never ran.
+
+    :param env:
+    :return:
+    """
+    self.run_custom_hook('before-ANY')
+
+
+if __name__ == "__main__":
+  BeforeSetKeytabHook().execute()
+

+ 65 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/checkForFormat.sh

@@ -0,0 +1,65 @@
+#!/usr/bin/env bash
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#
+
+export hdfs_user=$1
+shift
+export conf_dir=$1
+shift
+export bin_dir=$1
+shift
+export mark_dir=$1
+shift
+export name_dirs=$*
+
+export EXIT_CODE=0
+export command="namenode -format"
+export list_of_non_empty_dirs=""
+
+mark_file=/var/run/hadoop/hdfs/namenode-formatted
+if [[ -f ${mark_file} ]] ; then
+  /var/lib/ambari-agent/ambari-sudo.sh rm -f ${mark_file}
+  /var/lib/ambari-agent/ambari-sudo.sh mkdir -p ${mark_dir}
+fi
+
+if [[ ! -d $mark_dir ]] ; then
+  for dir in `echo $name_dirs | tr ',' ' '` ; do
+    echo "NameNode Dirname = $dir"
+    cmd="ls $dir | wc -l  | grep -q ^0$"
+    eval $cmd
+    if [[ $? -ne 0 ]] ; then
+      (( EXIT_CODE = $EXIT_CODE + 1 ))
+      list_of_non_empty_dirs="$list_of_non_empty_dirs $dir"
+    fi
+  done
+
+  if [[ $EXIT_CODE == 0 ]] ; then
+    /var/lib/ambari-agent/ambari-sudo.sh su ${hdfs_user} - -s /bin/bash -c "export PATH=$PATH:$bin_dir ; yes Y | hdfs --config ${conf_dir} ${command}"
+    (( EXIT_CODE = $EXIT_CODE | $? ))
+  else
+    echo "ERROR: Namenode directory(s) is non empty. Will not format the namenode. List of non-empty namenode dirs ${list_of_non_empty_dirs}"
+  fi
+else
+  echo "${mark_dir} exists. Namenode DFS already formatted"
+fi
+
+exit $EXIT_CODE
+

BIN
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/fast-hdfs-resource.jar


+ 134 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/task-log4j.properties

@@ -0,0 +1,134 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#
+
+
+# Define some default values that can be overridden by system properties
+hadoop.root.logger=INFO,console
+hadoop.log.dir=.
+hadoop.log.file=hadoop.log
+
+#
+# Job Summary Appender 
+#
+# Use following logger to send summary to separate file defined by 
+# hadoop.mapreduce.jobsummary.log.file rolled daily:
+# hadoop.mapreduce.jobsummary.logger=INFO,JSA
+# 
+hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
+hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
+
+# Define the root logger to the system property "hadoop.root.logger".
+log4j.rootLogger=${hadoop.root.logger}, EventCounter
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this 
+#
+
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+#
+# TaskLog Appender
+#
+
+#Default values
+hadoop.tasklog.taskid=null
+hadoop.tasklog.iscleanup=false
+hadoop.tasklog.noKeepSplits=4
+hadoop.tasklog.totalLogFileSize=100
+hadoop.tasklog.purgeLogSplits=true
+hadoop.tasklog.logsRetainHours=12
+
+log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
+log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
+log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
+log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
+
+log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
+log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+#
+# Rolling File Appender
+#
+
+#log4j.appender.RFA=org.apache.log4j.RollingFileAppender
+#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Logfile size and 30-day backups
+#log4j.appender.RFA.MaxFileSize=1MB
+#log4j.appender.RFA.MaxBackupIndex=30
+
+#log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
+#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
+#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+# Custom Logging levels
+
+hadoop.metrics.log.level=INFO
+#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
+#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}
+
+# Jets3t library
+log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
+
+#
+# Null Appender
+# Trap security logger on the hadoop client side
+#
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# Event Counter Appender
+# Sends counts of logging messages at different severity levels to Hadoop Metrics.
+#
+log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
+ 
+# Removes "deprecated" messages
+log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN

+ 66 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/files/topology_script.py

@@ -0,0 +1,66 @@
+#!/usr/bin/env python
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import sys, os
+from string import join
+import ConfigParser
+
+
+DEFAULT_RACK = "/default-rack"
+DATA_FILE_NAME =  os.path.dirname(os.path.abspath(__file__)) + "/topology_mappings.data"
+SECTION_NAME = "network_topology"
+
+class TopologyScript():
+
+  def load_rack_map(self):
+    try:
+      #RACK_MAP contains both host name vs rack and ip vs rack mappings
+      mappings = ConfigParser.ConfigParser()
+      mappings.read(DATA_FILE_NAME)
+      return dict(mappings.items(SECTION_NAME))
+    except ConfigParser.NoSectionError:
+      return {}
+
+  def get_racks(self, rack_map, args):
+    if len(args) == 1:
+      return DEFAULT_RACK
+    else:
+      return join([self.lookup_by_hostname_or_ip(input_argument, rack_map) for input_argument in args[1:]],)
+
+  def lookup_by_hostname_or_ip(self, hostname_or_ip, rack_map):
+    #try looking up by hostname
+    rack = rack_map.get(hostname_or_ip)
+    if rack is not None:
+      return rack
+    #try looking up by ip
+    rack = rack_map.get(self.extract_ip(hostname_or_ip))
+    #try by localhost since hadoop could be passing in 127.0.0.1 which might not be mapped
+    return rack if rack is not None else rack_map.get("localhost.localdomain", DEFAULT_RACK)
+
+  #strips out port and slashes in case hadoop passes in something like 127.0.0.1/127.0.0.1:50010
+  def extract_ip(self, container_string):
+    return container_string.split("/")[0].split(":")[0]
+
+  def execute(self, args):
+    rack_map = self.load_rack_map()
+    rack = self.get_racks(rack_map, args)
+    print rack
+
+if __name__ == "__main__":
+  TopologyScript().execute(sys.argv)
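The script resolves racks from an INI-style data file written next to it; a small Python 2 sketch of that lookup with hypothetical mappings:

    import ConfigParser
    import StringIO

    data = ("[network_topology]\n"
            "host1.example.com=/rack01\n"
            "192.168.1.10=/rack01\n")   # hypothetical topology_mappings.data contents
    mappings = ConfigParser.ConfigParser()
    mappings.readfp(StringIO.StringIO(data))
    rack_map = dict(mappings.items("network_topology"))
    print rack_map.get("host1.example.com", "/default-rack")   # /rack01
    print rack_map.get("192.168.1.99", "/default-rack")        # /default-rack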

+ 173 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/custom_extensions.py

@@ -0,0 +1,173 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management.core.resources import Directory
+from resource_management.core.resources import Execute
+from resource_management.libraries.functions import default
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions import format
+
+
+DEFAULT_HADOOP_HDFS_EXTENSION_DIR = "/hdp/ext/{0}/hadoop"
+DEFAULT_HADOOP_HIVE_EXTENSION_DIR = "/hdp/ext/{0}/hive"
+DEFAULT_HADOOP_HBASE_EXTENSION_DIR = "/hdp/ext/{0}/hbase"
+
+def setup_extensions():
+  """
+  Distributes extensions (for example, jar files) from HDFS
+  (/hdp/ext/{major_stack_version}/{service_name}) to all nodes that run the related
+  components of a service (YARN, HIVE or HBASE). Extensions must be added to HDFS
+  manually by the user.
+  """
+
+  import params
+
+  # Hadoop Custom extensions
+  hadoop_custom_extensions_enabled = default("/configurations/core-site/hadoop.custom-extensions.enabled", False)
+  hadoop_custom_extensions_services = default("/configurations/core-site/hadoop.custom-extensions.services", "")
+  hadoop_custom_extensions_owner = default("/configurations/core-site/hadoop.custom-extensions.owner", params.hdfs_user)
+  hadoop_custom_extensions_hdfs_dir = get_config_formatted_value(default("/configurations/core-site/hadoop.custom-extensions.root",
+                                                 DEFAULT_HADOOP_HDFS_EXTENSION_DIR.format(params.major_stack_version)))
+  hadoop_custom_extensions_services = [ service.strip().upper() for service in hadoop_custom_extensions_services.split(",") ]
+  hadoop_custom_extensions_services.append("YARN")
+
+  hadoop_custom_extensions_local_dir = "{0}/current/ext/hadoop".format(Script.get_stack_root())
+
+  if params.current_service in hadoop_custom_extensions_services:
+    clean_extensions(hadoop_custom_extensions_local_dir)
+    if hadoop_custom_extensions_enabled:
+      download_extensions(hadoop_custom_extensions_owner, params.user_group,
+                          hadoop_custom_extensions_hdfs_dir,
+                          hadoop_custom_extensions_local_dir)
+
+  setup_extensions_hive()
+
+  hbase_custom_extensions_services = ["HBASE"]
+  if params.current_service in hbase_custom_extensions_services:
+    setup_hbase_extensions()
+
+
+def setup_hbase_extensions():
+  import params
+
+  # HBase Custom extensions
+  hbase_custom_extensions_enabled = default("/configurations/hbase-site/hbase.custom-extensions.enabled", False)
+  hbase_custom_extensions_owner = default("/configurations/hbase-site/hbase.custom-extensions.owner", params.hdfs_user)
+  hbase_custom_extensions_hdfs_dir = get_config_formatted_value(default("/configurations/hbase-site/hbase.custom-extensions.root",
+                                                DEFAULT_HADOOP_HBASE_EXTENSION_DIR.format(params.major_stack_version)))
+  hbase_custom_extensions_local_dir = "{0}/current/ext/hbase".format(Script.get_stack_root())
+
+  impacted_components = ['HBASE_MASTER', 'HBASE_REGIONSERVER', 'PHOENIX_QUERY_SERVER']
+  role = params.config.get('role','')
+
+  if role in impacted_components:
+    clean_extensions(hbase_custom_extensions_local_dir)
+    if hbase_custom_extensions_enabled:
+      download_extensions(hbase_custom_extensions_owner, params.user_group,
+                          hbase_custom_extensions_hdfs_dir,
+                          hbase_custom_extensions_local_dir)
+
+
+def setup_extensions_hive():
+  import params
+
+  hive_custom_extensions_enabled = default("/configurations/hive-site/hive.custom-extensions.enabled", False)
+  hive_custom_extensions_owner = default("/configurations/hive-site/hive.custom-extensions.owner", params.hdfs_user)
+  hive_custom_extensions_hdfs_dir = DEFAULT_HADOOP_HIVE_EXTENSION_DIR.format(params.major_stack_version)
+
+  hive_custom_extensions_local_dir = "{0}/current/ext/hive".format(Script.get_stack_root())
+
+  impacted_components = ['HIVE_SERVER', 'HIVE_CLIENT']
+  role = params.config.get('role','')
+
+  # Run copying for HIVE_SERVER and HIVE_CLIENT
+  if params.current_service == 'HIVE' and role in impacted_components:
+    clean_extensions(hive_custom_extensions_local_dir)
+    if hive_custom_extensions_enabled:
+      download_extensions(hive_custom_extensions_owner, params.user_group,
+                          hive_custom_extensions_hdfs_dir,
+                          hive_custom_extensions_local_dir)
+
+def download_extensions(owner_user, owner_group, hdfs_source_dir, local_target_dir):
+  """
+  :param owner_user: user owner of the HDFS directory
+  :param owner_group: group owner of the HDFS directory
+  :param hdfs_source_dir: the HDFS directory from where the files are being pull
+  :param local_target_dir: the location of where to download the files
+  :return: Will return True if successful, otherwise, False.
+  """
+  import params
+
+  if not os.path.isdir(local_target_dir):
+    extensions_tmp_dir=format("{tmp_dir}/custom_extensions")
+    Directory(local_target_dir,
+              owner="root",
+              mode=0755,
+              group="root",
+              create_parents=True)
+
+    params.HdfsResource(hdfs_source_dir,
+                        type="directory",
+                        action="create_on_execute",
+                        owner=owner_user,
+                        group=owner_group,
+                        mode=0755)
+
+    Directory(extensions_tmp_dir,
+              owner=params.hdfs_user,
+              mode=0755,
+              create_parents=True)
+
+    # copy from hdfs to /tmp
+    params.HdfsResource(extensions_tmp_dir,
+                        type="directory",
+                        action="download_on_execute",
+                        source=hdfs_source_dir,
+                        user=params.hdfs_user,
+                        mode=0644,
+                        replace_existing_files=True)
+
+    # Execute does not quote the glob reliably, so run the move as a single shell command string.
+    cmd = format("{sudo} mv {extensions_tmp_dir}/* {local_target_dir}")
+    only_if_cmd = "ls -d {extensions_tmp_dir}/*".format(extensions_tmp_dir=extensions_tmp_dir)
+    Execute(cmd, only_if=only_if_cmd)
+
+    only_if_local = 'ls -d "{local_target_dir}"'.format(local_target_dir=local_target_dir)
+    Execute(("chown", "-R", "root:root", local_target_dir),
+            sudo=True,
+            only_if=only_if_local)
+
+    params.HdfsResource(None,action="execute")
+  return True
+
+def clean_extensions(local_dir):
+  """
+  :param local_dir: The local directory where the extensions are stored.
+  :return: Will return True if successful, otherwise, False.
+  """
+  if os.path.isdir(local_dir):
+    Directory(local_dir,
+              action="delete")
+  return True
+
+def get_config_formatted_value(property_value):
+  return format(property_value.replace("{{", "{").replace("}}", "}"))
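A sketch of the config values that drive setup_extensions and of what get_config_formatted_value does with the doubled braces (the property names appear above; the values here are hypothetical):

    core_site = {
      'hadoop.custom-extensions.enabled': True,
      'hadoop.custom-extensions.services': 'HIVE',   # YARN is always appended
      'hadoop.custom-extensions.root': '/hdp/ext/{{major_stack_version}}/hadoop',
    }
    # unescape so resource_management's format() can substitute from the caller's scope
    template = core_site['hadoop.custom-extensions.root'].replace("{{", "{").replace("}}", "}")
    print template   # /hdp/ext/{major_stack_version}/hadoop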

+ 43 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/hook.py

@@ -0,0 +1,43 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from rack_awareness import create_topology_script_and_mapping
+from shared_initialization import setup_hadoop, setup_configs, create_javahome_symlink, setup_unlimited_key_jce_policy, \
+  Hook
+from custom_extensions import setup_extensions
+
+
+class BeforeStartHook(Hook):
+
+  def hook(self, env):
+    import params
+
+    self.run_custom_hook('before-ANY')
+    env.set_params(params)
+
+    setup_hadoop()
+    setup_configs()
+    create_javahome_symlink()
+    create_topology_script_and_mapping()
+    setup_unlimited_key_jce_policy()
+    if params.stack_supports_hadoop_custom_extensions:
+      setup_extensions()
+
+
+if __name__ == "__main__":
+  BeforeStartHook().execute()

+ 382 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/params.py

@@ -0,0 +1,382 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import default
+from resource_management.libraries.functions import format_jvm_option
+from resource_management.libraries.functions import format
+from resource_management.libraries.functions.version import format_stack_version, compare_versions, get_major_version
+from ambari_commons.os_check import OSCheck
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions import get_kinit_path
+from resource_management.libraries.functions.get_not_managed_resources import get_not_managed_resources
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.stack_features import get_stack_feature_version
+from resource_management.libraries.functions import StackFeature
+from ambari_commons.constants import AMBARI_SUDO_BINARY
+
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+artifact_dir = tmp_dir + "/AMBARI-artifacts"
+
+version_for_stack_feature_checks = get_stack_feature_version(config)
+stack_supports_hadoop_custom_extensions = check_stack_feature(StackFeature.HADOOP_CUSTOM_EXTENSIONS, version_for_stack_feature_checks)
+
+sudo = AMBARI_SUDO_BINARY
+
+# Global flag enabling or disabling the sysprep feature
+host_sys_prepped = default("/ambariLevelParams/host_sys_prepped", False)
+
+# Whether to skip copying fast-hdfs-resource.jar to /var/lib/ambari-agent/lib/
+# This is required if tarballs are going to be copied to HDFS, so set to False
+sysprep_skip_copy_fast_jar_hdfs = host_sys_prepped and default("/configurations/cluster-env/sysprep_skip_copy_fast_jar_hdfs", False)
+
+# Whether to skip setting up the unlimited key JCE policy
+sysprep_skip_setup_jce = host_sys_prepped and default("/configurations/cluster-env/sysprep_skip_setup_jce", False)
+
+stack_version_unformatted = config['clusterLevelParams']['stack_version']
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+major_stack_version = get_major_version(stack_version_formatted)
+
+dfs_type = default("/clusterLevelParams/dfs_type", "")
+hadoop_conf_dir = "/etc/hadoop/conf"
+component_list = default("/localComponents", [])
+
+hdfs_tmp_dir = default("/configurations/hadoop-env/hdfs_tmp_dir", "/tmp")
+
+hadoop_metrics2_properties_content = None
+if 'hadoop-metrics2.properties' in config['configurations']:
+  hadoop_metrics2_properties_content = config['configurations']['hadoop-metrics2.properties']['content']
+
+hadoop_libexec_dir = stack_select.get_hadoop_dir("libexec")
+hadoop_lib_home = stack_select.get_hadoop_dir("lib")
+hadoop_bin = stack_select.get_hadoop_dir("sbin")
+
+mapreduce_libs_path = "/usr/hdp/current/hadoop-mapreduce-client/*"
+hadoop_home = stack_select.get_hadoop_dir("home")
+create_lib_snappy_symlinks = False
+  
+current_service = config['serviceName']
+
+#security params
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+
+ambari_server_resources_url = default("/ambariLevelParams/jdk_location", None)
+if ambari_server_resources_url is not None and ambari_server_resources_url.endswith('/'):
+  ambari_server_resources_url = ambari_server_resources_url[:-1]
+
+# Unlimited key JCE policy params
+jce_policy_zip = default("/ambariLevelParams/jce_name", None) # None when jdk is already installed by user
+unlimited_key_jce_required = default("/componentLevelParams/unlimited_key_jce_required", False)
+jdk_name = default("/ambariLevelParams/jdk_name", None)
+java_home = default("/ambariLevelParams/java_home", None)
+java_exec = "{0}/bin/java".format(java_home) if java_home is not None else "/bin/java"
+
+#users and groups
+has_hadoop_env = 'hadoop-env' in config['configurations']
+mapred_user = config['configurations']['mapred-env']['mapred_user']
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user']
+yarn_user = config['configurations']['yarn-env']['yarn_user']
+
+user_group = config['configurations']['cluster-env']['user_group']
+
+#hosts
+hostname = config['agentLevelParams']['hostname']
+ambari_server_hostname = config['ambariLevelParams']['ambari_server_host']
+rm_host = default("/clusterHostInfo/resourcemanager_hosts", [])
+slave_hosts = default("/clusterHostInfo/datanode_hosts", [])
+oozie_servers = default("/clusterHostInfo/oozie_server", [])
+hcat_server_hosts = default("/clusterHostInfo/webhcat_server_hosts", [])
+hive_server_host =  default("/clusterHostInfo/hive_server_hosts", [])
+hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", [])
+hs_host = default("/clusterHostInfo/historyserver_hosts", [])
+jtnode_host = default("/clusterHostInfo/jtnode_hosts", [])
+namenode_hosts = default("/clusterHostInfo/namenode_hosts", [])
+hdfs_client_hosts = default("/clusterHostInfo/hdfs_client_hosts", [])
+zk_hosts = default("/clusterHostInfo/zookeeper_server_hosts", [])
+ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_hosts", [])
+cluster_name = config["clusterName"]
+set_instanceId = "false"
+if 'cluster-env' in config['configurations'] and \
+    'metrics_collector_external_hosts' in config['configurations']['cluster-env']:
+  ams_collector_hosts = config['configurations']['cluster-env']['metrics_collector_external_hosts']
+  set_instanceId = "true"
+else:
+  ams_collector_hosts = ",".join(default("/clusterHostInfo/metrics_collector_hosts", []))
+
+has_namenode = len(namenode_hosts) > 0
+has_hdfs_clients = len(hdfs_client_hosts) > 0
+has_hdfs = has_hdfs_clients or has_namenode
+has_resourcemanager = not len(rm_host) == 0
+has_slaves = not len(slave_hosts) == 0
+has_oozie_server = not len(oozie_servers) == 0
+has_hcat_server_host = not len(hcat_server_hosts) == 0
+has_hive_server_host = not len(hive_server_host) == 0
+has_hbase_masters = not len(hbase_master_hosts) == 0
+has_zk_host = not len(zk_hosts) == 0
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+has_metric_collector = not len(ams_collector_hosts) == 0
+
+is_namenode_master = hostname in namenode_hosts
+is_jtnode_master = hostname in jtnode_host
+is_rmnode_master = hostname in rm_host
+is_hsnode_master = hostname in hs_host
+is_hbase_master = hostname in hbase_master_hosts
+is_slave = hostname in slave_hosts
+
+if has_ganglia_server:
+  ganglia_server_host = ganglia_server_hosts[0]
+
+metric_collector_port = None
+if has_metric_collector:
+  if 'cluster-env' in config['configurations'] and \
+      'metrics_collector_external_port' in config['configurations']['cluster-env']:
+    metric_collector_port = config['configurations']['cluster-env']['metrics_collector_external_port']
+  else:
+    metric_collector_web_address = default("/configurations/ams-site/timeline.metrics.service.webapp.address", "0.0.0.0:6188")
+    if metric_collector_web_address.find(':') != -1:
+      metric_collector_port = metric_collector_web_address.split(':')[1]
+    else:
+      metric_collector_port = '6188'
+  if default("/configurations/ams-site/timeline.metrics.service.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+    metric_collector_protocol = 'https'
+  else:
+    metric_collector_protocol = 'http'
+  metric_truststore_path= default("/configurations/ams-ssl-client/ssl.client.truststore.location", "")
+  metric_truststore_type= default("/configurations/ams-ssl-client/ssl.client.truststore.type", "")
+  metric_truststore_password= default("/configurations/ams-ssl-client/ssl.client.truststore.password", "")
+  metric_legacy_hadoop_sink = check_stack_feature(StackFeature.AMS_LEGACY_HADOOP_SINK, version_for_stack_feature_checks)
+
+  pass
+
+metrics_report_interval = default("/configurations/ams-site/timeline.metrics.sink.report.interval", 60)
+metrics_collection_period = default("/configurations/ams-site/timeline.metrics.sink.collection.period", 10)
+
+host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
+host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
+
+# Cluster Zookeeper quorum
+zookeeper_quorum = None
+if has_zk_host:
+  if 'zoo.cfg' in config['configurations'] and 'clientPort' in config['configurations']['zoo.cfg']:
+    zookeeper_clientPort = config['configurations']['zoo.cfg']['clientPort']
+  else:
+    zookeeper_clientPort = '2181'
+  zookeeper_quorum = (':' + zookeeper_clientPort + ',').join(config['clusterHostInfo']['zookeeper_server_hosts'])
+  # last port config
+  zookeeper_quorum += ':' + zookeeper_clientPort
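+  # e.g. with zookeeper_server_hosts ['zk1', 'zk2'] (hypothetical) and clientPort 2181,
+  # zookeeper_quorum becomes "zk1:2181,zk2:2181"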
+
+#hadoop params
+
+if has_namenode or dfs_type == 'HCFS':
+  hadoop_tmp_dir = format("/tmp/hadoop-{hdfs_user}")
+  hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+  task_log4j_properties_location = os.path.join(hadoop_conf_dir, "task-log4j.properties")
+
+hadoop_pid_dir_prefix = config['configurations']['hadoop-env']['hadoop_pid_dir_prefix']
+hdfs_log_dir_prefix = config['configurations']['hadoop-env']['hdfs_log_dir_prefix']
+hbase_tmp_dir = "/tmp/hbase-hbase"
+#db params
+oracle_driver_symlink_url = format("{ambari_server_resources_url}/oracle-jdbc-driver.jar")
+mysql_driver_symlink_url = format("{ambari_server_resources_url}/mysql-jdbc-driver.jar")
+
+if has_namenode and 'rca_enabled' in config['configurations']['hadoop-env']:
+  rca_enabled =  config['configurations']['hadoop-env']['rca_enabled']
+else:
+  rca_enabled = False
+rca_disabled_prefix = "###"
+if rca_enabled:
+  rca_prefix = ""
+else:
+  rca_prefix = rca_disabled_prefix
+
+#hadoop-env.sh
+
+jsvc_path = "/usr/lib/bigtop-utils"
+
+hadoop_heapsize = config['configurations']['hadoop-env']['hadoop_heapsize']
+namenode_heapsize = config['configurations']['hadoop-env']['namenode_heapsize']
+namenode_opt_newsize = config['configurations']['hadoop-env']['namenode_opt_newsize']
+namenode_opt_maxnewsize = config['configurations']['hadoop-env']['namenode_opt_maxnewsize']
+namenode_opt_permsize = format_jvm_option("/configurations/hadoop-env/namenode_opt_permsize","128m")
+namenode_opt_maxpermsize = format_jvm_option("/configurations/hadoop-env/namenode_opt_maxpermsize","256m")
+
+jtnode_opt_newsize = "200m"
+jtnode_opt_maxnewsize = "200m"
+jtnode_heapsize =  "1024m"
+ttnode_heapsize = "1024m"
+
+dtnode_heapsize = config['configurations']['hadoop-env']['dtnode_heapsize']
+mapred_pid_dir_prefix = default("/configurations/mapred-env/mapred_pid_dir_prefix","/var/run/hadoop-mapreduce")
+mapred_log_dir_prefix = default("/configurations/mapred-env/mapred_log_dir_prefix","/var/log/hadoop-mapreduce")
+
+#log4j.properties
+
+yarn_log_dir_prefix = default("/configurations/yarn-env/yarn_log_dir_prefix","/var/log/hadoop-yarn")
+
+dfs_hosts = default('/configurations/hdfs-site/dfs.hosts', None)
+
+# Hdfs log4j settings
+hadoop_log_max_backup_size = default('/configurations/hdfs-log4j/hadoop_log_max_backup_size', 256)
+hadoop_log_number_of_backup_files = default('/configurations/hdfs-log4j/hadoop_log_number_of_backup_files', 10)
+hadoop_security_log_max_backup_size = default('/configurations/hdfs-log4j/hadoop_security_log_max_backup_size', 256)
+hadoop_security_log_number_of_backup_files = default('/configurations/hdfs-log4j/hadoop_security_log_number_of_backup_files', 20)
+
+# Yarn log4j settings
+yarn_rm_summary_log_max_backup_size = default('/configurations/yarn-log4j/yarn_rm_summary_log_max_backup_size', 256)
+yarn_rm_summary_log_number_of_backup_files = default('/configurations/yarn-log4j/yarn_rm_summary_log_number_of_backup_files', 20)
+
+#log4j.properties
+if (('hdfs-log4j' in config['configurations']) and ('content' in config['configurations']['hdfs-log4j'])):
+  log4j_props = config['configurations']['hdfs-log4j']['content']
+  if (('yarn-log4j' in config['configurations']) and ('content' in config['configurations']['yarn-log4j'])):
+    log4j_props += config['configurations']['yarn-log4j']['content']
+else:
+  log4j_props = None
+
+refresh_topology = False
+command_params = config["commandParams"] if "commandParams" in config else None
+if command_params is not None:
+  refresh_topology = bool(command_params["refresh_topology"]) if "refresh_topology" in command_params else False
+
+ambari_java_home = default("/commandParams/ambari_java_home", None)
+ambari_jdk_name = default("/commandParams/ambari_jdk_name", None)
+ambari_jce_name = default("/commandParams/ambari_jce_name", None)
+
+ambari_libs_dir = "/var/lib/ambari-agent/lib"
+is_webhdfs_enabled = config['configurations']['hdfs-site']['dfs.webhdfs.enabled']
+default_fs = config['configurations']['core-site']['fs.defaultFS']
+
+#host info
+all_hosts = default("/clusterHostInfo/all_hosts", [])
+all_racks = default("/clusterHostInfo/all_racks", [])
+all_ipv4_ips = default("/clusterHostInfo/all_ipv4_ips", [])
+slave_hosts = default("/clusterHostInfo/datanode_hosts", [])
+
+#topology files
+net_topology_script_file_path = "/etc/hadoop/conf/topology_script.py"
+net_topology_script_dir = os.path.dirname(net_topology_script_file_path)
+net_topology_mapping_data_file_name = 'topology_mappings.data'
+net_topology_mapping_data_file_path = os.path.join(net_topology_script_dir, net_topology_mapping_data_file_name)
+
+#Added logic to create /tmp and /user directories for the HCFS stack.
+has_core_site = 'core-site' in config['configurations']
+hdfs_user_keytab = config['configurations']['hadoop-env']['hdfs_user_keytab']
+kinit_path_local = get_kinit_path()
+stack_version_unformatted = config['clusterLevelParams']['stack_version']
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+hdfs_principal_name = default('/configurations/hadoop-env/hdfs_principal_name', None)
+hdfs_site = config['configurations']['hdfs-site']
+smoke_user = config['configurations']['cluster-env']['smokeuser']
+smoke_hdfs_user_dir = format("/user/{smoke_user}")
+smoke_hdfs_user_mode = 0770
+
+
+##### Namenode RPC ports - metrics config section start #####
+
+# Figure out the rpc ports for current namenode
+nn_rpc_client_port = None
+nn_rpc_dn_port = None
+nn_rpc_healthcheck_port = None
+
+namenode_id = None
+namenode_rpc = None
+
+dfs_ha_enabled = False
+dfs_ha_nameservices = default('/configurations/hdfs-site/dfs.internal.nameservices', None)
+if dfs_ha_nameservices is None:
+  dfs_ha_nameservices = default('/configurations/hdfs-site/dfs.nameservices', None)
+dfs_ha_namenode_ids = default(format("/configurations/hdfs-site/dfs.ha.namenodes.{dfs_ha_nameservices}"), None)
+
+dfs_ha_namemodes_ids_list = []
+other_namenode_id = None
+
+if dfs_ha_namenode_ids:
+  dfs_ha_namemodes_ids_list = dfs_ha_namenode_ids.split(",")
+  dfs_ha_namenode_ids_array_len = len(dfs_ha_namemodes_ids_list)
+  if dfs_ha_namenode_ids_array_len > 1:
+    dfs_ha_enabled = True
+
+if dfs_ha_enabled:
+  for nn_id in dfs_ha_namemodes_ids_list:
+    nn_host = config['configurations']['hdfs-site'][format('dfs.namenode.rpc-address.{dfs_ha_nameservices}.{nn_id}')]
+    if hostname.lower() in nn_host.lower():
+      namenode_id = nn_id
+      namenode_rpc = nn_host
+else:
+  namenode_rpc = default('/configurations/hdfs-site/dfs.namenode.rpc-address', default_fs)
+
+# if HDFS is not installed in the cluster, then don't try to access namenode_rpc
+if has_namenode and namenode_rpc and "core-site" in config['configurations']:
+  port_str = namenode_rpc.split(':')[-1].strip()
+  try:
+    nn_rpc_client_port = int(port_str)
+  except ValueError:
+    nn_rpc_client_port = None
+
+if dfs_ha_enabled:
+  dfs_service_rpc_address = default(format('/configurations/hdfs-site/dfs.namenode.servicerpc-address.{dfs_ha_nameservices}.{namenode_id}'), None)
+  dfs_lifeline_rpc_address = default(format('/configurations/hdfs-site/dfs.namenode.lifeline.rpc-address.{dfs_ha_nameservices}.{namenode_id}'), None)
+else:
+  dfs_service_rpc_address = default('/configurations/hdfs-site/dfs.namenode.servicerpc-address', None)
+  dfs_lifeline_rpc_address = default('/configurations/hdfs-site/dfs.namenode.lifeline.rpc-address', None)
+
+if dfs_service_rpc_address:
+  nn_rpc_dn_port = dfs_service_rpc_address.split(':')[1].strip()
+
+if dfs_lifeline_rpc_address:
+  nn_rpc_healthcheck_port = dfs_lifeline_rpc_address.split(':')[1].strip()
+
+is_nn_client_port_configured = nn_rpc_client_port is not None
+is_nn_dn_port_configured = nn_rpc_dn_port is not None
+is_nn_healthcheck_port_configured = nn_rpc_healthcheck_port is not None
+
+##### end #####
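
The port parsing above deserves a worked example. In a non-HA cluster where dfs.namenode.rpc-address is unset, namenode_rpc falls back to fs.defaultFS, and the int() conversion decides whether a client port is actually known (values below are hypothetical):

    # 'nn1.example.com:8020' parses to 8020; for an HA logical URI such as
    # 'hdfs://mycluster', split(':')[-1] yields '//mycluster' and the
    # ValueError branch leaves the port as None.
    for namenode_rpc in ('nn1.example.com:8020', 'hdfs://mycluster'):
        port_str = namenode_rpc.split(':')[-1].strip()
        try:
            nn_rpc_client_port = int(port_str)
        except ValueError:
            nn_rpc_client_port = None
        print(nn_rpc_client_port)  # 8020, then None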
+
+import functools
+#create partial functions with common arguments for every HdfsResource call
+#to create/delete/copyfromlocal hdfs directories/files we need to call params.HdfsResource in code
+HdfsResource = functools.partial(
+  HdfsResource,
+  user=hdfs_user,
+  hdfs_resource_ignore_file = "/var/lib/ambari-agent/data/.hdfs_resource_ignore",
+  security_enabled = security_enabled,
+  keytab = hdfs_user_keytab,
+  kinit_path_local = kinit_path_local,
+  hadoop_bin_dir = hadoop_bin_dir,
+  hadoop_conf_dir = hadoop_conf_dir,
+  principal_name = hdfs_principal_name,
+  hdfs_site = hdfs_site,
+  default_fs = default_fs,
+  immutable_paths = get_not_managed_resources(),
+  dfs_type = dfs_type
+)
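
functools.partial pre-binds every keyword argument that is identical across HdfsResource calls, so call sites elsewhere in the hooks only pass what varies. A generic sketch of the pattern; make_resource and its values are hypothetical stand-ins, not Ambari APIs:

    import functools

    def make_resource(path, type=None, owner=None, user=None, default_fs=None):
        # Stand-in for a constructor that takes many cluster-wide kwargs.
        return (path, type, owner, user, default_fs)

    # Bind the cluster-wide arguments once...
    Resource = functools.partial(make_resource, user='hdfs', default_fs='hdfs://nn:8020')

    # ...so each call site only names what differs per resource.
    print(Resource('/tmp', type='directory', owner='hdfs'))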

+ 48 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/rack_awareness.py

@@ -0,0 +1,48 @@
+#!/usr/bin/env python
+
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+from resource_management.core.resources import File
+from resource_management.core.source import StaticFile, Template
+from resource_management.libraries.functions import format
+
+
+def create_topology_mapping():
+  import params
+
+  File(params.net_topology_mapping_data_file_path,
+       content=Template("topology_mappings.data.j2"),
+       owner=params.hdfs_user,
+       group=params.user_group,
+       mode=0644,
+       only_if=format("test -d {net_topology_script_dir}"))
+
+def create_topology_script():
+  import params
+
+  File(params.net_topology_script_file_path,
+       content=StaticFile('topology_script.py'),
+       mode=0755,
+       only_if=format("test -d {net_topology_script_dir}"))
+
+def create_topology_script_and_mapping():
+  import params
+  if params.has_hadoop_env:
+    create_topology_mapping()
+    create_topology_script()
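
For context: Hadoop invokes the installed topology script with host names or IPs and expects one rack per line on stdout, with topology_mappings.data serving as the lookup table. A hedged sketch of that contract (the real resolver is the topology_script.py StaticFile referenced above; names here are hypothetical):

    # Hosts present in the mapping resolve to their rack; anything else falls
    # back to the default rack.
    MAPPINGS = {
        'dn1.example.com': '/rack-01',
        '10.0.0.11': '/rack-01',
    }

    def resolve(host):
        return MAPPINGS.get(host, '/default-rack')

    print(resolve('dn1.example.com'))  # /rack-01
    print(resolve('unknown-host'))     # /default-rack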

+ 262 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/scripts/shared_initialization.py

@@ -0,0 +1,262 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+from resource_management.libraries.providers.hdfs_resource import WebHDFSUtil
+from resource_management.core.resources.jcepolicyinfo import JcePolicyInfo
+
+from resource_management import *
+
+def setup_hadoop():
+  """
+  Setup hadoop files and directories
+  """
+  import params
+
+  Execute(("setenforce","0"),
+          only_if="test -f /selinux/enforce",
+          not_if="(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)",
+          sudo=True,
+  )
+
+  #directories
+  if params.has_namenode or params.dfs_type == 'HCFS':
+    Directory(params.hdfs_log_dir_prefix,
+              create_parents = True,
+              owner='root',
+              group=params.user_group,
+              mode=0775,
+              cd_access='a',
+    )
+    if params.has_namenode:
+      Directory(params.hadoop_pid_dir_prefix,
+              create_parents = True,
+              owner='root',
+              group='root',
+              cd_access='a',
+      )
+      Directory(format("{hadoop_pid_dir_prefix}/{hdfs_user}"),
+              owner=params.hdfs_user,
+              cd_access='a',
+      )
+
+    Directory(params.hadoop_tmp_dir,
+              create_parents = True,
+              owner=params.hdfs_user,
+              cd_access='a',
+              )
+    #files
+    if params.security_enabled:
+      tc_owner = "root"
+    else:
+      tc_owner = params.hdfs_user
+
+    if os.path.exists(params.hadoop_conf_dir):
+      File(os.path.join(params.hadoop_conf_dir, 'commons-logging.properties'),
+           owner=tc_owner,
+           content=Template('commons-logging.properties.j2')
+      )
+
+      health_check_template_name = "health_check"
+      File(os.path.join(params.hadoop_conf_dir, health_check_template_name),
+           owner=tc_owner,
+           content=Template(health_check_template_name + ".j2")
+      )
+
+      log4j_filename = os.path.join(params.hadoop_conf_dir, "log4j.properties")
+      if params.log4j_props is not None:
+        File(log4j_filename,
+             mode=0644,
+             group=params.user_group,
+             owner=params.hdfs_user,
+             content=InlineTemplate(params.log4j_props)
+        )
+      elif os.path.exists(log4j_filename):
+        File(log4j_filename,
+             mode=0644,
+             group=params.user_group,
+             owner=params.hdfs_user,
+        )
+
+    create_microsoft_r_dir()
+
+  if params.has_hdfs or params.dfs_type == 'HCFS':
+    # if WebHDFS is not enabled we need this jar to create hadoop folders and copy tarballs to HDFS.
+    if params.sysprep_skip_copy_fast_jar_hdfs:
+      Logger.info("Skipping copying of fast-hdfs-resource.jar as host is sys prepped")
+    elif params.dfs_type == 'HCFS' or not WebHDFSUtil.is_webhdfs_available(params.is_webhdfs_enabled, params.dfs_type):
+      # for source-code of jar goto contrib/fast-hdfs-resource
+      File(format("{ambari_libs_dir}/fast-hdfs-resource.jar"),
+           mode=0644,
+           content=StaticFile("fast-hdfs-resource.jar")
+           )
+    if os.path.exists(params.hadoop_conf_dir):
+      if params.hadoop_metrics2_properties_content:
+        File(os.path.join(params.hadoop_conf_dir, "hadoop-metrics2.properties"),
+             owner=params.hdfs_user,
+             group=params.user_group,
+             content=InlineTemplate(params.hadoop_metrics2_properties_content)
+             )
+      else:
+        File(os.path.join(params.hadoop_conf_dir, "hadoop-metrics2.properties"),
+             owner=params.hdfs_user,
+             group=params.user_group,
+             content=Template("hadoop-metrics2.properties.j2")
+             )
+
+    if params.dfs_type == 'HCFS' and params.has_core_site and 'ECS_CLIENT' in params.component_list:
+      create_dirs()
+
+
+def setup_configs():
+  """
+  Creates configs for services HDFS mapred
+  """
+  import params
+
+  if params.has_namenode or params.dfs_type == 'HCFS':
+    if os.path.exists(params.hadoop_conf_dir):
+      File(params.task_log4j_properties_location,
+           content=StaticFile("task-log4j.properties"),
+           mode=0755
+      )
+
+    if os.path.exists(os.path.join(params.hadoop_conf_dir, 'configuration.xsl')):
+      File(os.path.join(params.hadoop_conf_dir, 'configuration.xsl'),
+           owner=params.hdfs_user,
+           group=params.user_group
+      )
+    if os.path.exists(os.path.join(params.hadoop_conf_dir, 'masters')):
+      File(os.path.join(params.hadoop_conf_dir, 'masters'),
+                owner=params.hdfs_user,
+                group=params.user_group
+      )
+
+def create_javahome_symlink():
+  if os.path.exists("/usr/jdk/jdk1.6.0_31") and not os.path.exists("/usr/jdk64/jdk1.6.0_31"):
+    Directory("/usr/jdk64/",
+         create_parents = True,
+    )
+    Link("/usr/jdk/jdk1.6.0_31",
+         to="/usr/jdk64/jdk1.6.0_31",
+    )
+
+def create_dirs():
+  import params
+  params.HdfsResource(params.hdfs_tmp_dir,
+                      type="directory",
+                      action="create_on_execute",
+                      owner=params.hdfs_user,
+                      mode=0777
+  )
+  params.HdfsResource(params.smoke_hdfs_user_dir,
+                      type="directory",
+                      action="create_on_execute",
+                      owner=params.smoke_user,
+                      mode=params.smoke_hdfs_user_mode
+  )
+  params.HdfsResource(None,
+                      action="execute"
+  )
+
+def create_microsoft_r_dir():
+  import params
+  if 'MICROSOFT_R_NODE_CLIENT' in params.component_list and params.default_fs:
+    directory = '/user/RevoShare'
+    try:
+      params.HdfsResource(directory,
+                          type="directory",
+                          action="create_on_execute",
+                          owner=params.hdfs_user,
+                          mode=0777)
+      params.HdfsResource(None, action="execute")
+    except Exception as exception:
+      Logger.warning("Could not check the existence of {0} on DFS while starting {1}, exception: {2}".format(directory, params.current_service, str(exception)))
+
+def setup_unlimited_key_jce_policy():
+  """
+  Sets up the unlimited key JCE policy if needed (and sets up the Ambari JCE as well when Ambari and the stack use different JDKs).
+  """
+  import params
+  __setup_unlimited_key_jce_policy(custom_java_home=params.java_home, custom_jdk_name=params.jdk_name, custom_jce_name = params.jce_policy_zip)
+  if params.ambari_jce_name and params.ambari_jce_name != params.jce_policy_zip:
+    __setup_unlimited_key_jce_policy(custom_java_home=params.ambari_java_home, custom_jdk_name=params.ambari_jdk_name, custom_jce_name = params.ambari_jce_name)
+
+def __setup_unlimited_key_jce_policy(custom_java_home, custom_jdk_name, custom_jce_name):
+  """
+  Sets up the unlimited key JCE policy if needed.
+
+  The following criteria must be met:
+
+    * The cluster has not been previously prepared (sys preped) - cluster-env/sysprep_skip_setup_jce = False
+    * Ambari is managing the host's JVM - /ambariLevelParams/jdk_name is set
+    * Either security is enabled OR a service requires it - /componentLevelParams/unlimited_key_jce_required = True
+    * The unlimited key JCE policy has not already been installed
+
+  If the conditions are met, the following steps are taken to install the unlimited key JCE policy JARs
+
+    1. The unlimited key JCE policy ZIP file is downloaded from the Ambari server and stored in the
+        Ambari agent's temporary directory
+    2. The existing JCE policy JAR files are deleted
+    3. The downloaded ZIP file is unzipped into the proper JCE policy directory
+
+  :return: None
+  """
+  import params
+
+  if params.sysprep_skip_setup_jce:
+    Logger.info("Skipping unlimited key JCE policy check and setup since the host is sys prepped")
+
+  elif not custom_jdk_name:
+    Logger.info("Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari")
+
+  elif not params.unlimited_key_jce_required:
+    Logger.info("Skipping unlimited key JCE policy check and setup since it is not required")
+
+  else:
+    jcePolicyInfo = JcePolicyInfo(custom_java_home)
+
+    if jcePolicyInfo.is_unlimited_key_jce_policy():
+      Logger.info("The unlimited key JCE policy is required, and appears to have been installed.")
+
+    elif custom_jce_name is None:
+      raise Fail("The unlimited key JCE policy needs to be installed; however the JCE policy zip is not specified.")
+
+    else:
+      Logger.info("The unlimited key JCE policy is required, and needs to be installed.")
+
+      jce_zip_target = format("{artifact_dir}/{custom_jce_name}")
+      jce_zip_source = format("{ambari_server_resources_url}/{custom_jce_name}")
+      java_security_dir = format("{custom_java_home}/jre/lib/security")
+
+      Logger.debug("Downloading the unlimited key JCE policy files from {0} to {1}.".format(jce_zip_source, jce_zip_target))
+      Directory(params.artifact_dir, create_parents=True)
+      File(jce_zip_target, content=DownloadSource(jce_zip_source))
+
+      Logger.debug("Removing existing JCE policy JAR files: {0}.".format(java_security_dir))
+      File(format("{java_security_dir}/US_export_policy.jar"), action="delete")
+      File(format("{java_security_dir}/local_policy.jar"), action="delete")
+
+      Logger.debug("Unzipping the unlimited key JCE policy files from {0} into {1}.".format(jce_zip_target, java_security_dir))
+      extract_cmd = ("unzip", "-o", "-j", "-q", jce_zip_target, "-d", java_security_dir)
+      Execute(extract_cmd,
+              only_if=format("test -e {java_security_dir} && test -f {jce_zip_target}"),
+              path=['/bin/', '/usr/bin'],
+              sudo=True
+              )
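
One detail worth noting: extract_cmd is built as a tuple rather than a single shell string, so each argument reaches unzip intact without shell-quoting concerns on the sudo path. A sketch of the assembled command, with hypothetical paths standing in for the format() results:

    java_security_dir = '/usr/jdk64/jdk1.8.0_112/jre/lib/security'
    jce_zip_target = '/var/lib/ambari-agent/tmp/jce_policy-8.zip'
    extract_cmd = ('unzip', '-o', '-j', '-q', jce_zip_target, '-d', java_security_dir)
    print(' '.join(extract_cmd))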

+ 43 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/commons-logging.properties.j2

@@ -0,0 +1,43 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+#Logging Implementation
+
+#Log4J
+org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
+
+#JDK Logger
+#org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger

+ 21 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/exclude_hosts_list.j2

@@ -0,0 +1,21 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+{% for host in hdfs_exclude_file %}
+{{host}}
+{% endfor %}

+ 114 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/hadoop-metrics2.properties.j2

@@ -0,0 +1,114 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# syntax: [prefix].[source|sink|jmx].[instance].[options]
+# See package.html for org.apache.hadoop.metrics2 for details
+
+{% if has_ganglia_server %}
+*.period=60
+
+*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
+*.sink.ganglia.period=10
+
+# default for supportsparse is false
+*.sink.ganglia.supportsparse=true
+
+.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
+.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
+
+# Hook up to the server
+namenode.sink.ganglia.servers={{ganglia_server_host}}:8661
+datanode.sink.ganglia.servers={{ganglia_server_host}}:8659
+jobtracker.sink.ganglia.servers={{ganglia_server_host}}:8662
+tasktracker.sink.ganglia.servers={{ganglia_server_host}}:8658
+maptask.sink.ganglia.servers={{ganglia_server_host}}:8660
+reducetask.sink.ganglia.servers={{ganglia_server_host}}:8660
+resourcemanager.sink.ganglia.servers={{ganglia_server_host}}:8664
+nodemanager.sink.ganglia.servers={{ganglia_server_host}}:8657
+historyserver.sink.ganglia.servers={{ganglia_server_host}}:8666
+journalnode.sink.ganglia.servers={{ganglia_server_host}}:8654
+nimbus.sink.ganglia.servers={{ganglia_server_host}}:8649
+supervisor.sink.ganglia.servers={{ganglia_server_host}}:8650
+
+resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
+
+{% endif %}
+
+{% if has_metric_collector %}
+
+*.period={{metrics_collection_period}}
+{% if metric_legacy_hadoop_sink %}
+*.sink.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink-legacy.jar
+{% else %}
+*.sink.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
+{% endif %}
+*.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+*.sink.timeline.period={{metrics_collection_period}}
+*.sink.timeline.sendInterval={{metrics_report_interval}}000
+*.sink.timeline.slave.host.name={{hostname}}
+*.sink.timeline.zookeeper.quorum={{zookeeper_quorum}}
+*.sink.timeline.protocol={{metric_collector_protocol}}
+*.sink.timeline.port={{metric_collector_port}}
+*.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+*.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
+
+# HTTPS properties
+*.sink.timeline.truststore.path = {{metric_truststore_path}}
+*.sink.timeline.truststore.type = {{metric_truststore_type}}
+*.sink.timeline.truststore.password = {{metric_truststore_password}}
+
+datanode.sink.timeline.collector.hosts={{ams_collector_hosts}}
+namenode.sink.timeline.collector.hosts={{ams_collector_hosts}}
+resourcemanager.sink.timeline.collector.hosts={{ams_collector_hosts}}
+nodemanager.sink.timeline.collector.hosts={{ams_collector_hosts}}
+jobhistoryserver.sink.timeline.collector.hosts={{ams_collector_hosts}}
+journalnode.sink.timeline.collector.hosts={{ams_collector_hosts}}
+applicationhistoryserver.sink.timeline.collector.hosts={{ams_collector_hosts}}
+
+resourcemanager.sink.timeline.tagsForPrefix.yarn=Queue
+
+{% if is_nn_client_port_configured %}
+# Namenode rpc ports customization
+namenode.sink.timeline.metric.rpc.client.port={{nn_rpc_client_port}}
+{% endif %}
+{% if is_nn_dn_port_configured %}
+namenode.sink.timeline.metric.rpc.datanode.port={{nn_rpc_dn_port}}
+{% endif %}
+{% if is_nn_healthcheck_port_configured %}
+namenode.sink.timeline.metric.rpc.healthcheck.port={{nn_rpc_healthcheck_port}}
+{% endif %}
+
+{% endif %}
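
Note the unit conversion hidden in {{metrics_report_interval}}000: the report interval from ams-site is expressed in seconds, while the sink's sendInterval expects milliseconds, so the template appends a literal 000. A minimal render sketch (requires the jinja2 package; 60 is the default from params.py):

    from jinja2 import Template

    line = Template('*.sink.timeline.sendInterval={{metrics_report_interval}}000')
    print(line.render(metrics_report_interval=60))
    # *.sink.timeline.sendInterval=60000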

+ 81 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/health_check.j2

@@ -0,0 +1,81 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+#!/bin/bash
+#
+#/*
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+err=0;
+
+function check_disks {
+
+  for m in `awk '$3~/ext3/ {printf" %s ",$2}' /etc/fstab` ; do
+    fsdev=""
+    fsdev=`awk -v m=$m '$2==m {print $1}' /proc/mounts`;
+    if [ -z "$fsdev" -a "$m" != "/mnt" ] ; then
+      msg_="$msg_ $m(u)"
+    else
+      msg_="$msg_`awk -v m=$m '$2==m { if ( $4 ~ /^ro,/ ) {printf"%s(ro)",$2 } ; }' /proc/mounts`"
+    fi
+  done
+
+  if [ -z "$msg_" ] ; then
+    echo "disks ok" ; exit 0
+  else
+    echo "$msg_" ; exit 2
+  fi
+
+}
+
+# Run all checks
+for check in disks ; do
+  msg=`check_${check}` ;
+  if [ $? -eq 0 ] ; then
+    ok_msg="$ok_msg$msg,"
+  else
+    err_msg="$err_msg$msg,"
+  fi
+done
+
+if [ ! -z "$err_msg" ] ; then
+  echo -n "ERROR $err_msg "
+fi
+if [ ! -z "$ok_msg" ] ; then
+  echo -n "OK: $ok_msg"
+fi
+
+echo
+
+# Success!
+exit 0

+ 21 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/include_hosts_list.j2

@@ -0,0 +1,21 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+{% for host in slave_hosts %}
+{{host}}
+{% endfor %}

+ 24 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/hooks/before-START/templates/topology_mappings.data.j2

@@ -0,0 +1,24 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+[network_topology]
+{% for host in all_hosts %}
+{% if host in slave_hosts %}
+{{host}}={{all_racks[loop.index-1]}}
+{{all_ipv4_ips[loop.index-1]}}={{all_racks[loop.index-1]}}
+{% endif %}
+{% endfor %}
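
The template iterates all_hosts and indexes the parallel all_racks and all_ipv4_ips lists with loop.index-1, since Jinja's loop.index is 1-based while the lists are 0-based. The equivalent logic in plain Python, with hypothetical values:

    all_hosts = ['dn1.example.com', 'nn1.example.com']
    all_racks = ['/rack-01', '/rack-02']
    all_ipv4_ips = ['10.0.0.11', '10.0.0.12']
    slave_hosts = ['dn1.example.com']

    for i, host in enumerate(all_hosts):
        if host in slave_hosts:
            print('%s=%s' % (host, all_racks[i]))             # dn1.example.com=/rack-01
            print('%s=%s' % (all_ipv4_ips[i], all_racks[i]))  # 10.0.0.11=/rack-01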

+ 60 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/kerberos.json

@@ -0,0 +1,60 @@
+{
+  "properties": {
+    "realm": "${kerberos-env/realm}",
+    "keytab_dir": "/etc/security/keytabs",
+    "additional_realms": ""
+  },
+  "identities": [
+    {
+      "name": "spnego",
+      "principal": {
+        "value": "HTTP/_HOST@${realm}",
+        "type" : "service"
+      },
+      "keytab": {
+        "file": "${keytab_dir}/spnego.service.keytab",
+        "owner": {
+          "name": "root",
+          "access": "r"
+        },
+        "group": {
+          "name": "${cluster-env/user_group}",
+          "access": "r"
+        }
+      }
+    },
+    {
+      "name": "smokeuser",
+      "principal": {
+        "value": "${cluster-env/smokeuser}-${cluster_name|toLower()}@${realm}",
+        "type" : "user",
+        "configuration": "cluster-env/smokeuser_principal_name",
+        "local_username" : "${cluster-env/smokeuser}"
+      },
+      "keytab": {
+        "file": "${keytab_dir}/smokeuser.headless.keytab",
+        "owner": {
+          "name": "${cluster-env/smokeuser}",
+          "access": "r"
+        },
+        "group": {
+          "name": "${cluster-env/user_group}",
+          "access": "r"
+        },
+        "configuration": "cluster-env/smokeuser_keytab"
+      }
+    },
+    {
+      "name": "ambari-server",
+      "principal": {
+        "value": "ambari-server-${cluster_name|toLower()}@${realm}",
+        "type" : "user",
+        "configuration": "cluster-env/ambari_principal_name"
+      },
+      "keytab": {
+        "file": "${keytab_dir}/ambari.server.keytab"
+      }
+    }
+  ]
+
+}
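
The ${...} tokens are variable references that Ambari resolves against configuration (e.g. ${cluster-env/smokeuser}) and that support pipe functions such as |toLower(). A toy resolver that mirrors the observable substitution; the VALUES table and the regex evaluation are illustrative, not Ambari's implementation:

    import re

    VALUES = {
        'cluster-env/smokeuser': 'ambari-qa',
        'cluster_name': 'MyCluster',
        'realm': 'EXAMPLE.COM',
    }

    def resolve(expr):
        def repl(match):
            name = match.group(1)
            value = VALUES[name.split('|')[0]]
            if name.endswith('|toLower()'):
                value = value.lower()
            return value
        return re.sub(r'\$\{([^}]+)\}', repl, expr)

    print(resolve('${cluster-env/smokeuser}-${cluster_name|toLower()}@${realm}'))
    # ambari-qa-mycluster@EXAMPLE.COM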

+ 22 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/metainfo.xml

@@ -0,0 +1,22 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<metainfo>
+    <versions>
+      <active>true</active>
+    </versions>
+</metainfo>

+ 58 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/properties/stack_features.json

@@ -0,0 +1,58 @@
+{
+"BIGTOP": {
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    {
+      "name": "lzo",
+      "description": "LZO libraries support",
+      "min_version": "2.2.1.0"
+    },
+    {
+      "name": "copy_tarball_to_hdfs",
+      "description": "Copy tarball to HDFS support (AMBARI-12113)",
+      "min_version": "2.2.0.0"
+    },
+    {
+      "name": "hive_metastore_upgrade_schema",
+      "description": "Hive metastore upgrade schema support (AMBARI-11176)",
+      "min_version": "2.3.0.0"
+    },
+    {
+      "name": "hive_webhcat_specific_configs",
+      "description": "Hive webhcat specific configurations support (AMBARI-12364)",
+      "min_version": "2.3.0.0"
+    },
+    {
+      "name": "hive_purge_table",
+      "description": "Hive purge table support (AMBARI-12260)",
+      "min_version": "2.3.0.0"
+    },
+    {
+      "name": "hive_server2_kerberized_env",
+      "description": "Hive server2 working on kerberized environment (AMBARI-13749)",
+      "min_version": "2.2.3.0",
+      "max_version": "2.2.5.0"
+    },
+    {
+      "name": "hive_env_heapsize",
+      "description": "Hive heapsize property defined in hive-env (AMBARI-12801)",
+      "min_version": "2.2.0.0"
+    },
+    {
+      "name": "hive_metastore_site_support",
+      "description": "Hive Metastore site support",
+      "min_version": "2.5.0.0"
+    },
+    {
+      "name": "kafka_kerberos",
+      "description": "Kafka Kerberos support (AMBARI-10984)",
+      "min_version": "1.0.0.0"
+    }
+  ]
+}
+}
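
Each entry gates a feature on the stack version. A sketch of the range check in the spirit of check_stack_feature, assuming min_version is an inclusive lower bound and max_version an exclusive upper bound (the helper below is illustrative, not the library function):

    def version_tuple(v):
        return tuple(int(part) for part in v.split('.'))

    def feature_enabled(name, stack_version, features):
        for f in features:
            if f['name'] != name:
                continue
            if 'min_version' in f and version_tuple(stack_version) < version_tuple(f['min_version']):
                return False
            if 'max_version' in f and version_tuple(stack_version) >= version_tuple(f['max_version']):
                return False
            return True
        return False

    features = [{'name': 'snappy', 'min_version': '2.0.0.0', 'max_version': '2.2.0.0'}]
    print(feature_enabled('snappy', '2.1.0.0', features))  # True
    print(feature_enabled('snappy', '2.2.0.0', features))  # False (exclusive max)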

+ 14 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/properties/stack_tools.json

@@ -0,0 +1,14 @@
+{
+  "BIGTOP": {
+    "stack_selector": [
+      "distro-select",
+      "/usr/bin/distro-select",
+      "distro-select"
+    ],
+    "conf_selector": [
+      "conf-select",
+      "/usr/bin/conf-select",
+      "conf-select"
+    ]
+  }
+}

+ 26 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/repos/repoinfo.xml

@@ -0,0 +1,26 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<reposinfo>
+  <os family="redhat7">
+    <repo>
+      <baseurl>https://bigtop-snapshot.s3.amazonaws.com/centos-7/$basearch</baseurl>
+      <repoid>BIGTOP-3.2.0</repoid>
+      <reponame>bigtop</reponame>
+    </repo>
+  </os>
+</reposinfo>

+ 75 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/role_command_order.json

@@ -0,0 +1,75 @@
+{
+  "_comment" : "Record format:",
+  "_comment" : "blockedRole-blockedCommand: [blockerRole1-blockerCommand1, blockerRole2-blockerCommand2, ...]",
+  "general_deps" : {
+    "_comment" : "dependencies for all cases",
+    "HBASE_MASTER-START": ["ZOOKEEPER_SERVER-START"],
+    "HBASE_REGIONSERVER-START": ["HBASE_MASTER-START"],
+    "APP_TIMELINE_SERVER-START": ["NAMENODE-START", "DATANODE-START"],
+    "OOZIE_SERVER-START": ["NODEMANAGER-START", "RESOURCEMANAGER-START"],
+    "WEBHCAT_SERVER-START": ["NODEMANAGER-START", "HIVE_SERVER-START"],
+    "WEBHCAT_SERVER-RESTART": ["NODEMANAGER-RESTART", "HIVE_SERVER-RESTART"],
+    "HIVE_METASTORE-START": ["MYSQL_SERVER-START", "NAMENODE-START"],
+    "HIVE_METASTORE-RESTART": ["MYSQL_SERVER-RESTART", "NAMENODE-RESTART"],
+    "HIVE_SERVER-START": ["NODEMANAGER-START", "MYSQL_SERVER-START"],
+    "HIVE_SERVER-RESTART": ["NODEMANAGER-RESTART", "MYSQL_SERVER-RESTART", "ZOOKEEPER_SERVER-RESTART"],
+    "HUE_SERVER-START": ["HIVE_SERVER-START", "HCAT-START", "OOZIE_SERVER-START"],
+    "FLUME_HANDLER-START": ["OOZIE_SERVER-START"],
+    "MAPREDUCE_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"],
+    "OOZIE_SERVICE_CHECK-SERVICE_CHECK": ["OOZIE_SERVER-START", "MAPREDUCE2_SERVICE_CHECK-SERVICE_CHECK"],
+    "HBASE_SERVICE_CHECK-SERVICE_CHECK": ["HBASE_MASTER-START", "HBASE_REGIONSERVER-START"],
+    "HIVE_SERVICE_CHECK-SERVICE_CHECK": ["HIVE_SERVER-START", "HIVE_METASTORE-START", "WEBHCAT_SERVER-START"],
+    "PIG_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"],
+    "SQOOP_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"],
+    "ZOOKEEPER_SERVICE_CHECK-SERVICE_CHECK": ["ZOOKEEPER_SERVER-START"],
+    "ZOOKEEPER_QUORUM_SERVICE_CHECK-SERVICE_CHECK": ["ZOOKEEPER_SERVER-START"],
+    "ZOOKEEPER_SERVER-STOP" : ["HBASE_MASTER-STOP", "HBASE_REGIONSERVER-STOP", "METRICS_COLLECTOR-STOP"],
+    "HBASE_MASTER-STOP": ["HBASE_REGIONSERVER-STOP"]
+  },
+  "_comment" : "GLUSTERFS-specific dependencies",
+  "optional_glusterfs": {
+    "HBASE_MASTER-START": ["PEERSTATUS-START"],
+    "GLUSTERFS_SERVICE_CHECK-SERVICE_CHECK": ["PEERSTATUS-START"]
+  },
+  "_comment" : "Dependencies that are used when GLUSTERFS is not present in cluster",
+  "optional_no_glusterfs": {
+    "METRICS_COLLECTOR-START": ["NAMENODE-START", "DATANODE-START", "SECONDARY_NAMENODE-START", "ZOOKEEPER_SERVER-START"],
+    "AMBARI_METRICS_SERVICE_CHECK-SERVICE_CHECK": ["METRICS_COLLECTOR-START", "HDFS_SERVICE_CHECK-SERVICE_CHECK"],
+    "SECONDARY_NAMENODE-START": ["NAMENODE-START"],
+    "SECONDARY_NAMENODE-RESTART": ["NAMENODE-RESTART"],
+    "RESOURCEMANAGER-START": ["NAMENODE-START", "DATANODE-START"],
+    "NODEMANAGER-START": ["NAMENODE-START", "DATANODE-START", "RESOURCEMANAGER-START"],
+    "HISTORYSERVER-START": ["NAMENODE-START", "DATANODE-START"],
+    "HBASE_MASTER-START": ["NAMENODE-START", "DATANODE-START"],
+    "HIVE_SERVER-START": ["DATANODE-START"],
+    "WEBHCAT_SERVER-START": ["DATANODE-START"],
+    "HISTORYSERVER-RESTART": ["NAMENODE-RESTART"],
+    "RESOURCEMANAGER-RESTART": ["NAMENODE-RESTART"],
+    "NODEMANAGER-RESTART": ["NAMENODE-RESTART"],
+    "OOZIE_SERVER-RESTART": ["NAMENODE-RESTART"],
+    "HDFS_SERVICE_CHECK-SERVICE_CHECK": ["NAMENODE-START", "DATANODE-START",
+        "SECONDARY_NAMENODE-START"],
+    "MAPREDUCE2_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START",
+        "RESOURCEMANAGER-START", "HISTORYSERVER-START", "YARN_SERVICE_CHECK-SERVICE_CHECK"],
+    "YARN_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"],
+    "RESOURCEMANAGER_SERVICE_CHECK-SERVICE_CHECK": ["RESOURCEMANAGER-START"],
+    "PIG_SERVICE_CHECK-SERVICE_CHECK": ["RESOURCEMANAGER-START", "NODEMANAGER-START"],
+    "NAMENODE-STOP": ["RESOURCEMANAGER-STOP", "NODEMANAGER-STOP",
+        "HISTORYSERVER-STOP", "HBASE_MASTER-STOP", "METRICS_COLLECTOR-STOP"],
+    "DATANODE-STOP": ["RESOURCEMANAGER-STOP", "NODEMANAGER-STOP",
+        "HISTORYSERVER-STOP", "HBASE_MASTER-STOP"],
+    "METRICS_GRAFANA-START": ["METRICS_COLLECTOR-START"],
+    "METRICS_COLLECTOR-STOP": ["METRICS_GRAFANA-STOP"]
+  },
+  "_comment" : "Dependencies that are used in HA NameNode cluster",
+  "namenode_optional_ha": {
+    "NAMENODE-START": ["ZKFC-START", "JOURNALNODE-START", "ZOOKEEPER_SERVER-START"],
+    "ZKFC-START": ["ZOOKEEPER_SERVER-START"],
+    "ZKFC-STOP": ["NAMENODE-STOP"],
+    "JOURNALNODE-STOP": ["NAMENODE-STOP"]
+  },
+  "_comment" : "Dependencies that are used in ResourceManager HA cluster",
+  "resourcemanager_optional_ha" : {
+    "RESOURCEMANAGER-START": ["ZOOKEEPER_SERVER-START"]
+  }
+}
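
Reading the record format: the key names a blocked role-command, and the list names the blocker commands that must complete first. A toy check of that rule (deps below is a subset of the table above):

    deps = {
        'HBASE_MASTER-START': ['ZOOKEEPER_SERVER-START'],
        'HBASE_REGIONSERVER-START': ['HBASE_MASTER-START'],
    }

    def runnable(command, completed):
        return all(blocker in completed for blocker in deps.get(command, []))

    print(runnable('HBASE_MASTER-START', set()))                       # False
    print(runnable('HBASE_MASTER-START', {'ZOOKEEPER_SERVER-START'}))  # True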

+ 32 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/alerts.json

@@ -0,0 +1,32 @@
+{
+  "FLINK": {
+    "service": [],
+    "FLINK_HISTORYSERVER": [
+      {
+        "name": "FLINK_HISTORYSERVER_PROCESS",
+        "label": "Flink History Server",
+        "description": "This host-level alert is triggered if the Flink History Server cannot be determined to be up.",
+        "interval": 1,
+        "scope": "ANY",
+        "source": {
+          "type": "PORT",
+          "uri": "{{flink-conf/historyserver.web.port}}",
+          "default_port": 8082,
+          "reporting": {
+            "ok": {
+              "text": "TCP OK - {0:.3f}s response on port {1}"
+            },
+            "warning": {
+              "text": "TCP OK - {0:.3f}s response on port {1}",
+              "value": 1.5
+            },
+            "critical": {
+              "text": "Connection failed: {0} to {1}:{2}",
+              "value": 5
+            }
+          }
+        }
+      }
+    ]
+  }
+}
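
The PORT alert times a TCP connect to historyserver.web.port (default 8082): OK under 1.5 s, WARNING from 1.5 s up, and CRITICAL when the connection fails or exceeds 5 s. A rough standalone equivalent of that probe; the thresholds come from the alert definition, the script itself is only a sketch:

    import socket
    import time

    def probe(host, port, warn=1.5, crit=5.0):
        start = time.time()
        try:
            sock = socket.create_connection((host, port), timeout=crit)
            sock.close()
        except (socket.error, socket.timeout):
            return 'CRITICAL'
        return 'WARNING' if time.time() - start >= warn else 'OK'

    print(probe('localhost', 8082))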

+ 397 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-conf.xml

@@ -0,0 +1,397 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_adding_forbidden="true">
+  <property>
+    <name>jobmanager.archive.fs.dir</name>
+    <value>hdfs:///completed-jobs/</value>
+    <description>Directory for JobManager to store the archives of completed jobs.</description>
+    <on-ambari-upgrade add="true" />
+  </property>
+  <property>
+    <name>historyserver.archive.fs.dir</name>
+    <value>hdfs:///completed-jobs/</value>
+    <description>Comma separated list of directories to fetch archived jobs from.</description>
+    <on-ambari-upgrade add="true" />
+  </property>
+  <property>
+    <name>historyserver.web.port</name>
+    <value>8082</value>
+    <description>The port under which the web-based HistoryServer listens.</description>
+    <on-ambari-upgrade add="true" />
+  </property>
+  <property>
+    <name>historyserver.archive.fs.refresh-interval</name>
+    <value>10000</value>
+    <description>Interval in milliseconds for refreshing the monitored directories.</description>
+    <on-ambari-upgrade add="true" />
+  </property>
+  <property>
+    <name>security.kerberos.login.keytab</name>
+    <description>Flink keytab path</description>
+    <value>none</value>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>security.kerberos.login.principal</name>
+    <description>Flink principal name</description>
+    <property-type>KERBEROS_PRINCIPAL</property-type>
+    <value>none</value>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <!-- flink-conf.yaml -->
+  <property>
+    <name>content</name>
+    <display-name>flink-conf template</display-name>
+    <description>This is the Jinja template for the flink-conf.yaml file</description>
+    <value>
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+
+#==============================================================================
+# Common
+#==============================================================================
+
+# The external address of the host on which the JobManager runs and can be
+# reached by the TaskManagers and any clients which want to connect. This setting
+# is only used in Standalone mode and may be overwritten on the JobManager side
+# by specifying the --host hostname parameter of the bin/jobmanager.sh executable.
+# In high availability mode, if you use the bin/start-cluster.sh script and set up
+# the conf/masters file, this will be taken care of automatically. Yarn
+# automatically configures the host name based on the hostname of the node where the
+# JobManager runs.
+
+jobmanager.rpc.address: localhost
+
+# The RPC port where the JobManager is reachable.
+
+jobmanager.rpc.port: 6123
+
+# The host interface the JobManager will bind to. By default, this is localhost, and will prevent
+# the JobManager from communicating outside the machine/container it is running on.
+# On YARN this setting will be ignored if it is set to 'localhost', defaulting to 0.0.0.0.
+# On Kubernetes this setting will be ignored, defaulting to 0.0.0.0.
+#
+# To enable this, set the bind-host address to one that has access to an outside facing network
+# interface, such as 0.0.0.0.
+
+jobmanager.bind-host: localhost
+
+
+# The total process memory size for the JobManager.
+#
+# Note this accounts for all memory usage within the JobManager process, including JVM metaspace and other overhead.
+
+jobmanager.memory.process.size: 1600m
+
+# The host interface the TaskManager will bind to. By default, this is localhost, and will prevent
+# the TaskManager from communicating outside the machine/container it is running on.
+# On YARN this setting will be ignored if it is set to 'localhost', defaulting to 0.0.0.0.
+# On Kubernetes this setting will be ignored, defaulting to 0.0.0.0.
+#
+# To enable this, set the bind-host address to one that has access to an outside facing network
+# interface, such as 0.0.0.0.
+
+taskmanager.bind-host: localhost
+
+# The address of the host on which the TaskManager runs and can be reached by the JobManager and
+# other TaskManagers. If not specified, the TaskManager will try different strategies to identify
+# the address.
+#
+# Note this address needs to be reachable by the JobManager and forward traffic to one of
+# the interfaces the TaskManager is bound to (see 'taskmanager.bind-host').
+#
+# Note also that unless all TaskManagers are running on the same machine, this address needs to be
+# configured separately for each TaskManager.
+
+taskmanager.host: localhost
+
+# The total process memory size for the TaskManager.
+#
+# Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.
+
+taskmanager.memory.process.size: 1728m
+
+# To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
+# It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
+#
+# taskmanager.memory.flink.size: 1280m
+
+# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
+
+taskmanager.numberOfTaskSlots: 1
+
+# The parallelism used for programs that did not specify any other parallelism.
+
+parallelism.default: 1
+
+# The default file system scheme and authority.
+#
+# By default file paths without scheme are interpreted relative to the local
+# root file system 'file:///'. Use this to override the default and interpret
+# relative paths relative to a different file system,
+# for example 'hdfs://mynamenode:12345'
+#
+# fs.default-scheme
+
+#==============================================================================
+# JVM and Logging Options
+#==============================================================================
+#Java runtime to use
+env.java.home: {{java_home}}
+
+#Path to hadoop configuration directory. It is required to read HDFS and/or YARN configuration.
+#You can also set it via environment variable.
+env.hadoop.conf.dir: {{hadoop_conf_dir}}
+
+#Defines the directory where the flink-&lt;host&gt;-&lt;process&gt;.pid files are saved.
+env.pid.dir: {{flink_pid_dir}}
+
+#==============================================================================
+# High Availability
+#==============================================================================
+
+# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
+#
+# high-availability: zookeeper
+
+# The path where metadata for master recovery is persisted. While ZooKeeper stores
+# the small ground truth for checkpoint and leader election, this location stores
+# the larger objects, like persisted dataflow graphs.
+#
+# Must be a durable file system that is accessible from all nodes
+# (like HDFS, S3, Ceph, nfs, ...)
+#
+# high-availability.storageDir: hdfs:///flink/ha/
+
+# The list of ZooKeeper quorum peers that coordinate the high-availability
+# setup. This must be a list of the form:
+# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
+#
+# high-availability.zookeeper.quorum: localhost:2181
+
+
+# ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
+# It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
+# The default value is "open" and it can be changed to "creator" if ZK security is enabled
+#
+# high-availability.zookeeper.client.acl: open
+
+#==============================================================================
+# Fault tolerance and checkpointing
+#==============================================================================
+
+# The backend that will be used to store operator state checkpoints if
+# checkpointing is enabled. Checkpointing is enabled when execution.checkpointing.interval > 0.
+#
+# Execution checkpointing related parameters. Please refer to CheckpointConfig and ExecutionCheckpointingOptions for more details.
+#
+# execution.checkpointing.interval: 3min
+# execution.checkpointing.externalized-checkpoint-retention: [DELETE_ON_CANCELLATION, RETAIN_ON_CANCELLATION]
+# execution.checkpointing.max-concurrent-checkpoints: 1
+# execution.checkpointing.min-pause: 0
+# execution.checkpointing.mode: [EXACTLY_ONCE, AT_LEAST_ONCE]
+# execution.checkpointing.timeout: 10min
+# execution.checkpointing.tolerable-failed-checkpoints: 0
+# execution.checkpointing.unaligned: false
+#
+# Supported backends are 'hashmap', 'rocksdb', or the
+# &lt;class-name-of-factory&gt;.
+#
+# state.backend: hashmap
+
+# Directory for checkpoints filesystem, when using any of the default bundled
+# state backends.
+#
+# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints
+
+# Default target directory for savepoints, optional.
+#
+# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints
+
+# Flag to enable/disable incremental checkpoints for backends that
+# support incremental checkpoints (like the RocksDB state backend).
+#
+# state.backend.incremental: false
+
+# The failover strategy, i.e., how the job computation recovers from task failures.
+# Only restart tasks that may have been affected by the task failure, which typically includes
+# downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.
+
+jobmanager.execution.failover-strategy: region
+
+#==============================================================================
+# REST &amp; web frontend
+#==============================================================================
+
+# The port to which the REST client connects to. If rest.bind-port has
+# not been specified, then the server will bind to this port as well.
+#
+#rest.port: 8081
+
+# The address to which the REST client will connect to
+#
+rest.address: localhost
+
+# Port range for the REST and web server to bind to.
+#
+#rest.bind-port: 8080-8090
+
+# The address that the REST &amp; web server binds to
+# By default, this is localhost, which prevents the REST &amp; web server from
+# being able to communicate outside of the machine/container it is running on.
+#
+# To enable this, set the bind address to one that has access to outside-facing
+# network interface, such as 0.0.0.0.
+#
+rest.bind-address: localhost
+
+# Flag to specify whether job submission is enabled from the web-based
+# runtime monitor. Uncomment to disable.
+
+#web.submit.enable: false
+
+# Flag to specify whether job cancellation is enabled from the web-based
+# runtime monitor. Uncomment to disable.
+
+#web.cancel.enable: false
+
+#==============================================================================
+# Advanced
+#==============================================================================
+
+# Override the directories for temporary files. If not specified, the
+# system-specific Java temporary directory (java.io.tmpdir property) is taken.
+#
+# For framework setups on Yarn, Flink will automatically pick up the
+# containers' temp directories without any need for configuration.
+#
+# Add a delimited list for multiple directories, using the system directory
+# delimiter (colon ':' on unix) or a comma, e.g.:
+#     /data1/tmp:/data2/tmp:/data3/tmp
+#
+# Note: Each directory entry is read from and written to by a different I/O
+# thread. You can include the same directory multiple times in order to create
+# multiple I/O threads against that directory. This is for example relevant for
+# high-throughput RAIDs.
+#
+# io.tmp.dirs: /tmp
+
+# The classloading resolve order. Possible values are 'child-first' (Flink's default)
+# and 'parent-first' (Java's default).
+#
+# Child first classloading allows users to use different dependency/library
+# versions in their application than those in the classpath. Switching back
+# to 'parent-first' may help with debugging dependency issues.
+#
+# classloader.resolve-order: child-first
+
+# The amount of memory going to the network stack. These numbers usually need
+# no tuning. Adjusting them may be necessary in case of an "Insufficient number
+# of network buffers" error. The default min is 64MB, the default max is 1GB.
+#
+# taskmanager.memory.network.fraction: 0.1
+# taskmanager.memory.network.min: 64mb
+# taskmanager.memory.network.max: 1gb
+
+#==============================================================================
+# Flink Cluster Security Configuration
+#==============================================================================
+
+# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
+# may be enabled in four steps:
+# 1. configure the local krb5.conf file
+# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
+# 3. make the credentials available to various JAAS login contexts
+# 4. configure the connector to use JAAS/SASL
+
+# The settings below configure how Kerberos credentials are provided. A keytab will be used instead of
+# a ticket cache if the keytab path and principal are set.
+
+{% if security_enabled %}
+security.kerberos.login.use-ticket-cache: true
+security.kerberos.login.keytab: {{security_kerberos_login_keytab}}
+security.kerberos.login.principal: {{security_kerberos_login_principal}}
+{% else %}
+# security.kerberos.login.use-ticket-cache: true
+# security.kerberos.login.keytab: /path/to/kerberos/keytab
+# security.kerberos.login.principal: flink-user
+{% endif %}
+# The configuration below defines which JAAS login contexts the credentials are made available to.
+
+# security.kerberos.login.contexts: Client,KafkaClient
+
+#==============================================================================
+# ZK Security Configuration
+#==============================================================================
+
+# Below configurations are applicable if ZK ensemble is configured for security
+
+# Override below configuration to provide custom ZK service name if configured
+# zookeeper.sasl.service-name: zookeeper
+
+# The configuration below must match one of the values set in "security.kerberos.login.contexts"
+# zookeeper.sasl.login-context-name: Client
+
+#==============================================================================
+# HistoryServer
+#==============================================================================
+
+# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)
+
+# Directory to upload completed jobs to. Add this directory to the list of
+# monitored directories of the HistoryServer as well (see below).
+jobmanager.archive.fs.dir: {{jobmanager_archive_fs_dir}}
+
+# The address under which the web-based HistoryServer listens.
+#historyserver.web.address: 0.0.0.0
+
+# The port under which the web-based HistoryServer listens.
+historyserver.web.port: {{historyserver_web_port}}
+
+# Comma separated list of directories to monitor for completed jobs.
+historyserver.archive.fs.dir: {{historyserver_archive_fs_dir}}
+
+# Interval in milliseconds for refreshing the monitored directories.
+historyserver.archive.fs.refresh-interval: {{historyserver_archive_fs_refresh_interval}}
+    </value>
+    <value-attributes>
+      <type>content</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>
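
The &lt;value&gt; body above is a Jinja2 template: Ambari renders it (via InlineTemplate in setup_flink.py further down) before writing flink-conf.yaml. A minimal sketch of that rendering step outside Ambari, assuming jinja2 is installed; the keytab path and principal are illustrative values only:

# Sketch only: render the Kerberos block of the flink-conf template the
# way Ambari's InlineTemplate would. Keytab/principal values are made up.
from jinja2 import Template

kerberos_block = Template("""\
{% if security_enabled %}
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.keytab: {{security_kerberos_login_keytab}}
security.kerberos.login.principal: {{security_kerberos_login_principal}}
{% else %}
# security.kerberos.login.use-ticket-cache: true
{% endif %}
""")

print(kerberos_block.render(
    security_enabled=True,
    security_kerberos_login_keytab="/etc/security/keytabs/flink.headless.keytab",
    security_kerberos_login_principal="flink@EXAMPLE.COM"))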

+ 75 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-env.xml

@@ -0,0 +1,75 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_adding_forbidden="true">
+  <property>
+    <name>flink_user</name>
+    <display-name>Flink User</display-name>
+    <value>flink</value>
+    <property-type>USER</property-type>
+    <description>Flink service user.</description>
+    <value-attributes>
+      <type>user</type>
+      <overridable>false</overridable>
+      <user-groups>
+        <property>
+          <type>cluster-env</type>
+          <name>user_group</name>
+        </property>
+        <property>
+          <type>flink-env</type>
+          <name>flink_group</name>
+        </property>
+      </user-groups>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>flink_group</name>
+    <display-name>Flink Group</display-name>
+    <value>flink</value>
+    <property-type>GROUP</property-type>
+    <description>flink group</description>
+    <value-attributes>
+      <type>user</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>flink_log_dir</name>
+    <value>/var/log/flink</value>
+    <display-name>Flink Log Dir Prefix</display-name>
+    <description>Log Directories for Flink.</description>
+    <value-attributes>
+      <type>directory</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>flink_pid_dir</name>
+    <display-name>Flink PID directory</display-name>
+    <value>/var/run/flink</value>
+    <value-attributes>
+      <type>directory</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>

+ 100 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-cli-properties.xml

@@ -0,0 +1,100 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="false" supports_adding_forbidden="true">
+  <property>
+    <name>content</name>
+    <description>Flink-log4j-cli-Properties</description>
+    <value>
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+# Allows this configuration to be modified at runtime. The file will be checked every 30 seconds.
+monitorInterval=30
+
+rootLogger.level = INFO
+rootLogger.appenderRef.file.ref = FileAppender
+
+# Log all infos in the given file
+appender.file.name = FileAppender
+appender.file.type = FILE
+appender.file.append = false
+appender.file.fileName = ${sys:log.file}
+appender.file.layout.type = PatternLayout
+appender.file.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
+
+# Log output from org.apache.flink.yarn to the console. This is used by the
+# CliFrontend class when using a per-job YARN cluster.
+logger.yarn.name = org.apache.flink.yarn
+logger.yarn.level = INFO
+logger.yarn.appenderRef.console.ref = ConsoleAppender
+logger.yarncli.name = org.apache.flink.yarn.cli.FlinkYarnSessionCli
+logger.yarncli.level = INFO
+logger.yarncli.appenderRef.console.ref = ConsoleAppender
+logger.hadoop.name = org.apache.hadoop
+logger.hadoop.level = INFO
+logger.hadoop.appenderRef.console.ref = ConsoleAppender
+
+# Make sure hive logs go to the file.
+logger.hive.name = org.apache.hadoop.hive
+logger.hive.level = INFO
+logger.hive.additivity = false
+logger.hive.appenderRef.file.ref = FileAppender
+
+# Log output from org.apache.flink.kubernetes to the console.
+logger.kubernetes.name = org.apache.flink.kubernetes
+logger.kubernetes.level = INFO
+logger.kubernetes.appenderRef.console.ref = ConsoleAppender
+
+appender.console.name = ConsoleAppender
+appender.console.type = CONSOLE
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
+
+# suppress the warning that hadoop native libraries are not loaded (irrelevant for the client)
+logger.hadoopnative.name = org.apache.hadoop.util.NativeCodeLoader
+logger.hadoopnative.level = OFF
+
+# Suppress the irrelevant (wrong) warnings from the Netty channel handler
+logger.netty.name = org.jboss.netty.channel.DefaultChannelPipeline
+logger.netty.level = OFF
+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>

+ 101 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-console-properties.xml

@@ -0,0 +1,101 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="false" supports_adding_forbidden="true">
+  <property>
+    <name>content</name>
+    <description>Flink-log4j-console-Properties</description>
+    <value>
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+# Allows this configuration to be modified at runtime. The file will be checked every 30 seconds.
+monitorInterval=30
+
+# This affects logging for both user code and Flink
+rootLogger.level = INFO
+rootLogger.appenderRef.console.ref = ConsoleAppender
+rootLogger.appenderRef.rolling.ref = RollingFileAppender
+
+# Uncomment this if you want to _only_ change Flink's logging
+#logger.flink.name = org.apache.flink
+#logger.flink.level = INFO
+
+# The following lines keep the log level of common libraries/connectors on
+# log level INFO. The root logger does not override this. You have to manually
+# change the log levels here.
+logger.akka.name = akka
+logger.akka.level = INFO
+logger.kafka.name= org.apache.kafka
+logger.kafka.level = INFO
+logger.hadoop.name = org.apache.hadoop
+logger.hadoop.level = INFO
+logger.zookeeper.name = org.apache.zookeeper
+logger.zookeeper.level = INFO
+logger.shaded_zookeeper.name = org.apache.flink.shaded.zookeeper3
+logger.shaded_zookeeper.level = INFO
+
+# Log all infos to the console
+appender.console.name = ConsoleAppender
+appender.console.type = CONSOLE
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
+
+# Log all infos in the given rolling file
+appender.rolling.name = RollingFileAppender
+appender.rolling.type = RollingFile
+appender.rolling.append = true
+appender.rolling.fileName = ${sys:log.file}
+appender.rolling.filePattern = ${sys:log.file}.%i
+appender.rolling.layout.type = PatternLayout
+appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
+appender.rolling.policies.type = Policies
+appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
+appender.rolling.policies.size.size=100MB
+appender.rolling.policies.startup.type = OnStartupTriggeringPolicy
+appender.rolling.strategy.type = DefaultRolloverStrategy
+appender.rolling.strategy.max = ${env:MAX_LOG_FILE_NUMBER:-10}
+
+# Suppress the irrelevant (wrong) warnings from the Netty channel handler
+logger.netty.name = org.jboss.netty.channel.DefaultChannelPipeline
+logger.netty.level = OFF
+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>

+ 90 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-properties.xml

@@ -0,0 +1,90 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="false" supports_adding_forbidden="true">
+  <property>
+    <name>content</name>
+    <description>Flink-log4j-Properties</description>
+    <value>
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+# Allows this configuration to be modified at runtime. The file will be checked every 30 seconds.
+monitorInterval=30
+
+# This affects logging for both user code and Flink
+rootLogger.level = INFO
+rootLogger.appenderRef.file.ref = MainAppender
+
+# Uncomment this if you want to _only_ change Flink's logging
+#logger.flink.name = org.apache.flink
+#logger.flink.level = INFO
+
+# The following lines keep the log level of common libraries/connectors on
+# log level INFO. The root logger does not override this. You have to manually
+# change the log levels here.
+logger.akka.name = akka
+logger.akka.level = INFO
+logger.kafka.name= org.apache.kafka
+logger.kafka.level = INFO
+logger.hadoop.name = org.apache.hadoop
+logger.hadoop.level = INFO
+logger.zookeeper.name = org.apache.zookeeper
+logger.zookeeper.level = INFO
+logger.shaded_zookeeper.name = org.apache.flink.shaded.zookeeper3
+logger.shaded_zookeeper.level = INFO
+
+# Log all infos in the given file
+appender.main.name = MainAppender
+appender.main.type = RollingFile
+appender.main.append = true
+appender.main.fileName = ${sys:log.file}
+appender.main.filePattern = ${sys:log.file}.%i
+appender.main.layout.type = PatternLayout
+appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
+appender.main.policies.type = Policies
+appender.main.policies.size.type = SizeBasedTriggeringPolicy
+appender.main.policies.size.size = 100MB
+appender.main.policies.startup.type = OnStartupTriggeringPolicy
+appender.main.strategy.type = DefaultRolloverStrategy
+appender.main.strategy.max = ${env:MAX_LOG_FILE_NUMBER:-10}
+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>

+ 75 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/configuration/flink-log4j-session-properties.xml

@@ -0,0 +1,75 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="false" supports_adding_forbidden="true">
+  <property>
+    <name>content</name>
+    <description>Flink-log4j-session-Properties</description>
+    <value>
+################################################################################
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+# Allows this configuration to be modified at runtime. The file will be checked every 30 seconds.
+monitorInterval=30
+
+rootLogger.level = INFO
+rootLogger.appenderRef.console.ref = ConsoleAppender
+
+appender.console.name = ConsoleAppender
+appender.console.type = CONSOLE
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
+
+# Suppress the irrelevant (wrong) warnings from the Netty channel handler
+logger.netty.name = org.jboss.netty.channel.DefaultChannelPipeline
+logger.netty.level = OFF
+logger.zookeeper.name = org.apache.zookeeper
+logger.zookeeper.level = WARN
+logger.shaded_zookeeper.name = org.apache.flink.shaded.zookeeper3
+logger.shaded_zookeeper.level = WARN
+logger.curator.name = org.apache.flink.shaded.org.apache.curator.framework
+logger.curator.level = WARN
+logger.runtimeutils.name= org.apache.flink.runtime.util.ZooKeeperUtils
+logger.runtimeutils.level = WARN
+logger.runtimeleader.name = org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver
+logger.runtimeleader.level = WARN
+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>

+ 50 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/kerberos.json

@@ -0,0 +1,50 @@
+{
+  "services": [
+    {
+      "name": "FLINK",
+      "identities": [
+        {
+          "name": "flink_service_identity",
+          "principal": {
+            "value": "${flink-env/flink_user}${principal_suffix}@${realm}",
+            "type": "user",
+            "local_username": "${flink-env/flink_user}",
+            "configuration": "flink-conf/security.kerberos.login.principal"
+          },
+          "keytab": {
+            "file": "${keytab_dir}/flink.headless.keytab",
+            "configuration": "flink-conf/security.kerberos.login.keytab",
+            "owner": {
+              "name": "${flink-env/flink_user}",
+              "access": "r"
+            },
+            "group": {
+              "name": "${cluster-env/user_group}",
+              "access": "r"
+            }
+          }
+        }
+      ],
+      "components": [
+        {
+          "name": "FLINK_HISTORYSERVER",
+          "identities": [
+            {
+              "name": "flink_historyserver_identity",
+              "reference": "/FLINK/flink_service_identity"
+            }
+          ]
+        },
+        {
+          "name": "FLINK_CLIENT",
+          "identities": [
+            {
+              "name": "flink_client_identity",
+              "reference": "/FLINK/flink_service_identity"
+            }
+          ]
+        }
+      ]
+    }
+  ]
+}
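
Values such as ${flink-env/flink_user}${principal_suffix}@${realm} in this descriptor are expanded by Ambari's Kerberos descriptor processing against cluster configuration. A rough stand-in for that substitution (not Ambari's actual resolver; the context values below are examples):

import re

# Toy expansion of ${config-type/property} and ${variable} references
# from kerberos.json. Context values are illustrative only.
context = {
    "flink-env/flink_user": "flink",
    "principal_suffix": "-mycluster",
    "realm": "EXAMPLE.COM",
    "keytab_dir": "/etc/security/keytabs",
}

def resolve(value, ctx):
    # leave unknown references untouched
    return re.sub(r"\$\{([^}]+)\}", lambda m: ctx.get(m.group(1), m.group(0)), value)

print(resolve("${flink-env/flink_user}${principal_suffix}@${realm}", context))
# -> flink-mycluster@EXAMPLE.COM
print(resolve("${keytab_dir}/flink.headless.keytab", context))
# -> /etc/security/keytabs/flink.headless.keytab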

+ 144 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/metainfo.xml

@@ -0,0 +1,144 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>FLINK</name>
+      <displayName>Flink</displayName>
+      <comment>Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams</comment>
+      <version>1.15.0-1</version>
+      <components>
+        <component>
+          <name>FLINK_HISTORYSERVER</name>
+          <displayName>Flink History Server</displayName>
+          <category>MASTER</category>
+          <cardinality>1+</cardinality>
+          <versionAdvertised>true</versionAdvertised>
+          <dependencies>
+            <dependency>
+              <name>HDFS/HDFS_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+            <dependency>
+              <name>MAPREDUCE2/MAPREDUCE2_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+            <dependency>
+              <name>YARN/YARN_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+          </dependencies>
+          <commandScript>
+            <script>scripts/flink_history_server.py</script>
+            <scriptType>PYTHON</scriptType>
+            <timeout>600</timeout>
+          </commandScript>
+        </component>
+        <component>
+          <name>FLINK_CLIENT</name>
+          <displayName>Flink Client</displayName>
+          <cardinality>1+</cardinality>
+          <versionAdvertised>true</versionAdvertised>
+          <category>CLIENT</category>
+          <commandScript>
+            <script>scripts/flink_client.py</script>
+            <scriptType>PYTHON</scriptType>
+            <timeout>1200</timeout>
+          </commandScript>
+          <configFiles>
+            <configFile>
+              <type>xml</type>
+              <fileName>flink-conf.xml</fileName>
+              <dictionaryName>flink-conf</dictionaryName>
+            </configFile>
+          </configFiles>
+          <dependencies>
+            <dependency>
+              <name>HDFS/HDFS_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+            <dependency>
+              <name>YARN/YARN_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+            <dependency>
+              <name>MAPREDUCE2/MAPREDUCE2_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+          </dependencies>
+        </component>
+      </components>
+      <osSpecifics>
+        <osSpecific>
+          <osFamily>any</osFamily>
+          <packages>
+            <package>
+              <name>flink</name>
+            </package>
+          </packages>
+        </osSpecific>
+      </osSpecifics>
+
+      <quickLinksConfigurations>
+        <quickLinksConfiguration>
+          <fileName>quicklinks.json</fileName>
+          <default>true</default>
+        </quickLinksConfiguration>
+      </quickLinksConfigurations>
+
+      <requiredServices>
+        <service>YARN</service>
+      </requiredServices>
+
+      <configuration-dependencies>
+        <config-type>flink-conf</config-type>
+        <config-type>flink-env</config-type>
+        <config-type>flink-log4j-cli-properties</config-type>
+        <config-type>flink-log4j-console-properties</config-type>
+        <config-type>flink-log4j-properties</config-type>
+        <config-type>flink-log4j-session-properties</config-type>
+      </configuration-dependencies>
+
+      <commandScript>
+        <script>scripts/service_check.py</script>
+        <scriptType>PYTHON</scriptType>
+        <timeout>300</timeout>
+      </commandScript>
+
+    </service>
+  </services>
+</metainfo>
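
Once this metainfo is deployed on the Ambari server, the service can be added to a running cluster through the REST API. A sketch, assuming a cluster named "c1", default admin credentials, and the standard endpoint; all of these are placeholders:

# Sketch: register the FLINK service on cluster "c1" via Ambari's REST API.
# Host, credentials, and cluster name are placeholders.
import requests

AMBARI = "http://ambari-host:8080/api/v1"
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}

resp = requests.post(AMBARI + "/clusters/c1/services",
                     json={"ServiceInfo": {"service_name": "FLINK"}},
                     auth=AUTH, headers=HEADERS)
resp.raise_for_status()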

+ 47 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/flink_client.py

@@ -0,0 +1,47 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+
+from resource_management.core.exceptions import ClientComponentHasNoStatus
+from resource_management.core.logger import Logger
+from resource_management.libraries.script import Script
+
+from setup_flink import setup_flink
+
+class FlinkClient(Script):
+
+  def pre_install(self, env):
+    import params
+    env.set_params(params)
+
+  def configure(self, env, config_dir=None, upgrade_type=None):
+    import params
+    env.set_params(params)
+    setup_flink(env,"client",upgrade_type=upgrade_type, action = 'config')
+
+  def install(self, env):
+    import params
+    self.install_packages(env)
+    self.configure(env, config_dir=params.flink_config_dir)
+
+  def status(self, env):
+    raise ClientComponentHasNoStatus()
+
+if __name__ == "__main__":
+  FlinkClient().execute()

+ 88 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/flink_history_server.py

@@ -0,0 +1,88 @@
+#!/usr/bin/python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+import os
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.check_process_status import check_process_status
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.constants import StackFeature
+from resource_management.core.logger import Logger
+from resource_management.core import shell
+from setup_flink import setup_flink
+from flink_service import flink_service
+
+class FlinkHistoryServer(Script):
+
+  def install(self, env):
+    import params
+    env.set_params(params)
+    
+    self.install_packages(env)
+    
+  def configure(self, env, upgrade_type=None, config_dir=None):
+    import params
+    env.set_params(params)
+    
+    setup_flink(env, 'historyserver', upgrade_type=upgrade_type, action='config')
+    
+  def start(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    
+    self.configure(env)
+    flink_service('historyserver', upgrade_type=upgrade_type, action='start')
+
+  def stop(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    
+    flink_service('historyserver', upgrade_type=upgrade_type, action='stop')
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+
+    check_process_status(status_params.flink_history_server_pid_file)
+    
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    import params
+
+    env.set_params(params)
+    if params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version):
+      Logger.info("Executing Flink History Server Stack Upgrade pre-restart")
+      stack_select.select_packages(params.version)
+
+  def get_log_folder(self):
+    import params
+    return params.flink_log_dir
+  
+  def get_user(self):
+    import params
+    return params.flink_user
+
+  def get_pid_files(self):
+    import status_params
+    return [status_params.flink_history_server_pid_file]
+
+if __name__ == "__main__":
+  FlinkHistoryServer().execute()
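
status() above delegates to check_process_status, which reads the pid file declared in status_params.py and probes the process. A simplified stand-in for what that check does (the real resource_management version additionally handles sudo and stale pid files):

import os

class ComponentIsNotRunning(Exception):
    """Raised when the monitored process is not alive."""

# Simplified stand-in for resource_management's check_process_status.
def check_process_status_sketch(pid_file):
    if not pid_file or not os.path.isfile(pid_file):
        raise ComponentIsNotRunning()
    with open(pid_file) as f:
        pid = int(f.read().strip())
    try:
        os.kill(pid, 0)  # signal 0: existence check, sends nothing
    except OSError:
        raise ComponentIsNotRunning()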

+ 77 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/flink_service.py

@@ -0,0 +1,77 @@
+#!/usr/bin/env python
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+import os
+import shutil
+import glob
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions.copy_tarball import copy_to_hdfs, get_tarball_paths
+from resource_management.libraries.functions import format
+from resource_management.core.resources.system import File, Execute
+from resource_management.libraries.functions.version import format_stack_version
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.check_process_status import check_process_status
+from resource_management.libraries.functions.constants import StackFeature
+from resource_management.libraries.functions.show_logs import show_logs
+from resource_management.core.shell import as_sudo
+from resource_management.core.exceptions import ComponentIsNotRunning
+from resource_management.core.logger import Logger
+
+def flink_service(name, upgrade_type=None, action=None):
+  import params
+
+  if action == 'start':
+    if name == 'historyserver':
+      # create flink history directory
+      params.HdfsResource(params.jobmanager_archive_fs_dir,
+                          type="directory",
+                          action="create_on_execute",
+                          owner=params.flink_user,
+                          group=params.user_group,
+                          mode=0777,
+                          recursive_chmod=True
+                          )
+      params.HdfsResource(None, action="execute")
+
+      historyserver_no_op_test = as_sudo(["test", "-f", params.flink_history_server_pid_file]) + " && " + as_sudo(["pgrep", "-F", params.flink_history_server_pid_file])
+      try:
+        Execute(params.flink_history_server_start,
+                user=params.flink_user,
+                environment={'JAVA_HOME': params.java_home},
+                not_if=historyserver_no_op_test)
+      except:
+        show_logs(params.flink_log_dir, user=params.flink_user)
+        raise
+
+  elif action == 'stop':
+    if name == 'historyserver':
+      try:
+        Execute(format('{flink_history_server_stop}'),
+                user=params.flink_user,
+                environment={'JAVA_HOME': params.java_home}
+        )
+      except:
+        show_logs(params.flink_log_dir, user=params.flink_user)
+        raise
+
+      File(params.flink_history_server_pid_file,
+        action="delete"
+      )
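
The historyserver_no_op_test above composes (under as_sudo) a guard of the form "test -f &lt;pid_file&gt; && pgrep -F &lt;pid_file&gt;"; Execute skips the start command whenever that guard exits 0, i.e. the pid file exists and names a live process. A standalone sketch of the same idempotency check, with an illustrative pid path and without the sudo wrapping:

import subprocess

# Sketch of the not_if guard: start only if no live history server.
pid_file = "/var/run/flink/flink-flink-historyserver.pid"  # illustrative
guard = "test -f {0} && pgrep -F {0}".format(pid_file)
already_running = subprocess.call(["bash", "-c", guard]) == 0
print("skip start" if already_running else "start history server")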

+ 115 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/params.py

@@ -0,0 +1,115 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+import status_params
+
+from resource_management.libraries.functions import format
+from resource_management.libraries.resources import HdfsResource
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.version import format_stack_version
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions import get_kinit_path
+from resource_management.libraries.functions.get_not_managed_resources import get_not_managed_resources
+from resource_management.libraries.script.script import Script
+
+# server configurations
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+
+stack_name = default("/clusterLevelParams/stack_name", None)
+stack_root = Script.get_stack_root()
+
+# This is expected to be of the form #.#.#.#
+stack_version_unformatted = config['clusterLevelParams']['stack_version']
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+
+# New Cluster Stack Version that is defined during the RESTART of a Rolling Upgrade
+version = default("/commandParams/version", None)
+java_home = config['ambariLevelParams']['java_home']
+
+# default hadoop parameters
+hadoop_home = stack_select.get_hadoop_dir("home")
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+dfs_type = default("/clusterLevelParams/dfs_type", "")
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user']
+hdfs_principal_name = config['configurations']['hadoop-env']['hdfs_principal_name']
+hdfs_user_keytab = config['configurations']['hadoop-env']['hdfs_user_keytab']
+default_fs = config['configurations']['core-site']['fs.defaultFS']
+hdfs_site = config['configurations']['hdfs-site']
+hdfs_resource_ignore_file = "/var/lib/ambari-agent/data/.hdfs_resource_ignore"
+
+flink_etc_dir = "/etc/flink"
+flink_config_dir = "/etc/flink/conf"
+flink_dir = "/usr/lib/flink"
+flink_bin_dir = "/usr/lib/flink/bin"
+flink_log_dir = config['configurations']['flink-env']['flink_log_dir']
+flink_pid_dir = config['configurations']['flink-env']['flink_pid_dir']
+
+kinit_path_local = get_kinit_path(default('/configurations/kerberos-env/executable_search_paths', None))
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+smokeuser = config['configurations']['cluster-env']['smokeuser']
+smokeuser_principal = config['configurations']['cluster-env']['smokeuser_principal_name']
+smoke_user_keytab = config['configurations']['cluster-env']['smokeuser_keytab']
+
+flink_user = config['configurations']['flink-env']['flink_user']
+user_group = config['configurations']['cluster-env']['user_group']
+flink_conf_template = config['configurations']['flink-conf']['content']
+flink_group = config['configurations']['flink-env']['flink_group']
+flink_hdfs_user_dir = format("/user/{flink_user}")
+
+flink_log4j_cli_properties = config['configurations']['flink-log4j-cli-properties']['content']
+flink_log4j_console_properties = config['configurations']['flink-log4j-console-properties']['content']
+flink_log4j_properties = config['configurations']['flink-log4j-properties']['content']
+flink_log4j_session_properties = config['configurations']['flink-log4j-session-properties']['content']
+
+jobmanager_archive_fs_dir = config['configurations']['flink-conf']['jobmanager.archive.fs.dir']
+historyserver_archive_fs_dir = config['configurations']['flink-conf']['historyserver.archive.fs.dir']
+historyserver_web_port = config['configurations']['flink-conf']['historyserver.web.port']
+historyserver_archive_fs_refresh_interval = config['configurations']['flink-conf']['historyserver.archive.fs.refresh-interval']
+
+flink_history_server_start = format("export HADOOP_CLASSPATH=`hadoop classpath`;{flink_dir}/bin/historyserver.sh start")
+flink_history_server_stop = format("{flink_dir}/bin/historyserver.sh stop")
+flink_history_server_pid_file = status_params.flink_history_server_pid_file
+
+security_kerberos_login_principal = config['configurations']['flink-conf']['security.kerberos.login.principal']
+security_kerberos_login_keytab = config['configurations']['flink-conf']['security.kerberos.login.keytab']
+
+import functools
+#create partial functions with common arguments for every HdfsResource call
+#to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
+HdfsResource = functools.partial(
+  HdfsResource,
+  user=hdfs_user,
+  hdfs_resource_ignore_file = hdfs_resource_ignore_file,
+  security_enabled = security_enabled,
+  keytab = hdfs_user_keytab,
+  kinit_path_local = kinit_path_local,
+  hadoop_bin_dir = hadoop_bin_dir,
+  hadoop_conf_dir = hadoop_conf_dir,
+  principal_name = hdfs_principal_name,
+  hdfs_site = hdfs_site,
+  default_fs = default_fs,
+  immutable_paths = get_not_managed_resources(),
+  dfs_type = dfs_type
+)
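
The functools.partial at the end pre-binds every cluster-wide argument, so call sites in flink_service.py and setup_flink.py only supply what varies per resource. The same pattern in isolation, on a stand-in function:

import functools

# Stand-in illustrating the params.HdfsResource pattern: fix the common
# keyword arguments once, pass only per-call arguments at the call site.
def hdfs_resource(path, user=None, action=None, type=None):
    print("%s %s %s %s" % (path, user, action, type))

HdfsResource = functools.partial(hdfs_resource, user="hdfs")

HdfsResource("/user/flink", type="directory", action="create_on_execute")
# -> /user/flink hdfs create_on_execute directory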

+ 46 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/service_check.py

@@ -0,0 +1,46 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management.libraries.functions.format import format
+from resource_management.core.resources import Execute
+from resource_management.libraries.script import Script
+from resource_management.core.resources.system import Directory
+
+class FlinkServiceCheck(Script):
+  def service_check(self, env):
+    import params
+    env.set_params(params)
+
+    if params.security_enabled:
+      flink_kinit_cmd = format("{kinit_path_local} -kt {smoke_user_keytab} {smokeuser_principal}; ")
+      Execute(flink_kinit_cmd, user=params.smokeuser)
+
+    job_cmd_opts = "-m yarn-cluster -yD classloader.check-leaked-classloader=false "
+    run_flink_wordcount_job = format("export HADOOP_CLASSPATH=`hadoop classpath`;{flink_bin_dir}/flink run {job_cmd_opts} {flink_bin_dir}/../examples/batch/WordCount.jar")
+
+    Execute(run_flink_wordcount_job,
+      logoutput=True,
+      environment={'JAVA_HOME':params.java_home,'HADOOP_CONF_DIR': params.hadoop_conf_dir},
+      user=params.smokeuser)
+            
+if __name__ == "__main__":
+  FlinkServiceCheck().execute()
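
With the defaults from params.py (flink_bin_dir = /usr/lib/flink/bin), the smoke test therefore kinits on secure clusters and submits the bundled WordCount example to YARN; the rendered command is roughly:

export HADOOP_CLASSPATH=`hadoop classpath`; /usr/lib/flink/bin/flink run -m yarn-cluster -yD classloader.check-leaked-classloader=false /usr/lib/flink/bin/../examples/batch/WordCount.jar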

+ 94 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/setup_flink.py

@@ -0,0 +1,94 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+# Python Imports
+import os
+
+# Local Imports
+from resource_management.core.resources.system import Directory, File, Link
+from resource_management.core.source import InlineTemplate
+from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl
+
+@OsFamilyFuncImpl(os_family=OsFamilyImpl.DEFAULT)
+def setup_flink(env, type, upgrade_type=None, action=None):
+  import params
+
+  Directory(params.flink_pid_dir,
+            owner=params.flink_user,
+            group=params.user_group,
+            mode=0775,
+            create_parents = True
+  )
+
+  Directory(params.flink_etc_dir, mode=0755)
+  Directory(params.flink_config_dir,
+            owner = params.flink_user,
+            group = params.user_group,
+            create_parents = True)
+
+  Directory(params.flink_log_dir, mode=0767)
+  Link(params.flink_dir + '/log', to=params.flink_log_dir)
+
+  if type == 'historyserver' and action == 'config':
+    params.HdfsResource(params.flink_hdfs_user_dir,
+                     type="directory",
+                     action="create_on_execute",
+                     owner=params.flink_user,
+                     mode=0775
+    )
+
+    params.HdfsResource(None, action="execute")
+
+  flink_conf_file_path = os.path.join(params.flink_config_dir, "flink-conf.yaml")
+  File(flink_conf_file_path,
+       owner=params.flink_user,
+       group = params.flink_group,
+       content=InlineTemplate(params.flink_conf_template),
+       mode=0755)
+
+  #create log4j.properties in /etc/conf dir
+  File(os.path.join(params.flink_config_dir, 'log4j.properties'),
+       owner=params.flink_user,
+       group=params.flink_group,
+       content=params.flink_log4j_properties,
+       mode=0644,
+  )
+
+  #create log4j-cli.properties in /etc/conf dir
+  File(os.path.join(params.flink_config_dir, 'log4j-cli.properties'),
+       owner=params.flink_user,
+       group=params.flink_group,
+       content=params.flink_log4j_cli_properties,
+       mode=0644,
+  )
+
+  #create log4j-console.properties in /etc/conf dir
+  File(os.path.join(params.flink_config_dir, 'log4j-console.properties'),
+       owner=params.flink_user,
+       group=params.flink_group,
+       content=params.flink_log4j_console_properties,
+       mode=0644,
+  )
+
+  #create log4j-session.properties in /etc/conf dir
+  File(os.path.join(params.flink_config_dir, 'log4j-session.properties'),
+       owner=params.flink_user,
+       group=params.flink_group,
+       content=params.flink_log4j_session_properties,
+       mode=0644,
+  )

+ 33 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/package/scripts/status_params.py

@@ -0,0 +1,33 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.default import default
+
+config = Script.get_config()
+
+flink_user = config['configurations']['flink-env']['flink_user']
+flink_group = config['configurations']['flink-env']['flink_group']
+user_group = config['configurations']['cluster-env']['user_group']
+
+flink_pid_dir = config['configurations']['flink-env']['flink_pid_dir']
+flink_history_server_pid_file = format("{flink_pid_dir}/flink-{flink_user}-historyserver.pid")
+stack_name = default("/clusterLevelParams/stack_name", None)

+ 28 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/quicklinks/quicklinks.json

@@ -0,0 +1,28 @@
+{
+  "name": "default",
+  "description": "default quick links configuration",
+  "configuration": {
+    "protocol":
+    {
+      "type":"HTTP_ONLY"
+    },
+
+    "links": [
+      {
+        "name": "flink_history_server_ui",
+        "label": "Flink History Server UI",
+        "component_name": "FLINK_HISTORYSERVER",
+        "requires_user_name": "false",
+        "url": "%@://%@:%@",
+        "port":{
+          "http_property": "historyserver.web.port",
+          "http_default_port": "8082",
+          "https_property": "historyserver.web.port",
+          "https_default_port": "8082",
+          "regex": "^(\\d+)$",
+          "site": "flink-conf"
+        }
+      }
+    ]
+  }
+}
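
At render time Ambari fills the %@ placeholders in order (protocol, host, port), so with the default historyserver.web.port the quick link resolves as sketched below; the host name is illustrative:

# Sketch of the quicklink expansion for "%@://%@:%@".
protocol, host, port = "http", "historyserver.example.com", "8082"
print("%s://%s:%s" % (protocol, host, port))  # http://historyserver.example.com:8082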

+ 7 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/FLINK/role_command_order.json

@@ -0,0 +1,7 @@
+{
+  "general_deps" : {
+    "_comment" : "dependencies for FLINK",
+    "FLINK_HISTORYSERVER-START": ["NAMENODE-START", "DATANODE-START"],
+    "FLINK_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START", "FLINK_HISTORYSERVER-START"]
+  }
+}

+ 127 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/alerts.json

@@ -0,0 +1,127 @@
+{
+  "HBASE": {
+    "service": [
+      {
+        "name": "hbase_regionserver_process_percent",
+        "label": "Percent RegionServers Available",
+        "description": "This service-level alert is triggered if the configured percentage of RegionServer processes cannot be determined to be up and listening on the network for the configured warning and critical thresholds. It aggregates the results of RegionServer process down checks.",
+        "interval": 1,
+        "scope": "SERVICE",
+        "enabled": true,
+        "source": {
+          "type": "AGGREGATE",
+          "alert_name": "hbase_regionserver_process",
+          "reporting": {
+            "ok": {
+              "text": "affected: [{1}], total: [{0}]"
+            },
+            "warning": {
+              "text": "affected: [{1}], total: [{0}]",
+              "value": 10
+            },
+            "critical": {
+              "text": "affected: [{1}], total: [{0}]",
+              "value": 30
+            },
+            "units" : "%",
+            "type": "PERCENT"
+          }
+        }
+      }    
+    ],
+    "HBASE_MASTER": [
+      {
+        "name": "hbase_master_process",
+        "label": "HBase Master Process",
+        "description": "This alert is triggered if the HBase master processes cannot be confirmed to be up and listening on the network for the configured critical threshold, given in seconds.",
+        "interval": 1,
+        "scope": "ANY",
+        "source": {
+          "type": "PORT",
+          "uri": "{{hbase-site/hbase.master.port}}",
+          "default_port": 60000,
+          "reporting": {
+            "ok": {
+              "text": "TCP OK - {0:.3f}s response on port {1}"
+            },
+            "warning": {
+              "text": "TCP OK - {0:.3f}s response on port {1}",
+              "value": 1.5
+            },
+            "critical": {
+              "text": "Connection failed: {0} to {1}:{2}",
+              "value": 5.0
+            }
+          }
+        }
+      },
+      {
+        "name": "hbase_master_cpu",
+        "label": "HBase Master CPU Utilization",
+        "description": "This host-level alert is triggered if CPU utilization of the HBase Master exceeds certain warning and critical thresholds. It checks the HBase Master JMX Servlet for the SystemCPULoad property. The threshold values are in percent.",
+        "interval": 5,
+        "scope": "ANY",
+        "enabled": true,
+        "source": {
+          "type": "METRIC",
+          "uri": {
+            "http": "{{hbase-site/hbase.master.info.port}}",
+            "default_port": 60010,
+            "connection_timeout": 5.0,
+            "kerberos_keytab": "{{cluster-env/smokeuser_keytab}}",
+            "kerberos_principal": "{{cluster-env/smokeuser_principal_name}}"
+          },
+          "reporting": {
+            "ok": {
+              "text": "{1} CPU, load {0:.1%}"
+            },
+            "warning": {
+              "text": "{1} CPU, load {0:.1%}",
+              "value": 200
+            },
+            "critical": {
+              "text": "{1} CPU, load {0:.1%}",
+              "value": 250
+            },
+            "units" : "%",
+            "type": "PERCENT"
+          },
+          "jmx": {
+            "property_list": [
+              "java.lang:type=OperatingSystem/SystemCpuLoad",
+              "java.lang:type=OperatingSystem/AvailableProcessors"
+            ],
+            "value": "{0} * 100"
+          }
+        }
+      }
+    ],
+    "HBASE_REGIONSERVER": [
+      {
+        "name": "hbase_regionserver_process",
+        "label": "HBase RegionServer Process",
+        "description": "This host-level alert is triggered if the RegionServer processes cannot be confirmed to be up and listening on the network for the configured critical threshold, given in seconds.",
+        "interval": 1,
+        "scope": "HOST",
+        "source": {
+          "type": "PORT",
+          "uri": "{{hbase-site/hbase.regionserver.info.port}}",
+          "default_port": 60030,
+          "reporting": {
+            "ok": {
+              "text": "TCP OK - {0:.3f}s response on port {1}"
+            },
+            "warning": {
+              "text": "TCP OK - {0:.3f}s response on port {1}",
+              "value": 1.5
+            },
+            "critical": {
+              "text": "Connection failed: {0} to {1}:{2}",
+              "value": 5.0
+            }
+          }
+        }
+      }
+    ]
+  }
+}

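The AGGREGATE source above rolls the per-host hbase_regionserver_process results up into one service-level state via the PERCENT reporting block. As a rough illustration only (this is not code from the patch; Ambari's real evaluator lives in its alert framework), the warning/critical values behave like this sketch:

# Hypothetical sketch of PERCENT-type aggregate reporting: warning (10)
# and critical (30) are compared against affected/total * 100.
def aggregate_alert_state(affected, total, warning=10.0, critical=30.0):
    if total == 0:
        return "OK"
    pct = 100.0 * affected / total
    if pct >= critical:
        return "CRITICAL"
    if pct >= warning:
        return "WARNING"
    return "OK"

# e.g. 2 of 10 RegionServers down -> 20% -> WARNING; 3 of 10 -> 30% -> CRITICAL
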
+ 311 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-env.xml

@@ -0,0 +1,311 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_adding_forbidden="true">
+  <!-- Inherited from HBase in HDP 2.0.6. -->
+  <property>
+    <name>hbase_log_dir</name>
+    <value>/var/log/hbase</value>
+    <display-name>HBase Log Dir Prefix</display-name>
+    <description>Log Directories for HBase.</description>
+    <value-attributes>
+      <type>directory</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_pid_dir</name>
+    <value>/var/run/hbase</value>
+    <display-name>HBase PID Dir</display-name>
+    <description>Pid Directory for HBase.</description>
+    <value-attributes>
+      <type>directory</type>
+      <overridable>false</overridable>
+      <editable-only-at-install>true</editable-only-at-install>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_regionserver_heapsize</name>
+    <value>4096</value>
+    <description>Maximum amount of memory each HBase RegionServer can use.</description>
+    <display-name>HBase RegionServer Maximum Memory</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>0</minimum>
+      <maximum>6554</maximum>
+      <unit>MB</unit>
+      <increment-step>256</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_regionserver_xmn_max</name>
+    <value>4000</value>
+    <description>
+Sets the upper bound on HBase RegionServers' young generation size.
+This value is used in case the young generation size (-Xmn) calculated based on the max heapsize (hbase_regionserver_heapsize)
+and the -Xmn ratio (hbase_regionserver_xmn_ratio) exceeds this value.
+    </description>
+    <display-name>RegionServers maximum value for -Xmn</display-name>
+    <value-attributes>
+      <type>int</type>
+      <unit>MB</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_regionserver_xmn_ratio</name>
+    <value>0.2</value>
+    <display-name>RegionServers -Xmn in -Xmx ratio</display-name>
+    <description>Percentage of max heap size (-Xmx) which used for young generation heap (-Xmn).</description>
+    <value-attributes>
+      <type>float</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_master_heapsize</name>
+    <value>4096</value>
+    <description>Maximum amount of memory each HBase Master can use.</description>
+    <display-name>HBase Master Maximum Memory</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>0</minimum>
+      <maximum>16384</maximum>
+      <unit>MB</unit>
+      <increment-step>256</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_parallel_gc_threads</name>
+    <value>8</value>
+    <description>The number of JVM parallel garbage collection threads (e.g. -XX:ParallelGCThreads)</description>
+    <display-name>HBase Parallel GC Threads</display-name>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_user</name>
+    <display-name>HBase User</display-name>
+    <value>hbase</value>
+    <property-type>USER</property-type>
+    <description>HBase User Name.</description>
+    <value-attributes>
+      <type>user</type>
+      <overridable>false</overridable>
+      <user-groups>
+        <property>
+          <type>cluster-env</type>
+          <name>user_group</name>
+        </property>
+      </user-groups>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_user_nofile_limit</name>
+    <value>32000</value>
+    <description>Max open files limit setting for HBASE user.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_user_nproc_limit</name>
+    <value>16000</value>
+    <description>Max number of processes limit setting for HBASE user.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_java_io_tmpdir</name>
+    <display-name>HBase Java IO Tmpdir</display-name>
+    <value>/tmp</value>
+    <description>Used in hbase-env.sh as HBASE_OPTS=-Djava.io.tmpdir=java_io_tmpdir</description>
+    <value-attributes>
+      <type>directory</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_principal_name</name>
+    <description>HBase principal name</description>
+    <property-type>KERBEROS_PRINCIPAL</property-type>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_user_keytab</name>
+    <description>HBase keytab path</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase_regionserver_shutdown_timeout</name>
+    <value>30</value>
+    <display-name>HBase RegionServer shutdown timeout</display-name>
+    <description>
+After this number of seconds waiting for a graceful stop of the HBase RegionServer, it will be forced to exit with SIGKILL.
+The timeout exists because of a known bug where the HBase RegionServer occasionally hangs forever on stop if NameNode safemode is on.
+    </description>
+    <value-attributes>
+      <type>int</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <!-- hbase-env.sh -->
+  <property>
+    <name>content</name>
+    <display-name>hbase-env template</display-name>
+    <description>This is the jinja template for hbase-env.sh file</description>
+    <value>
+# Set environment variables here.
+
+# The java implementation to use. Java 1.6 required.
+export JAVA_HOME={{java64_home}}
+
+# HBase Configuration directory
+export HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}
+
+# Extra Java CLASSPATH elements. Optional.
+export HBASE_CLASSPATH=${HBASE_CLASSPATH}
+
+
+# The maximum amount of heap to use, in MB. Default is 1000.
+# export HBASE_HEAPSIZE=1000
+
+# Extra Java runtime options.
+# Below are what we set by default. May only work with SUN JVM.
+# For more on why as well as other possible settings,
+# see http://wiki.apache.org/hadoop/PerformanceTuning
+export SERVER_GC_OPTS="-verbose:gc -XX:-PrintGCCause -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{log_dir}}/gc.log-`date +'%Y%m%d%H%M'`"
+# Uncomment below to enable java garbage collection logging.
+# export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
+
+# Uncomment and adjust to enable JMX exporting
+# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
+# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
+#
+# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+# If you want to configure BucketCache, specify '-XX:MaxDirectMemorySize=' with proper direct memory size
+# export HBASE_THRIFT_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
+# export HBASE_ZOOKEEPER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
+
+# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
+export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers
+
+# Extra ssh options. Empty by default.
+# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
+
+# Where log files are stored. $HBASE_HOME/logs by default.
+export HBASE_LOG_DIR={{log_dir}}
+
+# A string representing this instance of hbase. $USER by default.
+# export HBASE_IDENT_STRING=$USER
+
+# The scheduling priority for daemon processes. See 'man nice'.
+# export HBASE_NICENESS=10
+
+# The directory where pid files are stored. /tmp by default.
+export HBASE_PID_DIR={{pid_dir}}
+
+# Seconds to sleep between slave commands. Unset by default. This
+# can be useful in large clusters, where, e.g., slave rsyncs can
+# otherwise arrive faster than the master can service them.
+# export HBASE_SLAVE_SLEEP=0.1
+
+# Tell HBase whether it should manage its own instance of ZooKeeper or not.
+export HBASE_MANAGES_ZK=false
+
+{% if java_version &lt; 8 %}
+JDK_DEPENDED_OPTS="-XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
+{% endif %}
+
+# Set common JVM configuration
+export HBASE_OPTS="$HBASE_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:-ResizePLAB -XX:ErrorFile={{log_dir}}/hs_err_pid%p.log -Djava.io.tmpdir={{java_io_tmpdir}}"
+export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{master_heapsize}} -XX:ParallelGCThreads={{parallel_gc_threads}} $JDK_DEPENDED_OPTS "
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}} -XX:ParallelGCThreads={{parallel_gc_threads}} $JDK_DEPENDED_OPTS"
+
+# Add Kerberos authentication-related configuration
+{% if security_enabled %}
+export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config={{client_jaas_config_file}} {{zk_security_opts}}"
+export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Djava.security.auth.login.config={{master_jaas_config_file}} -Djavax.security.auth.useSubjectCredsOnly=false"
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Djava.security.auth.login.config={{regionserver_jaas_config_file}} -Djavax.security.auth.useSubjectCredsOnly=false"
+{% endif %}
+
+# HBase off-heap MaxDirectMemorySize
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS {% if hbase_max_direct_memory_size %} -XX:MaxDirectMemorySize={{hbase_max_direct_memory_size}}m {% endif %}"
+export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS {% if hbase_max_direct_memory_size %} -XX:MaxDirectMemorySize={{hbase_max_direct_memory_size}}m {% endif %}"
+</value>
+    <value-attributes>
+      <type>content</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.2 -->
+  <property>
+    <name>hbase_max_direct_memory_size</name>
+    <value/>
+    <display-name>HBase off-heap MaxDirectMemorySize</display-name>
+    <description>If not empty, adds '-XX:MaxDirectMemorySize={{hbase_max_direct_memory_size}}m' to HBASE_REGIONSERVER_OPTS.</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>phoenix_sql_enabled</name>
+    <value>false</value>
+    <description>Enable Phoenix SQL</description>
+    <display-name>Enable Phoenix</display-name>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>true</value>
+          <label>Enabled</label>
+        </entry>
+        <entry>
+          <value>false</value>
+          <label>Disabled</label>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.atlas.hook</name>
+    <value>false</value>
+    <display-name>Enable Atlas Hook</display-name>
+    <description>Enable Atlas Hook</description>
+    <value-attributes>
+      <type>boolean</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+    <depends-on>
+      <property>
+        <type>application-properties</type>
+        <name>atlas.rest.address</name>
+      </property>
+    </depends-on>
+  </property>
+</configuration>

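The hbase_regionserver_xmn_ratio and hbase_regionserver_xmn_max properties above work together: the young-generation size is derived from the RegionServer heap via the ratio and then capped by the maximum. A minimal sketch of that arithmetic (the function name is hypothetical; the real computation happens in Ambari's stack advisor / params logic):

# Young-generation (-Xmn) bound implied by the properties above.
def regionserver_xmn_mb(heapsize_mb, xmn_ratio=0.2, xmn_max_mb=4000):
    return min(int(heapsize_mb * xmn_ratio), xmn_max_mb)

# With the default 4096 MB heap: min(819, 4000) -> roughly -Xmn819m.
# A 32768 MB heap would be capped at the 4000 MB maximum.
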
+ 188 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-log4j.xml

@@ -0,0 +1,188 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="false" supports_adding_forbidden="false">
+ <property>
+    <name>hbase_log_maxfilesize</name>
+    <value>256</value>
+    <description>The maximum size of backup file before the log is rotated</description>
+    <display-name>HBase Log: backup file size</display-name>
+    <value-attributes>
+        <unit>MB</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+ </property>
+ <property>
+      <name>hbase_log_maxbackupindex</name>
+      <value>20</value>
+      <description>The number of backup files</description>
+      <display-name>HBase Log: # of backup files</display-name>
+      <value-attributes>
+        <type>int</type>
+        <minimum>0</minimum>
+      </value-attributes>
+      <on-ambari-upgrade add="false"/>
+ </property>
+ <property>
+    <name>hbase_security_log_maxfilesize</name>
+    <value>256</value>
+    <description>The maximum size of security backup file before the log is rotated</description>
+    <display-name>HBase Security Log: backup file size</display-name>
+    <value-attributes>
+        <unit>MB</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+ </property>
+ <property>
+      <name>hbase_security_log_maxbackupindex</name>
+      <value>20</value>
+      <description>The number of security backup files</description>
+      <display-name>HBase Security Log: # of backup files</display-name>
+      <value-attributes>
+        <type>int</type>
+        <minimum>0</minimum>
+      </value-attributes>
+      <on-ambari-upgrade add="false"/>
+ </property>
+  <property>
+    <name>content</name>
+    <display-name>hbase-log4j template</display-name>
+    <description>Custom log4j.properties</description>
+    <value>
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Define some default values that can be overridden by system properties
+hbase.root.logger=INFO,console
+hbase.security.logger=INFO,console
+hbase.log.dir=.
+hbase.log.file=hbase.log
+
+# Define the root logger to the system property "hbase.root.logger".
+log4j.rootLogger=${hbase.root.logger}
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
+
+# Rolling File Appender properties
+hbase.log.maxfilesize={{hbase_log_maxfilesize}}MB
+hbase.log.maxbackupindex={{hbase_log_maxbackupindex}}
+
+# Rolling File Appender
+log4j.appender.RFA=org.apache.log4j.RollingFileAppender
+log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
+
+log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
+log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
+
+log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
+
+#
+# Security audit appender
+#
+hbase.security.log.file=SecurityAuth.audit
+hbase.security.log.maxfilesize={{hbase_security_log_maxfilesize}}MB
+hbase.security.log.maxbackupindex={{hbase_security_log_maxbackupindex}}
+log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}
+log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}
+log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}
+log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.category.SecurityLogger=${hbase.security.logger}
+log4j.additivity.SecurityLogger=false
+#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE
+
+#
+# Null Appender
+#
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
+
+# Custom Logging levels
+
+log4j.logger.org.apache.zookeeper=INFO
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+log4j.logger.org.apache.hadoop.hbase=INFO
+# Make these two classes INFO-level. Make them DEBUG to see more zk debug.
+log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO
+log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO
+#log4j.logger.org.apache.hadoop.dfs=DEBUG
+# Set this class to log INFO only, otherwise it's OTT
+# Enable this to get detailed connection error/retry logging.
+# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE
+
+
+# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)
+#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG
+
+# Uncomment the below if you want to remove logging of client region caching
+# and scan of .META. messages
+# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO
+# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO
+
+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>

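Because the RollingFileAppender keeps the active log plus hbase_log_maxbackupindex rotated copies, the defaults above bound each log's on-disk footprint. A quick back-of-the-envelope sketch (not part of the patch):

# Worst-case disk usage for one RFA-managed log with the defaults above.
def max_log_footprint_mb(maxfilesize_mb=256, maxbackupindex=20):
    return maxfilesize_mb * (maxbackupindex + 1)

# 256 MB * (20 + 1) = 5376 MB (~5.3 GB) per log, per host; the security
# audit log is bounded the same way by its own two properties.
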
+ 53 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-policy.xml

@@ -0,0 +1,53 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="true">
+  <property>
+    <name>security.client.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for HRegionInterface protocol implementations (ie. 
+    clients talking to HRegionServers)
+    The ACL is a comma-separated list of user and group names. The user and 
+    group list is separated by a blank. For e.g. "alice,bob users,wheel". 
+    A special value of "*" means all users are allowed.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>security.admin.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for HMasterInterface protocol implementation (ie. 
+    clients talking to HMaster for admin operations).
+    The ACL is a comma-separated list of user and group names. The user and 
+    group list is separated by a blank. For e.g. "alice,bob users,wheel". 
+    A special value of "*" means all users are allowed.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>security.masterregion.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for HMasterRegionInterface protocol implementations
+    (for HRegionServers communicating with HMaster)
+    The ACL is a comma-separated list of user and group names. The user and 
+    group list is separated by a blank. For e.g. "alice,bob users,wheel". 
+    A special value of "*" means all users are allowed.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>

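All three ACLs above share one format: a comma-separated user list and a comma-separated group list, separated by a single blank, with "*" meaning everyone. A hedged, illustrative parser of that format only (Hadoop's AccessControlList does the authoritative parsing):

# Parse an ACL string such as "alice,bob users,wheel" or "*".
def parse_acl(acl):
    acl = acl.strip()
    if acl == "*":
        return {"all_users": True, "users": [], "groups": []}
    users_part, _, groups_part = acl.partition(" ")
    return {
        "all_users": False,
        "users": [u for u in users_part.split(",") if u],
        "groups": [g for g in groups_part.split(",") if g],
    }

# parse_acl("alice,bob users,wheel")
# -> {'all_users': False, 'users': ['alice', 'bob'], 'groups': ['users', 'wheel']}
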
+ 808 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/hbase-site.xml

@@ -0,0 +1,808 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="true">
+  <!-- Inherited from HBase in HDP 2.0.6 -->
+  <property>
+    <name>hbase.rootdir</name>
+    <display-name>HBase root directory</display-name>
+    <value>/apps/hbase/data</value>
+    <description>The directory shared by region servers and into
+    which HBase persists.  The URL should be 'fully-qualified'
+    to include the filesystem scheme.  For example, to specify the
+    HDFS directory '/hbase' where the HDFS instance's namenode is
+    running at namenode.example.org on port 9000, set this value to:
+    hdfs://namenode.example.org:9000/hbase.  By default HBase writes
+    into /tmp.  Change this configuration else all data will be lost
+    on machine restart.
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.cluster.distributed</name>
+    <value>true</value>
+    <description>The mode the cluster will be in. Possible values are
+      false for standalone mode and true for distributed mode.  If
+      false, startup will run all HBase and ZooKeeper daemons together
+      in the one JVM.
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.master.port</name>
+    <value>16000</value>
+    <display-name>HBase Master Port</display-name>
+    <description>The port the HBase Master should bind to.</description>
+    <value-attributes>
+      <type>int</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.tmp.dir</name>
+    <value>/tmp/hbase-${user.name}</value>
+    <display-name>HBase tmp directory</display-name>
+    <description>Temporary directory on the local filesystem.
+    Change this setting to point to a location more permanent
+    than '/tmp' (The '/tmp' directory is often cleared on
+    machine restart).
+    </description>
+    <value-attributes>
+      <type>directory</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.local.dir</name>
+    <display-name>HBase Local directory</display-name>
+    <value>${hbase.tmp.dir}/local</value>
+    <description>Directory on the local filesystem to be used as a local storage
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.master.info.bindAddress</name>
+    <value>0.0.0.0</value>
+    <description>The bind address for the HBase Master web UI
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.master.info.port</name>
+    <value>16010</value>
+    <description>The port for the HBase Master web UI.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port</name>
+    <value>16030</value>
+    <description>The port for the HBase RegionServer web UI.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.regionserver.handler.count</name>
+    <value>30</value>
+    <description>
+      Count of RPC Listener instances spun up on RegionServers.
+      Same property is used by the Master for count of master handlers.
+    </description>
+    <display-name>Number of Handlers per RegionServer</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>5</minimum>
+      <maximum>240</maximum>
+      <increment-step>1</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>phoenix.rpc.index.handler.count</name>
+    <value>10</value>
+    <description>
+      Count of RPC Handlers used to service Phoenix secondary index writes
+      inside of each RegionServer.
+    </description>
+    <display-name>Number of Phoenix Index Handlers per RegionServer</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>5</minimum>
+      <maximum>100</maximum>
+      <increment-step>1</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hregion.majorcompaction</name>
+    <value>604800000</value>
+    <description>Time between major compactions, expressed in milliseconds. Set to 0 to disable
+      time-based automatic major compactions. User-requested and size-based major compactions will
+      still run. This value is multiplied by hbase.hregion.majorcompaction.jitter to cause
+      compaction to start at a somewhat-random time during a given window of time. The default value
+      is 7 days, expressed in milliseconds. If major compactions are causing disruption in your
+      environment, you can configure them to run at off-peak times for your deployment, or disable
+      time-based major compactions by setting this parameter to 0, and run major compactions in a
+      cron job or by another external mechanism.</description>
+    <display-name>Major Compaction Interval</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>0</minimum>
+      <maximum>2592000000</maximum>
+      <unit>milliseconds</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.block.multiplier</name>
+    <value>4</value>
+    <description>
+      Block updates if memstore has hbase.hregion.memstore.block.multiplier
+      times hbase.hregion.memstore.flush.size bytes.  Useful for preventing
+      runaway memstore during spikes in update traffic.  Without an
+      upper-bound, memstore fills such that when it flushes the
+      resultant flush files take a long time to compact or split, or
+      worse, we OOME.
+    </description>
+    <display-name>HBase Region Block Multiplier</display-name>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>2</value>
+        </entry>
+        <entry>
+          <value>4</value>
+        </entry>
+        <entry>
+          <value>8</value>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.flush.size</name>
+    <value>134217728</value>
+    <description>
+      The size of an individual memstore. Each column family within each region is allocated its own memstore.
+    </description>
+    <display-name>Memstore Flush Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>33554432</minimum>
+      <maximum>268435456</maximum>
+      <increment-step>1048576</increment-step>
+      <unit>B</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.mslab.enabled</name>
+    <value>true</value>
+    <description>
+      Enables the MemStore-Local Allocation Buffer,
+      a feature which works to prevent heap fragmentation under
+      heavy write loads. This can reduce the frequency of stop-the-world
+      GC pauses on large heaps.
+    </description>
+    <value-attributes>
+      <type>boolean</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hregion.max.filesize</name>
+    <value>10737418240</value>
+    <description>
+      Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this
+      value, the region is split in two.
+    </description>
+    <display-name>Maximum Region File Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>1073741824</minimum>
+      <maximum>107374182400</maximum>
+      <unit>B</unit>
+      <increment-step>1073741824</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.client.scanner.caching</name>
+    <value>100</value>
+    <description>Number of rows that will be fetched when calling next
+    on a scanner if it is not served from (local, client) memory. Higher
+    caching values will enable faster scanners but will eat up more memory
+    and some calls of next may take longer and longer times when the cache is empty.
+    Do not set this value such that the time between invocations is greater
+    than the scanner timeout; i.e. hbase.regionserver.lease.period
+    </description>
+    <display-name>Number of Fetched Rows when Scanning from Disk</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>100</minimum>
+      <maximum>10000</maximum>
+      <increment-step>100</increment-step>
+      <unit>rows</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>zookeeper.session.timeout</name>
+    <value>90000</value>
+    <description>ZooKeeper session timeout.
+      ZooKeeper session timeout in milliseconds. It is used in two different ways.
+      First, this value is used in the ZK client that HBase uses to connect to the ensemble.
+      It is also used by HBase when it starts a ZK server and it is passed as the 'maxSessionTimeout'. See
+      http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
+      For example, if a HBase region server connects to a ZK ensemble that's also managed by HBase, then the
+      session timeout will be the one specified by this configuration. But, a region server that connects
+      to an ensemble managed with a different configuration will be subject to that ensemble's maxSessionTimeout. So,
+      even though HBase might propose using 90 seconds, the ensemble can have a max timeout lower than this and
+      it will take precedence.
+    </description>
+    <display-name>Zookeeper Session Timeout</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>10000</minimum>
+      <maximum>180000</maximum>
+      <unit>milliseconds</unit>
+      <increment-step>10000</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.client.keyvalue.maxsize</name>
+    <value>1048576</value>
+    <description>
+      Specifies the combined maximum allowed size of a KeyValue
+      instance. This is to set an upper boundary for a single entry saved in a
+      storage file. Since entries cannot be split, this helps avoid a region
+      becoming unsplittable because a single entry is too large. It seems wise
+      to set this to a fraction of the maximum region size. Setting it to zero
+      or less disables the check.
+    </description>
+    <display-name>Maximum Record Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>1048576</minimum>
+      <maximum>31457280</maximum>
+      <unit>B</unit>
+      <increment-step>262144</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hstore.compactionThreshold</name>
+    <value>3</value>
+    <description>
+      If more than this number of StoreFiles exist in any one Store (one StoreFile
+      is written per flush of MemStore), a compaction is run to rewrite all
+      StoreFiles into a single StoreFile. Larger values delay compaction, but
+      when compaction does occur, it takes longer to complete.
+    </description>
+    <display-name>Maximum Store Files before Minor Compaction</display-name>
+    <value-attributes>
+      <type>int</type>
+      <entries>
+        <entry>
+          <value>2</value>
+        </entry>
+        <entry>
+          <value>3</value>
+        </entry>
+        <entry>
+          <value>4</value>
+        </entry>
+      </entries>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hstore.blockingStoreFiles</name>
+    <display-name>hstore blocking storefiles</display-name>
+    <value>100</value>
+    <description>
+    If more than this number of StoreFiles exist in any one Store
+    (one StoreFile is written per flush of MemStore), then updates are
+    blocked for this HRegion until a compaction is completed, or
+    until hbase.hstore.blockingWaitTime has been exceeded.
+    </description>
+    <value-attributes>
+      <type>int</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hfile.block.cache.size</name>
+    <value>0.40</value>
+    <description>Percentage of RegionServer memory to allocate to read buffers.</description>
+    <display-name>% of RegionServer Allocated to Read Buffers</display-name>
+    <value-attributes>
+      <type>float</type>
+      <minimum>0</minimum>
+      <maximum>0.8</maximum>
+      <increment-step>0.01</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <!-- Additional configuration specific to HBase security -->
+  <property>
+    <name>hbase.superuser</name>
+    <value>hbase</value>
+    <description>List of users or groups (comma-separated), who are allowed
+    full privileges, regardless of stored ACLs, across the cluster.
+    Only used when HBase security is enabled.
+    </description>
+    <depends-on>
+      <property>
+        <type>hbase-env</type>
+        <name>hbase_user</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.security.authentication</name>
+    <value>simple</value>
+    <description>
+      Select Simple or Kerberos authentication. Note: Kerberos must be set up before the Kerberos option will take effect.
+    </description>
+    <display-name>Enable Authentication</display-name>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <label>Simple</label>
+          <value>simple</value>
+        </entry>
+        <entry>
+          <label>Kerberos</label>
+          <value>kerberos</value>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.security.authorization</name>
+    <value>false</value>
+    <description> Set Authorization Method.</description>
+    <display-name>Enable Authorization</display-name>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>true</value>
+          <label>Native</label>
+        </entry>
+        <entry>
+          <value>false</value>
+          <label>Off</label>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>ranger-hbase-plugin-properties</type>
+        <name>ranger-hbase-plugin-enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.coprocessor.region.classes</name>
+    <value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
+    <description>A comma-separated list of Coprocessors that are loaded by
+      default on all tables. For any override coprocessor method, these classes
+      will be called in order. After implementing your own Coprocessor, just put
+      it in HBase's classpath and add the fully qualified class name here.
+      A coprocessor can also be loaded on demand by setting HTableDescriptor.
+    </description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>hbase-site</type>
+        <name>hbase.security.authorization</name>
+      </property>
+      <property>
+        <type>hbase-site</type>
+        <name>hbase.security.authentication</name>
+      </property>
+      <property>
+        <type>ranger-hbase-plugin-properties</type>
+        <name>ranger-hbase-plugin-enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.coprocessor.master.classes</name>
+    <value/>
+    <description>A comma-separated list of
+      org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are
+      loaded by default on the active HMaster process. For any implemented
+      coprocessor methods, the listed classes will be called in order. After
+      implementing your own MasterObserver, just put it in HBase's classpath
+      and add the fully qualified class name here.
+    </description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>hbase-site</type>
+        <name>hbase.security.authorization</name>
+      </property>
+      <property>
+        <type>ranger-hbase-plugin-properties</type>
+        <name>ranger-hbase-plugin-enabled</name>
+      </property>
+      <property>
+        <type>hbase-env</type>
+        <name>hbase.atlas.hook</name>
+      </property>
+      <property>
+        <type>application-properties</type>
+        <name>atlas.rest.address</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.clientPort</name>
+    <value>2181</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The port at which the clients will connect.
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <!--
+  The following three properties are used together to create the list of
+  host:peer_port:leader_port quorum servers for ZooKeeper.
+  -->
+  <property>
+    <name>hbase.zookeeper.quorum</name>
+    <value>localhost</value>
+    <description>Comma separated list of servers in the ZooKeeper Quorum.
+    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
+    By default this is set to localhost for local and pseudo-distributed modes
+    of operation. For a fully-distributed setup, this should be set to a full
+    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
+    this is the list of servers which we will start/stop ZooKeeper on.
+    </description>
+    <value-attributes>
+      <type>multiLine</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <!-- End of properties used to generate ZooKeeper host:port quorum list. -->
+  <property>
+    <name>hbase.zookeeper.useMulti</name>
+    <value>true</value>
+    <description>Instructs HBase to make use of ZooKeeper's multi-update functionality.
+    This allows certain ZooKeeper operations to complete more quickly and prevents some issues
+    with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).
+    IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+
+    and will not be downgraded.  ZooKeeper versions before 3.4 do not support multi-update and will
+    not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>zookeeper.znode.parent</name>
+    <display-name>ZooKeeper Znode Parent</display-name>
+    <value>/hbase-unsecure</value>
+    <description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
+      files that are configured with a relative path will go under this node.
+      By default, all of HBase's ZooKeeper file path are configured with a
+      relative path, so they will all go under this directory unless changed.
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.client.retries.number</name>
+    <value>35</value>
+    <description>Maximum retries.  Used as maximum for all retryable
+    operations such as the getting of a cell's value, starting a row update,
+    etc.  Retry interval is a rough function based on hbase.client.pause.  At
+    first we retry at this interval but then with backoff, we pretty quickly reach
+    retrying every ten seconds.  See HConstants#RETRY_BACKOFF for how the backoff
+    ramps up.  Change this setting and hbase.client.pause to suit your workload.</description>
+    <display-name>Maximum Client Retries</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>5</minimum>
+      <maximum>50</maximum>
+      <increment-step>1</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.rpc.timeout</name>
+    <value>90000</value>
+    <description>
+      This is for the RPC layer to define how long HBase client applications
+      wait for a remote call before timing out. It uses pings to check connections
+      but will eventually throw a TimeoutException.
+    </description>
+    <display-name>HBase RPC Timeout</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>10000</minimum>
+      <maximum>180000</maximum>
+      <unit>milliseconds</unit>
+      <increment-step>10000</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.defaults.for.version.skip</name>
+    <value>true</value>
+    <description>Disables version verification.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>phoenix.query.timeoutMs</name>
+    <value>60000</value>
+    <description>Number of milliseconds after which a Phoenix query will timeout on the client.</description>
+    <display-name>Phoenix Query Timeout</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>30000</minimum>
+      <maximum>180000</maximum>
+      <unit>milliseconds</unit>
+      <increment-step>10000</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>dfs.domain.socket.path</name>
+    <value>/var/lib/hadoop-hdfs/dn_socket</value>
+    <description>Path to domain socket.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.rpc.protection</name>
+    <value>authentication</value>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.2 -->
+  <property>
+    <name>hbase.bulkload.staging.dir</name>
+    <display-name>HBase Bulkload Staging directory</display-name>
+    <value>/apps/hbase/staging</value>
+    <description>A staging directory in default file system (HDFS)
+      for bulk loading.
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hregion.majorcompaction.jitter</name>
+    <value>0.50</value>
+    <description>A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occur
+      a given amount of time either side of hbase.hregion.majorcompaction. The smaller the number,
+      the closer the compactions will happen to the hbase.hregion.majorcompaction
+      interval.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.bucketcache.ioengine</name>
+    <value/>
+    <description>Where to store the contents of the bucketcache. One of: onheap,
+      offheap, or file. If a file, set it to file:PATH_TO_FILE.</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.bucketcache.size</name>
+    <value/>
+    <description>The size of the buckets for the bucketcache if you only use a single size.</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.bucketcache.percentage.in.combinedcache</name>
+    <value/>
+    <description>Value to be set between 0.0 and 1.0</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.regionserver.wal.codec</name>
+    <display-name>RegionServer WAL Codec</display-name>
+    <value>org.apache.hadoop.hbase.regionserver.wal.WALCellCodec</value>
+    <depends-on>
+      <property>
+        <type>hbase-env</type>
+        <name>phoenix_sql_enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.region.server.rpc.scheduler.factory.class</name>
+    <value/>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>hbase-env</type>
+        <name>phoenix_sql_enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.rpc.controllerfactory.class</name>
+    <value/>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>hbase-env</type>
+        <name>phoenix_sql_enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>phoenix.functions.allowUserDefinedFunctions</name>
+    <value> </value>
+    <depends-on>
+      <property>
+        <type>hbase-env</type>
+        <name>phoenix_sql_enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.coprocessor.regionserver.classes</name>
+    <value/>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>hbase-site</type>
+        <name>hbase.security.authorization</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.hstore.compaction.max</name>
+    <value>10</value>
+    <description>The maximum number of StoreFiles which will be selected for a single minor
+      compaction, regardless of the number of eligible StoreFiles. Effectively, the value of
+      hbase.hstore.compaction.max controls the length of time it takes a single compaction to
+      complete. Setting it larger means that more StoreFiles are included in a compaction. For most
+      cases, the default value is appropriate.
+    </description>
+    <display-name>Maximum Files for Compaction</display-name>
+    <value-attributes>
+      <type>int</type>
+      <entries>
+        <entry>
+          <value>8</value>
+        </entry>
+        <entry>
+          <value>9</value>
+        </entry>
+        <entry>
+          <value>10</value>
+        </entry>
+        <entry>
+          <value>11</value>
+        </entry>
+        <entry>
+          <value>12</value>
+        </entry>
+        <entry>
+          <value>13</value>
+        </entry>
+        <entry>
+          <value>14</value>
+        </entry>
+        <entry>
+          <value>15</value>
+        </entry>
+      </entries>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.regionserver.global.memstore.size</name>
+    <value>0.4</value>
+    <description>Percentage of RegionServer memory to allocate to write buffers.
+      Each column family within each region is allocated a smaller pool (the memstore) within this shared write pool.
+      If this buffer is full, updates are blocked and data is flushed from memstores until a global low watermark
+      (hbase.regionserver.global.memstore.size.lower.limit) is reached.
+    </description>
+    <display-name>% of RegionServer Allocated to Write Buffers</display-name>
+    <value-attributes>
+      <type>float</type>
+      <minimum>0</minimum>
+      <maximum>0.8</maximum>
+      <increment-step>0.01</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.3 -->
+  <property>
+    <name>hbase.regionserver.port</name>
+    <value>16020</value>
+    <description>The port the HBase RegionServer binds to.</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.5 -->
+  <property>
+    <name>hbase.master.ui.readonly</name>
+    <value>false</value>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>zookeeper.recovery.retry</name>
+    <value>6</value>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.6 -->
+  <property>
+    <name>hbase.regionserver.executor.openregion.threads</name>
+    <value>20</value>
+    <description>The number of threads region server uses to open regions
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.master.namespace.init.timeout</name>
+    <value>2400000</value>
+    <description>The number of milliseconds master waits for hbase:namespace table to be initialized
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>hbase.master.wait.on.regionservers.timeout</name>
+    <value>30000</value>
+    <description>The number of milliseconds master waits for region servers to report in
+    </description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>

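Two of the memory-related properties above combine into the write-path limits operators usually care about: hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size gives the per-region size at which updates are blocked, and hbase.regionserver.global.memstore.size is the heap fraction shared by all memstores. A small sketch of that arithmetic, using the defaults from this file:

# Per-region blocking threshold: updates to a region are blocked once its
# memstore exceeds multiplier * flush_size.
def region_block_threshold_bytes(flush_size=134217728, multiplier=4):
    return flush_size * multiplier  # 4 * 128 MB = 512 MB

# Global write-buffer pool shared by all regions on one RegionServer.
def global_memstore_bytes(heap_bytes, fraction=0.4):
    return int(heap_bytes * fraction)

# With the 4096 MB default RegionServer heap from hbase-env.xml, the global
# memstore pool is about 1638 MB; hfile.block.cache.size reserves another
# 40% of the heap for the read-side block cache.
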
+ 132 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-audit.xml

@@ -0,0 +1,132 @@
+<?xml version="1.0"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>xasecure.audit.is.enabled</name>
+    <value>true</value>
+    <description>Is Audit enabled?</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.hdfs</name>
+    <value>true</value>
+    <display-name>Audit to HDFS</display-name>
+    <description>Is Audit to HDFS enabled?</description>
+    <value-attributes>
+      <type>boolean</type>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>ranger-env</type>
+        <name>xasecure.audit.destination.hdfs</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.hdfs.dir</name>
+    <value>hdfs://NAMENODE_HOSTNAME:8020/ranger/audit</value>
+    <description>HDFS folder to write audit to, make sure the service user has requried permissions</description>
+    <depends-on>
+      <property>
+        <type>ranger-env</type>
+        <name>xasecure.audit.destination.hdfs.dir</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.hdfs.batch.filespool.dir</name>
+    <value>/var/log/hbase/audit/hdfs/spool</value>
+    <description>/var/log/hbase/audit/hdfs/spool</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.solr</name>
+    <value>false</value>
+    <display-name>Audit to SOLR</display-name>
+    <description>Is Solr audit enabled?</description>
+    <value-attributes>
+      <type>boolean</type>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>ranger-env</type>
+        <name>xasecure.audit.destination.solr</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.solr.urls</name>
+    <value/>
+    <description>Solr URL</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>ranger-admin-site</type>
+        <name>ranger.audit.solr.urls</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.solr.zookeepers</name>
+    <value>NONE</value>
+    <description>Solr ZooKeeper connection string</description>
+    <depends-on>
+      <property>
+        <type>ranger-admin-site</type>
+        <name>ranger.audit.solr.zookeepers</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.destination.solr.batch.filespool.dir</name>
+    <value>/var/log/hbase/audit/solr/spool</value>
+    <description>Local spool directory for Solr audit events</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.audit.provider.summary.enabled</name>
+    <value>true</value>
+    <display-name>Audit provider summary enabled</display-name>
+    <description>Enable Summary audit?</description>
+    <value-attributes>
+      <type>boolean</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.6 -->
+  <property>
+    <name>ranger.plugin.hbase.ambari.cluster.name</name>
+    <value>{{cluster_name}}</value>
+    <description>Capture cluster name from where Ranger hbase plugin is enabled.</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>
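
The ranger-hbase-audit dictionary above is rendered to ranger-hbase-audit.xml on each HBase host, with placeholders such as {{cluster_name}} filled in at deploy time. A toy sketch of that substitution step, assuming a command JSON shaped like the agent's (a stand-in for the real template handling, not the exact Ambari internals):

    from resource_management.libraries.script.script import Script

    config = Script.get_config()
    audit_props = dict(config['configurations']['ranger-hbase-audit'])
    cluster_name = str(config['clusterName'])

    # Resolve placeholders before the dictionary is written out as XML.
    for key, value in audit_props.items():
        if isinstance(value, str):
            audit_props[key] = value.replace('{{cluster_name}}', cluster_name)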

+ 135 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-plugin-properties.xml

@@ -0,0 +1,135 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_final="true">
+  <property>
+    <name>common.name.for.certificate</name>
+    <value/>
+    <description>Common name for the certificate; this value should match what is specified in the repo within Ranger admin</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>policy_user</name>
+    <value>ambari-qa</value>
+    <display-name>Policy user for HBASE</display-name>
+    <depends-on>
+      <property>
+        <type>ranger-env</type>
+        <name>ranger_user</name>
+      </property>
+    </depends-on>
+    <description>This user must be a system user and must also exist in the Ranger admin portal</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>ranger-hbase-plugin-enabled</name>
+    <value>No</value>
+    <display-name>Enable Ranger for HBASE</display-name>
+    <description>Enable the Ranger HBase plugin?</description>
+    <value-attributes>
+      <type>boolean</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <depends-on>
+      <property>
+        <type>ranger-env</type>
+        <name>ranger-hbase-plugin-enabled</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>REPOSITORY_CONFIG_USERNAME</name>
+    <value>hbase</value>
+    <display-name>Ranger repository config user</display-name>
+    <description>Used for repository creation on ranger admin</description>
+    <depends-on>
+      <property>
+        <type>ranger-hbase-plugin-properties</type>
+        <name>ranger-hbase-plugin-enabled</name>
+      </property>
+      <property>
+        <type>hbase-env</type>
+        <name>hbase_user</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>REPOSITORY_CONFIG_PASSWORD</name>
+    <value>hbase</value>
+    <display-name>Ranger repository config password</display-name>
+    <property-type>PASSWORD</property-type>
+    <description>Used for repository creation on ranger admin</description>
+    <value-attributes>
+      <type>password</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+
+  <!-- Inherited from HBase in HDP 2.6 -->
+  <property>
+    <name>external_admin_username</name>
+    <value></value>
+    <display-name>External Ranger admin username</display-name>
+    <description>Default Ranger admin username, used when communicating with an external Ranger instance</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>external_admin_password</name>
+    <value></value>
+    <display-name>External Ranger admin password</display-name>
+    <property-type>PASSWORD</property-type>
+    <description>Default Ranger admin password, used when communicating with an external Ranger instance</description>
+    <value-attributes>
+      <type>password</type>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>external_ranger_admin_username</name>
+    <value></value>
+    <display-name>External Ranger Ambari admin username</display-name>
+    <description>Default Ranger Ambari admin username, used when communicating with an external Ranger instance</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>external_ranger_admin_password</name>
+    <value></value>
+    <display-name>External Ranger Ambari admin password</display-name>
+    <property-type>PASSWORD</property-type>
+    <description>Default Ranger Ambari admin password, used when communicating with an external Ranger instance</description>
+    <value-attributes>
+      <type>password</type>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>
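
Note that ranger-hbase-plugin-enabled is stored as the string "Yes"/"No" even though its value-attributes declare a boolean type, so scripts normalize it before acting on it. The usual idiom, sketched with the stock default() helper:

    from resource_management.libraries.functions.default import default

    # "Yes"/"No" string, normalized to a real boolean.
    enable_ranger_hbase = default(
        '/configurations/ranger-hbase-plugin-properties/ranger-hbase-plugin-enabled',
        'No').lower() == 'yes'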

+ 72 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-policymgr-ssl.xml

@@ -0,0 +1,72 @@
+<?xml version="1.0"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>xasecure.policymgr.clientssl.keystore</name>
+    <value/>
+    <description>Java keystore file</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.policymgr.clientssl.keystore.password</name>
+    <value>myKeyFilePassword</value>
+    <property-type>PASSWORD</property-type>
+    <description>Password for the keystore</description>
+    <value-attributes>
+      <type>password</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.policymgr.clientssl.truststore</name>
+    <value/>
+    <description>Java truststore file</description>
+    <value-attributes>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.policymgr.clientssl.truststore.password</name>
+    <value>changeit</value>
+    <property-type>PASSWORD</property-type>
+    <description>Java truststore password</description>
+    <value-attributes>
+      <type>password</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.policymgr.clientssl.keystore.credential.file</name>
+    <value>jceks://file{{credential_file}}</value>
+    <description>Java keystore credential file</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.policymgr.clientssl.truststore.credential.file</name>
+    <value>jceks://file{{credential_file}}</value>
+    <description>Java truststore credential file</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>
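
The two jceks://file{{credential_file}} values point at a Java credential store that holds the keystore/truststore passwords, so they never sit in plain text in the rendered XML. A sketch of populating such a store with the stock `hadoop credential` CLI, wrapped in Ambari's Execute resource; the path and alias names (sslKeyStore, sslTrustStore) are assumptions for illustration:

    from resource_management.core.resources.system import Execute
    from resource_management.libraries.script.script import Script

    config = Script.get_config()
    ssl_conf = config['configurations']['ranger-hbase-policymgr-ssl']
    credential_file = '/etc/ranger/hbase/cred.jceks'  # hypothetical path

    for alias, password in (
            ('sslKeyStore', ssl_conf['xasecure.policymgr.clientssl.keystore.password']),
            ('sslTrustStore', ssl_conf['xasecure.policymgr.clientssl.truststore.password'])):
        # Store each password under its alias inside the JCEKS provider.
        Execute(('hadoop', 'credential', 'create', alias,
                 '-value', password,
                 '-provider', 'jceks://file' + credential_file),
                sudo=True)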

+ 74 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/configuration/ranger-hbase-security.xml

@@ -0,0 +1,74 @@
+<?xml version="1.0"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>ranger.plugin.hbase.service.name</name>
+    <value>{{repo_name}}</value>
+    <description>Name of the Ranger service containing HBase policies</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>ranger.plugin.hbase.policy.source.impl</name>
+    <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
+    <description>Class to retrieve policies from the source</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>ranger.plugin.hbase.policy.rest.url</name>
+    <value>{{policymgr_mgr_url}}</value>
+    <description>URL to Ranger Admin</description>
+    <on-ambari-upgrade add="false"/>
+    <depends-on>
+      <property>
+        <type>admin-properties</type>
+        <name>policymgr_external_url</name>
+      </property>
+    </depends-on>
+  </property>
+  <property>
+    <name>ranger.plugin.hbase.policy.rest.ssl.config.file</name>
+    <value>/etc/hbase/conf/ranger-policymgr-ssl.xml</value>
+    <description>Path to the file containing SSL details to contact Ranger Admin</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>ranger.plugin.hbase.policy.pollIntervalMs</name>
+    <value>30000</value>
+    <description>How often, in milliseconds, to poll Ranger Admin for policy changes</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>ranger.plugin.hbase.policy.cache.dir</name>
+    <value>/etc/ranger/{{repo_name}}/policycache</value>
+    <description>Directory where Ranger policies are cached after successful retrieval from the source</description>
+    <on-ambari-upgrade add="false"/>
+  </property>
+  <property>
+    <name>xasecure.hbase.update.xapolicies.on.grant.revoke</name>
+    <value>true</value>
+    <display-name>Should HBase GRANT/REVOKE update XA policies</display-name>
+    <description>Should the HBase plugin update Ranger policies when permissions are changed using GRANT/REVOKE?</description>
+    <value-attributes>
+      <type>boolean</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>
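
Both {{repo_name}} and {{policymgr_mgr_url}} are computed in the stack's params.py. A sketch of the conventional derivation (the "<cluster>_hbase" naming is an assumption based on common Ranger plugin setups):

    from resource_management.libraries.script.script import Script

    config = Script.get_config()

    repo_name = str(config['clusterName']) + '_hbase'
    policymgr_mgr_url = config['configurations']['admin-properties'][
        'policymgr_external_url']
    # Normalize a trailing slash so the plugin's REST paths concatenate cleanly.
    policymgr_mgr_url = policymgr_mgr_url.rstrip('/')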

+ 150 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/kerberos.json

@@ -0,0 +1,150 @@
+{
+  "services": [
+    {
+      "name": "HBASE",
+      "identities": [
+        {
+          "name": "hbase_spnego",
+          "reference": "/spnego"
+        },
+        {
+          "name": "hbase",
+          "principal": {
+            "value": "${hbase-env/hbase_user}${principal_suffix}@${realm}",
+            "type" : "user",
+            "configuration": "hbase-env/hbase_principal_name",
+            "local_username": "${hbase-env/hbase_user}"
+          },
+          "keytab": {
+            "file": "${keytab_dir}/hbase.headless.keytab",
+            "owner": {
+              "name": "${hbase-env/hbase_user}",
+              "access": "r"
+            },
+            "group": {
+              "name": "${cluster-env/user_group}",
+              "access": "r"
+            },
+            "configuration": "hbase-env/hbase_user_keytab"
+          }
+        },
+        {
+          "name": "hbase_smokeuser",
+          "reference": "/smokeuser"
+        }
+      ],
+      "configurations": [
+        {
+          "hbase-site": {
+            "hbase.security.authentication": "kerberos",
+            "hbase.security.authorization": "true",
+            "zookeeper.znode.parent": "/hbase-secure",
+            "hbase.coprocessor.master.classes": "{{hbase_coprocessor_master_classes}}",
+            "hbase.coprocessor.region.classes": "{{hbase_coprocessor_region_classes}}",
+            "hbase.coprocessor.regionserver.classes": "{{hbase_coprocessor_regionserver_classes}}",
+            "hbase.bulkload.staging.dir": "/apps/hbase/staging",
+            "hbase.master.ui.readonly": "true"
+          }
+        },
+        {
+          "ranger-hbase-audit": {
+            "xasecure.audit.jaas.Client.loginModuleName": "com.sun.security.auth.module.Krb5LoginModule",
+            "xasecure.audit.jaas.Client.loginModuleControlFlag": "required",
+            "xasecure.audit.jaas.Client.option.useKeyTab": "true",
+            "xasecure.audit.jaas.Client.option.storeKey": "false",
+            "xasecure.audit.jaas.Client.option.serviceName": "solr",
+            "xasecure.audit.destination.solr.force.use.inmemory.jaas.config": "true"
+          }
+        }
+      ],
+      "components": [
+        {
+          "name": "HBASE_MASTER",
+          "identities": [
+            {
+              "name": "hbase_hbase_master_hdfs",
+              "reference": "/HDFS/NAMENODE/hdfs"
+            },
+            {
+              "name": "hbase_master_hbase",
+              "principal": {
+                "value": "hbase/_HOST@${realm}",
+                "type" : "service",
+                "configuration": "hbase-site/hbase.master.kerberos.principal",
+                "local_username": "${hbase-env/hbase_user}"
+              },
+              "keytab": {
+                "file": "${keytab_dir}/hbase.service.keytab",
+                "owner": {
+                  "name": "${hbase-env/hbase_user}",
+                  "access": "r"
+                },
+                "group": {
+                  "name": "${cluster-env/user_group}",
+                  "access": ""
+                },
+                "configuration": "hbase-site/hbase.master.keytab.file"
+              }
+            },
+            {
+              "name": "hbase_hbase_master_spnego",
+              "reference": "/spnego",
+              "principal": {
+                "configuration": "hbase-site/hbase.security.authentication.spnego.kerberos.principal"
+              },
+              "keytab": {
+                "configuration": "hbase-site/hbase.security.authentication.spnego.kerberos.keytab"
+              }
+            },
+            {
+              "name" : "ranger_hbase_audit",
+              "reference": "/HBASE/HBASE_MASTER/hbase_master_hbase",
+              "principal": {
+                "configuration": "ranger-hbase-audit/xasecure.audit.jaas.Client.option.principal"
+              },
+              "keytab": {
+                "configuration": "ranger-hbase-audit/xasecure.audit.jaas.Client.option.keyTab"
+              }
+            }
+          ]
+        },
+        {
+          "name": "HBASE_REGIONSERVER",
+          "identities": [
+            {
+              "name": "hbase_regionserver_hbase",
+              "principal": {
+                "value": "hbase/_HOST@${realm}",
+                "type" : "service",
+                "configuration": "hbase-site/hbase.regionserver.kerberos.principal",
+                "local_username": "${hbase-env/hbase_user}"
+              },
+              "keytab": {
+                "file": "${keytab_dir}/hbase.service.keytab",
+                "owner": {
+                  "name": "${hbase-env/hbase_user}",
+                  "access": "r"
+                },
+                "group": {
+                  "name": "${cluster-env/user_group}",
+                  "access": ""
+                },
+                "configuration": "hbase-site/hbase.regionserver.keytab.file"
+              }
+            },
+            {
+              "name": "hbase_hbase_regionserver_spnego",
+              "reference": "/spnego",
+              "principal": {
+                "configuration": "hbase-site/hbase.security.authentication.spnego.kerberos.principal"
+              },
+              "keytab": {
+                "configuration": "hbase-site/hbase.security.authentication.spnego.kerberos.keytab"
+              }
+            }
+          ]
+        }
+      ]
+    }
+  ]
+}
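
The hbase/_HOST@${realm} principals above rely on Hadoop-style _HOST expansion: each daemon substitutes its own FQDN at startup, so one descriptor serves every host. A small sketch of that expansion (the lowercasing mirrors Hadoop's usual normalization, but treat the exact rule as an assumption):

    import socket

    def resolve_principal(principal_template, hostname=None):
        # _HOST becomes the (lowercased) fully qualified name of this host.
        host = (hostname or socket.getfqdn()).lower()
        return principal_template.replace('_HOST', host)

    # resolve_principal('hbase/_HOST@EXAMPLE.COM', 'rs1.example.com')
    #   -> 'hbase/rs1.example.com@EXAMPLE.COM'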

+ 192 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/metainfo.xml

@@ -0,0 +1,192 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>HBASE</name>
+      <displayName>HBase</displayName>
+      <comment>Non-relational distributed database running on top of HDFS
+      </comment>
+      <version>2.4.13-1</version>
+      <components>
+        <component>
+          <name>HBASE_MASTER</name>
+          <displayName>HBase Master</displayName>
+          <category>MASTER</category>
+          <cardinality>1+</cardinality>
+          <versionAdvertised>true</versionAdvertised>
+          <timelineAppid>hbase</timelineAppid>
+          <dependencies>
+            <dependency>
+              <name>HDFS/HDFS_CLIENT</name>
+              <scope>host</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+              </auto-deploy>
+            </dependency>
+            <dependency>
+              <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
+              <scope>cluster</scope>
+              <auto-deploy>
+                <enabled>true</enabled>
+                <co-locate>HBASE/HBASE_MASTER</co-locate>
+              </auto-deploy>
+            </dependency>
+          </dependencies>
+          <commandScript>
+            <script>scripts/hbase_master.py</script>
+            <scriptType>PYTHON</scriptType>
+            <timeout>1200</timeout>
+          </commandScript>
+          <logs>
+            <log>
+              <logId>hbase_master</logId>
+              <primary>true</primary>
+            </log>
+          </logs>
+          <customCommands>
+            <customCommand>
+              <name>DECOMMISSION</name>
+              <commandScript>
+                <script>scripts/hbase_master.py</script>
+                <scriptType>PYTHON</scriptType>
+                <timeout>600</timeout>
+              </commandScript>
+            </customCommand>
+          </customCommands>
+        </component>
+
+        <component>
+          <name>HBASE_REGIONSERVER</name>
+          <displayName>RegionServer</displayName>
+          <category>SLAVE</category>
+          <cardinality>1+</cardinality>
+          <versionAdvertised>true</versionAdvertised>
+          <decommissionAllowed>true</decommissionAllowed>
+          <timelineAppid>hbase</timelineAppid>
+          <commandScript>
+            <script>scripts/hbase_regionserver.py</script>
+            <scriptType>PYTHON</scriptType>
+          </commandScript>
+          <bulkCommands>
+            <displayName>RegionServers</displayName>
+            <!-- Used by decommission and recommission -->
+            <masterComponent>HBASE_MASTER</masterComponent>
+          </bulkCommands>
+          <logs>
+            <log>
+              <logId>hbase_regionserver</logId>
+              <primary>true</primary>
+            </log>
+          </logs>
+        </component>
+
+        <component>
+          <name>HBASE_CLIENT</name>
+          <displayName>HBase Client</displayName>
+          <category>CLIENT</category>
+          <cardinality>1+</cardinality>
+          <versionAdvertised>true</versionAdvertised>
+          <commandScript>
+            <script>scripts/hbase_client.py</script>
+            <scriptType>PYTHON</scriptType>
+          </commandScript>
+          <configFiles>
+            <configFile>
+              <type>xml</type>
+              <fileName>hbase-site.xml</fileName>
+              <dictionaryName>hbase-site</dictionaryName>
+            </configFile>
+            <configFile>
+              <type>env</type>
+              <fileName>hbase-env.sh</fileName>
+              <dictionaryName>hbase-env</dictionaryName>
+            </configFile>
+            <configFile>
+              <type>xml</type>
+              <fileName>hbase-policy.xml</fileName>
+              <dictionaryName>hbase-policy</dictionaryName>
+            </configFile>
+            <configFile>
+              <type>env</type>
+              <fileName>log4j.properties</fileName>
+              <dictionaryName>hbase-log4j</dictionaryName>
+            </configFile>
+          </configFiles>
+        </component>
+
+      </components>
+
+      <commandScript>
+        <script>scripts/service_check.py</script>
+        <scriptType>PYTHON</scriptType>
+        <timeout>300</timeout>
+      </commandScript>
+      
+      <requiredServices>
+        <service>ZOOKEEPER</service>
+        <service>HDFS</service>
+      </requiredServices>
+
+      <configuration-dependencies>
+        <config-type>core-site</config-type> <!-- hbase puts core-site in its folder -->
+        <config-type>viewfs-mount-table</config-type>
+        <config-type>hbase-policy</config-type>
+        <config-type>hbase-site</config-type>
+        <config-type>hbase-env</config-type>
+        <config-type>hbase-log4j</config-type>
+        <config-type>ranger-hbase-plugin-properties</config-type>
+        <config-type>ranger-hbase-audit</config-type>
+        <config-type>ranger-hbase-policymgr-ssl</config-type>
+        <config-type>ranger-hbase-security</config-type>
+      </configuration-dependencies>
+
+      <quickLinksConfigurations>
+        <quickLinksConfiguration>
+          <fileName>quicklinks.json</fileName>
+          <default>true</default>
+        </quickLinksConfiguration>
+      </quickLinksConfigurations>
+
+      <osSpecifics>
+        <osSpecific>
+          <osFamily>any</osFamily>
+          <packages>
+            <package>
+              <name>hbase</name>
+            </package>
+          </packages>
+        </osSpecific>
+      </osSpecifics>
+
+      <themes>
+        <theme>
+          <fileName>theme.json</fileName>
+          <default>true</default>
+        </theme>
+        <theme>
+          <fileName>directories.json</fileName>
+          <default>true</default>
+        </theme>
+      </themes>
+
+    </service>
+  </services>
+</metainfo>
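
Each component above points at a Python command script (scripts/hbase_master.py and friends) that Ambari invokes for lifecycle commands. A skeleton of the expected shape, assuming the standard Script base class; the method bodies are placeholders, not the implementation shipped in this commit:

    from resource_management.libraries.script.script import Script

    class HbaseMaster(Script):
        def install(self, env):
            import params
            env.set_params(params)
            self.install_packages(env)  # installs the "hbase" package declared above

        def configure(self, env):
            import params
            env.set_params(params)
            # render hbase-site.xml, hbase-env.sh, hbase-policy.xml, log4j.properties

        def start(self, env, upgrade_type=None):
            self.configure(env)
            # launch the master, e.g. via an Execute of hbase-daemon.sh

        def stop(self, env, upgrade_type=None):
            pass  # hbase-daemon.sh stop master

        def status(self, env):
            pass  # check_process_status(<pid file>)

    if __name__ == '__main__':
        HbaseMaster().execute()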

+ 9394 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/metrics.json

@@ -0,0 +1,9394 @@
+{
+  "HBASE_REGIONSERVER": {
+    "Component": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/cpu/cpu_idle":{
+              "metric":"cpu_idle",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/cpu/cpu_nice":{
+              "metric":"cpu_nice",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/cpu/cpu_system":{
+              "metric":"cpu_system",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/cpu/cpu_user":{
+              "metric":"cpu_user",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/cpu/cpu_wio":{
+              "metric":"cpu_wio",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/disk_free":{
+              "metric":"disk_free",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/disk_total":{
+              "metric":"disk_total",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/read_bps":{
+              "metric":"read_bps",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/write_bps":{
+              "metric":"write_bps",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/load/load_fifteen":{
+              "metric":"load_fifteen",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/load/load_five":{
+              "metric":"load_five",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/load/load_one":{
+              "metric":"load_one",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/mem_buffers":{
+              "metric":"mem_buffers",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/mem_cached":{
+              "metric":"mem_cached",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/mem_free":{
+              "metric":"mem_free",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/mem_shared":{
+              "metric":"mem_shared",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/mem_total":{
+              "metric":"mem_total",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/swap_free":{
+              "metric":"swap_free",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/memory/swap_total":{
+              "metric":"swap_total",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/network/bytes_in":{
+              "metric":"bytes_in",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/network/bytes_out":{
+              "metric":"bytes_out",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/network/pkts_in":{
+              "metric":"pkts_in",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/network/pkts_out":{
+              "metric":"pkts_out",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/process/proc_run":{
+              "metric":"proc_run",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/process/proc_total":{
+              "metric":"proc_total",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/read_count":{
+              "metric":"read_count",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/write_count":{
+              "metric":"write_count",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/read_bytes":{
+              "metric":"read_bytes",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/write_bytes":{
+              "metric":"write_bytes",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/read_time":{
+              "metric":"read_time",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/disk/write_time":{
+              "metric":"write_time",
+              "pointInTime":true,
+              "temporal":true
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALSize": {
+              "metric": "regionserver.Server.mutationsWithoutWALSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowAppendCount": {
+              "metric": "regionserver.Server.slowAppendCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/disk/part_max_used": {
+              "metric": "part_max_used",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheCount": {
+              "metric": "regionserver.Server.blockCacheCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginSuccess_num_ops": {
+              "metric": "ugi.UgiMetrics.LoginSuccessNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsRunnable": {
+              "metric": "jvm.JvmMetrics.ThreadsRunnable",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_min": {
+              "metric": "regionserver.Server.Delete_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/threadsNew": {
+              "metric": "jvm.JvmMetrics.ThreadsNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationFailures": {
+              "metric": "regionserver.RegionServer.authorizationFailures",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_avg_time": {
+              "metric": "regionserver.RegionServer.QueueCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/boottime": {
+              "metric": "boottime",
+              "pointInTime": true,
+              "temporal": true,
+              "amsHostMetric":true
+            },
+            "metrics/hbase/regionserver/writeRequestsCount": {
+              "metric": "regionserver.Server.writeRequestCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_min": {
+              "metric": "regionserver.Server.Get_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcProcessingTime_num_ops": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logError": {
+              "metric": "jvm.JvmMetrics.LogError",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_75th_percentile": {
+              "metric": "regionserver.Server.Mutate_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheHitCount": {
+              "metric": "regionserver.Server.blockCacheHitCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowPutCount": {
+              "metric": "regionserver.Server.slowPutCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/regionServerStartup_avg_time": {
+              "metric": "regionserver.Server.regionServerStartTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheSize": {
+              "metric": "regionserver.Server.blockCacheSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/threadsBlocked": {
+              "metric": "jvm.JvmMetrics.ThreadsBlocked",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_median": {
+              "metric": "regionserver.Server.Mutate_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/readRequestsCount": {
+              "metric": "regionserver.Server.readRequestCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_min": {
+              "metric": "regionserver.Server.Mutate_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/storefileIndexSizeMB": {
+              "metric": "regionserver.Server.storeFileIndexSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_median": {
+              "metric": "regionserver.Server.Delete_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Get_num_ops": {
+              "metric": "regionserver.Server.Get_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/ScanNext_num_ops": {
+              "metric": "regionserver.Server.ScanNext_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Append_num_ops": {
+              "metric": "regionserver.Server.Append_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Delete_num_ops": {
+              "metric": "regionserver.Server.Delete_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Mutate_num_ops": {
+              "metric": "regionserver.Server.Mutate_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Increment_num_ops": {
+              "metric": "regionserver.Server.Increment_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Get_95th_percentile": {
+              "metric": "regionserver.Server.Get_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/ScanNext_95th_percentile": {
+              "metric": "regionserver.Server.ScanNext_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Mutate_95th_percentile": {
+              "metric": "regionserver.Server.Mutate_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Increment_95th_percentile": {
+              "metric": "regionserver.Server.Increment_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Append_95th_percentile": {
+              "metric": "regionserver.Server.Append_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/Delete_95th_percentile": {
+              "metric": "regionserver.Server.Delete_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/percentFilesLocal": {
+              "metric": "regionserver.Server.percentFilesLocal",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/Server/updatesBlockedTime": {
+              "metric": "regionserver.Server.updatesBlockedTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/ipc/IPC/numOpenConnections": {
+              "metric": "regionserver.RegionServer.numOpenConnections",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/ipc/IPC/numActiveHandler": {
+              "metric": "regionserver.RegionServer.numActiveHandler",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/ipc/IPC/numCallsInGeneralQueue": {
+              "metric": "regionserver.RegionServer.numCallsInGeneralQueue",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_mean": {
+              "metric": "regionserver.Server.Mutate_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcProcessingTime_avg_time": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+           "metrics/rpc/rpcAuthenticationFailures": {
+              "metric": "regionserver.RegionServer.authenticationFailures",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheHitRatio": {
+              "metric": "regionserver.Server.blockCacheCountHitPercent",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheHitPercent": {
+              "metric": "regionserver.Server.blockCountHitPercent",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheEvictedCount": {
+              "metric": "regionserver.Server.blockCacheEvictionCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_99th_percentile": {
+              "metric": "regionserver.Server.Mutate_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_max": {
+              "metric": "regionserver.Server.Get_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginFailure_avg_time": {
+              "metric": "ugi.UgiMetrics.LoginFailureAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logFatal": {
+              "metric": "jvm.JvmMetrics.LogFatal",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_99th_percentile": {
+              "metric": "regionserver.Server.Get_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginSuccess_avg_time": {
+              "metric": "ugi.UgiMetrics.LoginSuccessAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_99th_percentile": {
+              "metric": "regionserver.Server.Delete_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowIncrementCount": {
+              "metric": "regionserver.Server.slowIncrementCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/compactionQueueSize": {
+              "metric": "regionserver.Server.compactionQueueLength",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/flushTime_num_ops": {
+              "metric": "regionserver.Server.FlushTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_75th_percentile": {
+              "metric": "regionserver.Server.Get_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/stores": {
+              "metric": "regionserver.Server.storeCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginFailure_num_ops": {
+              "metric": "ugi.UgiMetrics.LoginFailureNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationSuccesses": {
+              "metric": "regionserver.RegionServer.authenticationSuccesses",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/ReceivedBytes": {
+              "metric": "regionserver.RegionServer.receivedBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcTimeMillis": {
+              "metric": "jvm.JvmMetrics.GcTimeMillis",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTerminated": {
+              "metric": "jvm.JvmMetrics.ThreadsTerminated",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_max": {
+              "metric": "regionserver.Server.Mutate_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_mean": {
+              "metric": "regionserver.Server.Get_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/regions": {
+              "metric": "regionserver.Server.regionCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheFree": {
+              "metric": "regionserver.Server.blockCacheFreeSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheMissCount": {
+              "metric": "regionserver.Server.blockCacheMissCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/flushQueueSize": {
+              "metric": "regionserver.Server.flushQueueLength",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_num_ops": {
+              "metric": "regionserver.RegionServer.QueueCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_mean": {
+              "metric": "regionserver.Server.Delete_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_aidle": {
+              "metric": "cpu_aidle",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/totalStaticIndexSizeKB": {
+              "metric": "regionserver.Server.staticIndexSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALCount": {
+              "metric": "regionserver.Server.mutationsWithoutWALCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_median": {
+              "metric": "regionserver.Server.Get_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_speed": {
+              "metric": "cpu_speed",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_max": {
+              "metric": "regionserver.Server.Delete_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/SentBytes": {
+              "metric": "regionserver.RegionServer.sentBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logWarn": {
+              "metric": "jvm.JvmMetrics.LogWarn",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/maxMemoryM": {
+              "metric": "jvm.metrics.maxMemoryM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTimedWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsTimedWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcCount": {
+              "metric": "jvm.JvmMetrics.GcCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/flushTime_avg_time": {
+              "metric": "regionserver.Server.FlushTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowGetCount": {
+              "metric": "regionserver.Server.slowGetCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/requests": {
+              "metric": "regionserver.Server.totalRequestCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/storefiles": {
+              "metric": "regionserver.Server.storeFileCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowDeleteCount": {
+              "metric": "regionserver.Server.slowDeleteCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/logInfo": {
+              "metric": "jvm.JvmMetrics.LogInfo",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/hlogFileCount": {
+              "metric": "regionserver.Server.hlogFileCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_95th_percentile": {
+              "metric": "regionserver.Server.Delete_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/memstoreSize": {
+              "metric": "regionserver.Server.memStoreSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_75th_percentile": {
+              "metric": "regionserver.Server.Delete_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationSuccesses": {
+              "metric": "regionserver.RegionServer.authorizationSuccesses",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/totalStaticBloomSizeKB": {
+              "metric": "regionserver.Server.staticBloomSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/increment_avg_time": {
+              "metric": "regionserver.Server.Increment_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountConcurrentMarkSweep": {
+              "metric": "jvm.JvmMetrics.GcCountConcurrentMarkSweep",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountParNew": {
+              "metric": "jvm.JvmMetrics.GcCountParNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisConcurrentMarkSweep": {
+              "metric": "jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisParNew": {
+              "metric": "jvm.JvmMetrics.GcTimeMillisParNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemHeapMaxM": {
+              "metric": "jvm.JvmMetrics.MemHeapMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemMaxM": {
+              "metric": "jvm.JvmMetrics.MemMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemNonHeapMaxM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/DroppedPubAll": {
+              "metric": "metricssystem.MetricsSystem.DroppedPubAll",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSinks": {
+              "metric": "metricssystem.MetricsSystem.NumActiveSinks",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSources": {
+              "metric": "metricssystem.MetricsSystem.NumActiveSources",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSinks": {
+              "metric": "metricssystem.MetricsSystem.NumAllSinks",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSources": {
+              "metric": "metricssystem.MetricsSystem.NumAllSources",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishAvgTime": {
+              "metric": "metricssystem.MetricsSystem.PublishAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishNumOps": {
+              "metric": "metricssystem.MetricsSystem.PublishNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineAvgTime": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineDropped": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineDropped",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineNumOps": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineQsize": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineQsize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotAvgTime": {
+              "metric": "metricssystem.MetricsSystem.SnapshotAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotNumOps": {
+              "metric": "metricssystem.MetricsSystem.SnapshotNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numCallsInPriorityQueue": {
+              "metric": "regionserver.RegionServer.numCallsInPriorityQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numCallsInReplicationQueue": {
+              "metric": "regionserver.RegionServer.numCallsInReplicationQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_75th_percentile": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_95th_percentile": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_99th_percentile": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_max": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_mean": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_min": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_75th_percentile": {
+              "metric": "regionserver.RegionServer.QueueCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_95th_percentile": {
+              "metric": "regionserver.RegionServer.QueueCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_99th_percentile": {
+              "metric": "regionserver.RegionServer.QueueCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_max": {
+              "metric": "regionserver.RegionServer.QueueCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_mean": {
+              "metric": "regionserver.RegionServer.QueueCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_min": {
+              "metric": "regionserver.RegionServer.QueueCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/queueSize": {
+              "metric": "regionserver.RegionServer.queueSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_75th_percentile": {
+              "metric": "regionserver.RegionServer.TotalCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_95th_percentile": {
+              "metric": "regionserver.RegionServer.TotalCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_99th_percentile": {
+              "metric": "regionserver.RegionServer.TotalCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_max": {
+              "metric": "regionserver.RegionServer.TotalCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_mean": {
+              "metric": "regionserver.RegionServer.TotalCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_median": {
+              "metric": "regionserver.RegionServer.TotalCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_min": {
+              "metric": "regionserver.RegionServer.TotalCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_num_ops": {
+              "metric": "regionserver.RegionServer.TotalCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Replication/sink/ageOfLastAppliedOp": {
+              "metric": "regionserver.Replication.sink.ageOfLastAppliedOp",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Replication/sink/appliedBatches": {
+              "metric": "regionserver.Replication.sink.appliedBatches",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Replication/sink/appliedOps": {
+              "metric": "regionserver.Replication.sink.appliedOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_75th_percentile": {
+              "metric": "regionserver.Server.Append_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_99th_percentile": {
+              "metric": "regionserver.Server.Append_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_max": {
+              "metric": "regionserver.Server.Append_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_mean": {
+              "metric": "regionserver.Server.Append_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_median": {
+              "metric": "regionserver.Server.Append_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_min": {
+              "metric": "regionserver.Server.Append_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/blockCacheExpressHitPercent": {
+              "metric": "regionserver.Server.blockCacheExpressHitPercent",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/blockedRequestCount": {
+              "metric": "regionserver.Server.blockedRequestCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/checkMutateFailedCount": {
+              "metric": "regionserver.Server.checkMutateFailedCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/checkMutatePassedCount": {
+              "metric": "regionserver.Server.checkMutatePassedCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/compactedCellsCount": {
+              "metric": "regionserver.Server.compactedCellsCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/compactedCellsSize": {
+              "metric": "regionserver.Server.compactedCellsSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/flushedCellsCount": {
+              "metric": "regionserver.Server.flushedCellsCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/flushedCellsSize": {
+              "metric": "regionserver.Server.flushedCellsSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_75th_percentile": {
+              "metric": "regionserver.Server.FlushTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_95th_percentile": {
+              "metric": "regionserver.Server.FlushTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_99th_percentile": {
+              "metric": "regionserver.Server.FlushTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_max": {
+              "metric": "regionserver.Server.FlushTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_mean": {
+              "metric": "regionserver.Server.FlushTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_min": {
+              "metric": "regionserver.Server.FlushTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/hlogFileSize": {
+              "metric": "regionserver.Server.hlogFileSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_75th_percentile": {
+              "metric": "regionserver.Server.Increment_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_99th_percentile": {
+              "metric": "regionserver.Server.Increment_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_max": {
+              "metric": "regionserver.Server.Increment_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_mean": {
+              "metric": "regionserver.Server.Increment_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_min": {
+              "metric": "regionserver.Server.Increment_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/majorCompactedCellsCount": {
+              "metric": "regionserver.Server.majorCompactedCellsCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/majorCompactedCellsSize": {
+              "metric": "regionserver.Server.majorCompactedCellsSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/percentFilesLocalSecondaryRegions": {
+              "metric": "regionserver.Server.percentFilesLocalSecondaryRegions",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_75th_percentile": {
+              "metric": "regionserver.Server.Replay_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_95th_percentile": {
+              "metric": "regionserver.Server.Replay_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_99th_percentile": {
+              "metric": "regionserver.Server.Replay_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_max": {
+              "metric": "regionserver.Server.Replay_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_mean": {
+              "metric": "regionserver.Server.Replay_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_median": {
+              "metric": "regionserver.Server.Replay_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_min": {
+              "metric": "regionserver.Server.Replay_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_num_ops": {
+              "metric": "regionserver.Server.Replay_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_75th_percentile": {
+              "metric": "regionserver.Server.ScanNext_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_99th_percentile": {
+              "metric": "regionserver.Server.ScanNext_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_max": {
+              "metric": "regionserver.Server.ScanNext_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_mean": {
+              "metric": "regionserver.Server.ScanNext_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_median": {
+              "metric": "regionserver.Server.ScanNext_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_min": {
+              "metric": "regionserver.Server.ScanNext_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/splitQueueLength": {
+              "metric": "regionserver.Server.splitQueueLength",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/splitRequestCount": {
+              "metric": "regionserver.Server.splitRequestCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/splitSuccessCount": {
+              "metric": "regionserver.Server.splitSuccessCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_75th_percentile": {
+              "metric": "regionserver.Server.SplitTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_95th_percentile": {
+              "metric": "regionserver.Server.SplitTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_99th_percentile": {
+              "metric": "regionserver.Server.SplitTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_max": {
+              "metric": "regionserver.Server.SplitTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_mean": {
+              "metric": "regionserver.Server.SplitTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_median": {
+              "metric": "regionserver.Server.SplitTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_min": {
+              "metric": "regionserver.Server.SplitTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_num_ops": {
+              "metric": "regionserver.Server.SplitTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/storeFileSize": {
+              "metric": "regionserver.Server.storeFileSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/appendCount": {
+              "metric": "regionserver.WAL.appendCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_75th_percentile": {
+              "metric": "regionserver.WAL.AppendSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_95th_percentile": {
+              "metric": "regionserver.WAL.AppendSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_99th_percentile": {
+              "metric": "regionserver.WAL.AppendSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_max": {
+              "metric": "regionserver.WAL.AppendSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_mean": {
+              "metric": "regionserver.WAL.AppendSize_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_median": {
+              "metric": "regionserver.WAL.AppendSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_min": {
+              "metric": "regionserver.WAL.AppendSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_num_ops": {
+              "metric": "regionserver.WAL.AppendSize_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_75th_percentile": {
+              "metric": "regionserver.WAL.AppendTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_95th_percentile": {
+              "metric": "regionserver.WAL.AppendTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_99th_percentile": {
+              "metric": "regionserver.WAL.AppendTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_max": {
+              "metric": "regionserver.WAL.AppendTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_mean": {
+              "metric": "regionserver.WAL.AppendTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_median": {
+              "metric": "regionserver.WAL.AppendTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_min": {
+              "metric": "regionserver.WAL.AppendTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_num_ops": {
+              "metric": "regionserver.WAL.AppendTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/lowReplicaRollRequest": {
+              "metric": "regionserver.WAL.lowReplicaRollRequest",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/rollRequest": {
+              "metric": "regionserver.WAL.rollRequest",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/slowAppendCount": {
+              "metric": "regionserver.WAL.slowAppendCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_75th_percentile": {
+              "metric": "regionserver.WAL.SyncTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_95th_percentile": {
+              "metric": "regionserver.WAL.SyncTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_99th_percentile": {
+              "metric": "regionserver.WAL.SyncTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_max": {
+              "metric": "regionserver.WAL.SyncTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_mean": {
+              "metric": "regionserver.WAL.SyncTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_median": {
+              "metric": "regionserver.WAL.SyncTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_min": {
+              "metric": "regionserver.WAL.SyncTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_num_ops": {
+              "metric": "regionserver.WAL.SyncTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsAvgTime": {
+              "metric": "ugi.UgiMetrics.GetGroupsAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsNumOps": {
+              "metric": "ugi.UgiMetrics.GetGroupsNumOps",
+              "pointInTime": true,
+              "temporal": true
+            }
+          }
+        }
+      },
+      {
+        "type": "jmx",
+        "metrics": {
+          "default": {
+            "metrics/hbase/regionserver/slowPutCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowPutCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/percentFilesLocal": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.percentFilesLocal",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheFree": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheFreeSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.mutationsWithoutWALSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheMissCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheMissCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/flushQueueSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.flushQueueLength",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/ScanNext_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.ScanNext_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Increment_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Increment_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Append_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Append_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/ScanNext_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.ScanNext_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Append_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Append_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Increment_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Increment_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/updatesBlockedTime": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.updatesBlockedTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numActiveHandler": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numActiveHandler",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numCallsInGeneralQueue": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numCallsInGeneralQueue",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numOpenConnections": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numOpenConnections",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowAppendCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowAppendCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowIncrementCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowIncrementCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheEvictedCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheEvictionCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_median": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_median",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_mean": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowGetCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowGetCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_75th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_75th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/storefileIndexSizeMB": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.storeFileIndexSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_median": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_median",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_max": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/totalStaticIndexSizeKB": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.staticIndexSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_mean": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/requests": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.totalRequestCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/storefiles": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.storeFileCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.mutationsWithoutWALCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_median": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_median",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowDeleteCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowDeleteCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/stores": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.storeCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_max": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_mean": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_75th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_75th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_max": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_75th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_75th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/totalStaticBloomSizeKB": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.staticBloomSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheHitCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheHitCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            }
+          }
+        }
+      }
+    ],
+    "HostComponent": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/cpu/cpu_idle":{
+              "metric":"cpu_idle",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/cpu/cpu_nice":{
+              "metric":"cpu_nice",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/cpu/cpu_system":{
+              "metric":"cpu_system",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/cpu/cpu_user":{
+              "metric":"cpu_user",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/cpu/cpu_wio":{
+              "metric":"cpu_wio",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/disk_free":{
+              "metric":"disk_free",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/disk_total":{
+              "metric":"disk_total",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/read_bps":{
+              "metric":"read_bps",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/write_bps":{
+              "metric":"write_bps",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/load/load_fifteen":{
+              "metric":"load_fifteen",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/load/load_five":{
+              "metric":"load_five",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/load/load_one":{
+              "metric":"load_one",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/mem_buffers":{
+              "metric":"mem_buffers",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/mem_cached":{
+              "metric":"mem_cached",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/mem_free":{
+              "metric":"mem_free",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/mem_shared":{
+              "metric":"mem_shared",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/mem_total":{
+              "metric":"mem_total",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/swap_free":{
+              "metric":"swap_free",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/memory/swap_total":{
+              "metric":"swap_total",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/network/bytes_in":{
+              "metric":"bytes_in",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/network/bytes_out":{
+              "metric":"bytes_out",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/network/pkts_in":{
+              "metric":"pkts_in",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/network/pkts_out":{
+              "metric":"pkts_out",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/process/proc_run":{
+              "metric":"proc_run",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/process/proc_total":{
+              "metric":"proc_total",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/read_count":{
+              "metric":"read_count",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/write_count":{
+              "metric":"write_count",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/read_bytes":{
+              "metric":"read_bytes",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/write_bytes":{
+              "metric":"write_bytes",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/read_time":{
+              "metric":"read_time",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/disk/write_time":{
+              "metric":"write_time",
+              "pointInTime":true,
+              "temporal":true,
+              "amsHostMetric":true
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALSize": {
+              "metric": "regionserver.Server.mutationsWithoutWALSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginSuccess_avg_time": {
+              "metric": "ugi.UgiMetrics.LoginSuccessAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_99th_percentile": {
+              "metric": "regionserver.Server.Delete_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowAppendCount": {
+              "metric": "regionserver.Server.slowAppendCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowIncrementCount": {
+              "metric": "regionserver.Server.slowIncrementCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_95th_percentile": {
+              "metric": "regionserver.Server.Mutate_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/compactionQueueSize": {
+              "metric": "regionserver.Server.compactionQueueLength",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/disk/part_max_used": {
+              "metric": "part_max_used",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/flushTime_num_ops": {
+              "metric": "regionserver.Server.FlushTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheCount": {
+              "metric": "regionserver.Server.blockCacheCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginSuccess_num_ops": {
+              "metric": "ugi.UgiMetrics.LoginSuccessNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_75th_percentile": {
+              "metric": "regionserver.Server.Get_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/stores": {
+              "metric": "regionserver.Server.storeCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginFailure_num_ops": {
+              "metric": "ugi.UgiMetrics.LoginFailureNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationSuccesses": {
+              "metric": "regionserver.RegionServer.authenticationSuccesses",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsRunnable": {
+              "metric": "jvm.JvmMetrics.ThreadsRunnable",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsNew": {
+              "metric": "jvm.JvmMetrics.ThreadsNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_min": {
+              "metric": "regionserver.Server.Delete_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationFailures": {
+              "metric": "regionserver.RegionServer.authorizationFailures",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_avg_time": {
+              "metric": "regionserver.RegionServer.QueueCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_num_ops": {
+              "metric": "regionserver.Server.Get_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/ReceivedBytes": {
+              "metric": "regionserver.RegionServer.receivedBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/NumOpenConnections": {
+              "metric": "regionserver.RegionServer.NumOpenConnections",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcTimeMillis": {
+              "metric": "jvm.JvmMetrics.GcTimeMillis",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTerminated": {
+              "metric": "jvm.JvmMetrics.ThreadsTerminated",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_max": {
+              "metric": "regionserver.Server.Mutate_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_num_ops": {
+              "metric": "regionserver.Server.Delete_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/boottime": {
+              "metric": "boottime",
+              "pointInTime": true,
+              "temporal": true,
+              "amsHostMetric":true
+            },
+            "metrics/hbase/regionserver/writeRequestsCount": {
+              "metric": "regionserver.Server.writeRequestCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_min": {
+              "metric": "regionserver.Server.Get_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcProcessingTime_num_ops": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_mean": {
+              "metric": "regionserver.Server.Get_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/logError": {
+              "metric": "jvm.JvmMetrics.LogError",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_75th_percentile": {
+              "metric": "regionserver.Server.Mutate_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/regions": {
+              "metric": "regionserver.Server.regionCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheHitCount": {
+              "metric": "regionserver.Server.blockCacheHitCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowPutCount": {
+              "metric": "regionserver.Server.slowPutCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheFree": {
+              "metric": "regionserver.Server.blockCacheFreeSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheMissCount": {
+              "metric": "regionserver.Server.blockCacheMissCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/flushQueueSize": {
+              "metric": "regionserver.Server.flushQueueLength",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheSize": {
+              "metric": "regionserver.Server.blockCacheSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_num_ops": {
+              "metric": "regionserver.Server.Mutate_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/threadsBlocked": {
+              "metric": "jvm.JvmMetrics.ThreadsBlocked",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_num_ops": {
+              "metric": "regionserver.RegionServer.QueueCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_median": {
+              "metric": "regionserver.Server.Mutate_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_mean": {
+              "metric": "regionserver.Server.Delete_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_aidle": {
+              "metric": "cpu_aidle",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/readRequestsCount": {
+              "metric": "regionserver.Server.readRequestCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_min": {
+              "metric": "regionserver.Server.Mutate_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/storefileIndexSizeMB": {
+              "metric": "regionserver.Server.storeFileIndexSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_median": {
+              "metric": "regionserver.Server.Delete_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/totalStaticIndexSizeKB": {
+              "metric": "regionserver.Server.staticIndexSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_mean": {
+              "metric": "regionserver.Server.Mutate_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALCount": {
+              "metric": "regionserver.Server.mutationsWithoutWALCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_median": {
+              "metric": "regionserver.Server.Get_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_max": {
+              "metric": "regionserver.Server.Delete_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_speed": {
+              "metric": "cpu_speed",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/RpcProcessingTime_avg_time": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationFailures": {
+              "metric": "regionserver.RegionServer.authenticationFailures",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/percentFilesLocal": {
+              "metric": "regionserver.Server.percentFilesLocal",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheHitRatio": {
+              "metric": "regionserver.Server.blockCacheCountHitPercent",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/SentBytes": {
+              "metric": "regionserver.RegionServer.SentBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/maxMemoryM": {
+              "metric": "jvm.metrics.maxMemoryM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logWarn": {
+              "metric": "jvm.JvmMetrics.LogWarn",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTimedWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsTimedWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcCount": {
+              "metric": "jvm.JvmMetrics.GcCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/flushTime_avg_time": {
+              "metric": "regionserver.Server.FlushTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/blockCacheEvictedCount": {
+              "metric": "regionserver.Server.blockCacheEvictionCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowGetCount": {
+              "metric": "regionserver.Server.slowGetCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/requests": {
+              "metric": "regionserver.Server.totalRequestCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/storefiles": {
+              "metric": "regionserver.Server.storeFileCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/slowDeleteCount": {
+              "metric": "regionserver.Server.slowDeleteCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/putRequestLatency_99th_percentile": {
+              "metric": "regionserver.Server.Mutate_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/logInfo": {
+              "metric": "jvm.JvmMetrics.LogInfo",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/hlogFileCount": {
+              "metric": "regionserver.Server.hlogFileCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_95th_percentile": {
+              "metric": "regionserver.Server.Delete_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_95th_percentile": {
+              "metric": "regionserver.Server.Get_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_max": {
+              "metric": "regionserver.Server.Get_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/memstoreSize": {
+              "metric": "regionserver.Server.memStoreSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_75th_percentile": {
+              "metric": "regionserver.Server.Delete_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/loginFailure_avg_time": {
+              "metric": "ugi.UgiMetrics.LoginFailureAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationSuccesses": {
+              "metric": "regionserver.RegionServer.authorizationSuccesses",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logFatal": {
+              "metric": "jvm.JvmMetrics.LogFatal",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/totalStaticBloomSizeKB": {
+              "metric": "regionserver.Server.staticBloomSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/hbase/regionserver/getRequestLatency_99th_percentile": {
+              "metric": "regionserver.Server.Get_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountConcurrentMarkSweep": {
+              "metric": "jvm.JvmMetrics.GcCountConcurrentMarkSweep",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountParNew": {
+              "metric": "jvm.JvmMetrics.GcCountParNew",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisConcurrentMarkSweep": {
+              "metric": "jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisParNew": {
+              "metric": "jvm.JvmMetrics.GcTimeMillisParNew",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/LogFatal": {
+              "metric": "jvm.JvmMetrics.LogFatal",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemHeapMaxM": {
+              "metric": "jvm.JvmMetrics.MemHeapMaxM",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemMaxM": {
+              "metric": "jvm.JvmMetrics.MemMaxM",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemNonHeapMaxM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapMaxM",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/DroppedPubAll": {
+              "metric": "metricssystem.MetricsSystem.DroppedPubAll",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSinks": {
+              "metric": "metricssystem.MetricsSystem.NumActiveSinks",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSources": {
+              "metric": "metricssystem.MetricsSystem.NumActiveSources",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSinks": {
+              "metric": "metricssystem.MetricsSystem.NumAllSinks",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSources": {
+              "metric": "metricssystem.MetricsSystem.NumAllSources",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishAvgTime": {
+              "metric": "metricssystem.MetricsSystem.PublishAvgTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishNumOps": {
+              "metric": "metricssystem.MetricsSystem.PublishNumOps",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineAvgTime": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineAvgTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineDropped": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineDropped",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineNumOps": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineNumOps",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineQsize": {
+              "metric": "metricssystem.MetricsSystem.Sink_timelineQsize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotAvgTime": {
+              "metric": "metricssystem.MetricsSystem.SnapshotAvgTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotNumOps": {
+              "metric": "metricssystem.MetricsSystem.SnapshotNumOps",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/authorizationSuccesses": {
+              "metric": "regionserver.RegionServer.authorizationSuccesses",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numActiveHandler": {
+              "metric": "regionserver.RegionServer.numActiveHandler",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numCallsInGeneralQueue": {
+              "metric": "regionserver.RegionServer.numCallsInGeneralQueue",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numCallsInPriorityQueue": {
+              "metric": "regionserver.RegionServer.numCallsInPriorityQueue",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numCallsInReplicationQueue": {
+              "metric": "regionserver.RegionServer.numCallsInReplicationQueue",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/numOpenConnections": {
+              "metric": "regionserver.RegionServer.numOpenConnections",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_75th_percentile": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_95th_percentile": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_99th_percentile": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_max": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_mean": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/ProcessCallTime_min": {
+              "metric": "regionserver.RegionServer.ProcessCallTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_75th_percentile": {
+              "metric": "regionserver.RegionServer.QueueCallTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_95th_percentile": {
+              "metric": "regionserver.RegionServer.QueueCallTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_99th_percentile": {
+              "metric": "regionserver.RegionServer.QueueCallTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_max": {
+              "metric": "regionserver.RegionServer.QueueCallTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_mean": {
+              "metric": "regionserver.RegionServer.QueueCallTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/QueueCallTime_min": {
+              "metric": "regionserver.RegionServer.QueueCallTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/queueSize": {
+              "metric": "regionserver.RegionServer.queueSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/sentBytes": {
+              "metric": "regionserver.RegionServer.sentBytes",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_75th_percentile": {
+              "metric": "regionserver.RegionServer.TotalCallTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_95th_percentile": {
+              "metric": "regionserver.RegionServer.TotalCallTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_99th_percentile": {
+              "metric": "regionserver.RegionServer.TotalCallTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_max": {
+              "metric": "regionserver.RegionServer.TotalCallTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_mean": {
+              "metric": "regionserver.RegionServer.TotalCallTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_median": {
+              "metric": "regionserver.RegionServer.TotalCallTime_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_min": {
+              "metric": "regionserver.RegionServer.TotalCallTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/RegionServer/TotalCallTime_num_ops": {
+              "metric": "regionserver.RegionServer.TotalCallTime_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Replication/sink/ageOfLastAppliedOp": {
+              "metric": "regionserver.Replication.sink.ageOfLastAppliedOp",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Replication/sink/appliedBatches": {
+              "metric": "regionserver.Replication.sink.appliedBatches",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Replication/sink/appliedOps": {
+              "metric": "regionserver.Replication.sink.appliedOps",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_75th_percentile": {
+              "metric": "regionserver.Server.Append_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_95th_percentile": {
+              "metric": "regionserver.Server.Append_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_99th_percentile": {
+              "metric": "regionserver.Server.Append_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_max": {
+              "metric": "regionserver.Server.Append_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_mean": {
+              "metric": "regionserver.Server.Append_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_median": {
+              "metric": "regionserver.Server.Append_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_min": {
+              "metric": "regionserver.Server.Append_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Append_num_ops": {
+              "metric": "regionserver.Server.Append_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/blockCacheExpressHitPercent": {
+              "metric": "regionserver.Server.blockCacheExpressHitPercent",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/blockedRequestCount": {
+              "metric": "regionserver.Server.blockedRequestCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/checkMutateFailedCount": {
+              "metric": "regionserver.Server.checkMutateFailedCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/checkMutatePassedCount": {
+              "metric": "regionserver.Server.checkMutatePassedCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/compactedCellsCount": {
+              "metric": "regionserver.Server.compactedCellsCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/compactedCellsSize": {
+              "metric": "regionserver.Server.compactedCellsSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/flushedCellsCount": {
+              "metric": "regionserver.Server.flushedCellsCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/flushedCellsSize": {
+              "metric": "regionserver.Server.flushedCellsSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_75th_percentile": {
+              "metric": "regionserver.Server.FlushTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_95th_percentile": {
+              "metric": "regionserver.Server.FlushTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_99th_percentile": {
+              "metric": "regionserver.Server.FlushTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_max": {
+              "metric": "regionserver.Server.FlushTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_mean": {
+              "metric": "regionserver.Server.FlushTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/FlushTime_min": {
+              "metric": "regionserver.Server.FlushTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Get_99th_percentile": {
+              "metric": "regionserver.Server.Get_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/hlogFileSize": {
+              "metric": "regionserver.Server.hlogFileSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_75th_percentile": {
+              "metric": "regionserver.Server.Increment_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_95th_percentile": {
+              "metric": "regionserver.Server.Increment_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_99th_percentile": {
+              "metric": "regionserver.Server.Increment_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_max": {
+              "metric": "regionserver.Server.Increment_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_mean": {
+              "metric": "regionserver.Server.Increment_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_median": {
+              "metric": "regionserver.Server.Increment_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_min": {
+              "metric": "regionserver.Server.Increment_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Increment_num_ops": {
+              "metric": "regionserver.Server.Increment_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/majorCompactedCellsCount": {
+              "metric": "regionserver.Server.majorCompactedCellsCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/majorCompactedCellsSize": {
+              "metric": "regionserver.Server.majorCompactedCellsSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/percentFilesLocalSecondaryRegions": {
+              "metric": "regionserver.Server.percentFilesLocalSecondaryRegions",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/regionServerStartTime": {
+              "metric": "regionserver.Server.regionServerStartTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_75th_percentile": {
+              "metric": "regionserver.Server.Replay_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_95th_percentile": {
+              "metric": "regionserver.Server.Replay_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_99th_percentile": {
+              "metric": "regionserver.Server.Replay_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_max": {
+              "metric": "regionserver.Server.Replay_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_mean": {
+              "metric": "regionserver.Server.Replay_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_median": {
+              "metric": "regionserver.Server.Replay_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_min": {
+              "metric": "regionserver.Server.Replay_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/Replay_num_ops": {
+              "metric": "regionserver.Server.Replay_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_75th_percentile": {
+              "metric": "regionserver.Server.ScanNext_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_95th_percentile": {
+              "metric": "regionserver.Server.ScanNext_95th_percentile",
+              "unit": "ms",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_99th_percentile": {
+              "metric": "regionserver.Server.ScanNext_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_max": {
+              "metric": "regionserver.Server.ScanNext_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_mean": {
+              "metric": "regionserver.Server.ScanNext_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_median": {
+              "metric": "regionserver.Server.ScanNext_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_min": {
+              "metric": "regionserver.Server.ScanNext_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/ScanNext_num_ops": {
+              "metric": "regionserver.Server.ScanNext_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/splitQueueLength": {
+              "metric": "regionserver.Server.splitQueueLength",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/splitRequestCount": {
+              "metric": "regionserver.Server.splitRequestCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/splitSuccessCount": {
+              "metric": "regionserver.Server.splitSuccessCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_75th_percentile": {
+              "metric": "regionserver.Server.SplitTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_95th_percentile": {
+              "metric": "regionserver.Server.SplitTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_99th_percentile": {
+              "metric": "regionserver.Server.SplitTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_max": {
+              "metric": "regionserver.Server.SplitTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_mean": {
+              "metric": "regionserver.Server.SplitTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_median": {
+              "metric": "regionserver.Server.SplitTime_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_min": {
+              "metric": "regionserver.Server.SplitTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/SplitTime_num_ops": {
+              "metric": "regionserver.Server.SplitTime_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/staticBloomSize": {
+              "metric": "regionserver.Server.staticBloomSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/storeFileSize": {
+              "metric": "regionserver.Server.storeFileSize",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/Server/updatesBlockedTime": {
+              "metric": "regionserver.Server.updatesBlockedTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/appendCount": {
+              "metric": "regionserver.WAL.appendCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_75th_percentile": {
+              "metric": "regionserver.WAL.AppendSize_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_95th_percentile": {
+              "metric": "regionserver.WAL.AppendSize_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_99th_percentile": {
+              "metric": "regionserver.WAL.AppendSize_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_max": {
+              "metric": "regionserver.WAL.AppendSize_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_mean": {
+              "metric": "regionserver.WAL.AppendSize_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_median": {
+              "metric": "regionserver.WAL.AppendSize_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_min": {
+              "metric": "regionserver.WAL.AppendSize_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_num_ops": {
+              "metric": "regionserver.WAL.AppendSize_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_75th_percentile": {
+              "metric": "regionserver.WAL.AppendTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_95th_percentile": {
+              "metric": "regionserver.WAL.AppendTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_99th_percentile": {
+              "metric": "regionserver.WAL.AppendTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_max": {
+              "metric": "regionserver.WAL.AppendTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_mean": {
+              "metric": "regionserver.WAL.AppendTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_median": {
+              "metric": "regionserver.WAL.AppendTime_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_min": {
+              "metric": "regionserver.WAL.AppendTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_num_ops": {
+              "metric": "regionserver.WAL.AppendTime_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/lowReplicaRollRequest": {
+              "metric": "regionserver.WAL.lowReplicaRollRequest",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/rollRequest": {
+              "metric": "regionserver.WAL.rollRequest",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/slowAppendCount": {
+              "metric": "regionserver.WAL.slowAppendCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_75th_percentile": {
+              "metric": "regionserver.WAL.SyncTime_75th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_95th_percentile": {
+              "metric": "regionserver.WAL.SyncTime_95th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_99th_percentile": {
+              "metric": "regionserver.WAL.SyncTime_99th_percentile",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_max": {
+              "metric": "regionserver.WAL.SyncTime_max",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_mean": {
+              "metric": "regionserver.WAL.SyncTime_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_median": {
+              "metric": "regionserver.WAL.SyncTime_median",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_min": {
+              "metric": "regionserver.WAL.SyncTime_min",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_num_ops": {
+              "metric": "regionserver.WAL.SyncTime_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsAvgTime": {
+              "metric": "ugi.UgiMetrics.GetGroupsAvgTime",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsNumOps": {
+              "metric": "ugi.UgiMetrics.GetGroupsNumOps",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginFailureAvgTime": {
+              "metric": "ugi.UgiMetrics.LoginFailureAvgTime",
+              "pointInTime": false,
+              "temporal": true
+            }
+          }
+        }
+      },
+      {
+        "type": "jmx",
+        "metrics": {
+          "default": {
+            "metrics/hbase/regionserver/slowPutCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowPutCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/percentFilesLocal": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.percentFilesLocal",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheFree": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheFreeSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.mutationsWithoutWALSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheMissCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheMissCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/flushQueueSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.flushQueueLength",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/ScanNext_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.ScanNext_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Increment_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Increment_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Append_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Append_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/ScanNext_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.ScanNext_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Append_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Append_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Increment_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Increment_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/updatesBlockedTime": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.updatesBlockedTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numActiveHandler": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numActiveHandler",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numCallsInGeneralQueue": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numCallsInGeneralQueue",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numOpenConnections": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numOpenConnections",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowAppendCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowAppendCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowIncrementCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowIncrementCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheEvictedCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheEvictionCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_median": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_median",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_mean": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowGetCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowGetCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_75th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_75th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/storefileIndexSizeMB": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.storeFileIndexSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_median": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_median",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_max": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/totalStaticIndexSizeKB": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.staticIndexSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_mean": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/requests": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.totalRequestCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/storefiles": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.storeFileCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.mutationsWithoutWALCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_median": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_median",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/slowDeleteCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowDeleteCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/stores": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.storeCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_95th_percentile",
+              "unit": "ms",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_max": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_mean": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_75th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_75th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_max": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/putRequestLatency_75th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Mutate_75th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/totalStaticBloomSizeKB": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.staticBloomSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheHitCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheHitCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            }
+          }
+        }
+      }
+    ]
+  },
+  "HBASE_MASTER": {
+    "Component": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/load/load_one": {
+              "metric": "load_one",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/memory/swap_total": {
+              "metric": "swap_total",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/process/proc_total": {
+              "metric": "proc_total",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/balance_avg_time": {
+              "metric": "master.Balancer.BalancerCluster_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/disk/part_max_used": {
+              "metric": "part_max_used",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/balance_num_ops": {
+              "metric": "master.Balancer.BalancerCluster_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/network/bytes_in": {
+              "metric": "bytes_in",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/splitTime_num_ops": {
+              "metric": "master.FileSystem.HlogSplitTime_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationSuccesses": {
+              "metric": "master.Master.authenticationSuccesses",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/network/pkts_in": {
+              "metric": "pkts_in",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsRunnable": {
+              "metric": "jvm.JvmMetrics.ThreadsRunnable",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsNew": {
+              "metric": "jvm.JvmMetrics.ThreadsNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationFailures": {
+              "metric": "master.Master.authorizationFailures",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_avg_time": {
+              "metric": "master.Master.QueueCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcTimeMillis": {
+              "metric": "jvm.JvmMetrics.GcTimeMillis",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTerminated": {
+              "metric": "jvm.JvmMetrics.ThreadsTerminated",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/network/bytes_out": {
+              "metric": "bytes_out",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/load/load_five": {
+              "metric": "load_five",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/boottime": {
+              "metric": "boottime",
+              "pointInTime": true,
+              "temporal": true,
+              "amsHostMetric":true
+            },
+            "metrics/rpc/RpcProcessingTime_num_ops": {
+              "metric": "master.Master.ProcessCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logError": {
+              "metric": "jvm.JvmMetrics.LogError",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/process/proc_run": {
+              "metric": "proc_run",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsBlocked": {
+              "metric": "jvm.JvmMetrics.ThreadsBlocked",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_num_ops": {
+              "metric": "master.Master.QueueCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/splitSize_num_ops": {
+              "metric": "master.FileSystem.HlogSplitSize_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_aidle": {
+              "metric": "cpu_aidle",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/network/pkts_out": {
+              "metric": "pkts_out",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/splitSize_avg_time": {
+              "metric": "master.FileSystem.HlogSplitSize_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_speed": {
+              "metric": "cpu_speed",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/cluster_requests": {
+              "metric": "master.Server.clusterRequests",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcProcessingTime_avg_time": {
+              "metric": "master.Master.ProcessCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationFailures": {
+              "metric": "master.Master.authenticationFailures",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/maxMemoryM": {
+              "metric": "jvm.metrics.maxMemoryM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logWarn": {
+              "metric": "jvm.JvmMetrics.LogWarn",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTimedWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsTimedWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcCount": {
+              "metric": "jvm.JvmMetrics.GcCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/memory/mem_buffers": {
+              "metric": "mem_buffers",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/load/load_fifteen": {
+              "metric": "load_fifteen",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logInfo": {
+              "metric": "jvm.JvmMetrics.LogInfo",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationSuccesses": {
+              "metric": "master.Master.authorizationSuccesses",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/logFatal": {
+              "metric": "jvm.JvmMetrics.LogFatal",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountConcurrentMarkSweep": {
+              "metric":"jvm.JvmMetrics.GcCountConcurrentMarkSweep",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountParNew": {
+              "metric":"jvm.JvmMetrics.GcCountParNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisConcurrentMarkSweep": {
+              "metric":"jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisParNew": {
+              "metric":"jvm.JvmMetrics.GcTimeMillisParNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemHeapMaxM": {
+              "metric":"jvm.JvmMetrics.MemHeapMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemMaxM": {
+              "metric":"jvm.JvmMetrics.MemMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemNonHeapMaxM": {
+              "metric":"jvm.JvmMetrics.MemNonHeapMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_75th_percentile": {
+              "metric":"master.AssignmentManager.Assign_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_95th_percentile": {
+              "metric":"master.AssignmentManager.Assign_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_99th_percentile": {
+              "metric":"master.AssignmentManager.Assign_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_max": {
+              "metric":"master.AssignmentManager.Assign_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_mean": {
+              "metric":"master.AssignmentManager.Assign_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_median": {
+              "metric":"master.AssignmentManager.Assign_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_min": {
+              "metric":"master.AssignmentManager.Assign_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_num_ops": {
+              "metric":"master.AssignmentManager.Assign_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_75th_percentile": {
+              "metric":"master.AssignmentManager.BulkAssign_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_95th_percentile": {
+              "metric":"master.AssignmentManager.BulkAssign_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_99th_percentile": {
+              "metric":"master.AssignmentManager.BulkAssign_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_max": {
+              "metric":"master.AssignmentManager.BulkAssign_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_mean": {
+              "metric":"master.AssignmentManager.BulkAssign_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_median": {
+              "metric":"master.AssignmentManager.BulkAssign_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_min": {
+              "metric":"master.AssignmentManager.BulkAssign_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_num_ops": {
+              "metric":"master.AssignmentManager.BulkAssign_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/ritCount": {
+              "metric":"master.AssignmentManager.ritCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/ritCountOverThreshold": {
+              "metric":"master.AssignmentManager.ritCountOverThreshold",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/ritOldestAge": {
+              "metric":"master.AssignmentManager.ritOldestAge",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_75th_percentile": {
+              "metric":"master.Balancer.BalancerCluster_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_95th_percentile": {
+              "metric":"master.Balancer.BalancerCluster_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_99th_percentile": {
+              "metric":"master.Balancer.BalancerCluster_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_max": {
+              "metric":"master.Balancer.BalancerCluster_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_mean": {
+              "metric":"master.Balancer.BalancerCluster_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_min": {
+              "metric":"master.Balancer.BalancerCluster_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/miscInvocationCount": {
+              "metric":"master.Balancer.miscInvocationCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_75th_percentile": {
+              "metric":"master.FileSystem.HlogSplitSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_95th_percentile": {
+              "metric":"master.FileSystem.HlogSplitSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_99th_percentile": {
+              "metric":"master.FileSystem.HlogSplitSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_max": {
+              "metric":"master.FileSystem.HlogSplitSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_median": {
+              "metric":"master.FileSystem.HlogSplitSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_min": {
+              "metric":"master.FileSystem.HlogSplitSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_75th_percentile": {
+              "metric":"master.FileSystem.HlogSplitTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_95th_percentile": {
+              "metric":"master.FileSystem.HlogSplitTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_99th_percentile": {
+              "metric":"master.FileSystem.HlogSplitTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_max": {
+              "metric":"master.FileSystem.HlogSplitTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_mean": {
+              "metric":"master.FileSystem.HlogSplitTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_median": {
+              "metric":"master.FileSystem.HlogSplitTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_min": {
+              "metric":"master.FileSystem.HlogSplitTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_75th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_95th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_99th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_max": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_mean": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_median": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_min": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_num_ops": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_75th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_95th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_99th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_max": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_mean": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_median": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_min": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_num_ops": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numActiveHandler": {
+              "metric":"master.Master.numActiveHandler",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numCallsInGeneralQueue": {
+              "metric":"master.Master.numCallsInGeneralQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numCallsInPriorityQueue": {
+              "metric":"master.Master.numCallsInPriorityQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numCallsInReplicationQueue": {
+              "metric":"master.Master.numCallsInReplicationQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numOpenConnections": {
+              "metric":"master.Master.numOpenConnections",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_75th_percentile": {
+              "metric":"master.Master.ProcessCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_95th_percentile": {
+              "metric":"master.Master.ProcessCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_99th_percentile": {
+              "metric":"master.Master.ProcessCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_max": {
+              "metric":"master.Master.ProcessCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_mean": {
+              "metric":"master.Master.ProcessCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_median": {
+              "metric":"master.Master.ProcessCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_min": {
+              "metric":"master.Master.ProcessCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_75th_percentile": {
+              "metric":"master.Master.QueueCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_95th_percentile": {
+              "metric":"master.Master.QueueCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_99th_percentile": {
+              "metric":"master.Master.QueueCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_max": {
+              "metric":"master.Master.QueueCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_mean": {
+              "metric":"master.Master.QueueCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_min": {
+              "metric":"master.Master.QueueCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/queueSize": {
+              "metric":"master.Master.queueSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/receivedBytes": {
+              "metric":"master.Master.receivedBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/sentBytes": {
+              "metric":"master.Master.sentBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_75th_percentile": {
+              "metric":"master.Master.TotalCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_95th_percentile": {
+              "metric":"master.Master.TotalCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_99th_percentile": {
+              "metric":"master.Master.TotalCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_max": {
+              "metric":"master.Master.TotalCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_mean": {
+              "metric":"master.Master.TotalCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_median": {
+              "metric":"master.Master.TotalCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_min": {
+              "metric":"master.Master.TotalCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_num_ops": {
+              "metric":"master.Master.TotalCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/averageLoad": {
+              "metric":"master.Server.averageLoad",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/masterActiveTime": {
+              "metric":"master.Server.masterActiveTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/masterStartTime": {
+              "metric":"master.Server.masterStartTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/numDeadRegionServers": {
+              "metric":"master.Server.numDeadRegionServers",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/numRegionServers": {
+              "metric":"master.Server.numRegionServers",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/DroppedPubAll": {
+              "metric":"metricssystem.MetricsSystem.DroppedPubAll",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSinks": {
+              "metric":"metricssystem.MetricsSystem.NumActiveSinks",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSources": {
+              "metric":"metricssystem.MetricsSystem.NumActiveSources",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSinks": {
+              "metric":"metricssystem.MetricsSystem.NumAllSinks",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSources": {
+              "metric":"metricssystem.MetricsSystem.NumAllSources",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishAvgTime": {
+              "metric":"metricssystem.MetricsSystem.PublishAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishNumOps": {
+              "metric":"metricssystem.MetricsSystem.PublishNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineAvgTime": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineDropped": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineDropped",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineNumOps": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineQsize": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineQsize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotAvgTime": {
+              "metric":"metricssystem.MetricsSystem.SnapshotAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotNumOps": {
+              "metric":"metricssystem.MetricsSystem.SnapshotNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/appendCount": {
+              "metric":"regionserver.WAL.appendCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_75th_percentile": {
+              "metric":"regionserver.WAL.AppendSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_95th_percentile": {
+              "metric":"regionserver.WAL.AppendSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_99th_percentile": {
+              "metric":"regionserver.WAL.AppendSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_max": {
+              "metric":"regionserver.WAL.AppendSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_mean": {
+              "metric":"regionserver.WAL.AppendSize_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_median": {
+              "metric":"regionserver.WAL.AppendSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_min": {
+              "metric":"regionserver.WAL.AppendSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_num_ops": {
+              "metric":"regionserver.WAL.AppendSize_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_75th_percentile": {
+              "metric":"regionserver.WAL.AppendTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_95th_percentile": {
+              "metric":"regionserver.WAL.AppendTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_99th_percentile": {
+              "metric":"regionserver.WAL.AppendTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_max": {
+              "metric":"regionserver.WAL.AppendTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_mean": {
+              "metric":"regionserver.WAL.AppendTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_median": {
+              "metric":"regionserver.WAL.AppendTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_min": {
+              "metric":"regionserver.WAL.AppendTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_num_ops": {
+              "metric":"regionserver.WAL.AppendTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/lowReplicaRollRequest": {
+              "metric":"regionserver.WAL.lowReplicaRollRequest",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/rollRequest": {
+              "metric":"regionserver.WAL.rollRequest",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/slowAppendCount": {
+              "metric":"regionserver.WAL.slowAppendCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_75th_percentile": {
+              "metric":"regionserver.WAL.SyncTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_95th_percentile": {
+              "metric":"regionserver.WAL.SyncTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_99th_percentile": {
+              "metric":"regionserver.WAL.SyncTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_max": {
+              "metric":"regionserver.WAL.SyncTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_mean": {
+              "metric":"regionserver.WAL.SyncTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_median": {
+              "metric":"regionserver.WAL.SyncTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_min": {
+              "metric":"regionserver.WAL.SyncTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_num_ops": {
+              "metric":"regionserver.WAL.SyncTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsAvgTime": {
+              "metric":"ugi.UgiMetrics.GetGroupsAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsNumOps": {
+              "metric":"ugi.UgiMetrics.GetGroupsNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginFailureAvgTime": {
+              "metric":"ugi.UgiMetrics.LoginFailureAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginFailureNumOps": {
+              "metric":"ugi.UgiMetrics.LoginFailureNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginSuccessAvgTime": {
+              "metric":"ugi.UgiMetrics.LoginSuccessAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginSuccessNumOps": {
+              "metric":"ugi.UgiMetrics.LoginSuccessNumOps",
+              "pointInTime": true,
+              "temporal": true
+            }
+          }
+        }
+      },
+      {
+        "type": "jmx",
+        "metrics": {
+          "default": {
+            "metrics/rpc/regionServerReport.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/jvm/memMaxM": {
+              "metric": "Hadoop:service=HBase,name=JvmMetrics.MemMaxM",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/Revision": {
+              "metric": "hadoop:service=HBase,name=Info.revision",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/AverageLoad": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.averageLoad",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReport.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/ServerName": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.serverName",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeMaxTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/ZookeeperQuorum": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.zookeeperQuorum",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsDate": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsDate",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsUrl": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsUrl",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/revision": {
+              "metric": "hadoop:service=HBase,name=Info.revision",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/MasterActiveTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterActiveTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthorizationFailures": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthorizationFailures",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsUser": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsUser",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/version": {
+              "metric": "hadoop:service=HBase,name=Info.version",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeMaxTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeNumOps": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/NumOpenConnections": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.NumOpenConnections",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/IsActiveMaster": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.isActiveMaster",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/MasterStartTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterStartTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSize_num_ops": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/MasterActiveTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterActiveTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTime_avg_time": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeAvgTime": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/cluster_requests": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.clusterRequests",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthenticationFailures": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthenticationFailures",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/Coprocessors": {
+              "metric": "hadoop:service=Master,name=Master.Coprocessors",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/RegionsInTransition": {
+              "metric": "hadoop:service=Master,name=Master.RegionsInTransition",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/master/AssignmentManager/ritCount": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=AssignmentManager.ritCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsVersion": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsVersion",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReport.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/RegionServers": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.numRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/liveRegionServersHosts": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.liveRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/ClusterId": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.clusterId",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeNumOps": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitSizeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicationCallQueueLen": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicationCallQueueLen",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeMinTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTime_num_ops": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/RegionsInTransition": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.ritCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReport.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthenticationSuccesses": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthenticationSuccesses",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/HeapMemoryUsed": {
+              "metric": "java.lang:type=Memory.HeapMemoryUsage[used]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/ReceivedBytes": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.ReceivedBytes",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/NonHeapMemoryMax": {
+              "metric": "java.lang:type=Memory.NonHeapMemoryUsage[max]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/DeadRegionServers": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.numDeadRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/deadRegionServersHosts": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.deadRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/AverageLoad": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.averageLoad",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/MasterStartTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterStartTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/priorityCallQueueLen": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.priorityCallQueueLen",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/callQueueLen": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.callQueueLen",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsRevision": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsRevision",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/url": {
+              "metric": "hadoop:service=HBase,name=Info.url",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/NonHeapMemoryUsed": {
+              "metric": "java.lang:type=Memory.NonHeapMemoryUsage[used]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/HeapMemoryMax": {
+              "metric": "java.lang:type=Memory.HeapMemoryUsage[max]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSize_avg_time": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/date": {
+              "metric": "hadoop:service=HBase,name=Info.date",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/user": {
+              "metric": "java.lang:type=Runtime.SystemProperties.user.name",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/SentBytes": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.SentBytes",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeMinTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeAvgTime": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitSizeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthorizationSuccesses": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthorizationSuccesses",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "ServiceComponentInfo/Version": {
+              "metric": "hadoop:service=HBase,name=Info.version",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            }
+          }
+        }
+      }
+    ],
+    "HostComponent": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/load/load_one": {
+              "metric": "load_one",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/memory/swap_total": {
+              "metric": "swap_total",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/process/proc_total": {
+              "metric": "proc_total",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/balance_avg_time": {
+              "metric": "master.Balancer.BalancerCluster_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/disk/part_max_used": {
+              "metric": "part_max_used",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/balance_num_ops": {
+              "metric": "master.Balancer.BalancerCluster_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/network/bytes_in": {
+              "metric": "bytes_in",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memNonHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemNonHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/splitTime_num_ops": {
+              "metric": "master.FileSystem.HlogSplitTime_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationSuccesses": {
+              "metric": "master.Master.authenticationSuccesses",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/network/pkts_in": {
+              "metric": "pkts_in",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapCommittedM": {
+              "metric": "jvm.JvmMetrics.MemHeapCommittedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsRunnable": {
+              "metric": "jvm.JvmMetrics.ThreadsRunnable",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsNew": {
+              "metric": "jvm.JvmMetrics.ThreadsNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationFailures": {
+              "metric": "master.Master.authorizationFailures",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_avg_time": {
+              "metric": "master.Master.QueueCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcTimeMillis": {
+              "metric": "jvm.JvmMetrics.GcTimeMillis",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTerminated": {
+              "metric": "jvm.JvmMetrics.ThreadsTerminated",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/network/bytes_out": {
+              "metric": "bytes_out",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/load/load_five": {
+              "metric": "load_five",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/boottime": {
+              "metric": "boottime",
+              "pointInTime": true,
+              "temporal": true,
+              "amsHostMetric":true
+            },
+            "metrics/rpc/RpcProcessingTime_num_ops": {
+              "metric": "master.Master.ProcessCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logError": {
+              "metric": "jvm.JvmMetrics.LogError",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/process/proc_run": {
+              "metric": "proc_run",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsBlocked": {
+              "metric": "jvm.JvmMetrics.ThreadsBlocked",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/RpcQueueTime_num_ops": {
+              "metric": "master.Master.QueueCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/splitSize_num_ops": {
+              "metric": "master.FileSystem.HlogSplitSize_num_ops",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_aidle": {
+              "metric": "cpu_aidle",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/network/pkts_out": {
+              "metric": "pkts_out",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/splitSize_avg_time": {
+              "metric": "master.FileSystem.HlogSplitSize_mean",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/cpu/cpu_speed": {
+              "metric": "cpu_speed",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/hbase/master/cluster_requests": {
+              "metric": "master.Server.clusterRequests",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/rpc/RpcProcessingTime_avg_time": {
+              "metric": "master.Master.ProcessCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthenticationFailures": {
+              "metric": "master.Master.authenticationFailures",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/maxMemoryM": {
+              "metric": "jvm.metrics.maxMemoryM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logWarn": {
+              "metric": "jvm.JvmMetrics.LogWarn",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsTimedWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsTimedWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/gcCount": {
+              "metric": "jvm.JvmMetrics.GcCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/memHeapUsedM": {
+              "metric": "jvm.JvmMetrics.MemHeapUsedM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/threadsWaiting": {
+              "metric": "jvm.JvmMetrics.ThreadsWaiting",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/memory/mem_buffers": {
+              "metric": "mem_buffers",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/load/load_fifteen": {
+              "metric": "load_fifteen",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/logInfo": {
+              "metric": "jvm.JvmMetrics.LogInfo",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/rpc/rpcAuthorizationSuccesses": {
+              "metric": "master.Master.authorizationSuccesses",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/jvm/logFatal": {
+              "metric": "jvm.JvmMetrics.LogFatal",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountConcurrentMarkSweep": {
+              "metric":"jvm.JvmMetrics.GcCountConcurrentMarkSweep",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcCountParNew": {
+              "metric":"jvm.JvmMetrics.GcCountParNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisConcurrentMarkSweep": {
+              "metric":"jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/GcTimeMillisParNew": {
+              "metric":"jvm.JvmMetrics.GcTimeMillisParNew",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemHeapMaxM": {
+              "metric":"jvm.JvmMetrics.MemHeapMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemMaxM": {
+              "metric":"jvm.JvmMetrics.MemMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/jvm/JvmMetrics/MemNonHeapMaxM": {
+              "metric":"jvm.JvmMetrics.MemNonHeapMaxM",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_75th_percentile": {
+              "metric":"master.AssignmentManager.Assign_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_95th_percentile": {
+              "metric":"master.AssignmentManager.Assign_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_99th_percentile": {
+              "metric":"master.AssignmentManager.Assign_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_max": {
+              "metric":"master.AssignmentManager.Assign_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_mean": {
+              "metric":"master.AssignmentManager.Assign_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_median": {
+              "metric":"master.AssignmentManager.Assign_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_min": {
+              "metric":"master.AssignmentManager.Assign_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/Assign_num_ops": {
+              "metric":"master.AssignmentManager.Assign_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_75th_percentile": {
+              "metric":"master.AssignmentManager.BulkAssign_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_95th_percentile": {
+              "metric":"master.AssignmentManager.BulkAssign_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_99th_percentile": {
+              "metric":"master.AssignmentManager.BulkAssign_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_max": {
+              "metric":"master.AssignmentManager.BulkAssign_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_mean": {
+              "metric":"master.AssignmentManager.BulkAssign_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_median": {
+              "metric":"master.AssignmentManager.BulkAssign_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_min": {
+              "metric":"master.AssignmentManager.BulkAssign_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/BulkAssign_num_ops": {
+              "metric":"master.AssignmentManager.BulkAssign_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/ritCount": {
+              "metric":"master.AssignmentManager.ritCount",
+              "pointInTime": false,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/ritCountOverThreshold": {
+              "metric":"master.AssignmentManager.ritCountOverThreshold",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/AssignmentManager/ritOldestAge": {
+              "metric":"master.AssignmentManager.ritOldestAge",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_75th_percentile": {
+              "metric":"master.Balancer.BalancerCluster_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_95th_percentile": {
+              "metric":"master.Balancer.BalancerCluster_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_99th_percentile": {
+              "metric":"master.Balancer.BalancerCluster_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_max": {
+              "metric":"master.Balancer.BalancerCluster_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_mean": {
+              "metric":"master.Balancer.BalancerCluster_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/BalancerCluster_min": {
+              "metric":"master.Balancer.BalancerCluster_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Balancer/miscInvocationCount": {
+              "metric":"master.Balancer.miscInvocationCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_75th_percentile": {
+              "metric":"master.FileSystem.HlogSplitSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_95th_percentile": {
+              "metric":"master.FileSystem.HlogSplitSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_99th_percentile": {
+              "metric":"master.FileSystem.HlogSplitSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_max": {
+              "metric":"master.FileSystem.HlogSplitSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_median": {
+              "metric":"master.FileSystem.HlogSplitSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitSize_min": {
+              "metric":"master.FileSystem.HlogSplitSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_75th_percentile": {
+              "metric":"master.FileSystem.HlogSplitTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_95th_percentile": {
+              "metric":"master.FileSystem.HlogSplitTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_99th_percentile": {
+              "metric":"master.FileSystem.HlogSplitTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_max": {
+              "metric":"master.FileSystem.HlogSplitTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_mean": {
+              "metric":"master.FileSystem.HlogSplitTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_median": {
+              "metric":"master.FileSystem.HlogSplitTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/HlogSplitTime_min": {
+              "metric":"master.FileSystem.HlogSplitTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_75th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_95th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_99th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_max": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_mean": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_median": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_min": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitSize_num_ops": {
+              "metric":"master.FileSystem.MetaHlogSplitSize_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_75th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_95th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_99th_percentile": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_max": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_mean": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_median": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_min": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/FileSystem/MetaHlogSplitTime_num_ops": {
+              "metric":"master.FileSystem.MetaHlogSplitTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numActiveHandler": {
+              "metric":"master.Master.numActiveHandler",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numCallsInGeneralQueue": {
+              "metric":"master.Master.numCallsInGeneralQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numCallsInPriorityQueue": {
+              "metric":"master.Master.numCallsInPriorityQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numCallsInReplicationQueue": {
+              "metric":"master.Master.numCallsInReplicationQueue",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/numOpenConnections": {
+              "metric":"master.Master.numOpenConnections",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_75th_percentile": {
+              "metric":"master.Master.ProcessCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_95th_percentile": {
+              "metric":"master.Master.ProcessCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_99th_percentile": {
+              "metric":"master.Master.ProcessCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_max": {
+              "metric":"master.Master.ProcessCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_mean": {
+              "metric":"master.Master.ProcessCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_median": {
+              "metric":"master.Master.ProcessCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/ProcessCallTime_min": {
+              "metric":"master.Master.ProcessCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_75th_percentile": {
+              "metric":"master.Master.QueueCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_95th_percentile": {
+              "metric":"master.Master.QueueCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_99th_percentile": {
+              "metric":"master.Master.QueueCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_max": {
+              "metric":"master.Master.QueueCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_mean": {
+              "metric":"master.Master.QueueCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/QueueCallTime_min": {
+              "metric":"master.Master.QueueCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/queueSize": {
+              "metric":"master.Master.queueSize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/receivedBytes": {
+              "metric":"master.Master.receivedBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/sentBytes": {
+              "metric":"master.Master.sentBytes",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_75th_percentile": {
+              "metric":"master.Master.TotalCallTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_95th_percentile": {
+              "metric":"master.Master.TotalCallTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_99th_percentile": {
+              "metric":"master.Master.TotalCallTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_max": {
+              "metric":"master.Master.TotalCallTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_mean": {
+              "metric":"master.Master.TotalCallTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_median": {
+              "metric":"master.Master.TotalCallTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_min": {
+              "metric":"master.Master.TotalCallTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Master/TotalCallTime_num_ops": {
+              "metric":"master.Master.TotalCallTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/averageLoad": {
+              "metric":"master.Server.averageLoad",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/masterActiveTime": {
+              "metric":"master.Server.masterActiveTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/masterStartTime": {
+              "metric":"master.Server.masterStartTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/numDeadRegionServers": {
+              "metric":"master.Server.numDeadRegionServers",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/master/Server/numRegionServers": {
+              "metric":"master.Server.numRegionServers",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/DroppedPubAll": {
+              "metric":"metricssystem.MetricsSystem.DroppedPubAll",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSinks": {
+              "metric":"metricssystem.MetricsSystem.NumActiveSinks",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumActiveSources": {
+              "metric":"metricssystem.MetricsSystem.NumActiveSources",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSinks": {
+              "metric":"metricssystem.MetricsSystem.NumAllSinks",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/NumAllSources": {
+              "metric":"metricssystem.MetricsSystem.NumAllSources",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishAvgTime": {
+              "metric":"metricssystem.MetricsSystem.PublishAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/PublishNumOps": {
+              "metric":"metricssystem.MetricsSystem.PublishNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineAvgTime": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineDropped": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineDropped",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineNumOps": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/Sink_timelineQsize": {
+              "metric":"metricssystem.MetricsSystem.Sink_timelineQsize",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotAvgTime": {
+              "metric":"metricssystem.MetricsSystem.SnapshotAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/metricssystem/MetricsSystem/SnapshotNumOps": {
+              "metric":"metricssystem.MetricsSystem.SnapshotNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/appendCount": {
+              "metric":"regionserver.WAL.appendCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_75th_percentile": {
+              "metric":"regionserver.WAL.AppendSize_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_95th_percentile": {
+              "metric":"regionserver.WAL.AppendSize_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_99th_percentile": {
+              "metric":"regionserver.WAL.AppendSize_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_max": {
+              "metric":"regionserver.WAL.AppendSize_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_mean": {
+              "metric":"regionserver.WAL.AppendSize_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_median": {
+              "metric":"regionserver.WAL.AppendSize_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_min": {
+              "metric":"regionserver.WAL.AppendSize_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendSize_num_ops": {
+              "metric":"regionserver.WAL.AppendSize_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_75th_percentile": {
+              "metric":"regionserver.WAL.AppendTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_95th_percentile": {
+              "metric":"regionserver.WAL.AppendTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_99th_percentile": {
+              "metric":"regionserver.WAL.AppendTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_max": {
+              "metric":"regionserver.WAL.AppendTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_mean": {
+              "metric":"regionserver.WAL.AppendTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_median": {
+              "metric":"regionserver.WAL.AppendTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_min": {
+              "metric":"regionserver.WAL.AppendTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/AppendTime_num_ops": {
+              "metric":"regionserver.WAL.AppendTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/lowReplicaRollRequest": {
+              "metric":"regionserver.WAL.lowReplicaRollRequest",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/rollRequest": {
+              "metric":"regionserver.WAL.rollRequest",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/slowAppendCount": {
+              "metric":"regionserver.WAL.slowAppendCount",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_75th_percentile": {
+              "metric":"regionserver.WAL.SyncTime_75th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_95th_percentile": {
+              "metric":"regionserver.WAL.SyncTime_95th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_99th_percentile": {
+              "metric":"regionserver.WAL.SyncTime_99th_percentile",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_max": {
+              "metric":"regionserver.WAL.SyncTime_max",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_mean": {
+              "metric":"regionserver.WAL.SyncTime_mean",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_median": {
+              "metric":"regionserver.WAL.SyncTime_median",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_min": {
+              "metric":"regionserver.WAL.SyncTime_min",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/regionserver/WAL/SyncTime_num_ops": {
+              "metric":"regionserver.WAL.SyncTime_num_ops",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsAvgTime": {
+              "metric":"ugi.UgiMetrics.GetGroupsAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/GetGroupsNumOps": {
+              "metric":"ugi.UgiMetrics.GetGroupsNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginFailureAvgTime": {
+              "metric":"ugi.UgiMetrics.LoginFailureAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginFailureNumOps": {
+              "metric":"ugi.UgiMetrics.LoginFailureNumOps",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginSuccessAvgTime": {
+              "metric":"ugi.UgiMetrics.LoginSuccessAvgTime",
+              "pointInTime": true,
+              "temporal": true
+            },
+            "metrics/ugi/UgiMetrics/LoginSuccessNumOps": {
+              "metric":"ugi.UgiMetrics.LoginSuccessNumOps",
+              "pointInTime": true,
+              "temporal": true
+            }
+          }
+        }
+      },
+      {
+        "type": "jmx",
+        "metrics": {
+          "default": {
+            "metrics/rpc/regionServerReport.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/jvm/memMaxM": {
+              "metric": "Hadoop:service=HBase,name=JvmMetrics.MemMaxM",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/AverageLoad": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.averageLoad",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReport.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/ServerName": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.serverName",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeMaxTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/ZookeeperQuorum": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.zookeeperQuorum",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsDate": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsDate",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsUrl": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsUrl",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/jvm/NonHeapMemoryMax": {
+              "metric": "java.lang:type=Memory.NonHeapMemoryUsage[max]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/revision": {
+              "metric": "hadoop:service=HBase,name=Info.revision",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/MasterActiveTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterActiveTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/master/AssignmentManager/ritCount": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=AssignmentManager.ritCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthorizationFailures": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthorizationFailures",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsUser": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsUser",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/version": {
+              "metric": "hadoop:service=HBase,name=Info.version",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeMaxTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_max",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeNumOps": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/NumOpenConnections": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.NumOpenConnections",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/IsActiveMaster": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.isActiveMaster",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/MasterStartTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterStartTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSize_num_ops": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTime_avg_time": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeAvgTime": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/multiAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.multiAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/cluster_requests": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.clusterRequests",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthenticationFailures": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthenticationFailures",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/Coprocessors": {
+              "metric": "hadoop:service=Master,name=Master.Coprocessors",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/RegionsInTransition": {
+              "metric": "hadoop:service=Master,name=Master.RegionsInTransition",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsVersion": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsVersion",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getLastFlushTimeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getLastFlushTimeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatusAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatusAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReport.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/RegionServers": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.numRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/liveRegionServersHosts": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.liveRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/ClusterId": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.clusterId",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcQueueTimeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcQueueTimeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatus.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatus.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeNumOps": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitSizeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unlockRowMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unlockRowMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/jvm/HeapMemoryMax": {
+              "metric": "java.lang:type=Memory.HeapMemoryUsage[max]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcSlowResponseMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcSlowResponseMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitchMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitchMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/jvm/HeapMemoryUsed": {
+              "metric": "java.lang:type=Memory.HeapMemoryUsage[used]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicationCallQueueLen": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicationCallQueueLen",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalErrorMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalErrorMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumn.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumn.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offline.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offline.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTimeMinTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/replicateLogEntriesMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.replicateLogEntriesMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitTime_num_ops": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitTime_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReport.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReport.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthenticationSuccesses": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthenticationSuccesses",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/execCoprocessorNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.execCoprocessorNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/jvm/NonHeapMemoryUsed": {
+              "metric": "java.lang:type=Memory.NonHeapMemoryUsage[used]",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rollHLogWriterAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rollHLogWriterAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/ReceivedBytes": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.ReceivedBytes",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTableNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTableNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClusterStatus.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClusterStatus.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumnAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumnAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/reportRSFatalError.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.reportRSFatalError.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/DeadRegionServers": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.numDeadRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/deadRegionServersHosts": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.deadRegionServers",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/appendMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.appendMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/priorityCallQueueLen": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.priorityCallQueueLen",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/bulkLoadHFilesNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.bulkLoadHFilesNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/callQueueLen": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.callQueueLen",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassignAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassignAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getAlterStatusNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getAlterStatusNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyColumn.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyColumn.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getCompactionStateNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getCompactionStateNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptorsAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptorsAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMaster.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMaster.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/hdfsRevision": {
+              "metric": "hadoop:service=HBase,name=Info.hdfsRevision",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/url": {
+              "metric": "hadoop:service=HBase,name=Info.url",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignature.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignature.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/nextMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.nextMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balance.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balance.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHServerInfoNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHServerInfoNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteColumnMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteColumnMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getStoreFileListNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getStoreFileListNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumn.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumn.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitchMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitchMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunningMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunningMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/closeRegionAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.closeRegionAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/disableTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.disableTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assign.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assign.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/moveNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.moveNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementColumnValueMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementColumnValueMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSize_avg_time": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_mean",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/load/AverageLoad": {
+              "metric": "hadoop:service=Master,name=Master.AverageLoad",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndDeleteAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndDeleteAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/synchronousBalanceSwitch.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.synchronousBalanceSwitch.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/date": {
+              "metric": "hadoop:service=HBase,name=Info.date",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/flushRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.flushRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getOnlineRegionsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getOnlineRegionsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/user": {
+              "metric": "java.lang:type=Runtime.SystemProperties.user.name",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getClosestRowBeforeAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getClosestRowBeforeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/offlineNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.offlineNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/SentBytes": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.SentBytes",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/incrementMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.incrementMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/deleteTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.deleteTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/checkAndPutAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.checkAndPutAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openScannerAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openScannerAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/assignMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.assignMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/compactRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.compactRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/openRegionMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.openRegionMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/addColumnMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.addColumnMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/RpcProcessingTimeNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.RpcProcessingTimeNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/existsMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.existsMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeMinTime": {
+              "metric": "Hadoop:service=HBase,name=Master,sub=Server.HlogSplitSize_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdown.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdown.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTableAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTableAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartup.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartup.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/lockRowNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.lockRowNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolVersion.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolVersion.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/balanceSwitch.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.balanceSwitch.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getProtocolSignatureMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getProtocolSignatureMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/modifyTable.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.modifyTable.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/splitRegionMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.splitRegionMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/mutateRowMinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.mutateRowMinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/master/splitSizeAvgTime": {
+              "metric": "hadoop:service=Master,name=MasterStatistics.splitSizeAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerReportAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerReportAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/putAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.putAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getNumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getNumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/shutdownMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.shutdownMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getBlockCacheColumnFamilySummariesAvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getBlockCacheColumnFamilySummariesAvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.MaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.MaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/regionServerStartupMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.regionServerStartupMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/createTableMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.createTableMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getHTableDescriptors.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getHTableDescriptors.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/rpcAuthorizationSuccesses": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.rpcAuthorizationSuccesses",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/isMasterRunning.aboveOneSec.NumOps": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.isMasterRunning.aboveOneSec.NumOps",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/enableTable.aboveOneSec.AvgTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.enableTable.aboveOneSec.AvgTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/getRegionInfoMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.getRegionInfoMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/unassign.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.unassign.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/stopMasterMaxTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.stopMasterMaxTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/rpc/move.aboveOneSec.MinTime": {
+              "metric": "hadoop:service=HBase,name=RPCStatistics.move.aboveOneSec.MinTime",
+              "pointInTime": true,
+              "temporal": false
+            }
+          }
+        }
+      }
+    ]
+  }
+}
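
Each entry above maps an Ambari metric path to a JMX source: `metric` names the MBean attribute to read, `pointInTime: true` marks the value as readable on demand, and `temporal: false` means no time-series history is served for it. A minimal sketch of how one of these mappings could be split into its JMX object name and attribute (the key and the parsing heuristic are illustrative; the real consumer is Ambari's JMX property provider):

    import json

    # Shape of a single entry as it appears in the definitions above.
    metric_map = json.loads("""
    {
      "metrics/rpc/getNumOps": {
        "metric": "hadoop:service=HBase,name=RPCStatistics.getNumOps",
        "pointInTime": true,
        "temporal": false
      }
    }
    """)

    entry = metric_map["metrics/rpc/getNumOps"]
    # For the RPCStatistics entries the attribute follows the last dot.
    bean_name, attribute = entry["metric"].rsplit(".", 1)
    print(bean_name)   # hadoop:service=HBase,name=RPCStatistics
    print(attribute)   # getNumOps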

+ 23 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/files/hbase-smoke-cleanup.sh

@@ -0,0 +1,23 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#
+disable 'ambarismoketest'
+drop 'ambarismoketest'
+exit
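
The two statements above are HBase shell commands rather than a standalone script: the service check presumably materializes this file and pipes it through `hbase shell` as the smoke-test user. A sketch of such an invocation, assuming it runs inside an Ambari service-check script where `params` and the resource DSL are in scope (the /tmp path and the `smoke_test_user` name are assumptions):

    from resource_management.core.resources.system import Execute
    from resource_management.libraries.functions.format import format
    import params  # provided by the surrounding Ambari package

    # Feed the cleanup statements to the HBase shell as the smoke-test user.
    Execute(format("{hbase_cmd} --config {hbase_conf_dir} shell /tmp/hbase-smoke-cleanup.sh"),
            user=params.smoke_test_user,
            logoutput=True)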

+ 34 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/files/hbaseSmokeVerify.sh

@@ -0,0 +1,34 @@
+#!/usr/bin/env bash
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#
+conf_dir=$1
+data=$2
+hbase_cmd=$3
+echo "scan 'ambarismoketest'" | $hbase_cmd --config $conf_dir shell > /tmp/hbase_chk_verify
+cat /tmp/hbase_chk_verify
+echo "Looking for $data"
+tr -d '\n|\t| ' < /tmp/hbase_chk_verify | grep -q $data
+if [ "$?" -ne 0 ]
+then
+  exit 1
+fi
+
+grep -q '1 row(s)' /tmp/hbase_chk_verify
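
Note that the script's exit status is that of the final `grep`: verification succeeds only when the scan output contains the marker value and reports exactly one row. A hypothetical invocation from a service check, reusing the imports of the previous sketch (all three positional arguments are illustrative):

    # Usage: hbaseSmokeVerify.sh <conf_dir> <expected_value> <hbase_binary>
    Execute("bash hbaseSmokeVerify.sh /etc/hbase/conf id0123 /usr/bin/hbase",
            user=params.smoke_test_user,
            tries=6, try_sleep=5,
            logoutput=True)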

+ 19 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/__init__.py

@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""

+ 54 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/functions.py

@@ -0,0 +1,54 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+import re
+import math
+import datetime
+
+from resource_management.core.shell import checked_call
+
+def calc_xmn_from_xms(heapsize_str, xmn_percent, xmn_max):
+  """
+  @param heapsize_str: str (e.g '1000m')
+  @param xmn_percent: float (e.g 0.2)
+  @param xmn_max: integer (e.g 512)
+  """
+  heapsize = int(re.search(r'\d+', heapsize_str).group(0))
+  heapsize_unit = re.search(r'\D+', heapsize_str).group(0)
+  xmn_val = int(math.floor(heapsize*xmn_percent))
+  xmn_val -= xmn_val % 8
+  
+  result_xmn_val = xmn_max if xmn_val > xmn_max else xmn_val
+  return str(result_xmn_val) + heapsize_unit
+
+def ensure_unit_for_memory(memory_size):
+  memory_size_values = re.findall(r'\d+', str(memory_size))
+  memory_size_unit = re.findall(r'\D+', str(memory_size))
+
+  if len(memory_size_values) > 0:
+    unit = 'm'
+    if len(memory_size_unit) > 0:
+      unit = memory_size_unit[0]
+    if unit not in ['b', 'k', 'm', 'g', 't', 'p']:
+      raise Exception("Memory size unit error. %s - wrong unit" % unit)
+    return "%s%s" % (memory_size_values[0], unit)
+  else:
+    raise Exception('Memory size can not be calculated')
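
A worked example of the rounding in `calc_xmn_from_xms` above, assuming a 20% young-generation target with a 512 MB cap:

    # 1000 * 0.2 = 200; 200 is already a multiple of 8 and below the cap.
    calc_xmn_from_xms('1000m', 0.2, 512)   # -> '200m'

    # floor(4096 * 0.2) = 819; 819 - (819 % 8) = 816; 816 > 512, so cap at 512.
    calc_xmn_from_xms('4096m', 0.2, 512)   # -> '512m'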

+ 252 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase.py

@@ -0,0 +1,252 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+import sys
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.resources.xml_config import XmlConfig
+from resource_management.libraries.resources.template_config import TemplateConfig
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions import lzo_utils
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions.generate_logfeeder_input_config import generate_logfeeder_input_config
+from resource_management.core.source import Template, InlineTemplate
+from resource_management.core.resources import Package
+from resource_management.core.resources.service import ServiceConfig
+from resource_management.core.resources.system import Directory, Execute, File
+from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl
+from ambari_commons import OSConst
+from resource_management.libraries.functions.constants import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+
+@OsFamilyFuncImpl(os_family=OSConst.WINSRV_FAMILY)
+def hbase(name=None):
+  import params
+  XmlConfig("hbase-site.xml",
+            conf_dir = params.hbase_conf_dir,
+            configurations = params.config['configurations']['hbase-site'],
+            configuration_attributes=params.config['configurationAttributes']['hbase-site']
+  )
+
+  if name in params.service_map:
+    # Manually overriding service logon user & password set by the installation package
+    service_name = params.service_map[name]
+    ServiceConfig(service_name,
+                  action="change_user",
+                  username = params.hbase_user,
+                  password = Script.get_password(params.hbase_user))
+
+# name is 'master' or 'regionserver' or 'queryserver' or 'client'
+@OsFamilyFuncImpl(os_family=OsFamilyImpl.DEFAULT)
+def hbase(name=None):
+  import params
+
+  # ensure that matching LZO libraries are installed for HBase
+  lzo_utils.install_lzo_if_needed()
+
+  Directory( params.etc_prefix_dir,
+      mode=0755
+  )
+
+  Directory( params.hbase_conf_dir,
+      owner = params.hbase_user,
+      group = params.user_group,
+      create_parents = True
+  )
+   
+  Directory(params.java_io_tmpdir,
+      create_parents = True,
+      mode=0777
+  )
+
+  # If the ioengine parameter points at a file location, make sure the
+  # file's parent directory exists; otherwise create it, owned by
+  # hbase:hadoop with mode 0755.
+  ioengine_input = params.ioengine_param
+  if ioengine_input is not None:
+    if ioengine_input.startswith("file:/"):
+      ioengine_fullpath = ioengine_input[5:]
+      ioengine_dir = os.path.dirname(ioengine_fullpath)
+      Directory(ioengine_dir,
+          owner = params.hbase_user,
+          group = params.user_group,
+          create_parents = True,
+          mode = 0755
+      )
+  
+  parent_dir = os.path.dirname(params.tmp_dir)
+  # The path may still contain unresolved "${...}" placeholders; walk up
+  # the tree until they are all stripped off.
+  while "${" in parent_dir:
+    parent_dir = os.path.dirname(parent_dir)
+  if parent_dir != os.path.abspath(os.sep):
+    Directory (parent_dir,
+          create_parents = True,
+          cd_access="a",
+    )
+    Execute(("chmod", "1777", parent_dir), sudo=True)
+
+  XmlConfig( "hbase-site.xml",
+            conf_dir = params.hbase_conf_dir,
+            configurations = params.config['configurations']['hbase-site'],
+            configuration_attributes=params.config['configurationAttributes']['hbase-site'],
+            owner = params.hbase_user,
+            group = params.user_group
+  )
+
+  if check_stack_feature(StackFeature.PHOENIX_CORE_HDFS_SITE_REQUIRED, params.version_for_stack_feature_checks):
+    XmlConfig( "core-site.xml",
+               conf_dir = params.hbase_conf_dir,
+               configurations = params.config['configurations']['core-site'],
+               configuration_attributes=params.config['configurationAttributes']['core-site'],
+               owner = params.hbase_user,
+               group = params.user_group,
+               xml_include_file=params.mount_table_xml_inclusion_file_full_path
+               )
+
+    if params.mount_table_content:
+      File(params.mount_table_xml_inclusion_file_full_path,
+           owner=params.hbase_user,
+           group=params.user_group,
+           content=params.mount_table_content,
+           mode=0644
+           )
+
+    if 'hdfs-site' in params.config['configurations']:
+      XmlConfig( "hdfs-site.xml",
+              conf_dir = params.hbase_conf_dir,
+              configurations = params.config['configurations']['hdfs-site'],
+              configuration_attributes=params.config['configurationAttributes']['hdfs-site'],
+              owner = params.hbase_user,
+              group = params.user_group
+      )
+  else:
+    File(format("{params.hbase_conf_dir}/hdfs-site.xml"),
+         action="delete"
+    )
+    File(format("{params.hbase_conf_dir}/core-site.xml"),
+         action="delete"
+    )
+
+  if 'hbase-policy' in params.config['configurations']:
+    XmlConfig( "hbase-policy.xml",
+            conf_dir = params.hbase_conf_dir,
+            configurations = params.config['configurations']['hbase-policy'],
+            configuration_attributes=params.config['configurationAttributes']['hbase-policy'],
+            owner = params.hbase_user,
+            group = params.user_group
+    )
+  # Manually overriding ownership of file installed by hadoop package
+  else: 
+    File( format("{params.hbase_conf_dir}/hbase-policy.xml"),
+      owner = params.hbase_user,
+      group = params.user_group
+    )
+
+  File(format("{hbase_conf_dir}/hbase-env.sh"),
+       owner = params.hbase_user,
+       content=InlineTemplate(params.hbase_env_sh_template),
+       group = params.user_group,
+  )
+  
+  # On some OSes this directory may not exist, so create it before placing files in it.
+  Directory(params.limits_conf_dir,
+            create_parents = True,
+            owner='root',
+            group='root'
+            )
+  
+  File(os.path.join(params.limits_conf_dir, 'hbase.conf'),
+       owner='root',
+       group='root',
+       mode=0644,
+       content=Template("hbase.conf.j2")
+       )
+
+  hbase_TemplateConfig( params.metric_prop_file_name,
+    tag = 'GANGLIA-MASTER' if name == 'master' else 'GANGLIA-RS'
+  )
+
+  hbase_TemplateConfig( 'regionservers')
+
+  if params.security_enabled:
+    hbase_TemplateConfig( format("hbase_{name}_jaas.conf"))
+  
+  if name != "client":
+    Directory( params.pid_dir,
+      owner = params.hbase_user,
+      create_parents = True,
+      cd_access = "a",
+      mode = 0755,
+    )
+  
+    Directory (params.log_dir,
+      owner = params.hbase_user,
+      create_parents = True,
+      cd_access = "a",
+      mode = 0755,
+    )
+
+    generate_logfeeder_input_config('hbase', Template("input.config-hbase.json.j2", extra_imports=[default]))
+
+  if params.log4j_props is not None:
+    File(format("{params.hbase_conf_dir}/log4j.properties"),
+         mode=0644,
+         group=params.user_group,
+         owner=params.hbase_user,
+         content=InlineTemplate(params.log4j_props)
+    )
+  elif os.path.exists(format("{params.hbase_conf_dir}/log4j.properties")):
+    File(format("{params.hbase_conf_dir}/log4j.properties"),
+      mode=0644,
+      group=params.user_group,
+      owner=params.hbase_user
+    )
+  if name == "master":
+    params.HdfsResource(params.hbase_hdfs_root_dir,
+                         type="directory",
+                         action="create_on_execute",
+                         owner=params.hbase_user
+    )
+    params.HdfsResource(params.hbase_staging_dir,
+                         type="directory",
+                         action="create_on_execute",
+                         owner=params.hbase_user,
+                         mode=0711
+    )
+    if params.create_hbase_home_directory:
+      params.HdfsResource(params.hbase_home_directory,
+                          type="directory",
+                          action="create_on_execute",
+                          owner=params.hbase_user,
+                          mode=0755
+      )
+    params.HdfsResource(None, action="execute")
+
+  if params.phoenix_enabled:
+    Package(params.phoenix_package,
+            retry_on_repo_unavailability=params.agent_stack_retry_on_unavailability,
+            retry_count=params.agent_stack_retry_count)
+
+def hbase_TemplateConfig(name, tag=None):
+  import params
+
+  TemplateConfig( format("{hbase_conf_dir}/{name}"),
+      owner = params.hbase_user,
+      template_tag = tag
+  )
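
`hbase_TemplateConfig` is a thin wrapper around `TemplateConfig`: it renders the Jinja2 template of the same name from the package's templates directory into the HBase conf dir, and when `template_tag` is set the source becomes `<name>-<tag>.j2` (hence the GANGLIA-MASTER / GANGLIA-RS variants of the metrics properties above). Roughly what the `regionservers` call boils down to, as a sketch:

    from resource_management.core.resources.system import File
    from resource_management.core.source import Template
    from resource_management.libraries.functions.format import format
    import params  # provided by the surrounding Ambari package

    # Render regionservers.j2 into the conf dir, owned by the hbase user.
    File(format("{params.hbase_conf_dir}/regionservers"),
         owner=params.hbase_user,
         content=Template("regionservers.j2"))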

+ 69 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_client.py

@@ -0,0 +1,69 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.constants import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from hbase import hbase
+from ambari_commons import OSCheck, OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+from resource_management.core.exceptions import ClientComponentHasNoStatus
+
+class HbaseClient(Script):
+  def install(self, env):
+    import params
+    env.set_params(params)
+    self.install_packages(env)
+    self.configure(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+    hbase(name='client')
+
+  def status(self, env):
+    raise ClientComponentHasNoStatus()
+
+
+@OsFamilyImpl(os_family=OSConst.WINSRV_FAMILY)
+class HbaseClientWindows(HbaseClient):
+  pass
+
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class HbaseClientDefault(HbaseClient):
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+
+    if params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version): 
+      # phoenix may not always be deployed
+      try:
+        stack_select.select_packages(params.version)
+      except Exception as e:
+        print "Ignoring error due to missing phoenix-client"
+        print str(e)
+
+
+
+if __name__ == "__main__":
+  HbaseClient().execute()

+ 88 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_decommission.py

@@ -0,0 +1,88 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management.core.resources.system import Execute, File
+from resource_management.core.source import StaticFile
+from resource_management.libraries.functions.format import format
+from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl
+from ambari_commons import OSConst
+
+@OsFamilyFuncImpl(os_family=OSConst.WINSRV_FAMILY)
+def hbase_decommission(env):
+  import params
+
+  env.set_params(params)
+
+  hosts = params.hbase_excluded_hosts.split(",")
+  for host in hosts:
+    if host:
+      if params.hbase_drain_only == True:
+        regiondrainer_cmd = format("cmd /c {hbase_executable} org.jruby.Main {region_drainer} remove {host}")
+        Execute(regiondrainer_cmd, user=params.hbase_user, logoutput=True)
+      else:
+        regiondrainer_cmd = format("cmd /c {hbase_executable} org.jruby.Main {region_drainer} add {host}")
+        regionmover_cmd = format("cmd /c {hbase_executable} org.jruby.Main {region_mover} -o unload -r {host}")
+        Execute(regiondrainer_cmd, user=params.hbase_user, logoutput=True)
+        Execute(regionmover_cmd, user=params.hbase_user, logoutput=True)
+
+
+@OsFamilyFuncImpl(os_family=OsFamilyImpl.DEFAULT)
+def hbase_decommission(env):
+  import params
+
+  env.set_params(params)
+  kinit_cmd = params.kinit_cmd_master
+
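+  # an explicit exclude list wins over an include list when both are provided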
+  hosts = []
+  if params.hbase_excluded_hosts and params.hbase_excluded_hosts.split(","):
+    hosts = params.hbase_excluded_hosts.split(",")
+  elif params.hbase_included_hosts and params.hbase_included_hosts.split(","):
+    hosts = params.hbase_included_hosts.split(",")
+
+  if params.hbase_drain_only:
+    for host in hosts:
+      if host:
+        regiondrainer_cmd = format(
+          "{kinit_cmd} HBASE_SERVER_JAAS_OPTS=\"{master_security_config}\" {hbase_cmd} --config {hbase_conf_dir} {hbase_decommission_auth_config} org.jruby.Main {region_drainer} remove {host}")
+        Execute(regiondrainer_cmd,
+                user=params.hbase_user,
+                logoutput=True
+        )
+        pass
+    pass
+
+  else:
+    for host in hosts:
+      if host:
+        regiondrainer_cmd = format(
+          "{kinit_cmd} HBASE_SERVER_JAAS_OPTS=\"{master_security_config}\" {hbase_cmd} --config {hbase_conf_dir} {hbase_decommission_auth_config} org.jruby.Main {region_drainer} add {host}")
+        regionmover_cmd = format(
+          "{kinit_cmd} HBASE_SERVER_JAAS_OPTS=\"{master_security_config}\" {hbase_cmd} --config {hbase_conf_dir} {hbase_decommission_auth_config} org.jruby.Main {region_mover} -o unload -r {host}")
+
+        Execute(regiondrainer_cmd,
+                user=params.hbase_user,
+                logoutput=True
+        )
+
+        Execute(regionmover_cmd,
+                user=params.hbase_user,
+                logoutput=True
+        )
+      pass
+    pass
+  pass

+ 170 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_master.py

@@ -0,0 +1,170 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.check_process_status import check_process_status
+from resource_management.libraries.functions.security_commons import build_expectations, \
+  cached_kinit_executor, get_params_from_filesystem, validate_security_config_properties, \
+  FILE_TYPE_XML
+from hbase import hbase
+from hbase_service import hbase_service
+from hbase_decommission import hbase_decommission
+import upgrade
+from setup_ranger_hbase import setup_ranger_hbase
+from ambari_commons import OSCheck, OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+import os
+from resource_management.libraries.functions.setup_atlas_hook import has_atlas_in_cluster, setup_atlas_hook
+from ambari_commons.constants import SERVICE
+from resource_management.core.logger import Logger
+# Service and check_windows_service_status are used by the Windows implementation below
+from resource_management.core.resources.service import Service
+from resource_management.libraries.functions.windows_service_utils import check_windows_service_status
+
+
+class HbaseMaster(Script):
+  def configure(self, env):
+    import params
+    env.set_params(params)
+    hbase(name='master')
+
+  def install(self, env):
+    import params
+    env.set_params(params)
+    self.install_packages(env)
+
+  def decommission(self, env):
+    import params
+    env.set_params(params)
+    hbase_decommission(env)
+
+
+@OsFamilyImpl(os_family=OSConst.WINSRV_FAMILY)
+class HbaseMasterWindows(HbaseMaster):
+  def start(self, env):
+    import status_params
+    self.configure(env)
+    Service(status_params.hbase_master_win_service_name, action="start")
+
+  def stop(self, env):
+    import status_params
+    env.set_params(status_params)
+    Service(status_params.hbase_master_win_service_name, action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    check_windows_service_status(status_params.hbase_master_win_service_name)
+
+
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class HbaseMasterDefault(HbaseMaster):
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    upgrade.prestart(env)
+
+  def start(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+    setup_ranger_hbase(upgrade_type=upgrade_type, service_name="hbase-master")
+    if params.enable_hbase_atlas_hook:
+      Logger.info("Hbase Atlas hook is enabled, configuring Atlas HBase Hook.")
+      hbase_atlas_hook_file_path = os.path.join(params.hbase_conf_dir,params.atlas_hook_filename)
+      setup_atlas_hook(SERVICE.HBASE,params.hbase_atlas_hook_properties,hbase_atlas_hook_file_path,params.hbase_user,params.user_group)
+    else:
+      Logger.info("Hbase Atlas hook is disabled, skippking Atlas configurations.")
+    hbase_service('master', action = 'start')
+    
+  def stop(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    hbase_service('master', action = 'stop')
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+
+    check_process_status(status_params.hbase_master_pid_file)
+
+  def security_status(self, env):
+    import status_params
+
+    env.set_params(status_params)
+    if status_params.security_enabled:
+      props_value_check = {"hbase.security.authentication" : "kerberos",
+                           "hbase.security.authorization": "true"}
+      props_empty_check = ['hbase.master.keytab.file',
+                           'hbase.master.kerberos.principal']
+      props_read_check = ['hbase.master.keytab.file']
+      hbase_site_expectations = build_expectations('hbase-site', props_value_check, props_empty_check,
+                                                  props_read_check)
+
+      hbase_expectations = {}
+      hbase_expectations.update(hbase_site_expectations)
+
+      security_params = get_params_from_filesystem(status_params.hbase_conf_dir,
+                                                   {'hbase-site.xml': FILE_TYPE_XML})
+      result_issues = validate_security_config_properties(security_params, hbase_expectations)
+      if not result_issues:  # If all validations passed successfully
+        try:
+          # Double check the dict before calling execute
+          if ( 'hbase-site' not in security_params
+               or 'hbase.master.keytab.file' not in security_params['hbase-site']
+               or 'hbase.master.kerberos.principal' not in security_params['hbase-site']):
+            self.put_structured_out({"securityState": "UNSECURED"})
+            self.put_structured_out(
+              {"securityIssuesFound": "Keytab file or principal are not set property."})
+            return
+
+          cached_kinit_executor(status_params.kinit_path_local,
+                                status_params.hbase_user,
+                                security_params['hbase-site']['hbase.master.keytab.file'],
+                                security_params['hbase-site']['hbase.master.kerberos.principal'],
+                                status_params.hostname,
+                                status_params.tmp_dir)
+          self.put_structured_out({"securityState": "SECURED_KERBEROS"})
+        except Exception as e:
+          self.put_structured_out({"securityState": "ERROR"})
+          self.put_structured_out({"securityStateErrorInfo": str(e)})
+      else:
+        issues = []
+        for cf in result_issues:
+          issues.append("Configuration file %s did not pass the validation. Reason: %s" % (cf, result_issues[cf]))
+        self.put_structured_out({"securityIssuesFound": ". ".join(issues)})
+        self.put_structured_out({"securityState": "UNSECURED"})
+    else:
+      self.put_structured_out({"securityState": "UNSECURED"})
+      
+  def get_log_folder(self):
+    import params
+    return params.log_dir
+  
+  def get_user(self):
+    import params
+    return params.hbase_user
+
+  def get_pid_files(self):
+    import status_params
+    return [status_params.hbase_master_pid_file]
+
+if __name__ == "__main__":
+  HbaseMaster().execute()

+ 171 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_regionserver.py

@@ -0,0 +1,171 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+
+from resource_management.core import shell
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.check_process_status import check_process_status
+from resource_management.libraries.functions.security_commons import build_expectations, \
+  cached_kinit_executor, get_params_from_filesystem, validate_security_config_properties, \
+  FILE_TYPE_XML
+
+from ambari_commons import OSCheck, OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+# Service and check_windows_service_status are used by the Windows implementation below
+from resource_management.core.resources.service import Service
+from resource_management.libraries.functions.windows_service_utils import check_windows_service_status
+
+from hbase import hbase
+from hbase_service import hbase_service
+import upgrade
+from setup_ranger_hbase import setup_ranger_hbase
+
+
+class HbaseRegionServer(Script):
+  def install(self, env):
+    import params
+    env.set_params(params)
+    self.install_packages(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+    hbase(name='regionserver')
+
+  def decommission(self, env):
+    print "Decommission not yet implemented!"
+
+
+
+@OsFamilyImpl(os_family=OSConst.WINSRV_FAMILY)
+class HbaseRegionServerWindows(HbaseRegionServer):
+  def start(self, env):
+    import status_params
+    self.configure(env)
+    Service(status_params.hbase_regionserver_win_service_name, action="start")
+
+  def stop(self, env):
+    import status_params
+    env.set_params(status_params)
+    Service(status_params.hbase_regionserver_win_service_name, action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    check_windows_service_status(status_params.hbase_regionserver_win_service_name)
+
+
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class HbaseRegionServerDefault(HbaseRegionServer):
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    upgrade.prestart(env)
+
+  def post_upgrade_restart(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    upgrade.post_regionserver(env)
+
+  def start(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+    setup_ranger_hbase(upgrade_type=upgrade_type, service_name="hbase-regionserver")
+
+    hbase_service('regionserver', action='start')
+
+  def stop(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+
+    hbase_service('regionserver', action = 'stop')
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+
+    check_process_status(status_params.regionserver_pid_file)
+
+  def security_status(self, env):
+    import status_params
+
+    env.set_params(status_params)
+    if status_params.security_enabled:
+      props_value_check = {"hbase.security.authentication" : "kerberos",
+                           "hbase.security.authorization": "true"}
+      props_empty_check = ['hbase.regionserver.keytab.file',
+                           'hbase.regionserver.kerberos.principal']
+      props_read_check = ['hbase.regionserver.keytab.file']
+      hbase_site_expectations = build_expectations('hbase-site', props_value_check, props_empty_check,
+                                                   props_read_check)
+
+      hbase_expectations = {}
+      hbase_expectations.update(hbase_site_expectations)
+
+      security_params = get_params_from_filesystem(status_params.hbase_conf_dir,
+                                                   {'hbase-site.xml': FILE_TYPE_XML})
+      result_issues = validate_security_config_properties(security_params, hbase_expectations)
+      if not result_issues:  # If all validations passed successfully
+        try:
+          # Double check the dict before calling execute
+          if ( 'hbase-site' not in security_params
+               or 'hbase.regionserver.keytab.file' not in security_params['hbase-site']
+               or 'hbase.regionserver.kerberos.principal' not in security_params['hbase-site']):
+            self.put_structured_out({"securityState": "UNSECURED"})
+            self.put_structured_out(
+              {"securityIssuesFound": "Keytab file or principal are not set property."})
+            return
+
+          cached_kinit_executor(status_params.kinit_path_local,
+                                status_params.hbase_user,
+                                security_params['hbase-site']['hbase.regionserver.keytab.file'],
+                                security_params['hbase-site']['hbase.regionserver.kerberos.principal'],
+                                status_params.hostname,
+                                status_params.tmp_dir)
+          self.put_structured_out({"securityState": "SECURED_KERBEROS"})
+        except Exception as e:
+          self.put_structured_out({"securityState": "ERROR"})
+          self.put_structured_out({"securityStateErrorInfo": str(e)})
+      else:
+        issues = []
+        for cf in result_issues:
+          issues.append("Configuration file %s did not pass the validation. Reason: %s" % (cf, result_issues[cf]))
+        self.put_structured_out({"securityIssuesFound": ". ".join(issues)})
+        self.put_structured_out({"securityState": "UNSECURED"})
+    else:
+      self.put_structured_out({"securityState": "UNSECURED"})
+
+  def get_log_folder(self):
+    import params
+    return params.log_dir
+  
+  def get_user(self):
+    import params
+    return params.hbase_user
+
+  def get_pid_files(self):
+    import status_params
+    return [status_params.regionserver_pid_file]
+
+if __name__ == "__main__":
+  HbaseRegionServer().execute()

+ 66 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_service.py

@@ -0,0 +1,66 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.show_logs import show_logs
+from resource_management.core.shell import as_sudo
+from resource_management.core.resources.system import Execute, File
+
+def hbase_service(
+  name,
+  action = 'start'): # 'start' or 'stop' or 'status'
+    
+    import params
+  
+    role = name
+    cmd = format("{daemon_script} --config {hbase_conf_dir}")
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-{role}.pid")
+    pid_expression = as_sudo(["cat", pid_file])
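+    # treat the daemon as running only when the pid file exists and the process it names is still alive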
+    no_op_test = as_sudo(["test", "-f", pid_file]) + format(" && ps -p `{pid_expression}` >/dev/null 2>&1")
+    
+    if action == 'start':
+      daemon_cmd = format("{cmd} start {role}")
+      
+      try:
+        Execute ( daemon_cmd,
+          not_if = no_op_test,
+          user = params.hbase_user
+        )
+      except:
+        show_logs(params.log_dir, params.hbase_user)
+        raise
+    elif action == 'stop':
+      daemon_cmd = format("{cmd} stop {role}")
+
+      try:
+        Execute ( daemon_cmd,
+          user = params.hbase_user,
+          only_if = no_op_test,
+          # BUGFIX: hbase regionserver sometimes hangs when nn is in safemode
+          timeout = params.hbase_regionserver_shutdown_timeout,
+          on_timeout = format("! ( {no_op_test} ) || {sudo} -H -E kill -9 `{pid_expression}`"),
+        )
+      except:
+        show_logs(params.log_dir, params.hbase_user)
+        raise
+      
+      File(pid_file,
+           action = "delete",
+      )

+ 42 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/hbase_upgrade.py

@@ -0,0 +1,42 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management.libraries.script import Script
+from resource_management.libraries.functions.format import format
+from resource_management.core.resources.system import Execute
+
+class HbaseMasterUpgrade(Script):
+
+  def take_snapshot(self, env):
+    import params
+
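+    # pipe the snapshot_all command into the hbase shell (assumed to be available in the stack's shell scripts) to snapshot tables before the upgrade proceeds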
+    snap_cmd = "echo 'snapshot_all' | {0} shell".format(params.hbase_cmd)
+
+    exec_cmd = "{0} {1}".format(params.kinit_cmd, snap_cmd)
+
+    Execute(exec_cmd, user=params.hbase_user)
+
+  def restore_snapshot(self, env):
+    import params
+    print "TODO AMBARI-12698"
+
+if __name__ == "__main__":
+  HbaseMasterUpgrade().execute()

+ 28 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/params.py

@@ -0,0 +1,28 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from ambari_commons import OSCheck
+from resource_management.libraries.functions.default import default
+
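+# load the OS-specific parameters; both modules define the same public names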
+if OSCheck.is_windows_family():
+  from params_windows import *
+else:
+  from params_linux import *
+
+retryAble = default("/commandParams/command_retry_enabled", False)

+ 463 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/params_linux.py

@@ -0,0 +1,463 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+import status_params
+import ambari_simplejson as json # simplejson is much faster compared to the Python 2.6 json module and has the same function set.
+
+from functions import calc_xmn_from_xms, ensure_unit_for_memory
+
+from ambari_commons.constants import AMBARI_SUDO_BINARY
+from ambari_commons.os_check import OSCheck
+from ambari_commons.str_utils import string_set_intersection
+
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import format
+from resource_management.libraries.functions import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.stack_features import get_stack_feature_version
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions import get_kinit_path
+from resource_management.libraries.functions import is_empty
+from resource_management.libraries.functions import get_unique_id_and_date
+from resource_management.libraries.functions.get_not_managed_resources import get_not_managed_resources
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.expect import expect
+from ambari_commons.ambari_metrics_helper import select_metric_collector_hosts_from_hostnames
+from resource_management.libraries.functions.setup_ranger_plugin_xml import get_audit_configs, generate_ranger_service_config
+
+# server configurations
+config = Script.get_config()
+exec_tmp_dir = Script.get_tmp_dir()
+sudo = AMBARI_SUDO_BINARY
+
+stack_name = status_params.stack_name
+agent_stack_retry_on_unavailability = config['ambariLevelParams']['agent_stack_retry_on_unavailability']
+agent_stack_retry_count = expect("/ambariLevelParams/agent_stack_retry_count", int)
+version = default("/commandParams/version", None)
+component_directory = status_params.component_directory
+etc_prefix_dir = "/etc/hbase"
+
+stack_version_unformatted = status_params.stack_version_unformatted
+stack_version_formatted = status_params.stack_version_formatted
+stack_root = status_params.stack_root
+
+# get the correct version to use for checking stack features
+version_for_stack_feature_checks = get_stack_feature_version(config)
+
+stack_supports_ranger_kerberos = check_stack_feature(StackFeature.RANGER_KERBEROS_SUPPORT, version_for_stack_feature_checks)
+stack_supports_ranger_audit_db = check_stack_feature(StackFeature.RANGER_AUDIT_DB_SUPPORT, version_for_stack_feature_checks)
+
+# hadoop default parameters
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
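+# default to the /usr/lib/hbase layout; overridden below for stacks that support rolling upgrade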
+daemon_script = "/usr/lib/hbase/bin/hbase-daemon.sh"
+region_mover = "/usr/lib/hbase/bin/region_mover.rb"
+region_drainer = "/usr/lib/hbase/bin/draining_servers.rb"
+hbase_cmd = "/usr/lib/hbase/bin/hbase"
+hbase_max_direct_memory_size = None
+
+# hadoop parameters for stacks supporting rolling_upgrade
+if stack_version_formatted and check_stack_feature(StackFeature.ROLLING_UPGRADE, stack_version_formatted):
+  hbase_max_direct_memory_size = default('/configurations/hbase-env/hbase_max_direct_memory_size', None)
+
+  daemon_script = format("{stack_root}/current/{component_directory}/bin/hbase-daemon.sh")
+  region_mover = format("{stack_root}/current/{component_directory}/bin/region_mover.rb")
+  region_drainer = format("{stack_root}/current/{component_directory}/bin/draining_servers.rb")
+  hbase_cmd = format("{stack_root}/current/{component_directory}/bin/hbase")
+
+
+hbase_conf_dir = status_params.hbase_conf_dir
+limits_conf_dir = status_params.limits_conf_dir
+
+hbase_user_nofile_limit = default("/configurations/hbase-env/hbase_user_nofile_limit", "32000")
+hbase_user_nproc_limit = default("/configurations/hbase-env/hbase_user_nproc_limit", "16000")
+
+# no symlink for phoenix-server at this point
+phx_daemon_script = format('{stack_root}/current/phoenix-server/bin/queryserver.py')
+
+hbase_excluded_hosts = config['commandParams']['excluded_hosts']
+hbase_drain_only = default("/commandParams/mark_draining_only",False)
+hbase_included_hosts = config['commandParams']['included_hosts']
+
+hbase_user = status_params.hbase_user
+hbase_principal_name = config['configurations']['hbase-env']['hbase_principal_name']
+smokeuser = config['configurations']['cluster-env']['smokeuser']
+_authentication = config['configurations']['core-site']['hadoop.security.authentication']
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+
+# this is "hadoop-metrics.properties" for 1.x stacks
+metric_prop_file_name = "hadoop-metrics2-hbase.properties"
+
+# not supporting 32 bit jdk.
+java64_home = config['ambariLevelParams']['java_home']
+java_version = expect("/ambariLevelParams/java_version", int)
+
+log_dir = config['configurations']['hbase-env']['hbase_log_dir']
+java_io_tmpdir = default("/configurations/hbase-env/hbase_java_io_tmpdir", "/tmp")
+master_heapsize = ensure_unit_for_memory(config['configurations']['hbase-env']['hbase_master_heapsize'])
+
+regionserver_heapsize = ensure_unit_for_memory(config['configurations']['hbase-env']['hbase_regionserver_heapsize'])
+regionserver_xmn_max = config['configurations']['hbase-env']['hbase_regionserver_xmn_max']
+regionserver_xmn_percent = expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float)
+regionserver_xmn_size = calc_xmn_from_xms(regionserver_heapsize, regionserver_xmn_percent, regionserver_xmn_max)
+
+parallel_gc_threads = expect("/configurations/hbase-env/hbase_parallel_gc_threads", int)
+
+hbase_regionserver_shutdown_timeout = expect('/configurations/hbase-env/hbase_regionserver_shutdown_timeout', int, 30)
+
+phoenix_hosts = default('/clusterHostInfo/phoenix_query_server_hosts', [])
+phoenix_enabled = default('/configurations/hbase-env/phoenix_sql_enabled', False)
+has_phoenix = len(phoenix_hosts) > 0
+
+underscored_version = stack_version_unformatted.replace('.', '_')
+dashed_version = stack_version_unformatted.replace('.', '-')
+# if OSCheck.is_redhat_family() or OSCheck.is_suse_family():
+#   phoenix_package = format("phoenix_{underscored_version}_*")
+# elif OSCheck.is_ubuntu_family():
+#   phoenix_package = format("phoenix-{dashed_version}-.*")
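+# BIGTOP uses a plain package name, so the version-specific patterns above are not needed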
+phoenix_package = "phoenix"
+
+pid_dir = status_params.pid_dir
+tmp_dir = config['configurations']['hbase-site']['hbase.tmp.dir']
+local_dir = config['configurations']['hbase-site']['hbase.local.dir']
+ioengine_param = default('/configurations/hbase-site/hbase.bucketcache.ioengine', None)
+
+client_jaas_config_file = format("{hbase_conf_dir}/hbase_client_jaas.conf")
+master_jaas_config_file = format("{hbase_conf_dir}/hbase_master_jaas.conf")
+regionserver_jaas_config_file = format("{hbase_conf_dir}/hbase_regionserver_jaas.conf")
+queryserver_jaas_config_file = format("{hbase_conf_dir}/hbase_queryserver_jaas.conf")
+
+ganglia_server_hosts = default('/clusterHostInfo/ganglia_server_host', []) # is not passed when ganglia is not present
+has_ganglia_server = not len(ganglia_server_hosts) == 0
+if has_ganglia_server:
+  ganglia_server_host = ganglia_server_hosts[0]
+
+set_instanceId = "false"
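+# prefer externally managed metrics collectors when configured in cluster-env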
+if 'cluster-env' in config['configurations'] and \
+    'metrics_collector_external_hosts' in config['configurations']['cluster-env']:
+  ams_collector_hosts = config['configurations']['cluster-env']['metrics_collector_external_hosts']
+  set_instanceId = "true"
+else:
+  ams_collector_hosts = ",".join(default("/clusterHostInfo/metrics_collector_hosts", []))
+has_metric_collector = not len(ams_collector_hosts) == 0
+metric_collector_port = None
+if has_metric_collector:
+  if 'cluster-env' in config['configurations'] and \
+      'metrics_collector_external_port' in config['configurations']['cluster-env']:
+    metric_collector_port = config['configurations']['cluster-env']['metrics_collector_external_port']
+  else:
+    metric_collector_web_address = default("/configurations/ams-site/timeline.metrics.service.webapp.address", "0.0.0.0:6188")
+    if metric_collector_web_address.find(':') != -1:
+      metric_collector_port = metric_collector_web_address.split(':')[1]
+    else:
+      metric_collector_port = '6188'
+  if default("/configurations/ams-site/timeline.metrics.service.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+    metric_collector_protocol = 'https'
+  else:
+    metric_collector_protocol = 'http'
+  metric_truststore_path= default("/configurations/ams-ssl-client/ssl.client.truststore.location", "")
+  metric_truststore_type= default("/configurations/ams-ssl-client/ssl.client.truststore.type", "")
+  metric_truststore_password= default("/configurations/ams-ssl-client/ssl.client.truststore.password", "")
+  pass
+metrics_report_interval = default("/configurations/ams-site/timeline.metrics.sink.report.interval", 60)
+metrics_collection_period = default("/configurations/ams-site/timeline.metrics.sink.collection.period", 10)
+
+host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
+host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
+
+# if hbase is selected, hbase_regionserver_hosts should not be empty, but still default just in case
+if 'datanode_hosts' in config['clusterHostInfo']:
+  # if hbase_regionserver_hosts is not given, assume region servers run on the same nodes as the datanodes
+  rs_hosts = default('/clusterHostInfo/hbase_regionserver_hosts', default('/clusterHostInfo/datanode_hosts', []))
+else:
+  rs_hosts = default('/clusterHostInfo/hbase_regionserver_hosts', default('/clusterHostInfo/all_hosts', []))
+
+smoke_test_user = config['configurations']['cluster-env']['smokeuser']
+smokeuser_principal =  config['configurations']['cluster-env']['smokeuser_principal_name']
+smokeuser_permissions = "RWXCA"
+service_check_data = get_unique_id_and_date()
+user_group = config['configurations']['cluster-env']["user_group"]
+
+if security_enabled:
+  zk_principal_name = default("/configurations/zookeeper-env/zookeeper_principal_name", "zookeeper/_HOST@EXAMPLE.COM")
+  zk_principal_user = zk_principal_name.split('/')[0]
+  zk_security_opts = format('-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username={zk_principal_user} -Dzookeeper.sasl.clientconfig=Client')
+  _hostname_lowercase = config['agentLevelParams']['hostname'].lower()
+  master_jaas_princ = config['configurations']['hbase-site']['hbase.master.kerberos.principal'].replace('_HOST',_hostname_lowercase)
+  master_keytab_path = config['configurations']['hbase-site']['hbase.master.keytab.file']
+  regionserver_jaas_princ = config['configurations']['hbase-site']['hbase.regionserver.kerberos.principal'].replace('_HOST',_hostname_lowercase)
+  _queryserver_jaas_princ = config['configurations']['hbase-site']['phoenix.queryserver.kerberos.principal']
+  if not is_empty(_queryserver_jaas_princ):
+    queryserver_jaas_princ =_queryserver_jaas_princ.replace('_HOST',_hostname_lowercase)
+
+regionserver_keytab_path = config['configurations']['hbase-site']['hbase.regionserver.keytab.file']
+queryserver_keytab_path = config['configurations']['hbase-site']['phoenix.queryserver.keytab.file']
+smoke_user_keytab = config['configurations']['cluster-env']['smokeuser_keytab']
+hbase_user_keytab = config['configurations']['hbase-env']['hbase_user_keytab']
+kinit_path_local = get_kinit_path(default('/configurations/kerberos-env/executable_search_paths', None))
+if security_enabled:
+  kinit_cmd = format("{kinit_path_local} -kt {hbase_user_keytab} {hbase_principal_name};")
+  kinit_cmd_master = format("{kinit_path_local} -kt {master_keytab_path} {master_jaas_princ};")
+  master_security_config = format("-Djava.security.auth.login.config={hbase_conf_dir}/hbase_master_jaas.conf")
+  hbase_decommission_auth_config = "--auth-as-server"
+else:
+  kinit_cmd = ""
+  kinit_cmd_master = ""
+  master_security_config = ""
+  hbase_decommission_auth_config = ""
+
+#log4j.properties
+# HBase log4j settings
+hbase_log_maxfilesize = default('configurations/hbase-log4j/hbase_log_maxfilesize',256)
+hbase_log_maxbackupindex = default('configurations/hbase-log4j/hbase_log_maxbackupindex',20)
+hbase_security_log_maxfilesize = default('configurations/hbase-log4j/hbase_security_log_maxfilesize',256)
+hbase_security_log_maxbackupindex = default('configurations/hbase-log4j/hbase_security_log_maxbackupindex',20)
+
+if (('hbase-log4j' in config['configurations']) and ('content' in config['configurations']['hbase-log4j'])):
+  log4j_props = config['configurations']['hbase-log4j']['content']
+else:
+  log4j_props = None
+  
+hbase_env_sh_template = config['configurations']['hbase-env']['content']
+
+hbase_hdfs_root_dir = config['configurations']['hbase-site']['hbase.rootdir']
+hbase_staging_dir = "/apps/hbase/staging"
+#for create_hdfs_directory
+hostname = config['agentLevelParams']['hostname']
+hdfs_user_keytab = config['configurations']['hadoop-env']['hdfs_user_keytab']
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user']
+hdfs_principal_name = config['configurations']['hadoop-env']['hdfs_principal_name']
+
+hdfs_site = config['configurations']['hdfs-site']
+default_fs = config['configurations']['core-site']['fs.defaultFS']
+
+dfs_type = default("/clusterLevelParams/dfs_type", "")
+
+import functools
+#create partial functions with common arguments for every HdfsResource call
+#to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
+HdfsResource = functools.partial(
+  HdfsResource,
+  user=hdfs_user,
+  hdfs_resource_ignore_file = "/var/lib/ambari-agent/data/.hdfs_resource_ignore",
+  security_enabled = security_enabled,
+  keytab = hdfs_user_keytab,
+  kinit_path_local = kinit_path_local,
+  hadoop_bin_dir = hadoop_bin_dir,
+  hadoop_conf_dir = hadoop_conf_dir,
+  principal_name = hdfs_principal_name,
+  hdfs_site = hdfs_site,
+  default_fs = default_fs,
+  immutable_paths = get_not_managed_resources(),
+  dfs_type = dfs_type
+)
+
+zookeeper_znode_parent = config['configurations']['hbase-site']['zookeeper.znode.parent']
+hbase_zookeeper_quorum = config['configurations']['hbase-site']['hbase.zookeeper.quorum']
+hbase_zookeeper_property_clientPort = config['configurations']['hbase-site']['hbase.zookeeper.property.clientPort']
+hbase_security_authentication = config['configurations']['hbase-site']['hbase.security.authentication']
+hadoop_security_authentication = config['configurations']['core-site']['hadoop.security.authentication']
+
+# ranger hbase plugin section start
+
+# to get db connector jar
+jdk_location = config['ambariLevelParams']['jdk_location']
+
+# ranger host
+ranger_admin_hosts = default("/clusterHostInfo/ranger_admin_hosts", [])
+has_ranger_admin = not len(ranger_admin_hosts) == 0
+
+# ranger support xml_configuration flag, instead of depending on ranger xml_configurations_supported/ranger-env introduced, using stack feature
+xml_configurations_supported = check_stack_feature(StackFeature.RANGER_XML_CONFIGURATION, version_for_stack_feature_checks)
+
+# ranger hbase plugin enabled property
+enable_ranger_hbase = default("/configurations/ranger-hbase-plugin-properties/ranger-hbase-plugin-enabled", "No")
+enable_ranger_hbase = True if enable_ranger_hbase.lower() == 'yes' else False
+
+# ranger hbase properties
+if enable_ranger_hbase:
+  # get ranger policy url
+  policymgr_mgr_url = config['configurations']['admin-properties']['policymgr_external_url']
+  if xml_configurations_supported:
+    policymgr_mgr_url = config['configurations']['ranger-hbase-security']['ranger.plugin.hbase.policy.rest.url']
+
+  if not is_empty(policymgr_mgr_url) and policymgr_mgr_url.endswith('/'):
+    policymgr_mgr_url = policymgr_mgr_url.rstrip('/')
+
+  # ranger audit db user
+  xa_audit_db_user = default('/configurations/admin-properties/audit_db_user', 'rangerlogger')
+
+  # ranger hbase service/repository name
+  repo_name = str(config['clusterName']) + '_hbase'
+  repo_name_value = config['configurations']['ranger-hbase-security']['ranger.plugin.hbase.service.name']
+  if not is_empty(repo_name_value) and repo_name_value != "{{repo_name}}":
+    repo_name = repo_name_value
+
+  common_name_for_certificate = config['configurations']['ranger-hbase-plugin-properties']['common.name.for.certificate']
+  repo_config_username = config['configurations']['ranger-hbase-plugin-properties']['REPOSITORY_CONFIG_USERNAME']
+  ranger_plugin_properties = config['configurations']['ranger-hbase-plugin-properties']
+  policy_user = config['configurations']['ranger-hbase-plugin-properties']['policy_user']
+  repo_config_password = config['configurations']['ranger-hbase-plugin-properties']['REPOSITORY_CONFIG_PASSWORD']
+
+  # ranger-env config
+  ranger_env = config['configurations']['ranger-env']
+
+  # create ranger-env config having external ranger credential properties
+  if not has_ranger_admin and enable_ranger_hbase:
+    external_admin_username = default('/configurations/ranger-hbase-plugin-properties/external_admin_username', 'admin')
+    external_admin_password = default('/configurations/ranger-hbase-plugin-properties/external_admin_password', 'admin')
+    external_ranger_admin_username = default('/configurations/ranger-hbase-plugin-properties/external_ranger_admin_username', 'amb_ranger_admin')
+    external_ranger_admin_password = default('/configurations/ranger-hbase-plugin-properties/external_ranger_admin_password', 'amb_ranger_admin')
+    ranger_env = {}
+    ranger_env['admin_username'] = external_admin_username
+    ranger_env['admin_password'] = external_admin_password
+    ranger_env['ranger_admin_username'] = external_ranger_admin_username
+    ranger_env['ranger_admin_password'] = external_ranger_admin_password
+
+  xa_audit_db_password = ''
+  if not is_empty(config['configurations']['admin-properties']['audit_db_password']) and stack_supports_ranger_audit_db and has_ranger_admin:
+    xa_audit_db_password = config['configurations']['admin-properties']['audit_db_password']
+
+  downloaded_custom_connector = None
+  previous_jdbc_jar_name = None
+  driver_curl_source = None
+  driver_curl_target = None
+  previous_jdbc_jar = None
+
+  if has_ranger_admin and stack_supports_ranger_audit_db:
+    xa_audit_db_flavor = config['configurations']['admin-properties']['DB_FLAVOR']
+    jdbc_jar_name, previous_jdbc_jar_name, audit_jdbc_url, jdbc_driver = get_audit_configs(config)
+
+    downloaded_custom_connector = format("{exec_tmp_dir}/{jdbc_jar_name}") if stack_supports_ranger_audit_db else None
+    driver_curl_source = format("{jdk_location}/{jdbc_jar_name}") if stack_supports_ranger_audit_db else None
+    driver_curl_target = format("{stack_root}/current/{component_directory}/lib/{jdbc_jar_name}") if stack_supports_ranger_audit_db else None
+    previous_jdbc_jar = format("{stack_root}/current/{component_directory}/lib/{previous_jdbc_jar_name}") if stack_supports_ranger_audit_db else None
+    sql_connector_jar = ''
+
+  if security_enabled:
+    master_principal = config['configurations']['hbase-site']['hbase.master.kerberos.principal']
+
+  hbase_ranger_plugin_config = {
+    'username': repo_config_username,
+    'password': repo_config_password,
+    'hadoop.security.authentication': hadoop_security_authentication,
+    'hbase.security.authentication': hbase_security_authentication,
+    'hbase.zookeeper.property.clientPort': hbase_zookeeper_property_clientPort,
+    'hbase.zookeeper.quorum': hbase_zookeeper_quorum,
+    'zookeeper.znode.parent': zookeeper_znode_parent,
+    'commonNameForCertificate': common_name_for_certificate,
+    'hbase.master.kerberos.principal': master_principal if security_enabled else ''
+  }
+
+  if security_enabled:
+    hbase_ranger_plugin_config['policy.download.auth.users'] = hbase_user
+    hbase_ranger_plugin_config['tag.download.auth.users'] = hbase_user
+    hbase_ranger_plugin_config['policy.grantrevoke.auth.users'] = hbase_user
+
+  hbase_ranger_plugin_config['setup.additional.default.policies'] = "true"
+  hbase_ranger_plugin_config['default-policy.1.name'] = "Service Check User Policy for Hbase"
+  hbase_ranger_plugin_config['default-policy.1.resource.table'] = "ambarismoketest"
+  hbase_ranger_plugin_config['default-policy.1.resource.column-family'] = "*"
+  hbase_ranger_plugin_config['default-policy.1.resource.column'] = "*"
+  hbase_ranger_plugin_config['default-policy.1.policyItem.1.users'] = policy_user
+  hbase_ranger_plugin_config['default-policy.1.policyItem.1.accessTypes'] = "read,write,create"
+
+  custom_ranger_service_config = generate_ranger_service_config(ranger_plugin_properties)
+  if len(custom_ranger_service_config) > 0:
+    hbase_ranger_plugin_config.update(custom_ranger_service_config)
+
+  hbase_ranger_plugin_repo = {
+    'isEnabled': 'true',
+    'configs': hbase_ranger_plugin_config,
+    'description': 'hbase repo',
+    'name': repo_name,
+    'type': 'hbase'
+  }
+
+  ranger_hbase_principal = None
+  ranger_hbase_keytab = None
+  if stack_supports_ranger_kerberos and security_enabled and 'hbase-master' in component_directory.lower():
+    ranger_hbase_principal = master_jaas_princ
+    ranger_hbase_keytab = master_keytab_path
+  elif stack_supports_ranger_kerberos and security_enabled and 'hbase-regionserver' in component_directory.lower():
+    ranger_hbase_principal = regionserver_jaas_princ
+    ranger_hbase_keytab = regionserver_keytab_path
+
+  xa_audit_db_is_enabled = False
+  if xml_configurations_supported and stack_supports_ranger_audit_db:
+    xa_audit_db_is_enabled = config['configurations']['ranger-hbase-audit']['xasecure.audit.destination.db']
+
+  xa_audit_hdfs_is_enabled = config['configurations']['ranger-hbase-audit']['xasecure.audit.destination.hdfs'] if xml_configurations_supported else False
+  ssl_keystore_password = config['configurations']['ranger-hbase-policymgr-ssl']['xasecure.policymgr.clientssl.keystore.password'] if xml_configurations_supported else None
+  ssl_truststore_password = config['configurations']['ranger-hbase-policymgr-ssl']['xasecure.policymgr.clientssl.truststore.password'] if xml_configurations_supported else None
+  credential_file = format('/etc/ranger/{repo_name}/cred.jceks')
+
+  # for SQLA explicitly disable audit to DB for Ranger
+  if has_ranger_admin and stack_supports_ranger_audit_db and xa_audit_db_flavor.lower() == 'sqla':
+    xa_audit_db_is_enabled = False
+
+# need this to capture cluster name from where ranger hbase plugin is enabled
+cluster_name = config['clusterName']
+
+# ranger hbase plugin section end
+
+create_hbase_home_directory = check_stack_feature(StackFeature.HBASE_HOME_DIRECTORY, stack_version_formatted)
+hbase_home_directory = format("/user/{hbase_user}")
+
+atlas_hosts = default('/clusterHostInfo/atlas_server_hosts', [])
+has_atlas = len(atlas_hosts) > 0
+
+metadata_user = default('/configurations/atlas-env/metadata_user', None)
+atlas_graph_storage_hostname = default('/configurations/application-properties/atlas.graph.storage.hostname', None)
+atlas_graph_storage_hbase_table = default('/configurations/application-properties/atlas.graph.storage.hbase.table', None)
+atlas_audit_hbase_tablename = default('/configurations/application-properties/atlas.audit.hbase.tablename', None)
+
+if has_atlas:
+  zk_hosts_matches = string_set_intersection(atlas_graph_storage_hostname, hbase_zookeeper_quorum)
+  atlas_with_managed_hbase = len(zk_hosts_matches) > 0
+else:
+  atlas_with_managed_hbase = False
+
+# Hbase Atlas hook configurations
+atlas_hook_filename = default('/configurations/atlas-env/metadata_conf_file', 'atlas-application.properties')
+enable_hbase_atlas_hook = default('/configurations/hbase-env/hbase.atlas.hook', False)
+hbase_atlas_hook_properties = default('/configurations/hbase-atlas-application-properties', {})
+
+mount_table_xml_inclusion_file_full_path = None
+mount_table_content = None
+if 'viewfs-mount-table' in config['configurations']:
+  xml_inclusion_file_name = 'viewfs-mount-table.xml'
+  mount_table = config['configurations']['viewfs-mount-table']
+
+  if 'content' in mount_table and mount_table['content'].strip():
+    mount_table_xml_inclusion_file_full_path = os.path.join(hbase_conf_dir, xml_inclusion_file_name)
+    mount_table_content = mount_table['content']

+ 43 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/params_windows.py

@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+import status_params
+from resource_management.libraries.script.script import Script
+
+# server configurations
+config = Script.get_config()
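+# on Windows the HBase layout is taken from environment variables set at install time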
+hbase_conf_dir = os.environ["HBASE_CONF_DIR"]
+hbase_bin_dir = os.path.join(os.environ["HBASE_HOME"],'bin')
+hbase_executable = os.path.join(hbase_bin_dir,"hbase.cmd")
+stack_root = os.path.abspath(os.path.join(os.environ["HADOOP_HOME"],".."))
+hadoop_user = config["configurations"]["cluster-env"]["hadoop.user.name"]
+hbase_user = hadoop_user
+
+#decomm params
+region_drainer = os.path.join(hbase_bin_dir,"draining_servers.rb")
+region_mover = os.path.join(hbase_bin_dir,"region_mover.rb")
+hbase_excluded_hosts = config['commandParams']['excluded_hosts']
+hbase_drain_only = config['commandParams']['mark_draining_only']
+
+service_map = {
+  'master' : status_params.hbase_master_win_service_name,
+  'regionserver' : status_params.hbase_regionserver_win_service_name
+}

+ 99 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/service_check.py

@@ -0,0 +1,99 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.format import format
+from resource_management.core.resources.system import Execute, File
+from resource_management.core.source import StaticFile
+from resource_management.core.source import Template
+import functions
+from ambari_commons import OSCheck, OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+
+
+class HbaseServiceCheck(Script):
+  pass
+
+
+@OsFamilyImpl(os_family=OSConst.WINSRV_FAMILY)
+class HbaseServiceCheckWindows(HbaseServiceCheck):
+  def service_check(self, env):
+    import params
+    env.set_params(params)
+    smoke_cmd = os.path.join(params.stack_root, "Run-SmokeTests.cmd")
+    service = "HBASE"
+    Execute(format("cmd /C {smoke_cmd} {service}"), user=params.hbase_user, logoutput=True)
+
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class HbaseServiceCheckDefault(HbaseServiceCheck):
+  def service_check(self, env):
+    import params
+    env.set_params(params)
+    
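+    # the smoke test writes a row through hbase-smoke.sh, verifies it with hbaseSmokeVerify.sh,
+    # and finally drops the test table with hbase-smoke-cleanup.sh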
+    output_file = "/apps/hbase/data/ambarismoketest"
+    smokeuser_kinit_cmd = format("{kinit_path_local} -kt {smoke_user_keytab} {smokeuser_principal} &&") if params.security_enabled else ""
+    hbase_servicecheck_file = format("{exec_tmp_dir}/hbase-smoke.sh")
+    hbase_servicecheck_cleanup_file = format("{exec_tmp_dir}/hbase-smoke-cleanup.sh")
+
+    File( format("{exec_tmp_dir}/hbaseSmokeVerify.sh"),
+      content = StaticFile("hbaseSmokeVerify.sh"),
+      mode = 0755
+    )
+
+    File(hbase_servicecheck_cleanup_file,
+      content = StaticFile("hbase-smoke-cleanup.sh"),
+      mode = 0755
+    )
+  
+    File( hbase_servicecheck_file,
+      mode = 0755,
+      content = Template('hbase-smoke.sh.j2')
+    )
+    
+    if params.security_enabled:
+      hbase_grant_permissions_file = format("{exec_tmp_dir}/hbase_grant_permissions.sh")
+      grant_privilege_cmd = format("{kinit_cmd} {hbase_cmd} shell {hbase_grant_permissions_file}")
+
+      File( hbase_grant_permissions_file,
+        owner   = params.hbase_user,
+        group   = params.user_group,
+        mode    = 0644,
+        content = Template('hbase_grant_permissions.j2')
+      )
+
+      Execute( grant_privilege_cmd,
+        user = params.hbase_user,
+        logoutput = True
+      )
+
+    servicecheckcmd = format("{smokeuser_kinit_cmd} {hbase_cmd} --config {hbase_conf_dir} shell {hbase_servicecheck_file}")
+    smokeverifycmd = format("{exec_tmp_dir}/hbaseSmokeVerify.sh {hbase_conf_dir} {service_check_data} {hbase_cmd}")
+    cleanupCmd = format("{smokeuser_kinit_cmd} {hbase_cmd} --config {hbase_conf_dir} shell {hbase_servicecheck_cleanup_file}")
+    Execute(format("{servicecheckcmd} && {smokeverifycmd} && {cleanupCmd}"),
+      tries     = 6,
+      try_sleep = 5,
+      user = params.smoke_test_user,
+      logoutput = True
+    )
+
+if __name__ == "__main__":
+  HbaseServiceCheck().execute()
+  

+ 89 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/setup_ranger_hbase.py

@@ -0,0 +1,89 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management.core.logger import Logger
+from resource_management.libraries.functions.setup_ranger_plugin_xml import setup_ranger_plugin
+
+def setup_ranger_hbase(upgrade_type=None, service_name="hbase-master"):
+  import params
+
+  if params.enable_ranger_hbase:
+
+    stack_version = None
+
+    if upgrade_type is not None:
+      stack_version = params.version
+
+    if params.retryAble:
+      Logger.info("HBase: Setup ranger: command retry is enabled, thus retrying if Ranger admin is down!")
+    else:
+      Logger.info("HBase: Setup ranger: command retry is not enabled, thus skipping if Ranger admin is down!")
+
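+    # HDFS audit directories are created from the master only; regionservers reuse them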
+    if params.xa_audit_hdfs_is_enabled and service_name == 'hbase-master':
+      try:
+        params.HdfsResource("/ranger/audit",
+                           type="directory",
+                           action="create_on_execute",
+                           owner=params.hdfs_user,
+                           group=params.hdfs_user,
+                           mode=0755,
+                           recursive_chmod=True
+        )
+        params.HdfsResource("/ranger/audit/hbaseMaster",
+                           type="directory",
+                           action="create_on_execute",
+                           owner=params.hbase_user,
+                           group=params.hbase_user,
+                           mode=0700,
+                           recursive_chmod=True
+        )
+        params.HdfsResource("/ranger/audit/hbaseRegional",
+                           type="directory",
+                           action="create_on_execute",
+                           owner=params.hbase_user,
+                           group=params.hbase_user,
+                           mode=0700,
+                           recursive_chmod=True
+        )
+        params.HdfsResource(None, action="execute")
+      except Exception as err:
+        Logger.exception("Audit directory creation in HDFS for HBASE Ranger plugin failed with error:\n{0}".format(err))
+
+    api_version = 'v2'
+
+    setup_ranger_plugin('hbase-client', 'hbase', params.previous_jdbc_jar, params.downloaded_custom_connector,
+                        params.driver_curl_source, params.driver_curl_target, params.java64_home,
+                        params.repo_name, params.hbase_ranger_plugin_repo,
+                        params.ranger_env, params.ranger_plugin_properties,
+                        params.policy_user, params.policymgr_mgr_url,
+                        params.enable_ranger_hbase, conf_dict=params.hbase_conf_dir,
+                        component_user=params.hbase_user, component_group=params.user_group, cache_service_list=['hbaseMaster', 'hbaseRegional'],
+                        plugin_audit_properties=params.config['configurations']['ranger-hbase-audit'], plugin_audit_attributes=params.config['configurationAttributes']['ranger-hbase-audit'],
+                        plugin_security_properties=params.config['configurations']['ranger-hbase-security'], plugin_security_attributes=params.config['configurationAttributes']['ranger-hbase-security'],
+                        plugin_policymgr_ssl_properties=params.config['configurations']['ranger-hbase-policymgr-ssl'], plugin_policymgr_ssl_attributes=params.config['configurationAttributes']['ranger-hbase-policymgr-ssl'],
+                        component_list=['hbase-client', 'hbase-master', 'hbase-regionserver'], audit_db_is_enabled=params.xa_audit_db_is_enabled,
+                        credential_file=params.credential_file, xa_audit_db_password=params.xa_audit_db_password,
+                        ssl_truststore_password=params.ssl_truststore_password, ssl_keystore_password=params.ssl_keystore_password,
+                        stack_version_override = stack_version, skip_if_rangeradmin_down= not params.retryAble, api_version=api_version,
+                        is_security_enabled = params.security_enabled,
+                        is_stack_supports_ranger_kerberos = params.stack_supports_ranger_kerberos if params.security_enabled else None,
+                        component_user_principal=params.ranger_hbase_principal if params.security_enabled else None,
+                        component_user_keytab=params.ranger_hbase_keytab if params.security_enabled else None)
+  else:
+    Logger.info('Ranger HBase plugin is not enabled')

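Worth noting in the hunk above: each params.HdfsResource(path, action="create_on_execute", ...) call only queues a request, and the trailing params.HdfsResource(None, action="execute") flushes the whole batch against HDFS in one pass. A toy sketch of that queue-and-flush pattern (this is not the Ambari implementation; all names and values below are illustrative):

# Toy sketch of the queue-and-flush pattern (not the Ambari implementation):
# "create_on_execute" only records a request; the final call with
# action="execute" applies the whole batch at once.
class BatchingResource(object):
  def __init__(self):
    self.pending = []

  def __call__(self, path, action, **kwargs):
    if action == "create_on_execute":
      self.pending.append((path, kwargs))      # queue the request
    elif action == "execute":
      for queued_path, queued_kwargs in self.pending:
        print("creating %s with %s" % (queued_path, queued_kwargs))
      self.pending = []                        # flush the batch

HdfsResource = BatchingResource()
HdfsResource("/ranger/audit", action="create_on_execute", owner="hdfs", mode=0o755)
HdfsResource("/ranger/audit/hbaseMaster", action="create_on_execute", owner="hbase", mode=0o700)
HdfsResource(None, action="execute")           # nothing happens until here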
+ 67 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/status_params.py

@@ -0,0 +1,67 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from ambari_commons.os_check import OSCheck
+
+from resource_management.libraries.functions import format
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions.version import format_stack_version
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions import StackFeature
+from resource_management.libraries.functions import get_kinit_path
+from resource_management.libraries.script.script import Script
+
+# a map of the Ambari role to the component name
+# for use with <stack-root>/current/<component>
+SERVER_ROLE_DIRECTORY_MAP = {
+  'HBASE_MASTER' : 'hbase-master',
+  'HBASE_REGIONSERVER' : 'hbase-regionserver',
+  'HBASE_CLIENT' : 'hbase-client'
+}
+
+component_directory = Script.get_component_from_role(SERVER_ROLE_DIRECTORY_MAP, "HBASE_CLIENT")
+
+config = Script.get_config()
+
+if OSCheck.is_windows_family():
+  hbase_master_win_service_name = "master"
+  hbase_regionserver_win_service_name = "regionserver"
+else:
+  pid_dir = config['configurations']['hbase-env']['hbase_pid_dir']
+  hbase_user = config['configurations']['hbase-env']['hbase_user']
+
+  hbase_master_pid_file = format("{pid_dir}/hbase-{hbase_user}-master.pid")
+  regionserver_pid_file = format("{pid_dir}/hbase-{hbase_user}-regionserver.pid")
+
+  # Security related/required params
+  hostname = config['agentLevelParams']['hostname']
+  security_enabled = config['configurations']['cluster-env']['security_enabled']
+  kinit_path_local = get_kinit_path(default('/configurations/kerberos-env/executable_search_paths', None))
+  tmp_dir = Script.get_tmp_dir()
+  
+  stack_version_unformatted = str(config['clusterLevelParams']['stack_version'])
+  stack_version_formatted = format_stack_version(stack_version_unformatted)
+  stack_root = Script.get_stack_root()
+
+  hbase_conf_dir = "/etc/hbase/conf"
+  limits_conf_dir = "/etc/security/limits.d"
+  if stack_version_formatted and check_stack_feature(StackFeature.ROLLING_UPGRADE, stack_version_formatted):
+    hbase_conf_dir = format("{stack_root}/current/{component_directory}/conf")
+    
+stack_name = default("/clusterLevelParams/stack_name", None)

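The pid files and security parameters defined above are the ones a component's status() handler reads. A minimal sketch of how they are typically consumed (the actual wiring lives in hbase_master.py/hbase_regionserver.py, which are not shown in this excerpt):

# Sketch only: how a component's status() handler typically consumes
# status_params. check_process_status raises ComponentIsNotRunning when the
# pid file is missing or the process is gone, which Ambari reports as stopped.
from resource_management.libraries.functions import check_process_status

import status_params

def status(env):
  check_process_status(status_params.hbase_master_pid_file)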
+ 105 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/scripts/upgrade.py

@@ -0,0 +1,105 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import re
+import socket
+
+from resource_management.core import shell
+from resource_management.core.exceptions import ComponentIsNotRunning
+from resource_management.core.exceptions import Fail
+from resource_management.core.logger import Logger
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.constants import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+from resource_management.libraries.functions.decorator import retry
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions import check_process_status
+
+
+def prestart(env):
+  import params
+
+  if params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version):
+    stack_select.select_packages(params.version)
+
+def post_regionserver(env):
+  import params
+  env.set_params(params)
+
+  check_cmd = "echo 'status \"simple\"' | {0} shell".format(params.hbase_cmd)
+
+  exec_cmd = "{0} {1}".format(params.kinit_cmd, check_cmd)
+  is_regionserver_registered(exec_cmd, params.hbase_user, params.hostname, re.IGNORECASE)
+
+
+def is_region_server_process_running():
+  try:
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-regionserver.pid")
+    check_process_status(pid_file)
+    return True
+  except ComponentIsNotRunning:
+    return False
+
+
+@retry(times=30, sleep_time=30, err_class=Fail)
+def is_regionserver_registered(cmd, user, hostname, regex_search_flags):
+  """
+  Queries HBase through the HBase shell to see which servers have successfully registered. This is
+  useful in cases such as upgrades, where we must ensure that a RegionServer has not only started,
+  but also completed its registration handshake before moving on to upgrade the next RegionServer.
+
+  The hbase shell is used along with the "status 'simple'" command in order to determine if the
+  specified host has registered.
+  :param cmd: the shell command used to query HBase for its registered servers
+  :param user: the user to run the command as
+  :param hostname: the hostname of the RegionServer expected to appear in the output
+  :param regex_search_flags: flags passed to re.search, such as re.IGNORECASE
+  :return: None; raises Fail (triggering a retry) when the RegionServer is not yet registered
+  """
+  if not is_region_server_process_running():
+    Logger.info("RegionServer process is not running")
+    raise Fail("RegionServer process is not running")
+
+  # use hbase shell with "status 'simple'" command
+  code, out = shell.call(cmd, user=user)
+
+  # if we don't have output, then we can't check
+  if not out:
+    raise Fail("Unable to retrieve status information from the HBase shell")
+
+  # try matching the hostname with a colon (which indicates a bound port)
+  bound_hostname_to_match = hostname + ":"
+  match = re.search(bound_hostname_to_match, out, regex_search_flags)
+
+  # if there's no match, try again with the IP address
+  if not match:
+    try:
+      ip_address = socket.gethostbyname(hostname)
+      bound_ip_address_to_match = ip_address + ":"
+      match = re.search(bound_ip_address_to_match, out, regex_search_flags)
+    except socket.error:
+      # this is merely a backup, so just log that it failed
+      Logger.warning("Unable to look up the IP address of {0}; DNS resolution may not be working.".format(hostname))
+      pass
+
+  # failed with both a hostname and an IP address, so raise the Fail and let the function auto retry
+  if not match:
+    raise Fail(
+      "The RegionServer named {0} has not yet registered with the HBase Master".format(hostname))

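The registration check above searches the shell output for the hostname followed by a colon (a bound port), case-insensitively, before falling back to the IP address. A self-contained illustration of that matching logic against made-up "status 'simple'" output:

import re

# Self-contained illustration of the matching logic above, run against
# made-up output resembling "status 'simple'" (the real format varies by
# HBase version).
sample_out = """3 live servers
    rs1.example.com:16020 1669879000000
        requestsPerSecond=0.0, numberOfOnlineRegions=2
    RS2.EXAMPLE.COM:16020 1669879000001
"""

def registered(hostname, out):
  # same approach as is_regionserver_registered: the hostname followed by a
  # colon indicates a bound port; note the hostname is used as a regex, so
  # its dots happen to match any character
  return re.search(hostname + ":", out, re.IGNORECASE) is not None

print(registered("rs1.example.com", sample_out))  # True
print(registered("rs2.example.com", sample_out))  # True, thanks to IGNORECASE
print(registered("rs3.example.com", sample_out))  # False -> the caller raises Fail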
+ 133 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2

@@ -0,0 +1,133 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+#
+# Make sure you know whether you are using ganglia 3.0 or 3.1.
+# If 3.1, you will have to patch your hadoop instance with HADOOP-4675
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+#
+# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
+
+# HBase-specific configuration to reset long-running stats (e.g. compactions)
+# If this variable is left out, then the default is no expiration.
+hbase.extendedperiod = 3600
+
+{% if has_metric_collector %}
+
+*.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
+*.sink.timeline.slave.host.name={{hostname}}
+
+hbase.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+hbase.period={{metrics_collection_period}}
+hbase.collector.hosts={{ams_collector_hosts}}
+hbase.protocol={{metric_collector_protocol}}
+hbase.port={{metric_collector_port}}
+
+jvm.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+jvm.period={{metrics_collection_period}}
+jvm.collector.hosts={{ams_collector_hosts}}
+jvm.protocol={{metric_collector_protocol}}
+jvm.port={{metric_collector_port}}
+
+rpc.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+rpc.period={{metrics_collection_period}}
+rpc.collector.hosts={{ams_collector_hosts}}
+rpc.protocol={{metric_collector_protocol}}
+rpc.port={{metric_collector_port}}
+
+hbase.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+hbase.sink.timeline.period={{metrics_collection_period}}
+hbase.sink.timeline.sendInterval={{metrics_report_interval}}000
+hbase.sink.timeline.collector.hosts={{ams_collector_hosts}}
+hbase.sink.timeline.protocol={{metric_collector_protocol}}
+hbase.sink.timeline.port={{metric_collector_port}}
+hbase.sink.timeline.instanceId = {{cluster_name}}
+hbase.sink.timeline.set.instanceId = {{set_instanceId}}
+hbase.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+hbase.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+hbase.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
+
+# HTTPS properties
+hbase.sink.timeline.truststore.path = {{metric_truststore_path}}
+hbase.sink.timeline.truststore.type = {{metric_truststore_type}}
+hbase.sink.timeline.truststore.password = {{metric_truststore_password}}
+
+{% endif %}
+
+{% if has_ganglia_server %}
+
+# Configuration of the "hbase" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+hbase.period=10
+hbase.servers={{ganglia_server_host}}:8663
+
+# Configuration of the "jvm" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+jvm.period=10
+jvm.servers={{ganglia_server_host}}:8663
+
+# Configuration of the "rpc" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+rpc.period=10
+rpc.servers={{ganglia_server_host}}:8663
+
+#Ganglia following hadoop example
+hbase.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
+hbase.sink.ganglia.period=10
+
+# default for supportsparse is false
+*.sink.ganglia.supportsparse=true
+
+.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
+.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
+
+hbase.sink.ganglia.servers={{ganglia_server_host}}:8663
+
+{% endif %}
+
+# Disable HBase metrics for regions/tables/regionservers by default.
+*.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
+hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*

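To see which sink block such a template actually emits, it can be rendered with plain jinja2 (Ambari renders it through its own Template resource; every value below is made up for illustration):

# Illustration only: rendering the template with plain jinja2 to inspect the
# emitted sink block. Ambari uses its own Template resource; every value
# below is made up.
from jinja2 import Template

with open("hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2") as f:
  template = Template(f.read())

print(template.render(
  has_metric_collector=True,
  has_ganglia_server=False,
  is_aggregation_https_enabled=False,
  hostname="master1.example.com",
  metrics_collection_period=10,
  ams_collector_hosts="ams.example.com",
  metric_collector_protocol="http",
  metric_collector_port="6188",
  metrics_report_interval=60,
  cluster_name="c1",
  set_instanceId="false",
  host_in_memory_aggregation="false",
  host_in_memory_aggregation_port="61888",
  metric_truststore_path="",
  metric_truststore_type="",
  metric_truststore_password=""))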
+ 131 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2

@@ -0,0 +1,131 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+#
+# Make sure you know whether you are using ganglia 3.0 or 3.1.
+# If 3.1, you will have to patch your hadoop instance with HADOOP-4675
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+#
+# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
+
+# HBase-specific configuration to reset long-running stats (e.g. compactions)
+# If this variable is left out, then the default is no expiration.
+hbase.extendedperiod = 3600
+
+{% if has_metric_collector %}
+
+*.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
+*.sink.timeline.slave.host.name={{hostname}}
+hbase.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+hbase.period={{metrics_collection_period}}
+hbase.collector.hosts={{ams_collector_hosts}}
+hbase.protocol={{metric_collector_protocol}}
+hbase.port={{metric_collector_port}}
+
+jvm.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+jvm.period={{metrics_collection_period}}
+jvm.collector.hosts={{ams_collector_hosts}}
+jvm.protocol={{metric_collector_protocol}}
+jvm.port={{metric_collector_port}}
+
+rpc.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+rpc.period={{metrics_collection_period}}
+rpc.collector.hosts={{ams_collector_hosts}}
+rpc.protocol={{metric_collector_protocol}}
+rpc.port={{metric_collector_port}}
+
+hbase.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+hbase.sink.timeline.period={{metrics_collection_period}}
+hbase.sink.timeline.sendInterval={{metrics_report_interval}}000
+hbase.sink.timeline.collector.hosts={{ams_collector_hosts}}
+hbase.sink.timeline.protocol={{metric_collector_protocol}}
+hbase.sink.timeline.port={{metric_collector_port}}
+hbase.sink.timeline.instanceId = {{cluster_name}}
+hbase.sink.timeline.set.instanceId = {{set_instanceId}}
+hbase.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+hbase.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+hbase.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
+
+# HTTPS properties
+hbase.sink.timeline.truststore.path = {{metric_truststore_path}}
+hbase.sink.timeline.truststore.type = {{metric_truststore_type}}
+hbase.sink.timeline.truststore.password = {{metric_truststore_password}}
+
+{% endif %}
+
+{% if has_ganglia_server %}
+
+# Configuration of the "hbase" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+hbase.period=10
+hbase.servers={{ganglia_server_host}}:8656
+
+# Configuration of the "jvm" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+jvm.period=10
+jvm.servers={{ganglia_server_host}}:8656
+
+# Configuration of the "rpc" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+rpc.period=10
+rpc.servers={{ganglia_server_host}}:8656
+
+#Ganglia following hadoop example
+hbase.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
+hbase.sink.ganglia.period=10
+
+# default for supportsparse is false
+*.sink.ganglia.supportsparse=true
+
+.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
+.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
+
+hbase.sink.ganglia.servers={{ganglia_server_host}}:8656
+
+{% endif %}
+
+# Disable HBase metrics for regions/tables/regionservers by default.
+*.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
+hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*

+ 44 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase-smoke.sh.j2

@@ -0,0 +1,44 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#
+disable 'ambarismoketest'
+drop 'ambarismoketest'
+create 'ambarismoketest','family'
+put 'ambarismoketest','row01','family:col01','{{service_check_data}}'
+scan 'ambarismoketest'
+exit

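The smoke script disables and drops any leftover ambarismoketest table, recreates it, writes a row carrying the generated service_check_data value, and scans it back. A sketch of how a rendered copy is typically piped through the HBase shell during the service check (the real logic lives in service_check.py, which is not part of this excerpt; the path below is illustrative):

# Sketch only (the actual wiring lives in the HBASE service_check.py, which
# is not part of this excerpt): materialize the rendered template, then run
# it through the hbase shell as the smoke-test user. The path is illustrative.
from resource_management.core.resources.system import Execute, File
from resource_management.core.source import Template

def run_hbase_smoke(params):
  smoke_file = "/tmp/hbase-smoke.sh"
  File(smoke_file, content=Template("hbase-smoke.sh.j2"), mode=0o755)
  Execute("%s shell %s" % (params.hbase_cmd, smoke_file),
          user=params.smoke_test_user, tries=3, try_sleep=5)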
+ 35 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase.conf.j2

@@ -0,0 +1,35 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+{{hbase_user}}   - nofile   {{hbase_user_nofile_limit}}
+{{hbase_user}}   - nproc    {{hbase_user_nproc_limit}}

+ 23 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_client_jaas.conf.j2

@@ -0,0 +1,23 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=false
+useTicketCache=true;
+};

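With useKeyTab=false and useTicketCache=true, this client JAAS section relies on an existing Kerberos ticket (for example from kinit) instead of a keytab. The rendered file only takes effect when the JVM is started with java.security.auth.login.config pointing at it, which Ambari wires through HBASE_OPTS in the hbase-env template (not shown in this excerpt); the property itself looks like:

# Illustration only: the rendered JAAS file takes effect via this JVM
# property; the path below is a placeholder.
jaas_path = "/etc/hbase/conf/hbase_client_jaas.conf"
print("-Djava.security.auth.login.config=%s" % jaas_path)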
+ 39 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_grant_permissions.j2

@@ -0,0 +1,39 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#
+grant '{{smoke_test_user}}', '{{smokeuser_permissions}}'
+exit

+ 36 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_master_jaas.conf.j2

@@ -0,0 +1,36 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{master_keytab_path}}"
+principal="{{master_jaas_princ}}";
+};
+com.sun.security.jgss.krb5.initiate {
+com.sun.security.auth.module.Krb5LoginModule required
+renewTGT=false
+doNotPrompt=true
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{master_keytab_path}}"
+principal="{{master_jaas_princ}}";
+};

+ 26 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_queryserver_jaas.conf.j2

@@ -0,0 +1,26 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{queryserver_keytab_path}}"
+principal="{{queryserver_jaas_princ}}";
+};

+ 36 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/hbase_regionserver_jaas.conf.j2

@@ -0,0 +1,36 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{regionserver_keytab_path}}"
+principal="{{regionserver_jaas_princ}}";
+};
+com.sun.security.jgss.krb5.initiate {
+com.sun.security.auth.module.Krb5LoginModule required
+renewTGT=false
+doNotPrompt=true
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{regionserver_keytab_path}}"
+principal="{{regionserver_jaas_princ}}";
+};

+ 79 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/input.config-hbase.json.j2

@@ -0,0 +1,79 @@
+{#
+ # Licensed to the Apache Software Foundation (ASF) under one
+ # or more contributor license agreements.  See the NOTICE file
+ # distributed with this work for additional information
+ # regarding copyright ownership.  The ASF licenses this file
+ # to you under the Apache License, Version 2.0 (the
+ # "License"); you may not use this file except in compliance
+ # with the License.  You may obtain a copy of the License at
+ #
+ #   http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #}
+{
+  "input":[
+    {
+      "type":"hbase_master",
+      "rowtype":"service",
+      "path":"{{default('/configurations/hbase-env/hbase_log_dir', '/var/log/hbase')}}/hbase-*-master-*.log"
+    },
+    {
+      "type":"hbase_regionserver",
+      "rowtype":"service",
+      "path":"{{default('/configurations/hbase-env/hbase_log_dir', '/var/log/hbase')}}/hbase-*-regionserver-*.log"
+    },
+    {
+      "type":"hbase_phoenix_server",
+      "rowtype":"service",
+      "path":"{{default('/configurations/hbase-env/hbase_log_dir', '/var/log/hbase')}}/phoenix-*server.log"
+    }
+  ],
+  "filter":[
+    {
+      "filter":"grok",
+      "conditions":{
+        "fields":{
+          "type":[
+            "hbase_master",
+            "hbase_regionserver"
+          ]
+        }
+      },
+      "log4j_format":"%d{ISO8601} %-5p [%t] %c{2}: %m%n",
+      "multiline_pattern":"^(%{TIMESTAMP_ISO8601:logtime})",
+      "message_pattern":"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOGLEVEL:level}%{SPACE}\\[%{DATA:thread_name}\\]%{SPACE}%{JAVACLASS:logger_name}:%{SPACE}%{GREEDYDATA:log_message}",
+      "post_map_values":{
+        "logtime":{
+          "map_date":{
+            "target_date_pattern":"yyyy-MM-dd HH:mm:ss,SSS"
+          }
+        }
+      }
+    },
+    {
+      "filter":"grok",
+      "conditions":{
+        "fields":{
+          "type":[
+            "hbase_phoenix_server"
+          ]
+        }
+      },
+      "log4j_format":"%d{ISO8601} %-5p [%t] %c{2}: %m%n",
+      "multiline_pattern":"^(%{TIMESTAMP_ISO8601:logtime})",
+      "message_pattern":"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOGLEVEL:level}%{SPACE}%{JAVACLASS:logger_name}:%{SPACE}%{GREEDYDATA:log_message}",
+      "post_map_values":{
+        "logtime":{
+          "map_date":{
+            "target_date_pattern":"yyyy-MM-dd HH:mm:ss,SSS"
+          }
+        }
+      }
+    }
+  ]
+}

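The grok message_pattern above mirrors the log4j layout %d{ISO8601} %-5p [%t] %c{2}: %m%n. A simplified Python-regex stand-in for a sample master log line (grok macros such as TIMESTAMP_ISO8601 and JAVACLASS expand to more permissive expressions than these):

import re

# Simplified stand-in for the grok pattern above; real grok macros such as
# TIMESTAMP_ISO8601 and JAVACLASS expand to more permissive expressions.
LOG_RE = re.compile(
  r"^(?P<logtime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"
  r"(?P<level>[A-Z]+)\s+"
  r"\[(?P<thread_name>[^\]]+)\]\s+"
  r"(?P<logger_name>[\w.$]+):\s+"
  r"(?P<log_message>.*)$", re.DOTALL)

sample = ("2022-11-01 10:15:30,123 INFO  [main] master.HMaster: "
          "Master has completed initialization")
print(LOG_RE.match(sample).groupdict())  # keys mirror the grok capture names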
+ 20 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/package/templates/regionservers.j2

@@ -0,0 +1,20 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+{% for host in rs_hosts %}{{host}}
+{% endfor %}

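Rendered, the loop emits one RegionServer host per line, which is the format HBase expects for its regionservers file. A quick check with plain jinja2 (illustrative; Ambari renders this through its own Template resource):

from jinja2 import Template

t = Template("{% for host in rs_hosts %}{{host}}\n{% endfor %}")
print(t.render(rs_hosts=["rs1.example.com", "rs2.example.com"]))
# rs1.example.com
# rs2.example.com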
+ 103 - 0
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HBASE/quicklinks/quicklinks.json

@@ -0,0 +1,103 @@
+{
+  "name": "default",
+  "description": "default quick links configuration",
+  "configuration": {
+    "protocol":
+    {
+      "type":"http"
+    },
+
+    "links": [
+      {
+        "name": "hbase_master_ui",
+        "label": "HBase Master UI",
+        "component_name": "HBASE_MASTER",
+        "url":"%@://%@:%@/master-status",
+        "requires_user_name": "false",
+        "port":{
+          "http_property": "hbase.master.info.port",
+          "http_default_port": "60010",
+          "https_property": "hbase.master.info.port",
+          "https_default_port": "60443",
+          "regex": "",
+          "site": "hbase-site"
+        }
+      },
+      {
+        "name": "hbase_logs",
+        "label": "HBase Logs",
+        "component_name": "HBASE_MASTER",
+        "url":"%@://%@:%@/logs",
+        "requires_user_name": "false",
+        "port":{
+          "http_property": "hbase.master.info.port",
+          "http_default_port": "60010",
+          "https_property": "hbase.master.info.port",
+          "https_default_port": "60443",
+          "regex": "",
+          "site": "hbase-site"
+        }
+      },
+      {
+        "name": "zookeeper_info",
+        "label": "Zookeeper Info",
+        "component_name": "HBASE_MASTER",
+        "url":"%@://%@:%@/zk.jsp",
+        "requires_user_name": "false",
+        "port":{
+          "http_property": "hbase.master.info.port",
+          "http_default_port": "60010",
+          "https_property": "hbase.master.info.port",
+          "https_default_port": "60443",
+          "regex": "",
+          "site": "hbase-site"
+        }
+      },
+      {
+        "name": "hbase_master_jmx",
+        "label": "HBase Master JMX",
+        "component_name": "HBASE_MASTER",
+        "url":"%@://%@:%@/jmx",
+        "requires_user_name": "false",
+        "port":{
+          "http_property": "hbase.master.info.port",
+          "http_default_port": "60010",
+          "https_property": "hbase.master.info.port",
+          "https_default_port": "60443",
+          "regex": "",
+          "site": "hbase-site"
+        }
+      },
+      {
+        "name": "debug_dump",
+        "label": "Debug Dump",
+        "component_name": "HBASE_MASTER",
+        "url":"%@://%@:%@/dump",
+        "requires_user_name": "false",
+        "port":{
+          "http_property": "hbase.master.info.port",
+          "http_default_port": "60010",
+          "https_property": "hbase.master.info.port",
+          "https_default_port": "60443",
+          "regex": "",
+          "site": "hbase-site"
+        }
+      },
+      {
+        "name": "thread_stacks",
+        "label": "Thread Stacks",
+        "component_name": "HBASE_MASTER",
+        "url":"%@://%@:%@/stacks",
+        "requires_user_name": "false",
+        "port":{
+          "http_property": "hbase.master.info.port",
+          "http_default_port": "60010",
+          "https_property": "hbase.master.info.port",
+          "https_default_port": "60443",
+          "regex": "",
+          "site": "hbase-site"
+        }
+      }
+    ]
+  }
+}

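Each %@ placeholder in the url fields is filled positionally with protocol, host, and port, where the port is resolved from hbase-site via http_property/https_property with the listed defaults. A hedged sketch of that substitution (the real resolution, including regex handling, is done by Ambari Web; the helper and values are illustrative):

# Hedged sketch of the positional "%@" substitution; the real resolution,
# including regex handling, is done by Ambari Web. Helper and values are
# illustrative.
def resolve_quicklink(url_template, protocol, host, hbase_site, port_cfg):
  prop = port_cfg["%s_property" % protocol]
  default_port = port_cfg["%s_default_port" % protocol]
  port = hbase_site.get(prop, default_port)
  for value in (protocol, host, port):
    url_template = url_template.replace("%@", str(value), 1)
  return url_template

port_cfg = {"http_property": "hbase.master.info.port",
            "http_default_port": "60010",
            "https_property": "hbase.master.info.port",
            "https_default_port": "60443"}
print(resolve_quicklink("%@://%@:%@/master-status", "http",
                        "master1.example.com", {}, port_cfg))
# http://master1.example.com:60010/master-status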
Too many files were changed in this changeset, so some files are not shown.