~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~ http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.

  ---
  Hadoop in Secure Mode
  ---
  ---
  ${maven.build.timestamp}

%{toc|section=0|fromDepth=0|toDepth=3}

Hadoop in Secure Mode

* Introduction

  This document describes how to configure authentication for Hadoop in
  secure mode.

  By default Hadoop runs in non-secure mode in which no actual
  authentication is required.
  By configuring Hadoop to run in secure mode,
  each user and service needs to be authenticated by Kerberos
  in order to use Hadoop services.

  Security features of Hadoop consist of
  {{{Authentication}authentication}},
  {{{./ServiceLevelAuth.html}service level authorization}},
  {{{./HttpAuthentication.html}authentication for Web consoles}}
  and {{{Data confidentiality}data confidentiality}}.

* Authentication

** End User Accounts

  When service level authentication is turned on,
  end users using Hadoop in secure mode need to be authenticated by Kerberos.
  The simplest way to authenticate is to use the <<<kinit>>> command of Kerberos.

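  For example, assuming a hypothetical user principal <<<alice@REALM.TLD>>>
  (following the realm used in the keytab examples below), a ticket can be
  obtained and then inspected as follows:

----
$ kinit alice@REALM.TLD
$ klist
----
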
** User Accounts for Hadoop Daemons

  Ensure that HDFS and YARN daemons run as different Unix users,
  e.g. <<<hdfs>>> and <<<yarn>>>.
  Also, ensure that the MapReduce JobHistory server runs as
  a different user such as <<<mapred>>>.

  It's recommended to have them share a Unix group, e.g. <<<hadoop>>>.
  See also "{{Mapping from user to group}}" for group management.

*---------------+----------------------------------------------------------------------+
|| User:Group   || Daemons                                                              |
*---------------+----------------------------------------------------------------------+
| hdfs:hadoop   | NameNode, Secondary NameNode, JournalNode, DataNode                   |
*---------------+----------------------------------------------------------------------+
| yarn:hadoop   | ResourceManager, NodeManager                                          |
*---------------+----------------------------------------------------------------------+
| mapred:hadoop | MapReduce JobHistory Server                                           |
*---------------+----------------------------------------------------------------------+

** Kerberos principals for Hadoop Daemons and Users

  For running Hadoop service daemons in secure mode,
  Kerberos principals are required.
  Each service reads its authentication credentials from a keytab file
  protected by appropriate file permissions.
  HTTP web consoles should be served by a principal different from the
  RPC principal.
  The subsections below show examples of credentials for Hadoop services.

*** HDFS

  The NameNode keytab file, on the NameNode host, should look like the
  following:

----
$ klist -e -k -t /etc/security/keytab/nn.service.keytab
Keytab name: FILE:/etc/security/keytab/nn.service.keytab
KVNO Timestamp         Principal
   4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

  The Secondary NameNode keytab file, on that host, should look like the
  following:

----
$ klist -e -k -t /etc/security/keytab/sn.service.keytab
Keytab name: FILE:/etc/security/keytab/sn.service.keytab
KVNO Timestamp         Principal
   4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

  The DataNode keytab file, on each host, should look like the following:

----
$ klist -e -k -t /etc/security/keytab/dn.service.keytab
Keytab name: FILE:/etc/security/keytab/dn.service.keytab
KVNO Timestamp         Principal
   4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

*** YARN

  The ResourceManager keytab file, on the ResourceManager host, should look
  like the following:

----
$ klist -e -k -t /etc/security/keytab/rm.service.keytab
Keytab name: FILE:/etc/security/keytab/rm.service.keytab
KVNO Timestamp         Principal
   4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

  The NodeManager keytab file, on each host, should look like the following:

----
$ klist -e -k -t /etc/security/keytab/nm.service.keytab
Keytab name: FILE:/etc/security/keytab/nm.service.keytab
KVNO Timestamp         Principal
   4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

*** MapReduce JobHistory Server

  The MapReduce JobHistory Server keytab file, on that host, should look
  like the following:

----
$ klist -e -k -t /etc/security/keytab/jhs.service.keytab
Keytab name: FILE:/etc/security/keytab/jhs.service.keytab
KVNO Timestamp         Principal
   4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

** Mapping from Kerberos principal to OS user account

  Hadoop maps a Kerberos principal to an OS user account using
  the rule specified by <<<hadoop.security.auth_to_local>>>,
  which works in the same way as the <<<auth_to_local>>> in the
  {{{http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html}Kerberos configuration file (krb5.conf)}}.
  In addition, Hadoop <<<auth_to_local>>> mapping supports the <</L>> flag that
  lowercases the returned name.

  By default, it picks the first component of the principal name as the user name
  if the realm matches the <<<default_realm>>> (usually defined in /etc/krb5.conf).
  For example, <<<host/full.qualified.domain.name@REALM.TLD>>> is mapped to <<<host>>>
  by the default rule.

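  As a minimal sketch (assuming the realm <REALM.TLD> and the daemon
  principals and users shown above), custom rules can be set in
  core-site.xml to map each service principal to its daemon user:

----
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0]([nsd]n@.*REALM.TLD)s/.*/hdfs/
    RULE:[2:$1@$0]([rn]m@.*REALM.TLD)s/.*/yarn/
    RULE:[2:$1@$0](jhs@.*REALM.TLD)s/.*/mapred/
    DEFAULT
  </value>
</property>
----

  Here the first rule maps the nn, sn and dn principals to the <<<hdfs>>>
  user, the second maps rm and nm to <<<yarn>>>, and the third maps jhs to
  <<<mapred>>>; anything else falls through to the default rule.
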
** Mapping from user to group

  Though files on HDFS are associated with an owner and a group,
  Hadoop itself has no built-in definition of groups.
  Mapping from user to group is done by the OS or LDAP.

  You can change the mapping by
  specifying the name of the mapping provider as the value of
  <<<hadoop.security.group.mapping>>>.
  See the {{{../hadoop-hdfs/HdfsPermissionsGuide.html}HDFS Permissions Guide}} for details.

  In practice, you need to manage an SSO environment using Kerberos with LDAP
  for Hadoop in secure mode.

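  For example, to resolve groups against an LDAP server instead of the OS,
  the provider can be switched in core-site.xml. This is a sketch only:
  <<<LdapGroupsMapping>>> additionally requires LDAP-specific settings
  (server URL, bind user, search bases), which are omitted here.

----
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
----
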
** Proxy user

  Some products, such as Apache Oozie, which access the services of Hadoop
  on behalf of end users, need to be able to impersonate end users.
  See {{{./Superusers.html}the proxy user documentation}} for details.

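  For illustration, a superuser <<<oozie>>> could be allowed to impersonate
  members of <<<group1>>> from a single host with the following core-site.xml
  entries (the host and group names are placeholders):

----
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>oozie.host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>group1</value>
</property>
----
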
** Secure DataNode

  Because the data transfer protocol of the DataNode
  does not use the RPC framework of Hadoop,
  the DataNode must authenticate itself
  using privileged ports, which are specified by
  <<<dfs.datanode.address>>> and <<<dfs.datanode.http.address>>>.
  This authentication is based on the assumption
  that the attacker won't be able to get root privileges.

  When you execute the <<<hdfs datanode>>> command as root,
  the server process binds the privileged ports first,
  then drops privileges and runs as the user account specified by
  <<<HADOOP_SECURE_DN_USER>>>.
  This startup process uses jsvc installed to <<<JSVC_HOME>>>.
  You must specify <<<HADOOP_SECURE_DN_USER>>> and <<<JSVC_HOME>>>
  as environment variables on start up (in hadoop-env.sh).

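  A minimal sketch of the corresponding lines in hadoop-env.sh follows;
  the jsvc location is an assumption and depends on your installation:

----
# Run the secure DataNode as the hdfs user after binding privileged ports
export HADOOP_SECURE_DN_USER=hdfs
# Directory containing the jsvc binary (placeholder path)
export JSVC_HOME=/path/to/jsvc-home
----
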
  As of version 2.6.0, SASL can be used to authenticate the data transfer
  protocol. In this configuration, it is no longer required for secured clusters
  to start the DataNode as root using jsvc and bind to privileged ports. To
  enable SASL on data transfer protocol, set <<<dfs.data.transfer.protection>>>
  in hdfs-site.xml, set a non-privileged port for <<<dfs.datanode.address>>>, set
  <<<dfs.http.policy>>> to <HTTPS_ONLY> and make sure the
  <<<HADOOP_SECURE_DN_USER>>> environment variable is not defined. Note that it
  is not possible to use SASL on data transfer protocol if
  <<<dfs.datanode.address>>> is set to a privileged port. This is required for
  backwards-compatibility reasons.

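  A sketch of these SASL settings in hdfs-site.xml; the non-privileged port
  number is an arbitrary example, and <integrity> could equally be
  <authentication> or <privacy> (see the DataNode table below):

----
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>
<property>
  <!-- Must be a non-privileged port (>= 1024) when SASL is enabled -->
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:10019</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
----
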
  In order to migrate an existing cluster that used root authentication to start
  using SASL instead, first ensure that version 2.6.0 or later has been deployed
  to all cluster nodes as well as any external applications that need to connect
  to the cluster. Only versions 2.6.0 and later of the HDFS client can connect
  to a DataNode that uses SASL for authentication of data transfer protocol, so
  it is vital that all callers have the correct version before migrating. After
  version 2.6.0 or later has been deployed everywhere, update configuration of
  any external applications to enable SASL. If an HDFS client is enabled for
  SASL, then it can connect successfully to a DataNode running with either root
  authentication or SASL authentication. Changing configuration for all clients
  guarantees that subsequent configuration changes on DataNodes will not disrupt
  the applications. Finally, each individual DataNode can be migrated by
  changing its configuration and restarting. It is acceptable to have a mix of
  some DataNodes running with root authentication and some DataNodes running with
  SASL authentication temporarily during this migration period, because an HDFS
  client enabled for SASL can connect to both.

* Data confidentiality

** Data Encryption on RPC

  The data transferred between Hadoop services and clients can be encrypted
  on the wire. Setting <<<hadoop.rpc.protection>>> to <<<"privacy">>> in
  core-site.xml activates data encryption.

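  For example, in <<<core-site.xml>>>:

----
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
----
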
** Data Encryption on Block data transfer

  You need to set <<<dfs.encrypt.data.transfer>>> to <<<"true">>> in hdfs-site.xml
  in order to activate data encryption for the data transfer protocol of the DataNode.

  Optionally, you may set <<<dfs.encrypt.data.transfer.algorithm>>> to either
  "3des" or "rc4" to choose the specific encryption algorithm. If unspecified,
  then the configured JCE default on the system is used, which is usually 3DES.

  Setting <<<dfs.encrypt.data.transfer.cipher.suites>>> to
  <<<AES/CTR/NoPadding>>> activates AES encryption. By default, this is
  unspecified, so AES is not used. When AES is used, the algorithm specified in
  <<<dfs.encrypt.data.transfer.algorithm>>> is still used during an initial key
  exchange. The AES key bit length can be configured by setting
  <<<dfs.encrypt.data.transfer.cipher.key.bitlength>>> to 128, 192 or 256. The
  default is 128.

  AES offers the greatest cryptographic strength and the best performance. At
  this time, 3DES and RC4 have been used more often in Hadoop clusters.

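  A sketch of an hdfs-site.xml fragment enabling AES-based data transfer
  encryption with a 256-bit key, using the values described above:

----
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<property>
  <!-- AES is used for the bulk transfer after the initial key exchange -->
  <name>dfs.encrypt.data.transfer.cipher.suites</name>
  <value>AES/CTR/NoPadding</value>
</property>
<property>
  <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
  <value>256</value>
</property>
----
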
** Data Encryption on HTTP

  Data transfer between web consoles and clients is protected by using
  SSL (HTTPS).

* Configuration

** Permissions for both HDFS and local filesystem paths

  The following table lists various paths on HDFS and local filesystems (on
  all nodes) and recommended permissions:

*-------------------+-------------------+------------------+------------------+
|| Filesystem || Path || User:Group || Permissions |
*-------------------+-------------------+------------------+------------------+
| local | <<<dfs.namenode.name.dir>>> | hdfs:hadoop | drwx------ |
*-------------------+-------------------+------------------+------------------+
| local | <<<dfs.datanode.data.dir>>> | hdfs:hadoop | drwx------ |
*-------------------+-------------------+------------------+------------------+
| local | $HADOOP_LOG_DIR | hdfs:hadoop | drwxrwxr-x |
*-------------------+-------------------+------------------+------------------+
| local | $YARN_LOG_DIR | yarn:hadoop | drwxrwxr-x |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.local-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.log-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| local | container-executor | root:hadoop | --Sr-s--- |
*-------------------+-------------------+------------------+------------------+
| local | <<<conf/container-executor.cfg>>> | root:hadoop | r-------- |
*-------------------+-------------------+------------------+------------------+
| hdfs | / | hdfs:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| hdfs | /tmp | hdfs:hadoop | drwxrwxrwxt |
*-------------------+-------------------+------------------+------------------+
| hdfs | /user | hdfs:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| hdfs | <<<yarn.nodemanager.remote-app-log-dir>>> | yarn:hadoop | drwxrwxrwxt |
*-------------------+-------------------+------------------+------------------+
| hdfs | <<<mapreduce.jobhistory.intermediate-done-dir>>> | mapred:hadoop | |
| | | | drwxrwxrwxt |
*-------------------+-------------------+------------------+------------------+
| hdfs | <<<mapreduce.jobhistory.done-dir>>> | mapred:hadoop | |
| | | | drwxr-x--- |
*-------------------+-------------------+------------------+------------------+

** Common Configurations

  In order to turn on RPC authentication in Hadoop,
  set the value of the <<<hadoop.security.authentication>>> property to
  <<<"kerberos">>>, and set the security-related settings listed below appropriately.

  The following properties should be in the <<<core-site.xml>>> of all the
  nodes in the cluster.

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.security.authentication>>> | <kerberos> | |
| | | <<<simple>>> : No authentication. (default) \
| | | <<<kerberos>>> : Enable authentication by Kerberos. |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.security.authorization>>> | <true> | |
| | | Enable {{{./ServiceLevelAuth.html}RPC service-level authorization}}. |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.rpc.protection>>> | <authentication> | |
| | | <authentication> : authentication only (default) \
| | | <integrity> : integrity check in addition to authentication \
| | | <privacy> : data encryption in addition to integrity |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.security.auth_to_local>>> | | |
| | <<<RULE:>>><exp1>\
| | <<<RULE:>>><exp2>\
| | <...>\
| | DEFAULT |
| | | The value is a string containing new-line characters. |
| | | See the |
| | | {{{http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html}Kerberos documentation}} |
| | | for the format of <exp>. |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.proxyuser.>>><superuser><<<.hosts>>> | | |
| | | Comma-separated list of hosts from which <superuser> is allowed |
| | | to impersonate users. |
| | | <<<*>>> means wildcard. |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.proxyuser.>>><superuser><<<.groups>>> | | |
| | | Comma-separated list of groups to which the users impersonated by |
| | | <superuser> belong. |
| | | <<<*>>> means wildcard. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/core-site.xml>>>

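  As an illustrative <<<core-site.xml>>> fragment built from the values in
  the table above:

----
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>
----
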
** NameNode

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.block.access.token.enable>>> | <true> | |
| | | Enable HDFS block access tokens for secure operations. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.https.enable>>> | <true> | |
| | | This value is deprecated. Use dfs.http.policy |
*-------------------------+-------------------------+------------------------+
| <<<dfs.http.policy>>> | <HTTP_ONLY> or <HTTPS_ONLY> or <HTTP_AND_HTTPS> | |
| | | HTTPS_ONLY turns off http access. This option takes precedence over |
| | | the deprecated configuration dfs.https.enable and hadoop.ssl.enabled. |
| | | If using SASL to authenticate data transfer protocol instead of |
| | | running DataNode as root and using privileged ports, then this property |
| | | must be set to <HTTPS_ONLY> to guarantee authentication of HTTP servers. |
| | | (See <<<dfs.data.transfer.protection>>>.) |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.https-address>>> | <nn_host_fqdn:50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.https.port>>> | <50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.keytab.file>>> | </etc/security/keytab/nn.service.keytab> | |
| | | Kerberos keytab file for the NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.kerberos.principal>>> | nn/_HOST@REALM.TLD | |
| | | Kerberos principal name for the NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.kerberos.internal.spnego.principal>>> | HTTP/_HOST@REALM.TLD | |
| | | HTTP Kerberos principal name for the NameNode. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/hdfs-site.xml>>>

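  For illustration, a corresponding <<<hdfs-site.xml>>> fragment for the
  NameNode; the keytab path and realm follow the examples above:

----
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/keytab/nn.service.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@REALM.TLD</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@REALM.TLD</value>
</property>
----
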
** Secondary NameNode

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.http-address>>> | <c_nn_host_fqdn:50090> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.https-port>>> | <50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.secondary.namenode.keytab.file>>> | | |
| | </etc/security/keytab/sn.service.keytab> | |
| | | Kerberos keytab file for the Secondary NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.secondary.namenode.kerberos.principal>>> | sn/_HOST@REALM.TLD | |
| | | Kerberos principal name for the Secondary NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.secondary.namenode.kerberos.internal.spnego.principal>>> | | |
| | HTTP/_HOST@REALM.TLD | |
| | | HTTP Kerberos principal name for the Secondary NameNode. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/hdfs-site.xml>>>

** DataNode

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.data.dir.perm>>> | 700 | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.address>>> | <0.0.0.0:1004> | |
| | | Secure DataNode must use a privileged port |
| | | in order to assure that the server was started securely. |
| | | This means that the server must be started via jsvc. |
| | | Alternatively, this must be set to a non-privileged port if using SASL |
| | | to authenticate data transfer protocol. |
| | | (See <<<dfs.data.transfer.protection>>>.) |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.http.address>>> | <0.0.0.0:1006> | |
| | | Secure DataNode must use a privileged port |
| | | in order to assure that the server was started securely. |
| | | This means that the server must be started via jsvc. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.https.address>>> | <0.0.0.0:50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.keytab.file>>> | </etc/security/keytab/dn.service.keytab> | |
| | | Kerberos keytab file for the DataNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.kerberos.principal>>> | dn/_HOST@REALM.TLD | |
| | | Kerberos principal name for the DataNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.encrypt.data.transfer>>> | <false> | |
| | | Set to <<<true>>> when using data encryption. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.encrypt.data.transfer.algorithm>>> | | |
| | | Optionally set to <<<3des>>> or <<<rc4>>> when using data encryption to |
| | | control the encryption algorithm. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.encrypt.data.transfer.cipher.suites>>> | | |
| | | Optionally set to <<<AES/CTR/NoPadding>>> to activate AES encryption |
| | | when using data encryption. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.encrypt.data.transfer.cipher.key.bitlength>>> | | |
| | | Optionally set to <<<128>>>, <<<192>>> or <<<256>>> to control the key |
| | | bit length when using AES with data encryption. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.data.transfer.protection>>> | | |
| | | <authentication> : authentication only \
| | | <integrity> : integrity check in addition to authentication \
| | | <privacy> : data encryption in addition to integrity |
| | | This property is unspecified by default. Setting this property enables |
| | | SASL for authentication of data transfer protocol. If this is enabled, |
| | | then <<<dfs.datanode.address>>> must use a non-privileged port, |
| | | <<<dfs.http.policy>>> must be set to <HTTPS_ONLY> and the |
| | | <<<HADOOP_SECURE_DN_USER>>> environment variable must be undefined when |
| | | starting the DataNode process. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/hdfs-site.xml>>>

** WebHDFS

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.web.authentication.kerberos.principal>>> | http/_HOST@REALM.TLD | |
| | | Kerberos principal name for WebHDFS. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.web.authentication.kerberos.keytab>>> | </etc/security/keytab/http.service.keytab> | |
| | | Kerberos keytab file for WebHDFS. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/hdfs-site.xml>>>

** ResourceManager

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.keytab>>> | | |
| | </etc/security/keytab/rm.service.keytab> | |
| | | Kerberos keytab file for the ResourceManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.principal>>> | rm/_HOST@REALM.TLD | |
| | | Kerberos principal name for the ResourceManager. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/yarn-site.xml>>>

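  An illustrative <<<yarn-site.xml>>> fragment using the values above:

----
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/security/keytab/rm.service.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>rm/_HOST@REALM.TLD</value>
</property>
----
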
** NodeManager

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.keytab>>> | </etc/security/keytab/nm.service.keytab> | |
| | | Kerberos keytab file for the NodeManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.principal>>> | nm/_HOST@REALM.TLD | |
| | | Kerberos principal name for the NodeManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.container-executor.class>>> | | |
| | <<<org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor>>> | |
| | | Use LinuxContainerExecutor. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.linux-container-executor.group>>> | <hadoop> | |
| | | Unix group of the NodeManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.linux-container-executor.path>>> | </path/to/bin/container-executor> | |
| | | The path to the executable of Linux container executor. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/yarn-site.xml>>>

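  And a corresponding <<<yarn-site.xml>>> sketch for the NodeManager,
  assuming the <<<LinuxContainerExecutor>>> described below is used:

----
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/security/keytab/nm.service.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>nm/_HOST@REALM.TLD</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
----
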
** Configuration for WebAppProxy

  The <<<WebAppProxy>>> provides a proxy between the web applications
  exported by an application and an end user. If security is enabled
  it will warn users before accessing a potentially unsafe web application.
  Authentication and authorization using the proxy is handled just like
  any other privileged web application.

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.web-proxy.address>>> | | |
| | <<<WebAppProxy>>> host:port for proxy to AM web apps. | |
| | | <host:port> If this is the same as <<<yarn.resourcemanager.webapp.address>>> |
| | | or it is not defined, then the <<<ResourceManager>>> will run the proxy; |
| | | otherwise a standalone proxy server will need to be launched. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.web-proxy.keytab>>> | | |
| | </etc/security/keytab/web-app.service.keytab> | |
| | | Kerberos keytab file for the WebAppProxy. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.web-proxy.principal>>> | wap/_HOST@REALM.TLD | |
| | | Kerberos principal name for the WebAppProxy. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/yarn-site.xml>>>

** LinuxContainerExecutor

  A <<<ContainerExecutor>>> is used by the YARN framework to define how a
  <container> is launched and controlled.

  The following container executors are available in Hadoop YARN:

*--------------------------------------+--------------------------------------+
|| ContainerExecutor || Description |
*--------------------------------------+--------------------------------------+
| <<<DefaultContainerExecutor>>> | |
| | The default executor which YARN uses to manage container execution. |
| | The container process runs as the same Unix user as the NodeManager. |
*--------------------------------------+--------------------------------------+
| <<<LinuxContainerExecutor>>> | |
| | Supported only on GNU/Linux, this executor runs the containers as either the |
| | YARN user who submitted the application (when full security is enabled) or |
| | as a dedicated user (defaults to nobody) when full security is not enabled. |
| | When full security is enabled, this executor requires all user accounts to be |
| | created on the cluster nodes where the containers are launched. It uses |
| | a <setuid> executable that is included in the Hadoop distribution. |
| | The NodeManager uses this executable to launch and kill containers. |
| | The setuid executable switches to the user who has submitted the |
| | application and launches or kills the containers. For maximum security, |
| | this executor sets up restricted permissions and user/group ownership of |
| | local files and directories used by the containers such as the shared |
| | objects, jars, intermediate files, log files etc. Particularly note that, |
| | because of this, no user other than the application owner and the |
| | NodeManager can access any of the local files/directories, including those |
| | localized as part of the distributed cache. |
*--------------------------------------+--------------------------------------+

  To build the LinuxContainerExecutor executable run:

----
$ mvn package -Dcontainer-executor.conf.dir=/etc/hadoop/
----

  The path passed in <<<-Dcontainer-executor.conf.dir>>> should be the
  path on the cluster nodes where a configuration file for the setuid
  executable should be located. The executable should be installed in
  $HADOOP_YARN_HOME/bin.

  The executable must have specific permissions: 6050 or --Sr-s---
  permissions user-owned by <root> (super-user) and group-owned by a
  special group (e.g. <<<hadoop>>>) of which the NodeManager Unix user is
  the group member and no ordinary application user is. If any application
  user belongs to this special group, security will be compromised. This
  special group name should be specified for the configuration property
  <<<yarn.nodemanager.linux-container-executor.group>>> in both
  <<<conf/yarn-site.xml>>> and <<<conf/container-executor.cfg>>>.

  For example, let's say that the NodeManager is run as user <yarn> who is
  part of the groups <users> and <hadoop>, either of them being the primary group.
  Let also be that <users> has both <yarn> and another user
  (application submitter) <alice> as its members, and <alice> does not
  belong to <hadoop>. Going by the above description, the setuid/setgid
  executable should be set 6050 or --Sr-s--- with user-owner as <root> and
  group-owner as <hadoop>, which has <yarn> as its member (and not <users>,
  which has <alice> also as its member besides <yarn>).

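  A minimal sketch of setting this ownership and mode, run as root; the
  install path is a placeholder and should match the location configured in
  <<<yarn.nodemanager.linux-container-executor.path>>>:

----
$ chown root:hadoop /path/to/bin/container-executor
$ chmod 6050 /path/to/bin/container-executor
----
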
  The LinuxContainerExecutor requires that paths including and leading up to
  the directories specified in <<<yarn.nodemanager.local-dirs>>> and
  <<<yarn.nodemanager.log-dirs>>> be set to 755 permissions as described
  above in the table on permissions on directories.

* <<<conf/container-executor.cfg>>>

  The executable requires a configuration file called
  <<<container-executor.cfg>>> to be present in the configuration
  directory passed to the mvn target mentioned above.

  The configuration file must be owned by <root> and group-owned by the
  special group (<<<hadoop>>> in the above example), consistent with the
  permissions table above, and should have the permissions 0400 or r--------.

  The executable requires the following configuration items to be present
  in the <<<conf/container-executor.cfg>>> file. The items should be
  specified as simple key=value pairs, one per line:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.linux-container-executor.group>>> | <hadoop> | |
| | | Unix group of the NodeManager. The group owner of the |
| | | <container-executor> binary should be this group. Should be same as the |
| | | value with which the NodeManager is configured. This configuration is |
| | | required for validating the secure access of the <container-executor> |
| | | binary. |
*-------------------------+-------------------------+------------------------+
| <<<banned.users>>> | hdfs,yarn,mapred,bin | Banned users. |
*-------------------------+-------------------------+------------------------+
| <<<allowed.system.users>>> | foo,bar | Allowed system users. |
*-------------------------+-------------------------+------------------------+
| <<<min.user.id>>> | 1000 | Prevent other super-users. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/container-executor.cfg>>>

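  Putting the values from the table together, a sketch of a complete
  <<<container-executor.cfg>>> might look like this (user and group names
  follow the examples above):

----
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
allowed.system.users=foo,bar
min.user.id=1000
----
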
  To re-cap, here are the local file-system permissions required for the
  various paths related to the <<<LinuxContainerExecutor>>>:

*-------------------+-------------------+------------------+------------------+
|| Filesystem || Path || User:Group || Permissions |
*-------------------+-------------------+------------------+------------------+
| local | container-executor | root:hadoop | --Sr-s--- |
*-------------------+-------------------+------------------+------------------+
| local | <<<conf/container-executor.cfg>>> | root:hadoop | r-------- |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.local-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.log-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+

** MapReduce JobHistory Server

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.address>>> | | |
| | MapReduce JobHistory Server <host:port> | Default port is 10020. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.keytab>>> | | |
| | </etc/security/keytab/jhs.service.keytab> | |
| | | Kerberos keytab file for the MapReduce JobHistory Server. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.principal>>> | jhs/_HOST@REALM.TLD | |
| | | Kerberos principal name for the MapReduce JobHistory Server. |
*-------------------------+-------------------------+------------------------+

  Configuration for <<<conf/mapred-site.xml>>>

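  For illustration, a corresponding <<<mapred-site.xml>>> fragment; the
  host name is a placeholder:

----
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>jhs.host.example.com:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/security/keytab/jhs.service.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>jhs/_HOST@REALM.TLD</value>
</property>
----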