ClusterSetup.apt.vm 54 KB

~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~   http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.

  ---
  Hadoop Map Reduce Next Generation-${project.version} - Cluster Setup
  ---
  ---
  ${maven.build.timestamp}

Hadoop MapReduce Next Generation - Cluster Setup

  \[ {{{./index.html}Go Back}} \]

%{toc|section=1|fromDepth=0}
* {Purpose}

  This document describes how to install, configure and manage non-trivial
  Hadoop clusters ranging from a few nodes to extremely large clusters
  with thousands of nodes.

  To play with Hadoop, you may first want to install it on a single
  machine (see {{{SingleCluster}Single Node Setup}}).
* {Prerequisites}

  Download a stable version of Hadoop from Apache mirrors.

* {Installation}

  Installing a Hadoop cluster typically involves unpacking the software on all
  the machines in the cluster or installing it via RPMs.

  Typically one machine in the cluster is designated as the NameNode and
  another machine as the ResourceManager, exclusively. These are the masters.

  The rest of the machines in the cluster act as both DataNode and NodeManager.
  These are the slaves.
* {Running Hadoop in Non-Secure Mode}

  The following sections describe how to configure a Hadoop cluster.

* {Configuration Files}

  Hadoop configuration is driven by two types of important configuration files:

  * Read-only default configuration - <<<core-default.xml>>>,
    <<<hdfs-default.xml>>>, <<<yarn-default.xml>>> and
    <<<mapred-default.xml>>>.

  * Site-specific configuration - <<conf/core-site.xml>>,
    <<conf/hdfs-site.xml>>, <<conf/yarn-site.xml>> and
    <<conf/mapred-site.xml>>.

  Additionally, you can control the Hadoop scripts found in the bin/
  directory of the distribution by setting site-specific values via
  <<conf/hadoop-env.sh>> and <<conf/yarn-env.sh>>.
* {Site Configuration}

  To configure the Hadoop cluster you will need to configure the
  <<<environment>>> in which the Hadoop daemons execute as well as the
  <<<configuration parameters>>> for the Hadoop daemons.

  The Hadoop daemons are NameNode/DataNode and ResourceManager/NodeManager.
* {Configuring Environment of Hadoop Daemons}

  Administrators should use the <<conf/hadoop-env.sh>> and
  <<conf/yarn-env.sh>> scripts to do site-specific customization of the
  Hadoop daemons' process environment.

  At the very least you should specify <<<JAVA_HOME>>> so that it is
  correctly defined on each remote node.

  In most cases you should also specify <<<HADOOP_PID_DIR>>> and
  <<<HADOOP_SECURE_DN_PID_DIR>>> to point to directories that can only be
  written to by the users that are going to run the Hadoop daemons.
  Otherwise there is the potential for a symlink attack.

  Administrators can configure individual daemons using the environment
  variables shown in the table below:
*--------------------------------------+--------------------------------------+
|| Daemon || Environment Variable |
*--------------------------------------+--------------------------------------+
| NameNode | HADOOP_NAMENODE_OPTS |
*--------------------------------------+--------------------------------------+
| DataNode | HADOOP_DATANODE_OPTS |
*--------------------------------------+--------------------------------------+
| Secondary NameNode | HADOOP_SECONDARYNAMENODE_OPTS |
*--------------------------------------+--------------------------------------+
| ResourceManager | YARN_RESOURCEMANAGER_OPTS |
*--------------------------------------+--------------------------------------+
| NodeManager | YARN_NODEMANAGER_OPTS |
*--------------------------------------+--------------------------------------+
| WebAppProxy | YARN_PROXYSERVER_OPTS |
*--------------------------------------+--------------------------------------+
| Map Reduce Job History Server | HADOOP_JOB_HISTORYSERVER_OPTS |
*--------------------------------------+--------------------------------------+
  For example, to configure the NameNode to use the parallel garbage
  collector, add the following line to hadoop-env.sh:

----
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC ${HADOOP_NAMENODE_OPTS}"
----
  Other useful configuration parameters that you can customize include:

  * <<<HADOOP_LOG_DIR>>> / <<<YARN_LOG_DIR>>> - The directory where the
    daemons' log files are stored. They are automatically created if they
    don't exist.

  * <<<HADOOP_HEAPSIZE>>> / <<<YARN_HEAPSIZE>>> - The maximum heap size
    to use, in MB, e.g. if the variable is set to 1000 the heap
    will be set to 1000 MB. This is used to configure the heap
    size for the daemon. By default, the value is 1000. If you want to
    configure the value separately for each daemon, you can use the
    environment variables listed below.
*--------------------------------------+--------------------------------------+
|| Daemon || Environment Variable |
*--------------------------------------+--------------------------------------+
| ResourceManager | YARN_RESOURCEMANAGER_HEAPSIZE |
*--------------------------------------+--------------------------------------+
| NodeManager | YARN_NODEMANAGER_HEAPSIZE |
*--------------------------------------+--------------------------------------+
| WebAppProxy | YARN_PROXYSERVER_HEAPSIZE |
*--------------------------------------+--------------------------------------+
| Map Reduce Job History Server | HADOOP_JOB_HISTORYSERVER_HEAPSIZE |
*--------------------------------------+--------------------------------------+
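  For example, to give the ResourceManager a larger heap than the
  NodeManagers, lines like the following could be added to
  <<conf/yarn-env.sh>> (the values here are illustrative, not
  recommendations):

```shell
# Illustrative per-daemon heap sizes, in MB; YARN_HEAPSIZE would
# otherwise apply the same value (default 1000) to every YARN daemon.
export YARN_RESOURCEMANAGER_HEAPSIZE=2000
export YARN_NODEMANAGER_HEAPSIZE=1000
```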
* {Configuring the Hadoop Daemons in Non-Secure Mode}

  This section deals with important parameters to be specified in
  the given configuration files:
  * <<<conf/core-site.xml>>>

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<fs.defaultFS>>> | NameNode URI | <hdfs://host:port/> |
*-------------------------+-------------------------+------------------------+
| <<<io.file.buffer.size>>> | 131072 | |
| | | Size of read/write buffer used in SequenceFiles. |
*-------------------------+-------------------------+------------------------+
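  Taken together, a minimal <<<conf/core-site.xml>>> using the values above
  might look like the following sketch; the hostname and port are
  placeholders to be replaced with your own:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- placeholder NameNode URI; substitute your own host:port -->
    <value>hdfs://namenode.example.com:8020</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
```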
  * <<<conf/hdfs-site.xml>>>

  * Configurations for NameNode:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.name.dir>>> | | |
| | Path on the local filesystem where the NameNode stores the namespace | |
| | and transaction logs persistently. | |
| | | If this is a comma-delimited list of directories then the name table is |
| | | replicated in all of the directories, for redundancy. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.hosts>>> / <<<dfs.namenode.hosts.exclude>>> | | |
| | List of permitted/excluded DataNodes. | |
| | | If necessary, use these files to control the list of allowable |
| | | datanodes. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.blocksize>>> | 268435456 | |
| | | HDFS blocksize of 256MB for large file-systems. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.handler.count>>> | 100 | |
| | | More NameNode server threads to handle RPCs from a large number of |
| | | DataNodes. |
*-------------------------+-------------------------+------------------------+
  * Configurations for DataNode:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.data.dir>>> | | |
| | Comma separated list of paths on the local filesystem of a | |
| | <<<DataNode>>> where it should store its blocks. | |
| | | If this is a comma-delimited list of directories, then data will be |
| | | stored in all named directories, typically on different devices. |
*-------------------------+-------------------------+------------------------+
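  As a sketch, a <<<conf/hdfs-site.xml>>> combining the NameNode and
  DataNode settings above could look like this; the directory paths are
  hypothetical examples:

```xml
<configuration>
  <!-- two name directories for redundancy; example paths only -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/grid/hadoop/hdfs/nn,/grid1/hadoop/hdfs/nn</value>
  </property>
  <!-- data directories, typically one per physical disk -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/grid/hadoop/hdfs/dn,/grid1/hadoop/hdfs/dn</value>
  </property>
  <!-- 256 MB blocks for large file-systems -->
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
</configuration>
```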
  * <<<conf/yarn-site.xml>>>

  * Configurations for ResourceManager and NodeManager:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.acl.enable>>> | | |
| | <<<true>>> / <<<false>>> | |
| | | Enable ACLs? Defaults to <false>. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.admin.acl>>> | | |
| | Admin ACL | |
| | | ACL to set admins on the cluster. |
| | | ACLs are of the form <comma-separated-users><space><comma-separated-groups>. |
| | | Defaults to special value of <<*>> which means <anyone>. |
| | | Special value of just <space> means no one has access. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.log-aggregation-enable>>> | | |
| | <false> | |
| | | Configuration to enable or disable log aggregation. |
*-------------------------+-------------------------+------------------------+
  * Configurations for ResourceManager:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.address>>> | | |
| | <<<ResourceManager>>> host:port for clients to submit jobs. | |
| | | <host:port> |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.scheduler.address>>> | | |
| | <<<ResourceManager>>> host:port for ApplicationMasters to talk to | |
| | Scheduler to obtain resources. | |
| | | <host:port> |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.resource-tracker.address>>> | | |
| | <<<ResourceManager>>> host:port for NodeManagers. | |
| | | <host:port> |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.admin.address>>> | | |
| | <<<ResourceManager>>> host:port for administrative commands. | |
| | | <host:port> |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.webapp.address>>> | | |
| | <<<ResourceManager>>> web-ui host:port. | |
| | | <host:port> |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.scheduler.class>>> | | |
| | <<<ResourceManager>>> Scheduler class. | |
| | | <<<CapacityScheduler>>> (recommended) or <<<FifoScheduler>>> |
*-------------------------+-------------------------+------------------------+
| <<<yarn.scheduler.minimum-allocation-mb>>> | | |
| | Minimum limit of memory to allocate to each container request at the <<<ResourceManager>>>. | |
| | | In MBs |
*-------------------------+-------------------------+------------------------+
| <<<yarn.scheduler.maximum-allocation-mb>>> | | |
| | Maximum limit of memory to allocate to each container request at the <<<ResourceManager>>>. | |
| | | In MBs |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.nodes.include-path>>> / | | |
| <<<yarn.resourcemanager.nodes.exclude-path>>> | | |
| | List of permitted/excluded NodeManagers. | |
| | | If necessary, use these files to control the list of allowable |
| | | NodeManagers. |
*-------------------------+-------------------------+------------------------+
  * Configurations for NodeManager:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.resource.memory-mb>>> | | |
| | Resource i.e. available physical memory, in MB, for given <<<NodeManager>>> | |
| | | Defines total available resources on the <<<NodeManager>>> to be made |
| | | available to running containers |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.vmem-pmem-ratio>>> | | |
| | Maximum ratio by which virtual memory usage of tasks may exceed | |
| | physical memory | |
| | | The virtual memory usage of each task may exceed its physical memory |
| | | limit by this ratio. The total amount of virtual memory used by tasks |
| | | on the NodeManager may exceed its physical memory usage by this ratio. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.local-dirs>>> | | |
| | Comma-separated list of paths on the local filesystem where | |
| | intermediate data is written. | |
| | | Multiple paths help spread disk i/o. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.log-dirs>>> | | |
| | Comma-separated list of paths on the local filesystem where logs | |
| | are written. | |
| | | Multiple paths help spread disk i/o. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.log.retain-seconds>>> | | |
| | <10800> | |
| | | Default time (in seconds) to retain log files on the NodeManager. |
| | | Only applicable if log-aggregation is disabled. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.remote-app-log-dir>>> | | |
| | </logs> | |
| | | HDFS directory where the application logs are moved on application |
| | | completion. Need to set appropriate permissions. |
| | | Only applicable if log-aggregation is enabled. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.remote-app-log-dir-suffix>>> | | |
| | <logs> | |
| | | Suffix appended to the remote log dir. Logs will be aggregated to |
| | | $\{yarn.nodemanager.remote-app-log-dir\}/$\{user\}/$\{thisParam\} |
| | | Only applicable if log-aggregation is enabled. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.aux-services>>> | | |
| | mapreduce.shuffle | |
| | | Shuffle service that needs to be set for Map Reduce applications. |
*-------------------------+-------------------------+------------------------+
  * Configurations for History Server (Needs to be moved elsewhere):

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.log-aggregation.retain-seconds>>> | | |
| | <-1> | |
| | | How long to keep aggregated logs before deleting them. -1 disables. |
| | | Be careful: setting this too small will spam the name node. |
*-------------------------+-------------------------+------------------------+
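  A hedged sketch of a <<<conf/yarn-site.xml>>> drawing on a few of the
  parameters above; the host, port and memory values are illustrative
  placeholders, not defaults:

```xml
<configuration>
  <!-- placeholder ResourceManager address for client job submission -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>rm.example.com:8032</value>
  </property>
  <!-- example: 8 GB of physical memory offered to containers -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- shuffle service required by MapReduce applications -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
</configuration>
```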
  * <<<conf/mapred-site.xml>>>

  * Configurations for MapReduce Applications:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.framework.name>>> | | |
| | yarn | |
| | | Execution framework set to Hadoop YARN. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.map.memory.mb>>> | 1536 | |
| | | Larger resource limit for maps. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.map.java.opts>>> | -Xmx1024M | |
| | | Larger heap-size for child jvms of maps. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.reduce.memory.mb>>> | 3072 | |
| | | Larger resource limit for reduces. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.reduce.java.opts>>> | -Xmx2560M | |
| | | Larger heap-size for child jvms of reduces. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.task.io.sort.mb>>> | 512 | |
| | | Higher memory-limit while sorting data for efficiency. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.task.io.sort.factor>>> | 100 | |
| | | More streams merged at once while sorting files. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.reduce.shuffle.parallelcopies>>> | 50 | |
| | | Higher number of parallel copies run by reduces to fetch outputs |
| | | from very large number of maps. |
*-------------------------+-------------------------+------------------------+

  * Configurations for MapReduce JobHistory Server:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.address>>> | | |
| | MapReduce JobHistory Server <host:port> | Default port is 10020. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.webapp.address>>> | | |
| | MapReduce JobHistory Server Web UI <host:port> | Default port is 19888. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.intermediate-done-dir>>> | /mr-history/tmp | |
| | | Directory where history files are written by MapReduce jobs. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.done-dir>>> | /mr-history/done | |
| | | Directory where history files are managed by the MR JobHistory Server. |
*-------------------------+-------------------------+------------------------+
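  The MapReduce settings above would appear in <<<conf/mapred-site.xml>>>
  roughly as follows; the values are copied from the table and are tuning
  examples, not requirements:

```xml
<configuration>
  <!-- run MapReduce jobs on YARN rather than the classic framework -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- example larger resource limits for map and reduce tasks -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
</configuration>
```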
* Hadoop Rack Awareness

  The HDFS and YARN components are rack-aware.

  The NameNode and the ResourceManager obtain the rack information of the
  slaves in the cluster by invoking an API <resolve> in an
  administrator-configured module.

  The API resolves the DNS name (or IP address) of a slave to a rack id.

  The site-specific module to use can be configured via the configuration
  item <<<topology.node.switch.mapping.impl>>>. The default implementation
  runs a script/command configured using
  <<<topology.script.file.name>>>. If <<<topology.script.file.name>>> is
  not set, the rack id </default-rack> is returned for any passed IP address.
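  As an illustration only (not part of the Hadoop distribution), a script
  configured via <<<topology.script.file.name>>> could map IP prefixes to
  rack ids with a shell function like this; the subnets and rack names are
  hypothetical:

```shell
# Hypothetical rack resolver: maps each argument (hostname or IP)
# to a rack id, one per line, which is the output shape a topology
# script is expected to produce.
map_to_rack() {
  case "$1" in
    10.1.*) echo "/rack1" ;;   # nodes in the 10.1.x.x subnet
    10.2.*) echo "/rack2" ;;   # nodes in the 10.2.x.x subnet
    *)      echo "/default-rack" ;;
  esac
}

for node in "$@"; do
  map_to_rack "$node"
done
```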
* Monitoring Health of NodeManagers

  Hadoop provides a mechanism by which administrators can configure the
  NodeManager to run an administrator-supplied script periodically to
  determine whether a node is healthy.

  Administrators can determine if the node is in a healthy state by
  performing any checks of their choice in the script. If the script
  detects the node to be in an unhealthy state, it must print a line to
  standard output beginning with the string ERROR. The NodeManager spawns
  the script periodically and checks its output. If the script's output
  contains the string ERROR, as described above, the node's status is
  reported as <<<unhealthy>>> and the node is black-listed by the
  ResourceManager. No further tasks will be assigned to this node.
  However, the NodeManager continues to run the script, so that if the
  node becomes healthy again, it will be removed from the blacklisted nodes
  on the ResourceManager automatically. The node's health, along with the
  output of the script if it is unhealthy, is available to the
  administrator in the ResourceManager web interface. The time since the
  node was healthy is also displayed on the web interface.

  The following parameters can be used to control the node health
  monitoring script in <<<conf/yarn-site.xml>>>.
*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.health-checker.script.path>>> | | |
| | Node health script | |
| | | Script to check for node's health status. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.health-checker.script.opts>>> | | |
| | Node health script options | |
| | | Options for script to check for node's health status. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.health-checker.script.interval-ms>>> | | |
| | Node health script interval | |
| | | Time interval for running health script. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.health-checker.script.timeout-ms>>> | | |
| | Node health script timeout interval | |
| | | Timeout for health script execution. |
*-------------------------+-------------------------+------------------------+
  The health checker script is not supposed to give ERROR if only some of the
  local disks become bad. The NodeManager has the ability to periodically check
  the health of the local disks (specifically nodemanager-local-dirs
  and nodemanager-log-dirs); once the number of bad directories reaches the
  threshold set by the config property
  yarn.nodemanager.disk-health-checker.min-healthy-disks, the whole node is
  marked unhealthy and this information is also sent to the ResourceManager.
  The boot disk is either RAIDed, or a failure in the boot disk is identified
  by the health checker script.
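  A node health script is entirely site-specific. As a hedged sketch only, a
  check that prints an ERROR line when free space in /tmp drops below a
  threshold could look like this; the path, threshold, and df parsing are
  example assumptions:

```shell
# Hypothetical health check: prints a line starting with ERROR when
# available space falls below MIN_KB, which the NodeManager treats
# as the node being unhealthy.
MIN_KB=1048576   # 1 GB, an arbitrary example threshold

report_space() {
  # $1 = available kilobytes for the checked filesystem
  if [ "$1" -lt "$MIN_KB" ]; then
    echo "ERROR: low disk space: only $1 KB available"
  else
    echo "disk space OK: $1 KB available"
  fi
}

# df column layout assumed: 4th field of the second line is available KB
avail_kb=$(df -k /tmp | awk 'NR==2 {print $4}')
report_space "$avail_kb"
```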
* {Slaves file}

  Typically you choose one machine in the cluster to act as the NameNode and
  one machine to act as the ResourceManager, exclusively. The rest of the
  machines act as both a DataNode and NodeManager and are referred to as
  <slaves>.

  List all slave hostnames or IP addresses in your <<<conf/slaves>>> file,
  one per line.
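  For example, a <<<conf/slaves>>> file for a three-slave cluster (the
  hostnames are placeholders) would simply be:

```
slave1.example.com
slave2.example.com
slave3.example.com
```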
* {Logging}

  Hadoop uses Apache log4j via the Apache Commons Logging framework for
  logging. Edit the <<<conf/log4j.properties>>> file to customize the
  Hadoop daemons' logging configuration (log formats and so on).
* {Operating the Hadoop Cluster}

  Once all the necessary configuration is complete, distribute the files to the
  <<<HADOOP_CONF_DIR>>> directory on all the machines.

* Hadoop Startup

  To start a Hadoop cluster you will need to start both the HDFS and YARN
  clusters.
  Format a new distributed filesystem:

----
$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
----

  Start HDFS with the following command, run on the designated NameNode:

----
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
----

  Run a script to start DataNodes on all slaves:

----
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
----

  Start YARN with the following command, run on the designated
  ResourceManager:

----
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
----

  Run a script to start NodeManagers on all slaves:

----
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
----

  Start a standalone WebAppProxy server. If multiple servers
  are used with load balancing it should be run on each of them:

----
$ $HADOOP_YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
----

  Start the MapReduce JobHistory Server with the following command, run on the
  designated server:

----
$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
----
* Hadoop Shutdown

  Stop the NameNode with the following command, run on the designated
  NameNode:

----
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
----

  Run a script to stop DataNodes on all slaves:

----
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
----

  Stop the ResourceManager with the following command, run on the designated
  ResourceManager:

----
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
----

  Run a script to stop NodeManagers on all slaves:

----
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
----

  Stop the WebAppProxy server. If multiple servers are used with load
  balancing it should be run on each of them:

----
$ $HADOOP_YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
----

  Stop the MapReduce JobHistory Server with the following command, run on the
  designated server:

----
$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh stop historyserver --config $HADOOP_CONF_DIR
----
* {Running Hadoop in Secure Mode}

  This section describes the important parameters needed to run Hadoop in
  <<secure mode>> with strong, Kerberos-based authentication.

  * <<<User Accounts for Hadoop Daemons>>>

  Ensure that the HDFS and YARN daemons run as different Unix users, e.g.
  <<<hdfs>>> and <<<yarn>>>. Also, ensure that the MapReduce JobHistory
  server runs as user <<<mapred>>>.

  It's recommended to have them share a Unix group, e.g. <<<hadoop>>>.
*--------------------------------------+----------------------------------------------------------------------+
|| User:Group || Daemons |
*--------------------------------------+----------------------------------------------------------------------+
| hdfs:hadoop | NameNode, Secondary NameNode, Checkpoint Node, Backup Node, DataNode |
*--------------------------------------+----------------------------------------------------------------------+
| yarn:hadoop | ResourceManager, NodeManager |
*--------------------------------------+----------------------------------------------------------------------+
| mapred:hadoop | MapReduce JobHistory Server |
*--------------------------------------+----------------------------------------------------------------------+
* <<<Permissions for both HDFS and local fileSystem paths>>>

The following table lists various paths on HDFS and local filesystems (on
all nodes) and recommended permissions:

*-------------------+-------------------+------------------+------------------+
|| Filesystem || Path || User:Group || Permissions |
*-------------------+-------------------+------------------+------------------+
| local | <<<dfs.namenode.name.dir>>> | hdfs:hadoop | drwx------ |
*-------------------+-------------------+------------------+------------------+
| local | <<<dfs.datanode.data.dir>>> | hdfs:hadoop | drwx------ |
*-------------------+-------------------+------------------+------------------+
| local | $HADOOP_LOG_DIR | hdfs:hadoop | drwxrwxr-x |
*-------------------+-------------------+------------------+------------------+
| local | $YARN_LOG_DIR | yarn:hadoop | drwxrwxr-x |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.local-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.log-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| local | container-executor | root:hadoop | --Sr-s--- |
*-------------------+-------------------+------------------+------------------+
| local | <<<conf/container-executor.cfg>>> | root:hadoop | r-------- |
*-------------------+-------------------+------------------+------------------+
| hdfs | / | hdfs:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| hdfs | /tmp | hdfs:hadoop | drwxrwxrwxt |
*-------------------+-------------------+------------------+------------------+
| hdfs | /user | hdfs:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| hdfs | <<<yarn.nodemanager.remote-app-log-dir>>> | yarn:hadoop | drwxrwxrwxt |
*-------------------+-------------------+------------------+------------------+
| hdfs | <<<mapreduce.jobhistory.intermediate-done-dir>>> | mapred:hadoop | |
| | | | drwxrwxrwxt |
*-------------------+-------------------+------------------+------------------+
| hdfs | <<<mapreduce.jobhistory.done-dir>>> | mapred:hadoop | |
| | | | drwxr-x--- |
*-------------------+-------------------+------------------+------------------+
* Kerberos Keytab files

* HDFS

The NameNode keytab file, on the NameNode host, should look like the
following:

----
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/nn.service.keytab
Keytab name: FILE:/etc/security/keytab/nn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

The Secondary NameNode keytab file, on that host, should look like the
following:

----
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/sn.service.keytab
Keytab name: FILE:/etc/security/keytab/sn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

The DataNode keytab file, on each host, should look like the following:

----
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/dn.service.keytab
Keytab name: FILE:/etc/security/keytab/dn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----
* YARN

The ResourceManager keytab file, on the ResourceManager host, should look
like the following:

----
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/rm.service.keytab
Keytab name: FILE:/etc/security/keytab/rm.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----

The NodeManager keytab file, on each host, should look like the following:

----
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/nm.service.keytab
Keytab name: FILE:/etc/security/keytab/nm.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----
* MapReduce JobHistory Server

The MapReduce JobHistory Server keytab file, on that host, should look
like the following:

----
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/jhs.service.keytab
Keytab name: FILE:/etc/security/keytab/jhs.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
----
* Configuration in Secure Mode

* <<<conf/core-site.xml>>>

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.security.authentication>>> | <kerberos> | <simple> is non-secure. |
*-------------------------+-------------------------+------------------------+
| <<<hadoop.security.authorization>>> | <true> | |
| | | Enable RPC service-level authorization. |
*-------------------------+-------------------------+------------------------+
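As a sketch, these two settings correspond to a <<<conf/core-site.xml>>> fragment like the following (property names and values are exactly those listed above):

```xml
<!-- conf/core-site.xml: enable Kerberos authentication and RPC authorization -->
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value> <!-- "simple" is the non-secure default -->
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value> <!-- enable RPC service-level authorization -->
  </property>
</configuration>
```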
* <<<conf/hdfs-site.xml>>>

* Configurations for NameNode:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.block.access.token.enable>>> | <true> | |
| | | Enable HDFS block access tokens for secure operations. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.https.enable>>> | <true> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.https-address>>> | <nn_host_fqdn:50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.https.port>>> | <50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.keytab.file>>> | </etc/security/keytab/nn.service.keytab> | |
| | | Kerberos keytab file for the NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.kerberos.principal>>> | nn/_HOST@REALM.TLD | |
| | | Kerberos principal name for the NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.kerberos.https.principal>>> | host/_HOST@REALM.TLD | |
| | | HTTPS Kerberos principal name for the NameNode. |
*-------------------------+-------------------------+------------------------+
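In <<<conf/hdfs-site.xml>>> form, the NameNode entries would look roughly like this sketch (keytab path and REALM.TLD are the example values used throughout this document):

```xml
<!-- conf/hdfs-site.xml: NameNode settings for secure mode -->
<configuration>
  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value> <!-- required for secure block access -->
  </property>
  <property>
    <name>dfs.https.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/security/keytab/nn.service.keytab</value>
  </property>
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>nn/_HOST@REALM.TLD</value> <!-- _HOST expands to the local FQDN -->
  </property>
</configuration>
```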
* Configurations for Secondary NameNode:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.http-address>>> | <c_nn_host_fqdn:50090> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.https-port>>> | <50470> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.keytab.file>>> | | |
| | </etc/security/keytab/sn.service.keytab> | |
| | | Kerberos keytab file for the Secondary NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.kerberos.principal>>> | sn/_HOST@REALM.TLD | |
| | | Kerberos principal name for the Secondary NameNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.namenode.secondary.kerberos.https.principal>>> | | |
| | host/_HOST@REALM.TLD | |
| | | HTTPS Kerberos principal name for the Secondary NameNode. |
*-------------------------+-------------------------+------------------------+
* Configurations for DataNode:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.data.dir.perm>>> | 700 | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.address>>> | <0.0.0.0:2003> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.https.address>>> | <0.0.0.0:2005> | |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.keytab.file>>> | </etc/security/keytab/dn.service.keytab> | |
| | | Kerberos keytab file for the DataNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.kerberos.principal>>> | dn/_HOST@REALM.TLD | |
| | | Kerberos principal name for the DataNode. |
*-------------------------+-------------------------+------------------------+
| <<<dfs.datanode.kerberos.https.principal>>> | | |
| | host/_HOST@REALM.TLD | |
| | | HTTPS Kerberos principal name for the DataNode. |
*-------------------------+-------------------------+------------------------+
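The DataNode rows translate into a <<<conf/hdfs-site.xml>>> fragment along these lines (ports and keytab path are the example values from the table):

```xml
<!-- conf/hdfs-site.xml: DataNode settings for secure mode -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value> <!-- local data directories readable by hdfs only -->
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:2003</value>
  </property>
  <property>
    <name>dfs.datanode.https.address</name>
    <value>0.0.0.0:2005</value>
  </property>
  <property>
    <name>dfs.datanode.keytab.file</name>
    <value>/etc/security/keytab/dn.service.keytab</value>
  </property>
  <property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>dn/_HOST@REALM.TLD</value>
  </property>
</configuration>
```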
* <<<conf/yarn-site.xml>>>

* WebAppProxy

The <<<WebAppProxy>>> provides a proxy between the web applications
exported by an application and an end user. If security is enabled
it will warn users before accessing a potentially unsafe web application.
Authentication and authorization using the proxy is handled just like
any other privileged web application.

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.web-proxy.address>>> | | |
| | <<<WebAppProxy>>> host:port for proxy to AM web apps. | |
| | | <host:port> if this is the same as <<<yarn.resourcemanager.webapp.address>>>|
| | | or it is not defined then the <<<ResourceManager>>> will run the proxy|
| | | otherwise a standalone proxy server will need to be launched.|
*-------------------------+-------------------------+------------------------+
| <<<yarn.web-proxy.keytab>>> | | |
| | </etc/security/keytab/web-app.service.keytab> | |
| | | Kerberos keytab file for the WebAppProxy. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.web-proxy.principal>>> | wap/_HOST@REALM.TLD | |
| | | Kerberos principal name for the WebAppProxy. |
*-------------------------+-------------------------+------------------------+
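A standalone-proxy sketch of these settings in <<<conf/yarn-site.xml>>>; the host and port in <<<yarn.web-proxy.address>>> are hypothetical placeholders (omit the property entirely to have the ResourceManager run the proxy):

```xml
<!-- conf/yarn-site.xml: WebAppProxy settings for secure mode -->
<configuration>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>proxyhost.example.com:9046</value> <!-- hypothetical host:port -->
  </property>
  <property>
    <name>yarn.web-proxy.keytab</name>
    <value>/etc/security/keytab/web-app.service.keytab</value>
  </property>
  <property>
    <name>yarn.web-proxy.principal</name>
    <value>wap/_HOST@REALM.TLD</value>
  </property>
</configuration>
```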
* LinuxContainerExecutor

A <<<ContainerExecutor>>> is used by the YARN framework to define how a
<container> is launched and controlled.

The following executors are available in Hadoop YARN:

*--------------------------------------+--------------------------------------+
|| ContainerExecutor || Description |
*--------------------------------------+--------------------------------------+
| <<<DefaultContainerExecutor>>> | |
| | The default executor which YARN uses to manage container execution. |
| | The container process has the same Unix user as the NodeManager. |
*--------------------------------------+--------------------------------------+
| <<<LinuxContainerExecutor>>> | |
| | Supported only on GNU/Linux, this executor runs the containers as the |
| | user who submitted the application. It requires all user accounts to be |
| | created on the cluster nodes where the containers are launched. It uses |
| | a <setuid> executable that is included in the Hadoop distribution. |
| | The NodeManager uses this executable to launch and kill containers. |
| | The setuid executable switches to the user who has submitted the |
| | application and launches or kills the containers. For maximum security, |
| | this executor sets up restricted permissions and user/group ownership of |
| | local files and directories used by the containers such as the shared |
| | objects, jars, intermediate files, log files etc. Note in particular |
| | that, because of this, no user other than the application owner and the |
| | NodeManager can access any of the local files/directories, including |
| | those localized as part of the distributed cache. |
*--------------------------------------+--------------------------------------+
To build the LinuxContainerExecutor executable run:

----
$ mvn package -Dcontainer-executor.conf.dir=/etc/hadoop/
----

The path passed in <<<-Dcontainer-executor.conf.dir>>> should be the
path on the cluster nodes where a configuration file for the setuid
executable should be located. The executable should be installed in
$HADOOP_YARN_HOME/bin.

The executable must have specific permissions: 6050 or --Sr-s---
permissions, user-owned by <root> (super-user) and group-owned by a
special group (e.g. <<<hadoop>>>) of which the NodeManager Unix user is
a group member and no ordinary application user is. If any application
user belongs to this special group, security will be compromised. This
special group name should be specified for the configuration property
<<<yarn.nodemanager.linux-container-executor.group>>> in both
<<<conf/yarn-site.xml>>> and <<<conf/container-executor.cfg>>>.

For example, let's say that the NodeManager is run as user <yarn>, who is
part of the groups <users> and <hadoop>, either of them being the primary
group. Let it also be that <users> has both <yarn> and another user
(the application submitter) <alice> as its members, and <alice> does not
belong to <hadoop>. Going by the above description, the setuid/setgid
executable should be set 6050 or --Sr-s--- with user-owner as <root> and
group-owner as <hadoop>, which has <yarn> as its member (and not <users>,
which has <alice> also as its member besides <yarn>).
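The 6050 mode bits can be inspected with standard tools. The sketch below demonstrates the permission pattern on a throwaway file, since the real target (the <container-executor> binary under $HADOOP_YARN_HOME/bin) must be chowned to <root>, which needs superuser rights:

```shell
# Demonstrate the 6050 (--Sr-s---) permission pattern on a scratch file.
demo=$(mktemp)
# On a real cluster, run as root first:
#   chown root:hadoop $HADOOP_YARN_HOME/bin/container-executor
chmod 6050 "$demo"        # setuid + setgid; group gets r-x, owner/others nothing
stat -c '%a %A' "$demo"   # prints the octal mode and the --Sr-s--- rendering
rm -f "$demo"
```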
The <<<LinuxContainerExecutor>>> requires that paths including and leading
up to the directories specified in <<<yarn.nodemanager.local-dirs>>> and
<<<yarn.nodemanager.log-dirs>>> be set to 755 permissions as described
above in the table on permissions on directories.
* <<<conf/container-executor.cfg>>>

The executable requires a configuration file called
<<<container-executor.cfg>>> to be present in the configuration
directory passed to the mvn target mentioned above.

The configuration file must be owned by <root> (as in the permissions
table above), group-owned by anyone and should have the permissions
0400 or r--------.

The executable requires the following configuration items to be present
in the <<<conf/container-executor.cfg>>> file. The items should be
specified as simple key=value pairs, one per line:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.linux-container-executor.group>>> | <hadoop> | |
| | | Unix group of the NodeManager. The group owner of the |
| | | <container-executor> binary should be this group. Should be the same as |
| | | the value with which the NodeManager is configured. This configuration |
| | | is required for validating the secure access of the <container-executor> |
| | | binary. |
*-------------------------+-------------------------+------------------------+
| <<<banned.users>>> | hdfs,yarn,mapred,bin | Banned users. |
*-------------------------+-------------------------+------------------------+
| <<<min.user.id>>> | 1000 | Prevent other super-users. |
*-------------------------+-------------------------+------------------------+
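Putting the three items together, a minimal <<<conf/container-executor.cfg>>> would look like this sketch (values are the examples from the table above):

```
# conf/container-executor.cfg -- key=value pairs, one per line
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=1000
```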
To recap, here are the local file-system permissions required for the
various paths related to the <<<LinuxContainerExecutor>>>:

*-------------------+-------------------+------------------+------------------+
|| Filesystem || Path || User:Group || Permissions |
*-------------------+-------------------+------------------+------------------+
| local | container-executor | root:hadoop | --Sr-s--- |
*-------------------+-------------------+------------------+------------------+
| local | <<<conf/container-executor.cfg>>> | root:hadoop | r-------- |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.local-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
| local | <<<yarn.nodemanager.log-dirs>>> | yarn:hadoop | drwxr-xr-x |
*-------------------+-------------------+------------------+------------------+
* Configurations for ResourceManager:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.keytab>>> | | |
| | </etc/security/keytab/rm.service.keytab> | |
| | | Kerberos keytab file for the ResourceManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.resourcemanager.principal>>> | rm/_HOST@REALM.TLD | |
| | | Kerberos principal name for the ResourceManager. |
*-------------------------+-------------------------+------------------------+

* Configurations for NodeManager:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.keytab>>> | </etc/security/keytab/nm.service.keytab> | |
| | | Kerberos keytab file for the NodeManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.principal>>> | nm/_HOST@REALM.TLD | |
| | | Kerberos principal name for the NodeManager. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.container-executor.class>>> | | |
| | <<<org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor>>> | |
| | | Use LinuxContainerExecutor. |
*-------------------------+-------------------------+------------------------+
| <<<yarn.nodemanager.linux-container-executor.group>>> | <hadoop> | |
| | | Unix group of the NodeManager. |
*-------------------------+-------------------------+------------------------+
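Combined, the ResourceManager and NodeManager settings form a <<<conf/yarn-site.xml>>> fragment like this sketch (keytab paths and REALM.TLD are the document's example values):

```xml
<!-- conf/yarn-site.xml: ResourceManager and NodeManager settings for secure mode -->
<configuration>
  <property>
    <name>yarn.resourcemanager.keytab</name>
    <value>/etc/security/keytab/rm.service.keytab</value>
  </property>
  <property>
    <name>yarn.resourcemanager.principal</name>
    <value>rm/_HOST@REALM.TLD</value>
  </property>
  <property>
    <name>yarn.nodemanager.keytab</name>
    <value>/etc/security/keytab/nm.service.keytab</value>
  </property>
  <property>
    <name>yarn.nodemanager.principal</name>
    <value>nm/_HOST@REALM.TLD</value>
  </property>
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.group</name>
    <value>hadoop</value> <!-- must match conf/container-executor.cfg -->
  </property>
</configuration>
```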
* <<<conf/mapred-site.xml>>>

* Configurations for MapReduce JobHistory Server:

*-------------------------+-------------------------+------------------------+
|| Parameter || Value || Notes |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.address>>> | | |
| | MapReduce JobHistory Server <host:port> | Default port is 10020. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.keytab>>> | | |
| | </etc/security/keytab/jhs.service.keytab> | |
| | | Kerberos keytab file for the MapReduce JobHistory Server. |
*-------------------------+-------------------------+------------------------+
| <<<mapreduce.jobhistory.principal>>> | jhs/_HOST@REALM.TLD | |
| | | Kerberos principal name for the MapReduce JobHistory Server. |
*-------------------------+-------------------------+------------------------+
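A <<<conf/mapred-site.xml>>> sketch of these settings; the hostname in <<<mapreduce.jobhistory.address>>> is a hypothetical placeholder, while 10020 is the documented default port:

```xml
<!-- conf/mapred-site.xml: MapReduce JobHistory Server settings for secure mode -->
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>jhs.example.com:10020</value> <!-- hypothetical host; 10020 is the default port -->
  </property>
  <property>
    <name>mapreduce.jobhistory.keytab</name>
    <value>/etc/security/keytab/jhs.service.keytab</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.principal</name>
    <value>jhs/_HOST@REALM.TLD</value>
  </property>
</configuration>
```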
* {Operating the Hadoop Cluster}

Once all the necessary configuration is complete, distribute the files to the
<<<HADOOP_CONF_DIR>>> directory on all the machines.

This section also describes the various Unix users who should be starting the
various components and uses the same Unix accounts and groups used previously:

* Hadoop Startup

To start a Hadoop cluster you will need to start both the HDFS and YARN
cluster.

Format a new distributed filesystem as <hdfs>:

----
[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
----
Start HDFS with the following command, run on the designated NameNode
as <hdfs>:

----
[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
----

Run a script to start DataNodes on all slaves as <root> with a special
environment variable <<<HADOOP_SECURE_DN_USER>>> set to <hdfs>:

----
[root]$ HADOOP_SECURE_DN_USER=hdfs $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
----

Start YARN with the following command, run on the designated
ResourceManager as <yarn>:

----
[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
----

Run a script to start NodeManagers on all slaves as <yarn>:

----
[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
----

Start a standalone WebAppProxy server. Run on the WebAppProxy
server as <yarn>. If multiple servers are used with load balancing
it should be run on each of them:

----
[yarn]$ $HADOOP_YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
----

Start the MapReduce JobHistory Server with the following command, run on the
designated server as <mapred>:

----
[mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
----
* Hadoop Shutdown

Stop the NameNode with the following command, run on the designated NameNode
as <hdfs>:

----
[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
----

Run a script to stop DataNodes on all slaves as <root>:

----
[root]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
----

Stop the ResourceManager with the following command, run on the designated
ResourceManager as <yarn>:

----
[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
----

Run a script to stop NodeManagers on all slaves as <yarn>:

----
[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
----

Stop the WebAppProxy server. Run on the WebAppProxy server as
<yarn>. If multiple servers are used with load balancing it
should be run on each of them:

----
[yarn]$ $HADOOP_YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
----

Stop the MapReduce JobHistory Server with the following command, run on the
designated server as <mapred>:

----
[mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh stop historyserver --config $HADOOP_CONF_DIR
----
* {Web Interfaces}

Once the Hadoop cluster is up and running, check the web UIs of the
components as described below:

*-------------------------+-------------------------+------------------------+
|| Daemon || Web Interface || Notes |
*-------------------------+-------------------------+------------------------+
| NameNode | http://<nn_host:port>/ | Default HTTP port is 50070. |
*-------------------------+-------------------------+------------------------+
| ResourceManager | http://<rm_host:port>/ | Default HTTP port is 8088. |
*-------------------------+-------------------------+------------------------+
| MapReduce JobHistory Server | http://<jhs_host:port>/ | |
| | | Default HTTP port is 19888. |
*-------------------------+-------------------------+------------------------+