
~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~   http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.

  ---
  Hadoop MapReduce Next Generation ${project.version} - Setting up a Single Node Cluster.
  ---
  ---
  ${maven.build.timestamp}

Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.

\[ {{{./index.html}Go Back}} \]

%{toc|section=1|fromDepth=0}

* MapReduce Tarball

  You should be able to obtain the MapReduce tarball from the release.
  If not, you should be able to create a tarball from the source.

+---+
$ mvn clean install -DskipTests
$ cd hadoop-mapreduce-project
$ mvn clean install assembly:assembly -Pnative
+---+

  <<NOTE:>> You will need protoc 2.5.0 installed.

  To skip the native builds in mapreduce you can omit the <<<-Pnative>>> argument
  for maven. The tarball should be available in the <<<target/>>> directory.
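
  Before building, you can check which protoc is on your path; the command below is
  standard protobuf tooling and should report 2.5.0.

+---+
$ protoc --version
libprotoc 2.5.0
+---+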
  31. * Setting up the environment.
  32. Assuming you have installed hadoop-common/hadoop-hdfs and exported
  33. <<$HADOOP_COMMON_HOME>>/<<$HADOOP_HDFS_HOME>>, untar hadoop mapreduce
  34. tarball and set environment variable <<$HADOOP_MAPRED_HOME>> to the
  35. untarred directory. Set <<$HADOOP_YARN_HOME>> the same as <<$HADOOP_MAPRED_HOME>>.
  36. <<NOTE:>> The following instructions assume you have hdfs running.
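
  As a minimal sketch, assuming the tarball was untarred to a hypothetical
  <<</opt/hadoop-mapreduce>>> directory, the exports would look like:

+---+
# /opt/hadoop-mapreduce is an illustrative path; use your actual untar location
$ export HADOOP_MAPRED_HOME=/opt/hadoop-mapreduce
$ export HADOOP_YARN_HOME=$HADOOP_MAPRED_HOME
+---+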

* Setting up Configuration.

  To start the ResourceManager and NodeManager, you will have to update the configs.
  Assuming that <<$HADOOP_CONF_DIR>> is the configuration directory and already contains
  the installed configs for HDFS and <<<core-site.xml>>>, there are two config files you
  will have to set up: <<<mapred-site.xml>>> and <<<yarn-site.xml>>>.

** Setting up <<<mapred-site.xml>>>

  Add the following configs to your <<<mapred-site.xml>>>.

+---+
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>

  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
+---+
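
  On a single node, one plausible sketch is to point both properties at local scratch
  directories; the paths below are purely illustrative.

+---+
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <!-- illustrative local path -->
    <value>/tmp/hadoop/mapred/temp</value>
    <final>true</final>
  </property>

  <property>
    <name>mapreduce.cluster.local.dir</name>
    <!-- illustrative local path -->
    <value>/tmp/hadoop/mapred/local</value>
    <final>true</final>
  </property>
+---+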

** Setting up <<<yarn-site.xml>>>

  Add the following configs to your <<<yarn-site.xml>>>.

+---+
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>host:port</value>
    <description>host is the hostname of the resource manager and
    port is the port on which the NodeManagers contact the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>host:port</value>
    <description>host is the hostname of the resourcemanager and port is the port
    on which the Applications in the cluster talk to the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>In case you do not want to use the default scheduler</description>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>host:port</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on
    which the clients can talk to the Resource Manager.</description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value></value>
    <description>the local directories used by the nodemanager</description>
  </property>

  <property>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:port</value>
    <description>the nodemanagers bind to this port</description>
  </property>

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
    <description>the amount of memory on the NodeManager in MB</description>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
    <description>directory on hdfs where the application logs are moved to</description>
  </property>

  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value></value>
    <description>the directories used by Nodemanagers as log directories</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run</description>
  </property>
+---+
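
  As an illustrative single-node sketch, the <<<host:port>>> placeholders above could be
  filled in as follows; the hostname and the NodeManager port are assumptions, while the
  ResourceManager ports shown are the conventional YARN defaults.

+---+
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:8031</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>

  <property>
    <name>yarn.nodemanager.address</name>
    <!-- 45454 is an arbitrary example port -->
    <value>0.0.0.0:45454</value>
  </property>
+---+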

* Setting up <<<capacity-scheduler.xml>>>

  Make sure you populate the root queues in <<<capacity-scheduler.xml>>>.

+---+
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>unfunded,default</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.capacity</name>
    <value>100</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.unfunded.capacity</name>
    <value>50</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>50</value>
  </property>
+---+
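
  The capacities of the queues under a common parent must add up to 100. As a sketch,
  adding a hypothetical third queue named <<<research>>> would mean rebalancing its
  siblings, for example:

+---+
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>unfunded,default,research</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.unfunded.capacity</name>
    <value>40</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.research.capacity</name>
    <value>20</value>
  </property>
+---+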

* Running daemons.

  Assuming that the environment variables <<$HADOOP_COMMON_HOME>>, <<$HADOOP_HDFS_HOME>>, <<$HADOOP_MAPRED_HOME>>,
  <<$HADOOP_YARN_HOME>>, <<$JAVA_HOME>> and <<$HADOOP_CONF_DIR>> have been set appropriately,
  set <<$YARN_CONF_DIR>> the same as <<$HADOOP_CONF_DIR>>.

  Run the ResourceManager and NodeManager as:

+---+
$ cd $HADOOP_MAPRED_HOME
$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
+---+
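
  You can verify that both daemons came up with <<<jps>>> (shipped with the JDK); the
  output should include ResourceManager and NodeManager processes.

+---+
$ jps
+---+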

  You should be up and running. You can run randomwriter as:

+---+
$ $HADOOP_COMMON_HOME/bin/hadoop jar hadoop-examples.jar randomwriter out
+---+
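
  Once the job completes, a quick sanity check (assuming the output directory name
  <<<out>>> used above) is to list it on HDFS:

+---+
$ $HADOOP_COMMON_HOME/bin/hadoop fs -ls out
+---+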

  Good luck.