
MAPREDUCE-2854. update INSTALL with config necessary to run mapred on yarn. (thomas graves via mahadev)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1159421 13f79535-47bb-0310-9956-ffa450edef68
Mahadev Konar, 13 years ago
parent commit fe87d6a085

+ 3 - 0
hadoop-mapreduce/CHANGES.txt

@@ -218,6 +218,9 @@ Trunk (unreleased changes)
     MAPREDUCE-2489. Jobsplits with random hostnames can make the queue 
     unusable (jeffrey naisbit via mahadev)
 
+    MAPREDUCE-2854. update INSTALL with config necessary to run mapred on yarn.
+    (thomas graves via mahadev)
+
   OPTIMIZATIONS
 
     MAPREDUCE-2026. Make JobTracker.getJobCounters() and

+ 50 - 41
hadoop-mapreduce/INSTALL

@@ -2,17 +2,17 @@ To compile Hadoop Mapreduce next, do the following:
 
 Step 1) Install dependencies for yarn
 
-See http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/yarn/README
+See http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/README
 Make sure the protobuf library is in your library path or set: export LD_LIBRARY_PATH=/usr/local/lib
 
 Step 2) Checkout
 
-svn checkout http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/
+svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk
 
 Step 3) Build common
 
-Go to common directory
-ant veryclean mvn-install 
+Go to the common directory and run your regular common build command
+Example: mvn clean install package -Pbintar -DskipTests
 
 Step 4) Build HDFS 
 
@@ -24,51 +24,37 @@ Step 5) Build yarn and mapreduce
 Go to mapreduce directory
 export MAVEN_OPTS=-Xmx512m
 
-mvn clean install assembly:assembly
-ant veryclean jar jar-test  -Dresolvers=internal 
-
-In case you want to skip the tests run:
-
 mvn clean install assembly:assembly -DskipTests
-ant veryclean jar jar-test  -Dresolvers=internal 
+
+Copy in build.properties if appropriate - make sure eclipse.home is not set
+ant veryclean tar -Dresolvers=internal 
 
 You will see a tarball in
-ls target/hadoop-mapreduce-1.0-SNAPSHOT-bin.tar.gz  
+ls target/hadoop-mapreduce-1.0-SNAPSHOT-all.tar.gz  
 
 Step 6) Untar the tarball in a clean and different directory.
-say HADOOP_YARN_INSTALL
-
-To run Hadoop Mapreduce next applications :
-
-Step 7) cd $HADOOP_YARN_INSTALL
-
-Step 8) export the following variables:
-
-HADOOP_MAPRED_HOME=
-HADOOP_COMMON_HOME=
-HADOOP_HDFS_HOME=
-YARN_HOME=directory where you untarred yarn
-HADOOP_CONF_DIR=
-YARN_CONF_DIR=$HADOOP_CONF_DIR
+say YARN_HOME. 
 
-Step 9) bin/yarn-daemon.sh start resourcemanager
+Make sure you aren't picking up avro-1.3.2.jar; remove:
+  $HADOOP_COMMON_HOME/share/hadoop/common/lib/avro-1.3.2.jar
+  $YARN_HOME/lib/avro-1.3.2.jar
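
For example, a minimal sketch of steps 6 and the avro cleanup (the /opt/yarn
install path and the unpacked directory name are assumptions, not fixed by
this doc):

  mkdir -p /opt/yarn
  tar xzf target/hadoop-mapreduce-1.0-SNAPSHOT-all.tar.gz -C /opt/yarn
  export YARN_HOME=/opt/yarn/hadoop-mapreduce-1.0-SNAPSHOT   # assumed unpack dir
  rm -f $HADOOP_COMMON_HOME/share/hadoop/common/lib/avro-1.3.2.jar
  rm -f $YARN_HOME/lib/avro-1.3.2.jar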
 
-Step 10) bin/yarn-daemon.sh start nodemanager
+Step 7)
+Install hdfs/common and start hdfs
 
-Step 11) bin/yarn-daemon.sh start historyserver
+To run Hadoop Mapreduce next applications: 
 
-Step 12) Create the following symlinks in hadoop-common/lib 
+Step 8) export the following variables, pointing at where you have things installed:
+You probably want to export these in hadoop-env.sh and yarn-env.sh also.
 
-ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-app-1.0-SNAPSHOT.jar .	
-ln -s $HADOOP_YARN_INSTALL/modules/yarn-api-1.0-SNAPSHOT.jar .
-ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-common-1.0-SNAPSHOT.jar .	
-ln -s $HADOOP_YARN_INSTALL/modules/yarn-common-1.0-SNAPSHOT.jar .
-ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-core-1.0-SNAPSHOT.jar .	
-ln -s $HADOOP_YARN_INSTALL/modules/yarn-server-common-1.0-SNAPSHOT.jar .
-ln -s $HADOOP_YARN_INSTALL/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar .
-ln -s $HADOOP_YARN_INSTALL/lib/protobuf-java-2.4.0a.jar .
+export HADOOP_MAPRED_HOME=<mapred loc>
+export HADOOP_COMMON_HOME=<common loc>
+export HADOOP_HDFS_HOME=<hdfs loc>
+export YARN_HOME=directory where you untarred yarn
+export HADOOP_CONF_DIR=<conf loc>
+export YARN_CONF_DIR=$HADOOP_CONF_DIR
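
For example, a sketch of what these exports might look like in hadoop-env.sh
and yarn-env.sh (all /opt paths below are placeholder locations, not required
ones):

  export HADOOP_MAPRED_HOME=/opt/hadoop-mapreduce
  export HADOOP_COMMON_HOME=/opt/hadoop-common
  export HADOOP_HDFS_HOME=/opt/hadoop-hdfs
  export YARN_HOME=/opt/yarn/hadoop-mapreduce-1.0-SNAPSHOT
  export HADOOP_CONF_DIR=/opt/hadoop-conf
  export YARN_CONF_DIR=$HADOOP_CONF_DIR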
 
-Step 13) Yarn daemons are up! But for running mapreduce applications, which now are in user land, you need to setup nodemanager with the following configuration in your yarn-site.xml before you start the nodemanager.
+Step 9) Set up config: to run mapreduce applications, which now live in user land, you need to configure the nodemanager with the following in your yarn-site.xml before you start it.
     <property>
       <name>nodemanager.auxiluary.services</name>
       <value>mapreduce.shuffle</value>
@@ -79,11 +65,34 @@ Step 13) Yarn daemons are up! But for running mapreduce applications, which now
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
     </property>
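
For reference, these properties live inside the standard <configuration> root
element of yarn-site.xml; a minimal sketch with the first property shown (the
second one, whose <name> line falls outside this hunk, sits alongside it):

  <?xml version="1.0"?>
  <configuration>
    <property>
      <name>nodemanager.auxiluary.services</name>
      <value>mapreduce.shuffle</value>
    </property>
    <!-- plus the ShuffleHandler property shown above -->
  </configuration>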
 
-Step 14) You are all set, an example on how to run a mapreduce job is:
+Step 10) Modify mapred-site.xml to use the yarn framework
+    <property>
+      <name>mapreduce.framework.name</name>
+      <value>yarn</value>
+    </property>
+
+Step 11) Create the following symlinks in $HADOOP_COMMON_HOME/share/hadoop/common/lib
+
+ln -s $YARN_HOME/modules/hadoop-mapreduce-client-app-1.0-SNAPSHOT.jar .	
+ln -s $YARN_HOME/modules/hadoop-yarn-api-1.0-SNAPSHOT.jar .
+ln -s $YARN_HOME/modules/hadoop-mapreduce-client-common-1.0-SNAPSHOT.jar .	
+ln -s $YARN_HOME/modules/hadoop-yarn-common-1.0-SNAPSHOT.jar .
+ln -s $YARN_HOME/modules/hadoop-mapreduce-client-core-1.0-SNAPSHOT.jar .	
+ln -s $YARN_HOME/modules/hadoop-yarn-server-common-1.0-SNAPSHOT.jar .
+ln -s $YARN_HOME/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar .
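
Equivalently, a bash loop that creates the same seven links (relies on bash
brace expansion; plain sh would need the explicit list above):

  cd $HADOOP_COMMON_HOME/share/hadoop/common/lib
  for jar in hadoop-mapreduce-client-{app,common,core,jobclient} \
             hadoop-yarn-{api,common,server-common}; do
    ln -s $YARN_HOME/modules/$jar-1.0-SNAPSHOT.jar .
  done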
+
+Step 12) cd $YARN_HOME
+
+Step 13) bin/yarn-daemon.sh start resourcemanager
+
+Step 14) bin/yarn-daemon.sh start nodemanager
+
+Step 15) bin/yarn-daemon.sh start historyserver
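
To sanity-check that the daemons came up, jps (from the JDK) should list them;
exact process names can vary by version:

  jps
  # expect something like:
  #   ResourceManager
  #   NodeManager
  #   JobHistoryServer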
 
+Step 16) You are all set. An example of how to run a mapreduce job:
 cd $HADOOP_MAPRED_HOME
-ant examples -Dresolvers=internal
-$HADOOP_COMMON_HOME/bin/hadoop jar $HADOOP_MAPRED_HOME/build/hadoop-mapred-examples-0.22.0-SNAPSHOT.jar randomwriter -Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars $HADOOP_YARN_INSTALL/hadoop-mapreduce-1.0-SNAPSHOT/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar output 
+ant examples -Dresolvers=internal 
+$HADOOP_COMMON_HOME/bin/hadoop jar $HADOOP_MAPRED_HOME/build/hadoop-mapreduce-examples-0.23.0-SNAPSHOT.jar randomwriter -Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars $YARN_HOME/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar output 
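
Once the job finishes, you can list what randomwriter wrote (the output path
matches the last argument above):

  $HADOOP_COMMON_HOME/bin/hadoop fs -ls output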
 
 The output on the command line should be similar to what you see in the JT/TT setup (Hadoop 0.20/0.21)
 

+ 7 - 7
hadoop-mapreduce/hadoop-yarn/README

@@ -4,7 +4,7 @@ YARN (YET ANOTHER RESOURCE NEGOTIATOR or YARN Application Resource Negotiator)
 Requirements
 -------------
 Java: JDK 1.6
-Maven: Maven 2
+Maven: Maven 3
 
 Setup
 -----
@@ -63,11 +63,11 @@ Modules
 -------
 YARN consists of multiple modules. The modules are listed below as per the directory structure:
 
-yarn-api - Yarn's cross platform external interface
+hadoop-yarn-api - Yarn's cross platform external interface
 
-yarn-common - Utilities which can be used by yarn clients and server
+hadoop-yarn-common - Utilities which can be used by yarn clients and servers
 
-yarn-server - Implementation of the yarn-api
-	yarn-server-common - APIs shared between resourcemanager and nodemanager
-	yarn-server-nodemanager (TaskTracker replacement)
-	yarn-server-resourcemanager (JobTracker replacement)
+hadoop-yarn-server - Implementation of the hadoop-yarn-api
+	hadoop-yarn-server-common - APIs shared between resourcemanager and nodemanager
+	hadoop-yarn-server-nodemanager (TaskTracker replacement)
+	hadoop-yarn-server-resourcemanager (JobTracker replacement)