@@ -4,13 +4,191 @@
 </head>
 <body>
-Hadoop is a distributed computing platform. It primarily consists of
-a distributed filesystem (in <a
+Hadoop is a distributed computing platform.
+
+<p>Hadoop primarily consists of a distributed filesystem (DFS, in <a
 href="org/apache/hadoop/dfs/package-summary.html">org.apache.hadoop.dfs</a>)
 and an implementation of a MapReduce distributed data processor (in <a
-href="org/apache/hadoop/mapred/package-summary.html">org.apache.hadoop.mapred</a>)
+href="org/apache/hadoop/mapred/package-summary.html">org.apache.hadoop.mapred
+</a>).</p>
+
+<h2>Requirements</h2>
+
+<ol>
+
+<li>Java 1.4.x, preferably from <a
+ href="http://java.sun.com/j2se/downloads.html">Sun</a>. Set
+ <tt>JAVA_HOME</tt> to the root of your Java installation (see the
+ sketch after this list).</li>
+
+<li>ssh must be installed and sshd must be running to use Hadoop's
+scripts to manage remote Hadoop daemons. On Ubuntu, this may be done
+with <br><tt>sudo apt-get install ssh</tt>.</li>
+
+<li>rsync must be installed to use Hadoop's scripts to manage remote
+Hadoop installations. On Ubuntu, this may be done with <br><tt>sudo
+apt-get install rsync</tt>.</li>
+
+<li>On Win32, <a href="http://www.cygwin.com/">cygwin</a> is required
+for shell support. To use Subversion on Win32, select the subversion
+package, in the "Devel" category, when you install Cygwin. Distributed
+operation has not been well tested on Win32, so it should primarily be
+considered a development platform at this point, not a production
+platform.</li>
+
+</ol>
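+
+<p>For example, a minimal environment setup and check might look like
+the following sketch (the Java installation path is illustrative;
+substitute the root of your own installation):</p>
+
+<tt>
+export JAVA_HOME=/usr/java/j2sdk1.4.2   # illustrative path<br>
+ssh -V                                  # confirm ssh is installed<br>
+rsync --version                         # confirm rsync is installed
+</tt>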
+
+<h2>Getting Started</h2>
+
+<p>First, you need to get a copy of the Hadoop code.</p>
+
+<p>You can download a nightly build from <a
+href="http://cvs.apache.org/dist/lucene/hadoop/nightly/">http://cvs.apache.org/dist/lucene/hadoop/nightly/</a>.
+Unpack the release and change into its top-level directory.</p>
+
+<p>Or, check out the code from <a
+href="http://lucene.apache.org/hadoop/version_control.html">subversion</a>
+and build it with <a href="http://ant.apache.org/">Ant</a>.</p>
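+
+<p>For example (the repository URL below is an assumption; use the
+URL given on the version control page above):</p>
+
+<tt>
+svn checkout http://svn.apache.org/repos/asf/lucene/hadoop/trunk hadoop<br>
+cd hadoop<br>
+ant
+</tt>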
+
+<p>Try the following command:</p>
+<tt>bin/hadoop</tt>
+<p>This will display the documentation for the Hadoop command script.</p>
+
+<h2>Standalone operation</h2>
+
+<p>By default, Hadoop is configured to run things in a non-distributed
+mode, as a single Java process. This is useful for debugging, and can
+be demonstrated as follows:</p>
+<tt>
+mkdir input<br>
+cp conf/*.xml input<br>
+bin/hadoop org.apache.hadoop.mapred.demo.Grep input output 'dfs[a-z.]+'<br>
+cat output/*
+</tt>
+<p>This will display counts for each match of the <a
+href="http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html">
+regular expression</a>.</p>
+
+<p>Note that input is specified as a <em>directory</em> containing input
+files and that output is also specified as a directory where output
+parts are written.</p>
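+
+<p>Each output line pairs a count with the matched string; for
+instance, something like the following (a hypothetical illustration;
+the actual matches and counts depend on the contents of your
+configuration files):</p>
+
+<tt>
+1 dfs.replication<br>
+1 dfs.data.dir
+</tt>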
+
+<h2>Distributed operation</h2>
+
+<p>To configure Hadoop for distributed operation you must specify the
+following:</p>
+
+<ol>
+
+<li>The {@link org.apache.hadoop.dfs.NameNode} (Distributed Filesystem
+master) host and port. This is specified with the configuration
+property <tt>fs.default.name</tt>.</li>
+
+<li>The {@link org.apache.hadoop.mapred.JobTracker} (MapReduce master)
+host and port. This is specified with the configuration property
+<tt>mapred.job.tracker</tt>.</li>
+
+<li>A <em>slaves</em> file that lists the names of all the hosts in
+the cluster. The default slaves file is <tt>~/.slaves</tt>.</li>
+
+</ol>
+
+<h3>Pseudo-distributed configuration</h3>
+
+<p>You can in fact run everything on a single host. To run things this
+way, put the following in <tt>conf/hadoop-site.xml</tt>:</p>
+
+<xmp><configuration>
+
+  <property>
+    <name>fs.default.name</name>
+    <value>localhost:9000</value>
+  </property>
+
+  <property>
+    <name>mapred.job.tracker</name>
+    <value>localhost:9001</value>
+  </property>
+
+  <property>
+    <name>dfs.replication</name>
+    <value>1</value>
+  </property>
+
+</configuration></xmp>
+
+<p>Note that we also set the DFS replication level to 1: with only a
+single datanode, blocks cannot be replicated further, and a higher
+setting would produce warnings.</p>
+
+<p>Now check that the command <br><tt>ssh localhost</tt><br> does not
+require a password. If it does, execute the following commands:</p>
+
+<p><tt>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa<br>
+cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
+</tt></p>
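+
+<p>Depending on your sshd configuration, you may also need to restrict
+the permissions on the authorized keys file:</p>
+
+<p><tt>chmod 600 ~/.ssh/authorized_keys</tt></p>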
+
+<p>Finally, you can create a <tt>.slaves</tt> file with the command:</p>
+
+<p><tt>echo localhost > ~/.slaves</tt></p>
+
+<h3>Bootstrapping</h3>
+
+<p>The Hadoop daemons are started with the following command:</p>
+
+<p><tt>bin/start-all.sh</tt></p>
+
+<p>Daemon log output is written to the <tt>logs/</tt> directory.</p>
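+
+<p>To watch a daemon while it runs, you can tail its log (the file
+name pattern below is an assumption; check the <tt>logs/</tt>
+directory for the actual names):</p>
+
+<p><tt>tail -f logs/*namenode*.log</tt></p>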
+
+<p>Input files are copied into the distributed filesystem as follows:</p>
+
+<p><tt>bin/hadoop dfs -put input input</tt></p>
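+
+<p>To confirm that the copy succeeded, you can list the directory in
+the distributed filesystem (a sketch; this assumes the <tt>dfs</tt>
+shell supports an <tt>-ls</tt> option analogous to Unix
+<tt>ls</tt>):</p>
+
+<p><tt>bin/hadoop dfs -ls input</tt></p>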
+
+<h3>Distributed execution</h3>
+
+<p>Things are run as before, but output must be copied locally to
+examine it:</p>
+
+<tt>
+bin/hadoop org.apache.hadoop.mapred.demo.Grep input output 'dfs[a-z.]+'<br>
+bin/hadoop dfs -get output output<br>
+cat output/*
+</tt>
+
+<p>When you're done, stop the daemons with:</p>
+
+<p><tt>bin/stop-all.sh</tt></p>
+
+<h2>Fully-distributed operation</h2>
+
+<p>Fully-distributed operation is just like the pseudo-distributed
+operation described above, except:</p>
+
+<ol>
+
+<li>Specify the hostname or IP address of the master server in the
+values for <tt>fs.default.name</tt> and <tt>mapred.job.tracker</tt> in
+<tt>conf/hadoop-site.xml</tt>. These are specified as
+<tt><em>host</em>:<em>port</em></tt> pairs.</li>
+
+<li>Specify directories for <tt>dfs.name.dir</tt> and
+<tt>dfs.data.dir</tt> in <tt>conf/hadoop-site.xml</tt>. These are
+used to hold distributed filesystem data on the master node and slave nodes
+respectively. Note that <tt>dfs.data.dir</tt> may contain a space- or
+comma-separated list of directory names, so that data may be stored on
+multiple devices.</li>
+
+<li>Specify <tt>mapred.local.dir</tt> in
+<tt>conf/hadoop-site.xml</tt>. This determines where temporary
+MapReduce data is written. It also may be a list of directories.</li>
+
+<li>Specify <tt>mapred.map.tasks</tt> and <tt>mapred.reduce.tasks</tt>
+in <tt>conf/mapred-default.xml</tt>. As a rule of thumb, use 10x the
+number of slave processors for <tt>mapred.map.tasks</tt>, and 2x the
+number of slave processors for <tt>mapred.reduce.tasks</tt>; for
+example, 20 slave processors would give 200 map tasks and 40 reduce
+tasks (see the sample configuration after this list).</li>
+
+<li>List all slave hostnames or IP addresses in your
+<tt>~/.slaves</tt> file, one per line.</li>
+
+</ol>
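+
+<p>As a sketch, for a master host named <tt>master.example.com</tt>,
+the additions to <tt>conf/hadoop-site.xml</tt> might look like the
+following (the hostname, ports and directory paths here are
+illustrative assumptions, not defaults):</p>
+
+<xmp><configuration>
+
+  <property>
+    <name>fs.default.name</name>
+    <value>master.example.com:9000</value>
+  </property>
+
+  <property>
+    <name>mapred.job.tracker</name>
+    <value>master.example.com:9001</value>
+  </property>
+
+  <property>
+    <name>dfs.name.dir</name>
+    <value>/home/hadoop/dfs/name</value>
+  </property>
+
+  <property>
+    <name>dfs.data.dir</name>
+    <value>/home/hadoop/dfs/data</value>
+  </property>
+
+  <property>
+    <name>mapred.local.dir</name>
+    <value>/home/hadoop/mapred/local</value>
+  </property>
+
+</configuration></xmp>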
-<p>
 </body>
 </html>