- <html>
- <head>
- <title>Hadoop</title>
- </head>
- <body>
- Hadoop is a distributed computing platform.
- <p>Hadoop primarily consists of a distributed filesystem (DFS, in <a
- href="org/apache/hadoop/dfs/package-summary.html">org.apache.hadoop.dfs</a>)
- and an implementation of a MapReduce distributed data processor (in <a
- href="org/apache/hadoop/mapred/package-summary.html">org.apache.hadoop.mapred
- </a>).</p>
- <h2>Requirements</h2>
- <ol>
-
- <li>Java 1.5.x, preferably from <a
- href="http://java.sun.com/j2se/downloads.html">Sun</a>. Set
- <tt>JAVA_HOME</tt> to the root of your Java installation.</li>
-
- <li>ssh must be installed and sshd must be running to use Hadoop's
- scripts to manage remote Hadoop daemons. On Ubuntu, this may be done
- with <br><tt>sudo apt-get install ssh</tt>.</li>
-
- <li>rsync must be installed to use Hadoop's scripts to manage remote
- Hadoop installations. On Ubuntu, this may be done with <br><tt>sudo
- apt-get install rsync</tt>.</li>
-
- <li>On Win32, <a href="http://www.cygwin.com/">cygwin</a> is required
- for shell support. To use Subversion on Win32, select the subversion
- package from the "Devel" category when you install Cygwin. Distributed
- operation has not been well tested on Win32, so it should primarily be
- considered a development platform at this point, not a production
- platform.</li>
-
- </ol>
- <h2>Getting Started</h2>
- <p>First, you need to get a copy of the Hadoop code.</p>
- <p>You can download a nightly build from <a
- href="http://cvs.apache.org/dist/lucene/hadoop/nightly/">http://cvs.apache.org/dist/lucene/hadoop/nightly/</a>.
- Unpack the release and change into its top-level directory.</p>
- <p>Or, check out the code from <a
- href="http://lucene.apache.org/hadoop/version_control.html">subversion</a>
- and build it with <a href="http://ant.apache.org/">Ant</a>.</p>
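- <p>For example, a checkout and build might look like the following
- (substitute the repository URL given on the version control page linked
- above; the <tt>hadoop</tt> directory name is just an illustration):</p>
- <tt>
- svn checkout <em>repository-url</em> hadoop<br>
- cd hadoop<br>
- ant
- </tt>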
- <p>Edit the file <tt>conf/hadoop-env.sh</tt> to define at least
- <tt>JAVA_HOME</tt>.</p>
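- <p>For example, <tt>conf/hadoop-env.sh</tt> might contain a line like the
- following (the path shown is only an example; point it at your own Java
- installation):</p>
- <tt>
- # example only -- use the location of your own JDK<br>
- export JAVA_HOME=/usr/lib/j2sdk1.5-sun
- </tt>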
- <p>Try the following command:</p>
- <tt>bin/hadoop</tt>
- <p>This will display the documentation for the Hadoop command script.</p>
- <h2>Standalone operation</h2>
- <p>By default, Hadoop is configured to run things in a non-distributed
- mode, as a single Java process. This is useful for debugging, and can
- be demonstrated as follows:</p>
- <tt>
- mkdir input<br>
- cp conf/*.xml input<br>
- bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'<br>
- cat output/*
- </tt>
- <p>This will display counts for each match of the <a
- href="http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html">
- regular expression</a>.</p>
- <p>Note that input is specified as a <em>directory</em> containing input
- files and that output is also specified as a directory where parts are
- written.</p>
- <h2>Distributed operation</h2>
- To configure Hadoop for distributed operation, you must specify the
- following:
- <ol>
- <li>The {@link org.apache.hadoop.dfs.NameNode} (Distributed Filesystem
- master) host and port. This is specified with the configuration
- property <tt><a
- href="../hadoop-default.html#fs.default.name">fs.default.name</a></tt>.
- </li>
- <li>The {@link org.apache.hadoop.mapred.JobTracker} (MapReduce master)
- host and port. This is specified with the configuration property
- <tt><a
- href="../hadoop-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>.
- </li>
- <li>A <em>slaves</em> file that lists the names of all the hosts in
- the cluster. The default slaves file is <tt>conf/slaves</tt>.</li>
- </ol>
- <h3>Pseudo-distributed configuration</h3>
- You can in fact run everything on a single host. To run things this
- way, put the following in <tt>conf/hadoop-site.xml</tt>:
- <xmp><configuration>
- <property>
- <name>fs.default.name</name>
- <value>localhost:9000</value>
- </property>
- <property>
- <name>mapred.job.tracker</name>
- <value>localhost:9001</value>
- </property>
- <property>
- <name>dfs.replication</name>
- <value>1</value>
- </property>
- </configuration></xmp>
- <p>(We also set the DFS replication level to 1 in order to
- reduce warnings when running on a single node.)</p>
- <p>Now check that the command <br><tt>ssh localhost</tt><br> does not
- require a password. If it does, execute the following commands:</p>
- <p><tt>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa<br>
- cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
- </tt></p>
- <h3>Bootstrapping</h3>
- <p>A new distributed filesystem must be formatted with the following
- command, run on the master node:</p>
- <p><tt>bin/hadoop namenode -format</tt></p>
- <p>The Hadoop daemons are started with the following command:</p>
- <p><tt>bin/start-all.sh</tt></p>
- <p>Daemon log output is written to the <tt>logs/</tt> directory.</p>
- <p>Input files are copied into the distributed filesystem as follows:</p>
- <p><tt>bin/hadoop dfs -put input input</tt></p>
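- <p>You can check that the files arrived with, for example:</p>
- <p><tt>bin/hadoop dfs -ls input</tt></p>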
- <h3>Distributed execution</h3>
- <p>Things are run as before, but output must be copied locally to
- examine it:</p>
- <tt>
- bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'<br>
- bin/hadoop dfs -get output output<br>
- cat output/*
- </tt>
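- <p>Alternatively, the output files can be examined directly in the
- distributed filesystem, without copying them, with a command like:</p>
- <p><tt>bin/hadoop dfs -cat output/*</tt></p>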
- <p>When you're done, stop the daemons with:</p>
- <p><tt>bin/stop-all.sh</tt></p>
- <h2>Fully-distributed operation</h2>
- <p>Fully-distributed operation is just like the pseudo-distributed
- operation described above, except:</p>
- <ol>
- <li>Specify the hostname or IP address of the master server in the values
- for <tt><a
- href="../hadoop-default.html#fs.default.name">fs.default.name</a></tt>
- and <tt><a
- href="../hadoop-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>
- in <tt>conf/hadoop-site.xml</tt>. These are specified as
- <tt><em>host</em>:<em>port</em></tt> pairs.</li>
- <li>Specify directories for <tt><a
- href="../hadoop-default.html#dfs.name.dir">dfs.name.dir</a></tt> and
- <tt><a
- href="../hadoop-default.html#dfs.data.dir">dfs.data.dir</a></tt> in
- <tt>conf/hadoop-site.xml</tt>. These are used to hold distributed
- filesystem data on the master node and slave nodes respectively. Note
- that <tt>dfs.data.dir</tt> may contain a space- or comma-separated
- list of directory names, so that data may be stored on multiple
- devices.</li>
- <li>Specify <tt><a
- href="../hadoop-default.html#mapred.local.dir">mapred.local.dir</a></tt>
- in <tt>conf/hadoop-site.xml</tt>. This determines where temporary
- MapReduce data is written. It also may be a list of directories.</li>
- <li>Specify <tt><a
- href="../hadoop-default.html#mapred.map.tasks">mapred.map.tasks</a></tt>
- and <tt><a
- href="../hadoop-default.html#mapred.reduce.tasks">mapred.reduce.tasks</a></tt>
- in <tt>conf/hadoop-site.xml</tt>. As a rule of thumb, use 10x the
- number of slave processors for <tt>mapred.map.tasks</tt>, and 2x the
- number of slave processors for <tt>mapred.reduce.tasks</tt>.</li>
- <li>List all slave hostnames or IP addresses in your
- <tt>conf/slaves</tt> file, one per line (see the example files after
- this list).</li>
- </ol>
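- <p>For illustration, the additions to <tt>conf/hadoop-site.xml</tt> for a
- small cluster might look like the following (all host names and directory
- paths here are examples only; substitute your own):</p>
- <xmp><configuration>
- <!-- example values only; adjust hosts and paths for your cluster -->
- <property>
- <name>fs.default.name</name>
- <value>master.example.com:9000</value>
- </property>
- <property>
- <name>mapred.job.tracker</name>
- <value>master.example.com:9001</value>
- </property>
- <property>
- <name>dfs.name.dir</name>
- <value>/home/hadoop/dfs/name</value>
- </property>
- <property>
- <name>dfs.data.dir</name>
- <value>/home/hadoop/dfs/data</value>
- </property>
- <property>
- <name>mapred.local.dir</name>
- <value>/home/hadoop/mapred/local</value>
- </property>
- </configuration></xmp>
- <p>A matching <tt>conf/slaves</tt> file would simply list the slave hosts,
- one per line, for example:</p>
- <tt>
- slave1.example.com<br>
- slave2.example.com
- </tt>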
- </body>
- </html>