<?xml version="1.0"?>
<!--
  Copyright 2002-2004 The Apache Software Foundation

  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">

<document>
  <header>
    <title>Commands Manual</title>
  </header>

  <body>
    <section>
      <title>Overview</title>
      <p>
        All Hadoop commands are invoked by the bin/hadoop script. Running the hadoop
        script without any arguments prints a description of all commands.
      </p>
      <p>
        <code>Usage: hadoop [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]</code>
      </p>
      <p>
        Hadoop has an option-parsing framework that handles generic options as well as running classes.
      </p>
      <table>
        <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
        <tr>
          <td><code>--config confdir</code></td>
          <td>Overrides the default configuration directory. Default is ${HADOOP_HOME}/conf.</td>
        </tr>
        <tr>
          <td><code>GENERIC_OPTIONS</code></td>
          <td>The common set of options supported by multiple commands.</td>
        </tr>
        <tr>
          <td><code>COMMAND</code><br/><code>COMMAND_OPTIONS</code></td>
          <td>Various commands with their options are described in the following sections. The commands
          are grouped into <a href="commands_manual.html#User+Commands">User Commands</a>
          and <a href="commands_manual.html#Administration+Commands">Administration Commands</a>.</td>
        </tr>
      </table>
      <section>
        <title>Generic Options</title>
        <p>
          The following options are supported by <a href="commands_manual.html#dfsadmin">dfsadmin</a>,
          <a href="commands_manual.html#fs">fs</a>, <a href="commands_manual.html#fsck">fsck</a> and
          <a href="commands_manual.html#job">job</a>.
        </p>
        <table>
          <tr><th> GENERIC_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-conf &lt;configuration file&gt;</code></td>
            <td>Specify an application configuration file.</td>
          </tr>
          <tr>
            <td><code>-D &lt;property=value&gt;</code></td>
            <td>Use the given value for the given property.</td>
          </tr>
          <tr>
            <td><code>-fs &lt;local|namenode:port&gt;</code></td>
            <td>Specify a namenode.</td>
          </tr>
          <tr>
            <td><code>-jt &lt;local|jobtracker:port&gt;</code></td>
            <td>Specify a job tracker. Applies only to <a href="commands_manual.html#job">job</a>.</td>
          </tr>
          <tr>
            <td><code>-files &lt;comma separated list of files&gt;</code></td>
            <td>Specify comma-separated files to be copied to the MapReduce cluster.
            Applies only to <a href="commands_manual.html#job">job</a>.</td>
          </tr>
          <tr>
            <td><code>-libjars &lt;comma separated list of jars&gt;</code></td>
            <td>Specify comma-separated jar files to include in the classpath.
            Applies only to <a href="commands_manual.html#job">job</a>.</td>
          </tr>
          <tr>
            <td><code>-archives &lt;comma separated list of archives&gt;</code></td>
            <td>Specify comma-separated archives to be unarchived on the compute machines.
            Applies only to <a href="commands_manual.html#job">job</a>.</td>
          </tr>
        </table>
      </section>
    </section>

    <section>
      <title> User Commands </title>
      <p>Commands useful for users of a Hadoop cluster.</p>
      <section>
        <title> archive </title>
        <p>
          Creates a Hadoop archive. More information can be found at <a href="hadoop_archives.html">Hadoop Archives</a>.
        </p>
        <p>
          <code>Usage: hadoop archive -archiveName NAME &lt;src&gt;* &lt;dest&gt;</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-archiveName NAME</code></td>
            <td>Name of the archive to be created.</td>
          </tr>
          <tr>
            <td><code>src</code></td>
            <td>Filesystem pathnames, which work as usual with regular expressions.</td>
          </tr>
          <tr>
            <td><code>dest</code></td>
            <td>Destination directory that will contain the archive.</td>
          </tr>
        </table>
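        <p>
          For example, the following (with hypothetical paths) archives the directories
          /user/hadoop/dir1 and /user/hadoop/dir2 into an archive named foo.har under /user/zoo:
        </p>
        <p>
          <code>hadoop archive -archiveName foo.har /user/hadoop/dir1 /user/hadoop/dir2 /user/zoo</code>
        </p>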
      </section>

      <section>
        <title> distcp </title>
        <p>
          Copies files or directories recursively. More information can be found at <a href="distcp.html">DistCp Guide</a>.
        </p>
        <p>
          <code>Usage: hadoop distcp &lt;srcurl&gt; &lt;desturl&gt;</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>srcurl</code></td>
            <td>Source URL</td>
          </tr>
          <tr>
            <td><code>desturl</code></td>
            <td>Destination URL</td>
          </tr>
        </table>
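        <p>
          For example, to copy a directory between two clusters (hypothetical namenode hosts and paths):
        </p>
        <p>
          <code>hadoop distcp hdfs://nn1:8020/user/hadoop/input hdfs://nn2:8020/user/hadoop/input</code>
        </p>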
      </section>

      <section>
        <title> fs </title>
        <p>
          <code>Usage: hadoop fs [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>]
          [COMMAND_OPTIONS]</code>
        </p>
        <p>
          Runs a generic filesystem user client.
        </p>
        <p>
          The various COMMAND_OPTIONS can be found in the <a href="hdfs_shell.html">HDFS Shell Guide</a>.
        </p>
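        <p>
          For example, listing a directory and copying a local file into HDFS (hypothetical paths):
        </p>
        <p>
          <code>hadoop fs -ls /user/hadoop</code><br/>
          <code>hadoop fs -put localfile.txt /user/hadoop/localfile.txt</code>
        </p>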
      </section>

      <section>
        <title> fsck </title>
        <p>
          Runs an HDFS filesystem checking utility. See <a href="hdfs_user_guide.html#Fsck">Fsck</a> for more info.
        </p>
        <p><code>Usage: hadoop fsck [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>]
          &lt;path&gt; [-move | -delete | -openforwrite] [-files [-blocks
          [-locations | -racks]]]</code></p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>&lt;path&gt;</code></td>
            <td>Start checking from this path.</td>
          </tr>
          <tr>
            <td><code>-move</code></td>
            <td>Move corrupted files to /lost+found.</td>
          </tr>
          <tr>
            <td><code>-delete</code></td>
            <td>Delete corrupted files.</td>
          </tr>
          <tr>
            <td><code>-openforwrite</code></td>
            <td>Print out files opened for write.</td>
          </tr>
          <tr>
            <td><code>-files</code></td>
            <td>Print out files being checked.</td>
          </tr>
          <tr>
            <td><code>-blocks</code></td>
            <td>Print out the block report.</td>
          </tr>
          <tr>
            <td><code>-locations</code></td>
            <td>Print out locations for every block.</td>
          </tr>
          <tr>
            <td><code>-racks</code></td>
            <td>Print out network topology for data-node locations.</td>
          </tr>
        </table>
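        <p>
          For example, to check the whole filesystem and print the files, blocks and block
          locations being examined:
        </p>
        <p>
          <code>hadoop fsck / -files -blocks -locations</code>
        </p>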
      </section>

      <section>
        <title> jar </title>
        <p>
          Runs a jar file. Users can bundle their MapReduce code in a jar file and execute it using this command.
        </p>
        <p>
          <code>Usage: hadoop jar &lt;jar&gt; [mainClass] args...</code>
        </p>
        <p>
          Streaming jobs are run via this command. Examples can be found in the
          <a href="streaming.html#More+usage+examples">Streaming examples</a>.
        </p>
        <p>
          The word-count example is also run using the jar command. It can be found in the
          <a href="mapred_tutorial.html#Usage">Wordcount example</a>.
        </p>
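        <p>
          For example, running a main class from a user jar (hypothetical jar, class and paths):
        </p>
        <p>
          <code>hadoop jar wordcount.jar org.myorg.WordCount /user/hadoop/input /user/hadoop/output</code>
        </p>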
      </section>

      <section>
        <title> job </title>
        <p>
          Command to interact with MapReduce jobs.
        </p>
        <p>
          <code>Usage: hadoop job [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>]
          [-submit &lt;job-file&gt;] | [-status &lt;job-id&gt;] |
          [-counter &lt;job-id&gt; &lt;group-name&gt; &lt;counter-name&gt;] | [-kill &lt;job-id&gt;] |
          [-events &lt;job-id&gt; &lt;from-event-#&gt; &lt;#-of-events&gt;] | [-history [all] &lt;jobOutputDir&gt;] |
          [-list [all]] | [-kill-task &lt;task-id&gt;] | [-fail-task &lt;task-id&gt;]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-submit &lt;job-file&gt;</code></td>
            <td>Submits the job.</td>
          </tr>
          <tr>
            <td><code>-status &lt;job-id&gt;</code></td>
            <td>Prints the map and reduce completion percentage and all job counters.</td>
          </tr>
          <tr>
            <td><code>-counter &lt;job-id&gt; &lt;group-name&gt; &lt;counter-name&gt;</code></td>
            <td>Prints the counter value.</td>
          </tr>
          <tr>
            <td><code>-kill &lt;job-id&gt;</code></td>
            <td>Kills the job.</td>
          </tr>
          <tr>
            <td><code>-events &lt;job-id&gt; &lt;from-event-#&gt; &lt;#-of-events&gt;</code></td>
            <td>Prints the event details received by the JobTracker for the given range.</td>
          </tr>
          <tr>
            <td><code>-history [all] &lt;jobOutputDir&gt;</code></td>
            <td>-history &lt;jobOutputDir&gt; prints job details, and failed and killed tip details. More details
            about the job, such as successful tasks and the task attempts made for each task, can be viewed by
            specifying the [all] option.</td>
          </tr>
          <tr>
            <td><code>-list [all]</code></td>
            <td>-list all displays all jobs. -list displays only jobs which are yet to complete.</td>
          </tr>
          <tr>
            <td><code>-kill-task &lt;task-id&gt;</code></td>
            <td>Kills the task. Killed tasks are NOT counted against failed attempts.</td>
          </tr>
          <tr>
            <td><code>-fail-task &lt;task-id&gt;</code></td>
            <td>Fails the task. Failed tasks are counted against failed attempts.</td>
          </tr>
        </table>
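        <p>
          For example, checking the status of a running job and then killing it (hypothetical job id):
        </p>
        <p>
          <code>hadoop job -status job_200707121733_0003</code><br/>
          <code>hadoop job -kill job_200707121733_0003</code>
        </p>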
      </section>

      <section>
        <title> pipes </title>
        <p>
          Runs a pipes job.
        </p>
        <p>
          <code>Usage: hadoop pipes [-conf &lt;path&gt;] [-jobconf &lt;key=value&gt;, &lt;key=value&gt;, ...]
          [-input &lt;path&gt;] [-output &lt;path&gt;] [-jar &lt;jar file&gt;] [-inputformat &lt;class&gt;]
          [-map &lt;class&gt;] [-partitioner &lt;class&gt;] [-reduce &lt;class&gt;] [-writer &lt;class&gt;]
          [-program &lt;executable&gt;] [-reduces &lt;num&gt;]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-conf &lt;path&gt;</code></td>
            <td>Configuration for the job</td>
          </tr>
          <tr>
            <td><code>-jobconf &lt;key=value&gt;, &lt;key=value&gt;, ...</code></td>
            <td>Add/override configuration for the job</td>
          </tr>
          <tr>
            <td><code>-input &lt;path&gt;</code></td>
            <td>Input directory</td>
          </tr>
          <tr>
            <td><code>-output &lt;path&gt;</code></td>
            <td>Output directory</td>
          </tr>
          <tr>
            <td><code>-jar &lt;jar file&gt;</code></td>
            <td>Jar filename</td>
          </tr>
          <tr>
            <td><code>-inputformat &lt;class&gt;</code></td>
            <td>InputFormat class</td>
          </tr>
          <tr>
            <td><code>-map &lt;class&gt;</code></td>
            <td>Java Map class</td>
          </tr>
          <tr>
            <td><code>-partitioner &lt;class&gt;</code></td>
            <td>Java Partitioner</td>
          </tr>
          <tr>
            <td><code>-reduce &lt;class&gt;</code></td>
            <td>Java Reduce class</td>
          </tr>
          <tr>
            <td><code>-writer &lt;class&gt;</code></td>
            <td>Java RecordWriter</td>
          </tr>
          <tr>
            <td><code>-program &lt;executable&gt;</code></td>
            <td>Executable URI</td>
          </tr>
          <tr>
            <td><code>-reduces &lt;num&gt;</code></td>
            <td>Number of reduces</td>
          </tr>
        </table>
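        <p>
          For example, running a compiled pipes executable (hypothetical paths and executable name):
        </p>
        <p>
          <code>hadoop pipes -program bin/wordcount -input /user/hadoop/input -output /user/hadoop/output</code>
        </p>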
      </section>

      <section>
        <title> version </title>
        <p>
          Prints the version.
        </p>
        <p>
          <code>Usage: hadoop version</code>
        </p>
      </section>

      <section>
        <title> CLASSNAME </title>
        <p>
          The hadoop script can be used to invoke any class.
        </p>
        <p>
          <code>Usage: hadoop CLASSNAME</code>
        </p>
        <p>
          Runs the class named CLASSNAME.
        </p>
      </section>

    </section>

    <section>
      <title> Administration Commands </title>
      <p>Commands useful for administrators of a Hadoop cluster.</p>
      <section>
        <title> balancer </title>
        <p>
          Runs a cluster balancing utility. An administrator can simply press Ctrl-C to stop the
          rebalancing process. See <a href="hdfs_user_guide.html#Rebalancer">Rebalancer</a> for more details.
        </p>
        <p>
          <code>Usage: hadoop balancer [-threshold &lt;threshold&gt;]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-threshold &lt;threshold&gt;</code></td>
            <td>Percentage of disk capacity. This overrides the default threshold.</td>
          </tr>
        </table>
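        <p>
          For example, to rebalance with a tighter 5% threshold instead of the default:
        </p>
        <p>
          <code>hadoop balancer -threshold 5</code>
        </p>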
      </section>

      <section>
        <title> daemonlog </title>
        <p>
          Gets or sets the log level for each daemon.
        </p>
        <p>
          <code>Usage: hadoop daemonlog -getlevel &lt;host:port&gt; &lt;name&gt;</code><br/>
          <code>Usage: hadoop daemonlog -setlevel &lt;host:port&gt; &lt;name&gt; &lt;level&gt;</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-getlevel &lt;host:port&gt; &lt;name&gt;</code></td>
            <td>Prints the log level of the daemon running at &lt;host:port&gt;.
            This command internally connects to http://&lt;host:port&gt;/logLevel?log=&lt;name&gt;</td>
          </tr>
          <tr>
            <td><code>-setlevel &lt;host:port&gt; &lt;name&gt; &lt;level&gt;</code></td>
            <td>Sets the log level of the daemon running at &lt;host:port&gt;.
            This command internally connects to http://&lt;host:port&gt;/logLevel?log=&lt;name&gt;</td>
          </tr>
        </table>
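        <p>
          For example, raising a daemon's logger to DEBUG (hypothetical host, port and logger name):
        </p>
        <p>
          <code>hadoop daemonlog -setlevel datanode1.example.com:50075 org.apache.hadoop.dfs.DataNode DEBUG</code>
        </p>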
      </section>

      <section>
        <title> datanode</title>
        <p>
          Runs an HDFS datanode.
        </p>
        <p>
          <code>Usage: hadoop datanode [-rollback]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-rollback</code></td>
            <td>Rolls back the datanode to the previous version. This should be used after stopping the datanode
            and distributing the old Hadoop version.</td>
          </tr>
        </table>
      </section>

      <section>
        <title> dfsadmin </title>
        <p>
          Runs an HDFS dfsadmin client.
        </p>
        <p>
          <code>Usage: hadoop dfsadmin [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>] [-report] [-safemode enter | leave | get | wait] [-refreshNodes]
          [-finalizeUpgrade] [-upgradeProgress status | details | force] [-metasave filename]
          [-setQuota &lt;quota&gt; &lt;dirname&gt;...&lt;dirname&gt;] [-clrQuota &lt;dirname&gt;...&lt;dirname&gt;]
          [-help [cmd]]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-report</code></td>
            <td>Reports basic filesystem information and statistics.</td>
          </tr>
          <tr>
            <td><code>-safemode enter | leave | get | wait</code></td>
            <td>Safe mode maintenance command.
            Safe mode is a Namenode state in which it <br/>
            1. does not accept changes to the name space (read-only), and <br/>
            2. does not replicate or delete blocks. <br/>
            Safe mode is entered automatically at Namenode startup, and is left
            automatically when the configured minimum percentage of blocks satisfies
            the minimum replication condition. Safe mode can also be entered manually,
            but then it can only be turned off manually as well.</td>
          </tr>
          <tr>
            <td><code>-refreshNodes</code></td>
            <td>Re-read the hosts and exclude files to update the set
            of Datanodes that are allowed to connect to the Namenode
            and those that should be decommissioned or recommissioned.</td>
          </tr>
          <tr>
            <td><code>-finalizeUpgrade</code></td>
            <td>Finalize the upgrade of HDFS.
            Datanodes delete their previous-version working directories,
            followed by the Namenode doing the same.
            This completes the upgrade process.</td>
          </tr>
          <tr>
            <td><code>-upgradeProgress status | details | force</code></td>
            <td>Request the current distributed upgrade status,
            a detailed status, or force the upgrade to proceed.</td>
          </tr>
          <tr>
            <td><code>-metasave filename</code></td>
            <td>Save the Namenode's primary data structures
            to &lt;filename&gt; in the directory specified by the hadoop.log.dir property.
            &lt;filename&gt; will contain one line for each of the following: <br/>
            1. Datanodes heartbeating with the Namenode<br/>
            2. Blocks waiting to be replicated<br/>
            3. Blocks currently being replicated<br/>
            4. Blocks waiting to be deleted</td>
          </tr>
          <tr>
            <td><code>-setQuota &lt;quota&gt; &lt;dirname&gt;...&lt;dirname&gt;</code></td>
            <td>Set the quota &lt;quota&gt; for each directory &lt;dirname&gt;.
            The directory quota is a long integer that puts a hard limit on the number of names in the directory tree.<br/>
            Best effort for the directory, with faults reported if<br/>
            1. N is not a positive integer, or<br/>
            2. the user is not an administrator, or<br/>
            3. the directory does not exist or is a file, or<br/>
            4. the directory would immediately exceed the new quota.</td>
          </tr>
          <tr>
            <td><code>-clrQuota &lt;dirname&gt;...&lt;dirname&gt;</code></td>
            <td>Clear the quota for each directory &lt;dirname&gt;.<br/>
            Best effort for the directory, with faults reported if<br/>
            1. the directory does not exist or is a file, or<br/>
            2. the user is not an administrator.<br/>
            It does not fault if the directory has no quota.</td>
          </tr>
          <tr>
            <td><code>-help [cmd]</code></td>
            <td>Displays help for the given command, or for all commands if none
            is specified.</td>
          </tr>
        </table>
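        <p>
          For example, putting the cluster into safe mode before maintenance and leaving it afterwards:
        </p>
        <p>
          <code>hadoop dfsadmin -safemode enter</code><br/>
          <code>hadoop dfsadmin -safemode leave</code>
        </p>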
      </section>

      <section>
        <title> jobtracker </title>
        <p>
          Runs the MapReduce JobTracker node.
        </p>
        <p>
          <code>Usage: hadoop jobtracker</code>
        </p>
      </section>

      <section>
        <title> namenode </title>
        <p>
          Runs the namenode. More info about upgrade, rollback and finalize is at
          <a href="hdfs_user_guide.html#Upgrade+and+Rollback">Upgrade and Rollback</a>.
        </p>
        <p>
          <code>Usage: hadoop namenode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-format</code></td>
            <td>Formats the namenode. It starts the namenode, formats it and then shuts it down.</td>
          </tr>
          <tr>
            <td><code>-upgrade</code></td>
            <td>The namenode should be started with the upgrade option after the distribution of a new Hadoop version.</td>
          </tr>
          <tr>
            <td><code>-rollback</code></td>
            <td>Rolls back the namenode to the previous version. This should be used after stopping the cluster
            and distributing the old Hadoop version.</td>
          </tr>
          <tr>
            <td><code>-finalize</code></td>
            <td>Finalize removes the previous state of the filesystem. The most recent upgrade becomes permanent,
            and the rollback option is no longer available. After finalization it shuts the namenode down.</td>
          </tr>
          <tr>
            <td><code>-importCheckpoint</code></td>
            <td>Loads the image from a checkpoint directory and saves it into the current one. The checkpoint
            directory is read from the fs.checkpoint.dir property.</td>
          </tr>
        </table>
      </section>

      <section>
        <title> secondarynamenode </title>
        <p>
          Runs the HDFS secondary namenode. See <a href="hdfs_user_guide.html#Secondary+Namenode">Secondary Namenode</a>
          for more info.
        </p>
        <p>
          <code>Usage: hadoop secondarynamenode [-checkpoint [force]] | [-geteditsize]</code>
        </p>
        <table>
          <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
          <tr>
            <td><code>-checkpoint [force]</code></td>
            <td>Checkpoints the secondary namenode if EditLog size >= fs.checkpoint.size.
            If -force is used, checkpoints irrespective of EditLog size.</td>
          </tr>
          <tr>
            <td><code>-geteditsize</code></td>
            <td>Prints the EditLog size.</td>
          </tr>
        </table>
      </section>

      <section>
        <title> tasktracker </title>
        <p>
          Runs a MapReduce TaskTracker node.
        </p>
        <p>
          <code>Usage: hadoop tasktracker</code>
        </p>
      </section>

    </section>

  </body>
</document>