- <?xml version="1.0"?>
- <!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
- -->
- <!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
- <document>
- <header>
- <title>Commands Guide</title>
- </header>
-
- <body>
- <section>
- <title>Overview</title>
- <p>
- All hadoop commands are invoked by the bin/hadoop script. Running the hadoop
- script without any arguments prints the description for all commands.
- </p>
- <p>
- <code>Usage: hadoop [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]</code>
- </p>
- <p>
- Hadoop has an option parsing framework that supports parsing generic options as well as running classes.
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>--config confdir</code></td>
- <td>Overrides the default configuration directory. Default is ${HADOOP_HOME}/conf.</td>
- </tr>
- <tr>
- <td><code>GENERIC_OPTIONS</code></td>
- <td>The common set of options supported by multiple commands.</td>
- </tr>
- <tr>
- <td><code>COMMAND</code><br/><code>COMMAND_OPTIONS</code></td>
- <td>Various commands with their options are described in the following sections. The commands
- have been grouped into <a href="commands_manual.html#User+Commands">User Commands</a>
- and <a href="commands_manual.html#Administration+Commands">Administration Commands</a>.</td>
- </tr>
- </table>
- <section>
- <title>Generic Options</title>
- <p>
- The following options are supported by <a href="commands_manual.html#dfsadmin">dfsadmin</a>,
- <a href="commands_manual.html#fs">fs</a>, <a href="commands_manual.html#fsck">fsck</a> and
- <a href="commands_manual.html#job">job</a>.
- Applications should implement
- <a href="ext:api/org/apache/hadoop/util/tool">Tool</a> to support
- <a href="ext:api/org/apache/hadoop/util/genericoptionsparser">
- GenericOptions</a>.
- </p>
- <table>
- <tr><th> GENERIC_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-conf <configuration file></code></td>
- <td>Specify an application configuration file.</td>
- </tr>
- <tr>
- <td><code>-D <property=value></code></td>
- <td>Use value for given property.</td>
- </tr>
- <tr>
- <td><code>-fs <local|namenode:port></code></td>
- <td>Specify a namenode.</td>
- </tr>
- <tr>
- <td><code>-jt <local|jobtracker:port></code></td>
- <td>Specify a job tracker. Applies only to <a href="commands_manual.html#job">job</a>.</td>
- </tr>
- <tr>
- <td><code>-files <comma separated list of files></code></td>
- <td>Specify comma separated files to be copied to the MapReduce cluster.
- Applies only to <a href="commands_manual.html#job">job</a>.</td>
- </tr>
- <tr>
- <td><code>-libjars <comma separated list of jars></code></td>
- <td>Specify comma separated jar files to include in the classpath.
- Applies only to <a href="commands_manual.html#job">job</a>.</td>
- </tr>
- <tr>
- <td><code>-archives <comma separated list of archives></code></td>
- <td>Specify comma separated archives to be unarchived on the compute machines.
- Applies only to <a href="commands_manual.html#job">job</a>.</td>
- </tr>
- </table>
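- <p>
- For example (the host name and property value below are illustrative), generic options can point a
- command at a particular namenode or override a single configuration property:
- </p>
- <p>
- <code>hadoop fs -fs hdfs://namenode.example.com:8020 -ls /</code><br/>
- <code>hadoop fs -D fs.trash.interval=60 -ls /</code>
- </p>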
- </section>
- </section>
-
- <section>
- <title> User Commands </title>
- <p>Commands useful for users of a hadoop cluster.</p>
- <section>
- <title> archive </title>
- <p>
- Creates a hadoop archive. More information can be found at <a href="hadoop_archives.html">Hadoop Archives</a>.
- </p>
- <p>
- <code>Usage: hadoop archive -archiveName NAME <src>* <dest></code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
- <tr>
- <td><code>-archiveName NAME</code></td>
- <td>Name of the archive to be created.</td>
- </tr>
- <tr>
- <td><code>src</code></td>
- <td>Filesystem pathnames which work as usual with regular expressions.</td>
- </tr>
- <tr>
- <td><code>dest</code></td>
- <td>Destination directory which would contain the archive.</td>
- </tr>
- </table>
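- <p>
- For example (the archive name and paths are illustrative), the following bundles two source
- directories into a single archive stored under /user/zoo:
- </p>
- <p>
- <code>hadoop archive -archiveName foo.har /user/hadoop/dir1 /user/hadoop/dir2 /user/zoo</code>
- </p>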
- </section>
-
- <section>
- <title> distcp </title>
- <p>
- Copy file or directories recursively. More information can be found at <a href="distcp.html">Hadoop DistCp Guide</a>.
- </p>
- <p>
- <code>Usage: hadoop distcp <srcurl> <desturl></code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>srcurl</code></td>
- <td>Source URL</td>
- </tr>
- <tr>
- <td><code>desturl</code></td>
- <td>Destination URL</td>
- </tr>
- </table>
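- <p>
- For example (host names and paths are illustrative; 8020 is the default namenode port), the
- following copies a directory between two HDFS clusters:
- </p>
- <p>
- <code>hadoop distcp hdfs://nn1.example.com:8020/foo/bar hdfs://nn2.example.com:8020/bar/foo</code>
- </p>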
- </section>
-
- <section>
- <title> fs </title>
- <p>
- <code>Usage: hadoop fs [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>]
- [COMMAND_OPTIONS]</code>
- </p>
- <p>
- Runs a generic filesystem user client.
- </p>
- <p>
- The various COMMAND_OPTIONS can be found at <a href="hdfs_shell.html">Hadoop FS Shell Guide</a>.
- </p>
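- <p>
- For example (paths are illustrative), the following lists a directory and then copies a
- local file into it:
- </p>
- <p>
- <code>hadoop fs -ls /user/hadoop</code><br/>
- <code>hadoop fs -put localfile.txt /user/hadoop/</code>
- </p>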
- </section>
-
- <section>
- <title> fsck </title>
- <p>
- Runs an HDFS filesystem checking utility. See <a href="hdfs_user_guide.html#Fsck">Fsck</a> for more info.
- </p>
- <p><code>Usage: hadoop fsck [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>]
- <path> [-move | -delete | -openforwrite] [-files [-blocks
- [-locations | -racks]]]</code></p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
- <tr>
- <td><code><path></code></td>
- <td>Start checking from this path.</td>
- </tr>
- <tr>
- <td><code>-move</code></td>
- <td>Move corrupted files to /lost+found</td>
- </tr>
- <tr>
- <td><code>-delete</code></td>
- <td>Delete corrupted files.</td>
- </tr>
- <tr>
- <td><code>-openforwrite</code></td>
- <td>Print out files opened for write.</td>
- </tr>
- <tr>
- <td><code>-files</code></td>
- <td>Print out files being checked.</td>
- </tr>
- <tr>
- <td><code>-blocks</code></td>
- <td>Print out block report.</td>
- </tr>
- <tr>
- <td><code>-locations</code></td>
- <td>Print out locations for every block.</td>
- </tr>
- <tr>
- <td><code>-racks</code></td>
- <td>Print out network topology for data-node locations.</td>
- </tr>
- </table>
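- <p>
- For example (the path is illustrative), the following checks a directory tree and prints the
- files, blocks and block locations encountered along the way:
- </p>
- <p>
- <code>hadoop fsck /user/hadoop -files -blocks -locations</code>
- </p>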
- </section>
-
- <section>
- <title> jar </title>
- <p>
- Runs a jar file. Users can bundle their MapReduce code in a jar file and execute it using this command.
- </p>
- <p>
- <code>Usage: hadoop jar <jar> [mainClass] args...</code>
- </p>
- <p>
- Streaming jobs are run via this command. For examples, see the
- <a href="streaming.html#More+usage+examples">Streaming examples</a>.
- </p>
- <p>
- The word count example is also run using the jar command. See the
- <a href="mapred_tutorial.html#Usage">Wordcount example</a>.
- </p>
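- <p>
- For example (the jar file name and paths are illustrative), the wordcount example can be
- invoked as:
- </p>
- <p>
- <code>hadoop jar hadoop-examples.jar wordcount /user/hadoop/input /user/hadoop/output</code>
- </p>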
- </section>
-
- <section>
- <title> job </title>
- <p>
- Command to interact with MapReduce jobs.
- </p>
- <p>
- <code>Usage: hadoop job [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>]
- [-submit <job-file>] | [-status <job-id>] |
- [-counter <job-id> <group-name> <counter-name>] | [-kill <job-id>] |
- [-events <job-id> <from-event-#> <#-of-events>] | [-history [all] <jobOutputDir>] |
- [-list [all]] | [-kill-task <task-id>] | [-fail-task <task-id>] |
- [-set-priority <job-id> <priority>]</code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-submit <job-file></code></td>
- <td>Submits the job.</td>
- </tr>
- <tr>
- <td><code>-status <job-id></code></td>
- <td>Prints the map and reduce completion percentage and all job counters.</td>
- </tr>
- <tr>
- <td><code>-counter <job-id> <group-name> <counter-name></code></td>
- <td>Prints the counter value.</td>
- </tr>
- <tr>
- <td><code>-kill <job-id></code></td>
- <td>Kills the job.</td>
- </tr>
- <tr>
- <td><code>-events <job-id> <from-event-#> <#-of-events></code></td>
- <td>Prints the details of the events received by the jobtracker for the given range.</td>
- </tr>
- <tr>
- <td><code>-history [all] <jobOutputDir></code></td>
- <td>-history <jobOutputDir> prints job details, and failed and killed tip details. More details
- about the job, such as successful tasks and the task attempts made for each task, can be viewed by
- specifying the [all] option.</td>
- </tr>
- <tr>
- <td><code>-list [all]</code></td>
- <td>-list all displays all jobs. -list displays only jobs which are yet to complete.</td>
- </tr>
- <tr>
- <td><code>-kill-task <task-id></code></td>
- <td>Kills the task. Killed tasks are NOT counted against failed attempts.</td>
- </tr>
- <tr>
- <td><code>-fail-task <task-id></code></td>
- <td>Fails the task. Failed tasks are counted against failed attempts.</td>
- </tr>
- <tr>
- <td><code>-set-priority <job-id> <priority></code></td>
- <td>Changes the priority of the job.
- Allowed priority values are VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW</td>
- </tr>
- </table>
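- <p>
- For example (the job id is illustrative), the following queries the status of a job and then
- lowers its priority:
- </p>
- <p>
- <code>hadoop job -status job_200912011230_0001</code><br/>
- <code>hadoop job -set-priority job_200912011230_0001 LOW</code>
- </p>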
- </section>
-
- <section>
- <title> pipes </title>
- <p>
- Runs a pipes job.
- </p>
- <p>
- <code>Usage: hadoop pipes [-conf <path>] [-jobconf <key=value>, <key=value>, ...]
- [-input <path>] [-output <path>] [-jar <jar file>] [-inputformat <class>]
- [-map <class>] [-partitioner <class>] [-reduce <class>] [-writer <class>]
- [-program <executable>] [-reduces <num>] </code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-conf <path></code></td>
- <td>Configuration for job</td>
- </tr>
- <tr>
- <td><code>-jobconf <key=value>, <key=value>, ...</code></td>
- <td>Add/override configuration for job</td>
- </tr>
- <tr>
- <td><code>-input <path></code></td>
- <td>Input directory</td>
- </tr>
- <tr>
- <td><code>-output <path></code></td>
- <td>Output directory</td>
- </tr>
- <tr>
- <td><code>-jar <jar file></code></td>
- <td>Jar filename</td>
- </tr>
- <tr>
- <td><code>-inputformat <class></code></td>
- <td>InputFormat class</td>
- </tr>
- <tr>
- <td><code>-map <class></code></td>
- <td>Java Map class</td>
- </tr>
- <tr>
- <td><code>-partitioner <class></code></td>
- <td>Java Partitioner</td>
- </tr>
- <tr>
- <td><code>-reduce <class></code></td>
- <td>Java Reduce class</td>
- </tr>
- <tr>
- <td><code>-writer <class></code></td>
- <td>Java RecordWriter</td>
- </tr>
- <tr>
- <td><code>-program <executable></code></td>
- <td>Executable URI</td>
- </tr>
- <tr>
- <td><code>-reduces <num></code></td>
- <td>Number of reduces</td>
- </tr>
- </table>
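- <p>
- For example (file and directory names are illustrative, and the configuration file is assumed
- to name the C++ executable and the record formats), a pipes job can be submitted as:
- </p>
- <p>
- <code>hadoop pipes -conf word.xml -input in-dir -output out-dir</code>
- </p>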
- </section>
- <section>
- <title> queue </title>
- <p>
- Command to interact with and view job queue information.
- </p>
- <p>
- <code>Usage: hadoop queue [-list] | [-info <job-queue-name> [-showJobs]] | [-showacls]</code>
- </p>
- <table>
- <tr>
- <th> COMMAND_OPTION </th><th> Description </th>
- </tr>
- <tr>
- <td><code>-list</code> </td>
- <td>Gets the list of job queues configured in the system, along with the scheduling information
- associated with them.
- </td>
- </tr>
- <tr>
- <td><code>-info <job-queue-name> [-showJobs]</code></td>
- <td>
- Displays the queue information and associated scheduling information of a particular
- job queue. If the -showJobs option is present, a list of jobs submitted to that particular
- queue is also displayed.
- </td>
- </tr>
- <tr>
- <td><code>-showacls</code></td>
- <td>Displays the queue name and associated queue operations allowed for the current user.
- The list consists of only those queues to which the user has access.
- </td>
- </tr>
- </table>
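- <p>
- For example, the following displays the scheduling information of the queue named
- "default" (the queue configured out of the box) together with the jobs submitted to it:
- </p>
- <p>
- <code>hadoop queue -info default -showJobs</code>
- </p>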
- </section>
- <section>
- <title> version </title>
- <p>
- Prints the version.
- </p>
- <p>
- <code>Usage: hadoop version</code>
- </p>
- </section>
- <section>
- <title> CLASSNAME </title>
- <p>
- The hadoop script can be used to invoke any class.
- </p>
- <p>
- <code>Usage: hadoop CLASSNAME</code>
- </p>
- <p>
- Runs the class named CLASSNAME.
- </p>
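- <p>
- Any class with a main method on the hadoop classpath can be run this way. For example, the
- stock VersionInfo utility prints version and build information:
- </p>
- <p>
- <code>hadoop org.apache.hadoop.util.VersionInfo</code>
- </p>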
- </section>
- </section>
- <section>
- <title> Administration Commands </title>
- <p>Commands useful for administrators of a hadoop cluster.</p>
- <section>
- <title> balancer </title>
- <p>
- Runs a cluster balancing utility. An administrator can simply press Ctrl-C to stop the
- rebalancing process. See <a href="hdfs_user_guide.html#Rebalancer">Rebalancer</a> for more details.
- </p>
- <p>
- <code>Usage: hadoop balancer [-threshold <threshold>]</code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-threshold <threshold></code></td>
- <td>Percentage of disk capacity. This overrides the default threshold.</td>
- </tr>
- </table>
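- <p>
- For example, the following runs the balancer until every datanode's utilization is within
- 5 percent of the overall cluster utilization:
- </p>
- <p>
- <code>hadoop balancer -threshold 5</code>
- </p>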
- </section>
-
- <section>
- <title> daemonlog </title>
- <p>
- Get/Set the log level for each daemon.
- </p>
- <p>
- <code>Usage: hadoop daemonlog -getlevel <host:port> <name></code><br/>
- <code>Usage: hadoop daemonlog -setlevel <host:port> <name> <level></code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-getlevel <host:port> <name></code></td>
- <td>Prints the log level of the daemon running at <host:port>.
- This command internally connects to http://<host:port>/logLevel?log=<name></td>
- </tr>
- <tr>
- <td><code>-setlevel <host:port> <name> <level></code></td>
- <td>Sets the log level of the daemon running at <host:port>.
- This command internally connects to http://<host:port>/logLevel?log=<name></td>
- </tr>
- </table>
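- <p>
- For example (the host name is illustrative; 50070 is the default Namenode HTTP port), the
- following inspects and then raises the log level of the Namenode logger:
- </p>
- <p>
- <code>hadoop daemonlog -getlevel namenode.example.com:50070 org.apache.hadoop.hdfs.server.namenode.NameNode</code><br/>
- <code>hadoop daemonlog -setlevel namenode.example.com:50070 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG</code>
- </p>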
- </section>
-
- <section>
- <title> datanode</title>
- <p>
- Runs an HDFS datanode.
- </p>
- <p>
- <code>Usage: hadoop datanode [-rollback]</code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-rollback</code></td>
- <td>Rolls back the datanode to the previous version. This should be used after stopping the datanode
- and distributing the old hadoop version.</td>
- </tr>
- </table>
- </section>
-
- <section>
- <title> dfsadmin </title>
- <p>
- Runs an HDFS dfsadmin client.
- </p>
- <p>
- <code>Usage: hadoop dfsadmin [</code><a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a><code>] [-report] [-safemode enter | leave | get | wait] [-refreshNodes]
- [-finalizeUpgrade] [-printTopology] [-upgradeProgress status | details | force] [-metasave filename]
- [-setQuota <quota> <dirname>...<dirname>] [-clrQuota <dirname>...<dirname>]
- [-restoreFailedStorage true|false|check]
- [-help [cmd]]</code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-report</code></td>
- <td>Reports basic filesystem information and statistics.</td>
- </tr>
- <tr>
- <td><code>-safemode enter | leave | get | wait</code></td>
- <td>Safe mode maintenance command.
- Safe mode is a Namenode state in which it <br/>
- 1. does not accept changes to the name space (read-only) <br/>
- 2. does not replicate or delete blocks. <br/>
- Safe mode is entered automatically at Namenode startup, and the
- Namenode leaves safe mode automatically when the configured minimum
- percentage of blocks satisfies the minimum replication
- condition. Safe mode can also be entered manually, but then
- it can only be turned off manually as well.</td>
- </tr>
- <tr>
- <td><code>-refreshNodes</code></td>
- <td>Re-read the hosts and exclude files to update the set
- of Datanodes that are allowed to connect to the Namenode
- and those that should be decommissioned or recommissioned.</td>
- </tr>
- <tr>
- <td><code>-finalizeUpgrade</code></td>
- <td>Finalize upgrade of HDFS.
- Datanodes delete their previous version working directories,
- followed by Namenode doing the same.
- This completes the upgrade process.</td>
- </tr>
- <tr>
- <td><code>-printTopology</code></td>
- <td>Print a tree of the rack/datanode topology of the
- cluster as seen by the NameNode.</td>
- </tr>
- <tr>
- <td><code>-upgradeProgress status | details | force</code></td>
- <td>Request current distributed upgrade status,
- a detailed status or force the upgrade to proceed.</td>
- </tr>
- <tr>
- <td><code>-metasave filename</code></td>
- <td>Save the Namenode's primary data structures
- to <filename> in the directory specified by the hadoop.log.dir property.
- <filename> will contain one line for each of the following: <br/>
- 1. Datanodes heart beating with the Namenode<br/>
- 2. Blocks waiting to be replicated<br/>
- 3. Blocks currently being replicated<br/>
- 4. Blocks waiting to be deleted</td>
- </tr>
- <tr>
- <td><code>-setQuota <quota> <dirname>...<dirname></code></td>
- <td>Set the quota <quota> for each directory <dirname>.
- The directory quota is a long integer that puts a hard limit on the number of names in the directory tree.<br/>
- Best effort for the directory, with faults reported if<br/>
- 1. <quota> is not a positive integer, or<br/>
- 2. the user is not an administrator, or<br/>
- 3. the directory does not exist or is a file, or<br/>
- 4. the directory would immediately exceed the new quota.</td>
- </tr>
- <tr>
- <td><code>-clrQuota <dirname>...<dirname></code></td>
- <td>Clear the quota for each directory <dirname>.<br/>
- Best effort for the directory, with faults reported if<br/>
- 1. the directory does not exist or is a file, or<br/>
- 2. the user is not an administrator.<br/>
- It does not fault if the directory has no quota.</td>
- </tr>
- <tr>
- <td><code>-restoreFailedStorage true | false | check</code></td>
- <td>Turns on/off the automatic attempts to restore failed storage replicas.
- If a failed storage location becomes available again, the system will attempt to restore
- edits and/or the fsimage during a checkpoint. The 'check' option returns the current setting.</td>
- </tr>
- <tr>
- <td><code>-help [cmd]</code></td>
- <td> Displays help for the given command or all commands if none
- is specified.</td>
- </tr>
- </table>
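- <p>
- For example (the directory name is illustrative), the following prints cluster statistics,
- queries the safe mode state, and places a quota of 100 names on a directory:
- </p>
- <p>
- <code>hadoop dfsadmin -report</code><br/>
- <code>hadoop dfsadmin -safemode get</code><br/>
- <code>hadoop dfsadmin -setQuota 100 /user/hadoop/quota-dir</code>
- </p>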
- </section>
- <section>
- <title>mradmin</title>
- <p>Runs the MR admin client.</p>
- <p><code>Usage: hadoop mradmin [</code>
- <a href="commands_manual.html#Generic+Options">GENERIC_OPTIONS</a>
- <code>] [-refreshQueues] </code></p>
- <table>
- <tr>
- <th> COMMAND_OPTION </th><th> Description </th>
- </tr>
- <tr>
- <td><code>-refreshQueues</code></td>
- <td>Refresh the access control lists and state of queues configured in
- the system. These properties are loaded from
- <code>mapred-queues.xml</code>. If the file is malformed, then the
- existing properties are not disturbed. New operations on jobs in the
- queues will be subjected to checks against the refreshed ACLs. Likewise,
- new jobs will be accepted to queues only if the queue state is running.
- </td>
- </tr>
- </table>
- </section>
- <section>
- <title> jobtracker </title>
- <p>
- Runs the MapReduce JobTracker node.
- </p>
- <p>
- <code>Usage: hadoop jobtracker</code>
- </p>
- </section>
-
- <section>
- <title> namenode </title>
- <p>
- Runs the namenode. More information about the upgrade, rollback and finalize options can be found at
- <a href="hdfs_user_guide.html#Upgrade+and+Rollback">Upgrade and Rollback</a>.
- </p>
- <p>
- <code>Usage: hadoop namenode [-regular] | [-checkpoint] | [-backup] | [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]</code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-regular</code></td>
- <td>Start namenode in standard, active role rather than as backup or checkpoint node. This is the default role.</td>
- </tr>
- <tr>
- <td><code>-checkpoint</code></td>
- <td>Start namenode in checkpoint role, creating periodic checkpoints of the active namenode metadata.</td>
- </tr>
- <tr>
- <td><code>-backup</code></td>
- <td>Start namenode in backup role, maintaining an up-to-date in-memory copy of the namespace and creating periodic checkpoints.</td>
- </tr>
- <tr>
- <td><code>-format</code></td>
- <td>Formats the namenode. It starts the namenode, formats it and then shuts it down.</td>
- </tr>
- <tr>
- <td><code>-upgrade</code></td>
- <td>The namenode should be started with the upgrade option after a new hadoop version has been distributed.</td>
- </tr>
- <tr>
- <td><code>-rollback</code></td>
- <td>Rolls back the namenode to the previous version. This should be used after stopping the cluster
- and distributing the old hadoop version.</td>
- </tr>
- <tr>
- <td><code>-finalize</code></td>
- <td>Finalize removes the previous state of the file system. The most recent upgrade becomes permanent,
- and the rollback option is no longer available. After finalization it shuts the namenode down.</td>
- </tr>
- <tr>
- <td><code>-importCheckpoint</code></td>
- <td>Loads the image from a checkpoint directory and saves it into the current one. The checkpoint
- directory is read from the fs.checkpoint.dir property.</td>
- </tr>
- </table>
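- <p>
- For example, a typical upgrade cycle starts the namenode with the upgrade option after
- distributing a new hadoop version and, once the upgrade has been verified and the namenode
- stopped, finalizes it:
- </p>
- <p>
- <code>hadoop namenode -upgrade</code><br/>
- <code>hadoop namenode -finalize</code>
- </p>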
- </section>
-
- <section>
- <title> secondarynamenode </title>
- <p>
- Use of the Secondary NameNode has been deprecated. Instead, consider using a
- <a href="hdfs_user_guide.html#Checkpoint+node">Checkpoint node</a> or
- <a href="hdfs_user_guide.html#Backup+node">Backup node</a>. Runs the HDFS secondary
- namenode. See <a href="hdfs_user_guide.html#Secondary+NameNode">Secondary NameNode</a>
- for more info.
- </p>
- <p>
- <code>Usage: hadoop secondarynamenode [-checkpoint [force]] | [-geteditsize]</code>
- </p>
- <table>
- <tr><th> COMMAND_OPTION </th><th> Description </th></tr>
-
- <tr>
- <td><code>-checkpoint [force]</code></td>
- <td>Checkpoints the secondary namenode if the EditLog size >= fs.checkpoint.size.
- If force is used, it checkpoints irrespective of the EditLog size.</td>
- </tr>
- <tr>
- <td><code>-geteditsize</code></td>
- <td>Prints the EditLog size.</td>
- </tr>
- </table>
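- <p>
- For example, the following prints the current EditLog size and then forces a checkpoint
- regardless of that size:
- </p>
- <p>
- <code>hadoop secondarynamenode -geteditsize</code><br/>
- <code>hadoop secondarynamenode -checkpoint force</code>
- </p>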
- </section>
-
- <section>
- <title> tasktracker </title>
- <p>
- Runs a MapReduce TaskTracker node.
- </p>
- <p>
- <code>Usage: hadoop tasktracker</code>
- </p>
- </section>
-
- </section>
-
- </body>
- </document>