
HDFS-8256. '-storagepolicies , -blockId ,-replicaDetails ' options are missed out in usage and from documentation (Contributed by J.Andreina)

(cherry picked from commit a2bd6217ebd68ae8cdd7814722659eebcf53004b)

Conflicts:
	hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
Vinayakumar B 10 years ago
parent
commit
d1da842f86

+ 3 - 0
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

@@ -494,6 +494,9 @@ Release 2.8.0 - UNRELEASED
     HDFS-8490. Typo in trace enabled log in ExceptionHandler of WebHDFS.
     (Archana T via ozawa)
 
+    HDFS-8256. "-storagepolicies , -blockId ,-replicaDetails " options are missed
+    out in usage and from documentation (J.Andreina via vinayakumarb)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

+ 6 - 4
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java

@@ -78,7 +78,9 @@ public class DFSck extends Configured implements Tool {
   private static final String USAGE = "Usage: hdfs fsck <path> "
       + "[-list-corruptfileblocks | "
       + "[-move | -delete | -openforwrite] "
-      + "[-files [-blocks [-locations | -racks]]]]\n"
+      + "[-files [-blocks [-locations | -racks | -replicaDetails]]]] "
+      + "[-includeSnapshots] "
+      + "[-storagepolicies] [-blockId <blk_Id>]\n"
       + "\t<path>\tstart checking from this path\n"
       + "\t-move\tmove corrupted files to /lost+found\n"
       + "\t-delete\tdelete corrupted files\n"
@@ -93,11 +95,11 @@ public class DFSck extends Configured implements Tool {
       + "\t-files -blocks -locations\tprint out locations for every block\n"
       + "\t-files -blocks -racks" 
       + "\tprint out network topology for data-node locations\n"
-      + "\t-storagepolicies\tprint out storage policy summary for the blocks\n\n"
+      + "\t-files -blocks -replicaDetails\tprint out each replica details \n"
+      + "\t-storagepolicies\tprint out storage policy summary for the blocks\n"
       + "\t-blockId\tprint out which file this blockId belongs to, locations"
       + " (nodes, racks) of this block, and other diagnostics info"
-      + " (under replicated, corrupted or not, etc)\n"
-      + "\t-replicaDetails\tprint out each replica details \n\n"
+      + " (under replicated, corrupted or not, etc)\n\n"
       + "Please Note:\n"
       + "\t1. By default fsck ignores files opened for write, "
       + "use -openforwrite to report such files. They are usually "

+ 5 - 1
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md

@@ -99,8 +99,9 @@ Usage:
        hdfs fsck <path>
               [-list-corruptfileblocks |
               [-move | -delete | -openforwrite]
-              [-files [-blocks [-locations | -racks]]]
+              [-files [-blocks [-locations | -racks | -replicaDetails]]]
               [-includeSnapshots] [-showprogress]
+              [-storagepolicies] [-blockId <blk_Id>]
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
@@ -110,11 +111,14 @@ Usage:
 | `-files` `-blocks` | Print out the block report |
 | `-files` `-blocks` `-locations` | Print out locations for every block. |
 | `-files` `-blocks` `-racks` | Print out network topology for data-node locations. |
+| `-files` `-blocks` `-replicaDetails` | Print out each replica details. |
 | `-includeSnapshots` | Include snapshot data if the given path indicates a snapshottable directory or there are snapshottable directories under it. |
 | `-list-corruptfileblocks` | Print out list of missing blocks and files they belong to. |
 | `-move` | Move corrupted files to /lost+found. |
 | `-openforwrite` | Print out files opened for write. |
 | `-showprogress` | Print out dots for progress in output. Default is OFF (no progress). |
+| `-storagepolicies` | Print out storage policy summary for the blocks. |
+| `-blockId` | Print out information about the block. |
 
 Runs the HDFS filesystem checking utility. See [fsck](./HdfsUserGuide.html#fsck) for more info.
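As a usage illustration (not part of the commit itself), the newly documented options could be invoked as sketched below; the path `/user/data` and the block id `blk_1073741825` are placeholders for a real path and block id on a running cluster:

       # Summarize storage policies for the blocks under a path
       hdfs fsck /user/data -storagepolicies

       # Print per-replica details for every block of the files under a path
       hdfs fsck /user/data -files -blocks -replicaDetails

       # Report which file a given block belongs to, its locations, and health
       hdfs fsck / -blockId blk_1073741825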