
HADOOP-11865. Incorrect path mentioned in document for accessing script files (J.Andreina via aw)

Allen Wittenauer 10 years ago
commit 8b69c825e5

+ 3 - 0
hadoop-common-project/hadoop-common/CHANGES.txt

@@ -445,6 +445,9 @@ Trunk (Unreleased)

     HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw)

+    HADOOP-11865. Incorrect path mentioned in document for accessing script
+    files (J.Andreina via aw)
+
   OPTIMIZATIONS

     HADOOP-7761. Improve the performance of raw comparisons. (todd)

+ 1 - 1
hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm

@@ -101,7 +101,7 @@ The Aggregation interval is configured via the property :

 $H3 Start/Stop the KMS

-To start/stop KMS use KMS's bin/kms.sh script. For example:
+To start/stop KMS use KMS's sbin/kms.sh script. For example:

     hadoop-${project.version} $ sbin/kms.sh start


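For reference, the corrected invocation from the root of an extracted Hadoop distribution would look roughly like this (a sketch assuming the standard `sbin/` layout; the `stop` subcommand is shown alongside `start` for completeness):

    hadoop-${project.version} $ sbin/kms.sh start
    hadoop-${project.version} $ sbin/kms.sh stop
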
+ 2 - 2
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUserGuide.md

@@ -307,7 +307,7 @@ When Hadoop is upgraded on an existing cluster, as with any software upgrade, it

 *   Stop the cluster and distribute new version of Hadoop.

-*   Run the new version with `-upgrade` option (`bin/start-dfs.sh -upgrade`).
+*   Run the new version with `-upgrade` option (`sbin/start-dfs.sh -upgrade`).

 *   Most of the time, cluster works just fine. Once the new HDFS is
     considered working well (may be after a few days of operation),
@@ -319,7 +319,7 @@ When Hadoop is upgraded on an existing cluster, as with any software upgrade, it

     * stop the cluster and distribute earlier version of Hadoop.

-    * start the cluster with rollback option. (`bin/start-dfs.sh -rollback`).
+    * start the cluster with rollback option. (`sbin/start-dfs.sh -rollback`).

 When upgrading to a new version of HDFS, it is necessary to rename or delete any paths that are reserved in the new version of HDFS. If the NameNode encounters a reserved path during upgrade, it will print an error like the following:
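Taken together, the corrected upgrade and rollback steps would be run roughly as follows (a sketch assuming the cluster is stopped and the matching Hadoop version has been redistributed before each step; `sbin/stop-dfs.sh` is the companion stop script):

    $ sbin/start-dfs.sh -upgrade     # bring the new version up in upgrade mode
    $ sbin/stop-dfs.sh               # before rolling back, stop the upgraded cluster
    $ sbin/start-dfs.sh -rollback    # restart the earlier version with the pre-upgrade state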