
HDFS-2893. The start/stop scripts don't start/stop the 2NN when using the default configuration. Contributed by Eli Collins

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1240928 13f79535-47bb-0310-9956-ffa450edef68
Eli Collins, 13 years ago
parent commit b837bbb7d5
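Background on the fix: with an out-of-the-box configuration, "hdfs getconf -secondarynamenodes" reports 0.0.0.0 or an empty string, and the old scripts treated that value as "not configured" and skipped the secondary NameNode (2NN) entirely. A hedged shell sketch of the pre-patch behavior, based on the comment and check removed in the hunks below; the output line is illustrative, not a captured transcript:

# Default configuration, no secondary namenode address set explicitly.
$ hdfs getconf -secondarynamenodes
0.0.0.0

# Pre-patch, start-dfs.sh compared this value against '0.0.0.0' and bailed out
# with "Secondary namenodes are not configured. Cannot start secondary
# namenodes." Post-patch, it unconditionally starts the 2NN on whatever host
# getconf returns.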

+ 3 - 0
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

@@ -409,6 +409,9 @@ Release 0.23.1 - UNRELEASED
     HDFS-2889. getNumCurrentReplicas is package private but should be public on
     0.23 (see HDFS-2408). (Gregory Chanan via atm)
 
+    HDFS-2893. The start/stop scripts don't start/stop the 2NN when
+    using the default configuration. (eli)
+
 Release 0.23.0 - 2011-11-01 
 
   INCOMPATIBLE CHANGES

+ 3 - 12
hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

@@ -59,7 +59,7 @@ echo "Starting namenodes on [$NAMENODES]"
   --script "$bin/hdfs" start namenode $nameStartOpt
 
 #---------------------------------------------------------
-# datanodes (using defalut slaves file)
+# datanodes (using default slaves file)
 
 if [ -n "$HADOOP_SECURE_DN_USER" ]; then
   echo \
@@ -74,22 +74,13 @@ fi
 #---------------------------------------------------------
 # secondary namenodes (if any)
 
-# if there are no secondary namenodes configured it returns
-# 0.0.0.0 or empty string
 SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>&-)
-SECONDARY_NAMENODES=${SECONDARY_NAMENODES:='0.0.0.0'}
 
-if [ "$SECONDARY_NAMENODES" = '0.0.0.0' ] ; then
-  echo \
-    "Secondary namenodes are not configured. " \
-    "Cannot start secondary namenodes."
-else
-  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
+echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
 
-  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
     --config "$HADOOP_CONF_DIR" \
     --hostnames "$SECONDARY_NAMENODES" \
     --script "$bin/hdfs" start secondarynamenode
-fi
 
 # eof
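For readability, the secondary namenode section of start-dfs.sh as it reads after this hunk is applied (reconstructed from the diff above, nothing beyond it):

#---------------------------------------------------------
# secondary namenodes (if any)

SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>&-)

echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --hostnames "$SECONDARY_NAMENODES" \
    --script "$bin/hdfs" start secondarynamenode

The stop-dfs.sh counterpart below is identical apart from echoing "Stopping" and passing "stop" to the daemon script.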

+ 2 - 11
hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh

@@ -50,22 +50,13 @@ fi
 #---------------------------------------------------------
 # secondary namenodes (if any)
 
-# if there are no secondary namenodes configured it returns
-# 0.0.0.0 or empty string
 SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>&-)
-SECONDARY_NAMENODES=${SECONDARY_NAMENODES:-'0.0.0.0'}
 
-if [ "$SECONDARY_NAMENODES" = '0.0.0.0' ] ; then
-  echo \
-    "Secondary namenodes are not configured. " \
-    "Cannot stop secondary namenodes."
-else
-  echo "Stopping secondary namenodes [$SECONDARY_NAMENODES]"
+echo "Stopping secondary namenodes [$SECONDARY_NAMENODES]"
 
-  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
     --config "$HADOOP_CONF_DIR" \
     --hostnames "$SECONDARY_NAMENODES" \
     --script "$bin/hdfs" stop secondarynamenode
-fi
 
 # eof
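A hedged usage sketch of the net effect (not part of the commit; it assumes the scripts are invoked from the installed sbin directory and that jps is available to list the Java daemons):

# After this change, a default single-node setup manages the 2NN together with
# the other daemons; no extra configuration is needed.
$ sbin/start-dfs.sh    # now also prints: Starting secondary namenodes [0.0.0.0]
$ jps                  # SecondaryNameNode should be listed alongside NameNode and DataNode
$ sbin/stop-dfs.sh     # stops the secondary namenode as well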

+ 1 - 3
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml

@@ -253,9 +253,7 @@
      The secondary NameNode merges the fsimage and the edits log files periodically
      and keeps edits log size within a limit. It is usually run on a
      different machine than the primary NameNode since its memory requirements
-     are on the same order as the primary NameNode. The secondary
-     NameNode is started by <code>bin/start-dfs.sh</code> on the nodes 
-     specified in <code>conf/masters</code> file.
+     are on the same order as the primary NameNode.
    </p>
    <p>
      The start of the checkpoint process on the secondary NameNode is