
HADOOP-13932. Fix indefinite article in comments (Contributed by LiXin Ge via Daniel Templeton)

Daniel Templeton 8 years ago
Parent
Current commit
e216e8e233

+ 1 - 1
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImagePreTransactionalStorageInspector.java

@@ -41,7 +41,7 @@ import org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile;
 import org.apache.hadoop.io.IOUtils;

 /**
- * Inspects a FSImage storage directory in the "old" (pre-HDFS-1073) format.
+ * Inspects an FSImage storage directory in the "old" (pre-HDFS-1073) format.
  * This format has the following data files:
  *   - fsimage
  *   - fsimage.ckpt (when checkpoint is being uploaded)

+ 4 - 4
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsRollingUpgrade.md

@@ -65,7 +65,7 @@ Note that rolling upgrade is supported only from Hadoop-2.4.0 onwards.
 
 
 ### Upgrade without Downtime

-In a HA cluster, there are two or more *NameNodes (NNs)*, many *DataNodes (DNs)*,
+In an HA cluster, there are two or more *NameNodes (NNs)*, many *DataNodes (DNs)*,
 a few *JournalNodes (JNs)* and a few *ZooKeeperNodes (ZKNs)*.
 *JNs* is relatively stable and does not require upgrade when upgrading HDFS in most of the cases.
 In the rolling upgrade procedure described here,
@@ -76,7 +76,7 @@ Upgrading *JNs* and *ZKNs* may incur cluster downtime.

 Suppose there are two namenodes *NN1* and *NN2*,
 where *NN1* and *NN2* are respectively in active and standby states.
-The following are the steps for upgrading a HA cluster:
+The following are the steps for upgrading an HA cluster:

 1. Prepare Rolling Upgrade
     1. Run "[`hdfs dfsadmin -rollingUpgrade prepare`](#dfsadmin_-rollingUpgrade)"
@@ -133,7 +133,7 @@ However, datanodes can still be upgraded in a rolling manner.

 In a non-HA cluster, there are a *NameNode (NN)*, a *SecondaryNameNode (SNN)*
 and many *DataNodes (DNs)*.
-The procedure for upgrading a non-HA cluster is similar to upgrading a HA cluster
+The procedure for upgrading a non-HA cluster is similar to upgrading an HA cluster
 except that Step 2 "Upgrade Active and Standby *NNs*" is changed to below:

 * Upgrade *NN* and *SNN*
@@ -175,7 +175,7 @@ A newer release is downgradable to the pre-upgrade release
 only if both the namenode layout version and the datanode layout version
 are not changed between these two releases.

-In a HA cluster,
+In an HA cluster,
 when a rolling upgrade from an old software release to a new software release is in progress,
 it is possible to downgrade, in a rolling fashion, the upgraded machines back to the old software release.
 Same as before, suppose *NN1* and *NN2* are respectively in active and standby states.

+ 1 - 1
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/LibHdfs.md

@@ -76,7 +76,7 @@ libdhfs is thread safe.
 
 
 *   Concurrency and Hadoop FS "handles"

-    The Hadoop FS implementation includes a FS handle cache which
+    The Hadoop FS implementation includes an FS handle cache which
     caches based on the URI of the namenode along with the user
     connecting. So, all calls to `hdfsConnect` will return the same
     handle but calls to `hdfsConnectAsUser` with different users will
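The caching described in this hunk also exists on the Java side that libhdfs wraps: `FileSystem.get` keys its cache on the filesystem URI plus the calling user. A minimal Java sketch of that behavior, assuming a reachable namenode at the hypothetical URI below:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class FsHandleCacheDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    URI nn = URI.create("hdfs://namenode.example.com:8020"); // hypothetical NN

    // Same URI + same user -> the cached instance is returned.
    FileSystem fs1 = FileSystem.get(nn, conf);
    FileSystem fs2 = FileSystem.get(nn, conf);
    System.out.println(fs1 == fs2); // true: cache hit

    // Same URI but a different user -> a distinct handle,
    // analogous to hdfsConnectAsUser in libhdfs.
    UserGroupInformation other = UserGroupInformation.createRemoteUser("alice");
    FileSystem fs3 = other.doAs(
        (java.security.PrivilegedExceptionAction<FileSystem>) () ->
            FileSystem.get(nn, conf));
    System.out.println(fs1 == fs3); // false: different cache key
  }
}
```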

+ 1 - 1
hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_2.6.0.xml

@@ -9801,7 +9801,7 @@
       <param name="defaultPort" type="int"/>
       <doc>
       <![CDATA[Get the socket address for <code>name</code> property as a
- <code>InetSocketAddress</code>. On a HA cluster,
+ <code>InetSocketAddress</code>. On an HA cluster,
  this fetches the address corresponding to the RM identified by
  {@link #RM_HA_ID}.
  @param name property name.

+ 1 - 1
hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_2.7.2.xml

@@ -9416,7 +9416,7 @@
       <param name="defaultPort" type="int"/>
       <doc>
       <![CDATA[Get the socket address for <code>name</code> property as a
- <code>InetSocketAddress</code>. On a HA cluster,
+ <code>InetSocketAddress</code>. On an HA cluster,
  this fetches the address corresponding to the RM identified by
  {@link #RM_HA_ID}.
  @param name property name.

+ 2 - 2
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java

@@ -208,7 +208,7 @@ public class HAUtil {
 
 
   @VisibleForTesting
   static String getNeedToSetValueMessage(String confKey) {
-    return confKey + " needs to be set in a HA configuration.";
+    return confKey + " needs to be set in an HA configuration.";
   }

   @VisibleForTesting
@@ -223,7 +223,7 @@ public class HAUtil {
                                                         String rmId) {
     return YarnConfiguration.RM_HA_IDS + "("
       + ids +  ") need to contain " + YarnConfiguration.RM_HA_ID + "("
-      + rmId + ") in a HA configuration.";
+      + rmId + ") in an HA configuration.";
   }

   @VisibleForTesting
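These messages back HAUtil's verification of an HA configuration. A hedged sketch of the kind of check they support (not HAUtil's exact code; the wrapper class and control flow here are illustrative, while the YarnConfiguration constants and YarnRuntimeException are real):

```java
import java.util.Collection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;

final class HaConfigCheck {
  // Illustrative only: the local RM id must be set and must appear
  // in the configured list of RM ids.
  static void verify(Configuration conf) {
    String rmId = conf.get(YarnConfiguration.RM_HA_ID);
    if (rmId == null) {
      throw new YarnRuntimeException(YarnConfiguration.RM_HA_ID
          + " needs to be set in an HA configuration.");
    }
    Collection<String> ids =
        conf.getStringCollection(YarnConfiguration.RM_HA_IDS);
    if (!ids.contains(rmId)) {
      throw new YarnRuntimeException(YarnConfiguration.RM_HA_IDS + "(" + ids
          + ") need to contain " + YarnConfiguration.RM_HA_ID + "(" + rmId
          + ") in an HA configuration.");
    }
  }
}
```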

+ 1 - 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java

@@ -2767,7 +2767,7 @@ public class YarnConfiguration extends Configuration {
 
 
   /**
    * Get the socket address for <code>name</code> property as a
-   * <code>InetSocketAddress</code>. On a HA cluster,
+   * <code>InetSocketAddress</code>. On an HA cluster,
    * this fetches the address corresponding to the RM identified by
    * {@link #RM_HA_ID}.
    * @param name property name.
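A brief usage sketch of the method this javadoc describes, resolving the RM address from a YarnConfiguration (the constants are real YarnConfiguration fields; the surrounding class is illustrative):

```java
import java.net.InetSocketAddress;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RmAddressLookup {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // In an HA cluster this resolves the address of the RM named by
    // yarn.resourcemanager.ha.id; in a non-HA cluster, the plain property.
    InetSocketAddress rmAddr = conf.getSocketAddr(
        YarnConfiguration.RM_ADDRESS,
        YarnConfiguration.DEFAULT_RM_ADDRESS,
        YarnConfiguration.DEFAULT_RM_PORT);
    System.out.println("ResourceManager at " + rmAddr);
  }
}
```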

+ 4 - 4
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml

@@ -441,7 +441,7 @@
   <property>
     <description>Host:Port of the ZooKeeper server to be used by the RM. This
       must be supplied when using the ZooKeeper based implementation of the
-      RM state store and/or embedded automatic failover in a HA setting.
+      RM state store and/or embedded automatic failover in an HA setting.
     </description>
     <name>yarn.resourcemanager.zk-address</name>
     <!--value>127.0.0.1:2181</value-->
@@ -490,7 +490,7 @@

   <property>
     <description>
-      ACLs to be used for the root znode when using ZKRMStateStore in a HA
+      ACLs to be used for the root znode when using ZKRMStateStore in an HA
       scenario for fencing.

       ZKRMStateStore supports implicit fencing to allow a single
@@ -606,7 +606,7 @@
   </property>

   <property>
-    <description>Name of the cluster. In a HA setting,
+    <description>Name of the cluster. In an HA setting,
       this is used to ensure the RM participates in leader
       election for this cluster and ensures it does not affect
       other clusters</description>
@@ -2188,7 +2188,7 @@
   <property>
     <name>yarn.timeline-service.client.fd-retain-secs</name>
     <description>
-      How long the ATS v1.5 writer will keep a FSStream open.
+      How long the ATS v1.5 writer will keep an FSStream open.
       If this fsstream does not write anything for this configured time,
       it will be close.
     </description>
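For reference, a minimal hedged yarn-site.xml fragment wiring up the two HA-related properties touched above (host names are placeholders; yarn.resourcemanager.cluster-id is the usual name of the cluster-name property whose description is edited here):

```xml
<configuration>
  <!-- ZooKeeper quorum used by the RM state store / embedded failover. -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>
  <!-- Cluster name used for RM leader election in an HA setting. -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster-1</value>
  </property>
</configuration>
```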

+ 1 - 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java

@@ -740,7 +740,7 @@ public class FileSystemRMStateStore extends RMStateStore {
         try {
           return run();
         } catch (IOException e) {
-          LOG.info("Exception while executing a FS operation.", e);
+          LOG.info("Exception while executing an FS operation.", e);
           if (++retry > fsNumRetries) {
             LOG.info("Maxed out FS retries. Giving up!");
             throw e;
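The hunk above sits inside a bounded retry loop around filesystem operations. A self-contained hedged sketch of that pattern (fsNumRetries mirrors the snippet; the helper class, the fsRetryIntervalMs field, and the slf4j logger are illustrative):

```java
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

abstract class FsRetryAction<T> {
  private static final Logger LOG = LoggerFactory.getLogger(FsRetryAction.class);
  private final int fsNumRetries;       // max retries before giving up
  private final long fsRetryIntervalMs; // pause between attempts

  FsRetryAction(int fsNumRetries, long fsRetryIntervalMs) {
    this.fsNumRetries = fsNumRetries;
    this.fsRetryIntervalMs = fsRetryIntervalMs;
  }

  abstract T run() throws IOException;

  // Retry run() until it succeeds or the retry budget is exhausted.
  T runWithRetries() throws IOException, InterruptedException {
    int retry = 0;
    while (true) {
      try {
        return run();
      } catch (IOException e) {
        LOG.info("Exception while executing an FS operation.", e);
        if (++retry > fsNumRetries) {
          LOG.info("Maxed out FS retries. Giving up!");
          throw e;
        }
        Thread.sleep(fsRetryIntervalMs); // back off before the next attempt
      }
    }
  }
}
```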

+ 1 - 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java

@@ -200,7 +200,7 @@ public class FSQueueMetrics extends QueueMetrics {
    * @param parent parent queue
    * @param enableUserMetrics  if user metrics is needed
    * @param conf configuration
-   * @return a FSQueueMetrics object
+   * @return an FSQueueMetrics object
    */
   @VisibleForTesting
   public synchronized

+ 1 - 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java

@@ -385,7 +385,7 @@ public class MiniYARNCluster extends CompositeService {
   }

   /**
-   * In a HA cluster, go through all the RMs and find the Active RM. In a
+   * In an HA cluster, go through all the RMs and find the Active RM. In a
    * non-HA cluster, return the index of the only RM.
    *
    * @return index of the active RM or -1 if none of them turn active
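A hedged sketch of the scan this javadoc describes (the wrapper class and array field are illustrative, though ResourceManager#getRMContext, RMContext#getHAServiceState, and HAServiceState.ACTIVE are real Hadoop APIs):

```java
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;

final class ActiveRmFinder {
  // Illustrative: return the index of the RM reporting ACTIVE, or -1
  // if none of them turned active.
  static int getActiveRMIndex(ResourceManager[] resourceManagers) {
    for (int i = 0; i < resourceManagers.length; i++) {
      if (resourceManagers[i].getRMContext().getHAServiceState()
          == HAServiceState.ACTIVE) {
        return i;
      }
    }
    return -1;
  }
}
```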