
HDFS-2054 BlockSender.sendChunk() prints ERROR for connection closures encountered during transferToFully()

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22@1145752 13f79535-47bb-0310-9956-ffa450edef68
Michael Stack, 14 years ago
Commit 2554056e79
2 changed files with 16 additions and 3 deletions
  1. hdfs/CHANGES.txt (+4 -0)
  2. hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java (+12 -3)

+ 4 - 0
hdfs/CHANGES.txt

@@ -270,6 +270,10 @@ Release 0.22.0 - Unreleased
     HADOOP-7106. Reorganize project SVN layout to "unsplit" the projects.
     (todd, nigel)
 
+    HDFS-2054. BlockSender.sendChunk() prints ERROR for connection closures
+    encountered during transferToFully() (Kihwal Lee via stack)
+
+
   OPTIMIZATIONS
 
     HDFS-1140. Speedup INode.getPathComponents. (Dmytro Molkov via shv)

+ 12 - 3
hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java

@@ -401,10 +401,19 @@ class BlockSender implements java.io.Closeable, FSConstants {
       }
       
     } catch (IOException e) {
-      /* exception while writing to the client (well, with transferTo(),
-       * it could also be while reading from the local file).
+      /* Exception while writing to the client. Most often this is a
+       * connection closure from the other end, which we do not care
+       * much about. But other things can go wrong, especially in
+       * transferTo(), which we do not want to ignore.
+       *
+       * The message parsing below should not be taken as a good coding
+       * example. NEVER do this to drive program logic. NEVER. It is done
+       * here only because NIO surfaces EPIPE as a plain IOException.
        */
-      LOG.error("BlockSender.sendChunks() exception: " + StringUtils.stringifyException(e));
+      String ioem = e.getMessage();
+      if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
+        LOG.error("BlockSender.sendChunks() exception: ", e);
+      }
       throw ioeToSocketException(e);
     }
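
For reference, a minimal standalone sketch of the same message-based filter that the patch applies inline in sendChunks(). The class and method names here are hypothetical, and the null check is a defensive extra that the patch itself does not include; this is only an illustration of the EPIPE/ECONNRESET filtering idea, not DataNode code.

import java.io.IOException;

public class QuietClosureDemo {

  /**
   * Returns true when the IOException looks like an ordinary client
   * disconnect (EPIPE / ECONNRESET, which NIO surfaces as a plain
   * IOException carrying only a message), i.e. not worth an ERROR log.
   */
  static boolean isClientDisconnect(IOException e) {
    String msg = e.getMessage();
    return msg != null
        && (msg.startsWith("Broken pipe") || msg.startsWith("Connection reset"));
  }

  public static void main(String[] args) {
    System.out.println(isClientDisconnect(new IOException("Broken pipe")));        // true  -> stay quiet
    System.out.println(isClientDisconnect(new IOException("Input/output error"))); // false -> log at ERROR
  }
}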