
HDFS-2054 BlockSender.sendChunk() prints ERROR for connection closures encountered during transferToFully()

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1145751 13f79535-47bb-0310-9956-ffa450edef68
Michael Stack 14 years ago
parent commit
714edd65ac

+ 3 - 0
hdfs/CHANGES.txt

@@ -546,6 +546,9 @@ Trunk (unreleased changes)
     HDFS-2134. Move DecommissionManager to the blockmanagement package.
     (szetszwo)
 
+    HDFS-2054  BlockSender.sendChunk() prints ERROR for connection closures
+    encountered during transferToFully() (Kihwal Lee via stack)
+
   OPTIMIZATIONS
 
     HDFS-1458. Improve checkpoint performance by avoiding unnecessary image

+ 12 - 3
hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java

@@ -401,10 +401,19 @@ class BlockSender implements java.io.Closeable, FSConstants {
       }
       
     } catch (IOException e) {
-      /* exception while writing to the client (well, with transferTo(),
-       * it could also be while reading from the local file).
+      /* Exception while writing to the client. Connection closure from
+       * the other end is mostly the case and we do not care much about
+       * it. But other things can go wrong, especially in transferTo(),
+       * which we do not want to ignore.
+       *
+       * The message parsing below should not be considered as a good
+       * coding example. NEVER do it to drive a program logic. NEVER.
+       * It was done here because the NIO throws an IOException for EPIPE.
        */
-      LOG.error("BlockSender.sendChunks() exception: " + StringUtils.stringifyException(e));
+      String ioem = e.getMessage();
+      if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
+        LOG.error("BlockSender.sendChunks() exception: ", e);
+      }
       throw ioeToSocketException(e);
     }
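
For context, here is a minimal standalone sketch of the pattern this patch introduces: an IOException raised during a transferTo()-based write is inspected by its message prefix, and only unexpected failures are logged at ERROR level. The class and method names below (DemoSender, isClientClosure, sendChunks) are illustrative only and are not the actual BlockSender API; the null guard on getMessage() is a defensive addition in the sketch, not part of the patch itself.

    import java.io.IOException;

    public class DemoSender {

      // Returns true when the IOException most likely signals a client-side
      // connection closure (EPIPE / ECONNRESET, which NIO surfaces as plain
      // IOExceptions carrying these message prefixes).
      static boolean isClientClosure(IOException e) {
        String ioem = e.getMessage();
        return ioem != null
            && (ioem.startsWith("Broken pipe") || ioem.startsWith("Connection reset"));
      }

      static void sendChunks() throws IOException {
        try {
          // ... a transferTo()-based write to the client would happen here ...
          throw new IOException("Broken pipe");   // simulate a client disconnect
        } catch (IOException e) {
          if (!isClientClosure(e)) {
            // Only unexpected failures (e.g. a local read error inside
            // transferTo()) are worth an ERROR-level log entry.
            System.err.println("sendChunks() exception: " + e);
          }
          throw e;   // the caller still sees the failure either way
        }
      }

      public static void main(String[] args) {
        try {
          sendChunks();
        } catch (IOException expected) {
          // The simulated "Broken pipe" case is rethrown but never logged,
          // which is the behavior the patch is after.
        }
      }
    }

As the committed comment itself warns, driving program logic off exception message text is fragile; it is used here only because NIO reports EPIPE and connection resets as generic IOExceptions with no more specific type to catch.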