
HADOOP-2890. If different datanodes report the same block but
with different sizes to the namenode, the namenode picks the
replica(s) with the largest size as the only valid replica(s). (dhruba)



git-svn-id: https://svn.apache.org/repos/asf/hadoop/core/trunk@637724 13f79535-47bb-0310-9956-ffa450edef68

Dhruba Borthakur 17 years ago
parent
commit
7356125370
2 changed files with 42 additions and 3 deletions
  1. CHANGES.txt (+3, -0)
  2. src/java/org/apache/hadoop/dfs/FSNamesystem.java (+39, -3)

+ 3 - 0
CHANGES.txt

@@ -237,6 +237,9 @@ Trunk (unreleased changes)
     bugs in JSPs to do with analysis - HADOOP-2742, HADOOP-2792.
     (Amareshwari Sri Ramadasu via ddas)
 
+    HADOOP-2890. If different datanodes report the same block but
+    with different sizes to the namenode, the namenode picks the
+    replica(s) with the largest size as the only valid replica(s). (dhruba)
 
 Release 0.16.1 - 2008-03-13
 

+ 39 - 3
src/java/org/apache/hadoop/dfs/FSNamesystem.java

@@ -2589,9 +2589,45 @@ class FSNamesystem implements FSConstants, FSNamesystemMBean {
                    " reported from " + node.getName() + 
                    " current size is " + cursize +
                    " reported size is " + block.getNumBytes());
-          // Accept this block even if there is a problem with its
-          // size. Clients should detect data corruption because of
-          // CRC mismatch.
+          try {
+            if (cursize > block.getNumBytes()) {
+              // new replica is smaller in size than existing block.
+              // Delete new replica.
+              LOG.warn("Deleting block " + block + " from " + node.getName());
+              invalidateBlock(block, node);
+            } else {
+              // new replica is larger in size than existing block.
+              // Delete pre-existing replicas.
+              int numNodes = blocksMap.numNodes(block);
+              int count = 0;
+              DatanodeDescriptor nodes[] = new DatanodeDescriptor[numNodes];
+              Iterator<DatanodeDescriptor> it = blocksMap.nodeIterator(block);
+              for (; it != null && it.hasNext(); ) {
+                DatanodeDescriptor dd = it.next();
+                if (!dd.equals(node)) {
+                  nodes[count++] = dd;
+                }
+              }
+              for (int j = 0; j < count; j++) {
+                LOG.warn("Deleting block " + block + " from " + 
+                         nodes[j].getName());
+                invalidateBlock(block, nodes[j]);
+              }
+              //
+              // change the size of block in blocksMap
+              //
+              storedBlock = blocksMap.getStoredBlock(block); //extra look up!
+              if (storedBlock == null) {
+                LOG.warn("Block " + block + 
+                   " reported from " + node.getName() + 
+                   " does not exist in blockMap. Surprise! Surprise!");
+              } else {
+                storedBlock.setNumBytes(block.getNumBytes());
+              }
+            }
+          } catch (IOException e) {
+            LOG.warn("Error in deleting bad block " + block + e);
+          }
         }
       }
       block = storedBlock;
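
The reconciliation rule applied in the hunk above can be sketched as a minimal standalone model. This is an illustrative simplification, not the real FSNamesystem API: `ReplicaReconciler`, its fields, and the string node/block identifiers are hypothetical stand-ins for `blocksMap`, `DatanodeDescriptor`, and `invalidateBlock`. It captures the decision only: a replica reported smaller than the accepted size is invalidated on the reporting node; a replica reported larger invalidates all pre-existing replicas and becomes the new accepted size.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the HADOOP-2890 rule: when datanodes report the
// same block with different sizes, only the largest-size replica survives.
class ReplicaReconciler {
  final Map<String, Long> blockSize = new HashMap<>();        // block -> accepted size
  final Map<String, Set<String>> replicas = new HashMap<>();  // block -> nodes holding it
  final Set<String> invalidated = new HashSet<>();            // "node:block" marked for deletion

  void reportBlock(String node, String block, long size) {
    Long cursize = blockSize.get(block);
    Set<String> nodes = replicas.computeIfAbsent(block, k -> new HashSet<>());
    if (cursize == null || cursize == size) {
      // First report, or size agrees with the accepted size: record the replica.
      blockSize.put(block, size);
      nodes.add(node);
    } else if (cursize > size) {
      // New replica is smaller than the accepted size: invalidate it.
      invalidated.add(node + ":" + block);
    } else {
      // New replica is larger: invalidate all pre-existing replicas,
      // keep only the reporting node, and adopt the larger size.
      for (String dd : nodes) {
        invalidated.add(dd + ":" + block);
      }
      nodes.clear();
      nodes.add(node);
      blockSize.put(block, size);
    }
  }
}
```

Note that the real patch also updates the stored block's length in `blocksMap` via `setNumBytes`, which this sketch folds into the single `blockSize` map.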