
HADOOP-6097. Fix Path conversion in makeQualified and reset LineReader byte
count at the start of each block in Hadoop archives. Contributed by Ben Slusky,
Tom White, and Mahadev Konar


git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21@827838 13f79535-47bb-0310-9956-ffa450edef68

Christopher Douglas, 15 years ago
commit 9a49890a55

2 changed files with 16 additions and 21 deletions:
  1. CHANGES.txt (+12 −7)
  2. src/java/org/apache/hadoop/fs/HarFileSystem.java (+4 −14)

+12 −7
CHANGES.txt

@@ -559,9 +559,6 @@ Trunk (unreleased changes)
     HADOOP-6224. Add a method to WritableUtils performing a bounded read of an
     encoded String. (Jothi Padmanabhan via cdouglas)
 
-    HADOOP-6231. Allow caching of filesystem instances to be disabled on a
-    per-instance basis. (tomwhite)
-
     HADOOP-6133. Add a caching layer to Configuration::getClassByName to
     alleviate a performance regression introduced in a compatibility layer.
     (Todd Lipcon via cdouglas)
@@ -898,10 +895,6 @@ Trunk (unreleased changes)
     HADOOP-5809. Fix job submission, broken by errant directory creation.
     (Sreekanth Ramakrishnan and Jothi Padmanabhan via cdouglas)
 
-    HADOOP-5759. Fix for  IllegalArgumentException when 
-    CombineFileInputFormat is used as job InputFormat.
-    (Amareshwari Sriramadasu via dhruba)
-
     HADOOP-5635. Change distributed cache to work with other distributed file
     systems. (Andrew Hitchcock via tomwhite)
 
@@ -1087,6 +1080,18 @@ Trunk (unreleased changes)
     HADOOP-6286. Fix bugs in related to URI handling in glob methods in 
     FileContext. (Boris Shkolnik via suresh)
 
+Release 0.20.2 - Unreleased
+
+    HADOOP-6231. Allow caching of filesystem instances to be disabled on a
+    per-instance basis. (tomwhite)
+
+    HADOOP-5759. Fix for IllegalArgumentException when CombineFileInputFormat
+    is used as job InputFormat. (Amareshwari Sriramadasu via dhruba)
+
+    HADOOP-6097. Fix Path conversion in makeQualified and reset LineReader byte
+    count at the start of each block in Hadoop archives. (Ben Slusky, Tom
+    White, and Mahadev Konar via cdouglas)
+
 Release 0.20.1 - 2009-09-01
 
   INCOMPATIBLE CHANGES

+4 −14
src/java/org/apache/hadoop/fs/HarFileSystem.java

@@ -302,19 +302,8 @@ public class HarFileSystem extends FilterFileSystem {
     }
 
     URI tmpURI = fsPath.toUri();
-    fsPath = new Path(tmpURI.getPath());
     //change this to Har uri 
-    URI tmp = null;
-    try {
-      tmp = new URI(uri.getScheme(), harAuth, fsPath.toString(),
-                    tmpURI.getQuery(), tmpURI.getFragment());
-    } catch(URISyntaxException ue) {
-      LOG.error("Error in URI ", ue);
-    }
-    if (tmp != null) {
-      return new Path(tmp.toString());
-    }
-    return null;
+    return new Path(uri.getScheme(), harAuth, tmpURI.getPath());
   }
   
   /**
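The hunk above replaces hand-built URI assembly (a try/catch that logged `URISyntaxException` and returned null) with a direct call to Path's three-argument constructor, `new Path(uri.getScheme(), harAuth, tmpURI.getPath())`. A minimal standalone sketch of the same idea, using `java.net.URI` in place of Hadoop's Path so it runs without Hadoop on the classpath (the authority and sample URI below are made-up illustration values):

```java
import java.net.URI;

// Sketch: qualify a filesystem path under the har scheme by combining
// the scheme, the archive authority, and the underlying path component.
// "namenode:8020" and the sample URI are illustrative, not real values.
public class HarPathSketch {
    static String makeHarPath(String scheme, String harAuth, URI fsPathUri) {
        // Equivalent in spirit to:
        //   new Path(uri.getScheme(), harAuth, tmpURI.getPath())
        // -- no try/catch, and no null return on a bad component.
        return scheme + "://" + harAuth + fsPathUri.getPath();
    }

    public static void main(String[] args) {
        URI u = URI.create("hdfs://namenode:8020/user/foo/data.har");
        System.out.println(makeHarPath("har", "namenode:8020", u));
    }
}
```

The design point of the fix: constructing the Path directly removes the silent-null failure mode of the old code, where a `URISyntaxException` was logged and `null` propagated to callers.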
@@ -426,12 +415,13 @@ public class HarFileSystem extends FilterFileSystem {
       // do nothing just a read.
     }
     FSDataInputStream aIn = fs.open(archiveIndex);
-    LineReader aLin = new LineReader(aIn, getConf());
+    LineReader aLin;
     String retStr = null;
     // now start reading the real index file
-     read = 0;
     for (Store s: stores) {
+      read = 0;
       aIn.seek(s.begin);
+      aLin = new LineReader(aIn, getConf());
       while (read + s.begin < s.end) {
         int tmp = aLin.readLine(line);
         read += tmp;
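The second hunk moves `read = 0` inside the per-store loop and recreates the LineReader after each seek, so both the byte count and the reader's internal buffer start fresh for every block. A standalone sketch of that pattern, with plain arrays and `BufferedReader` standing in for the archive index `Store` objects and Hadoop's `LineReader` (all names and data here are illustrative):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Sketch: count lines across several byte ranges ("blocks") of one
// stream. The byte counter is reset and the reader rebuilt per block,
// mirroring the fix; begin/end are simplified stand-ins for Store.
public class BlockReadSketch {
    static int countLines(byte[] data, long[] begin, long[] end)
            throws IOException {
        int lines = 0;
        for (int i = 0; i < begin.length; i++) {
            long read = 0;  // reset per block -- the bug reset it only once
            // "seek" to the block, then build a fresh reader so stale
            // buffered bytes from the previous block are never consumed
            BufferedReader r = new BufferedReader(new InputStreamReader(
                new ByteArrayInputStream(data, (int) begin[i],
                                         (int) (end[i] - begin[i])),
                StandardCharsets.UTF_8));
            String line;
            while (read + begin[i] < end[i] && (line = r.readLine()) != null) {
                read += line.length() + 1;  // +1 for the consumed newline
                lines++;
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "a\nb\nc\nd\n".getBytes(StandardCharsets.UTF_8);
        System.out.println(countLines(data,
            new long[]{0, 4}, new long[]{4, 8}));
    }
}
```

Without the per-block reset, `read` carries over from the previous block, so `read + s.begin < s.end` becomes false too early and later blocks are silently skipped; recreating the reader after `seek` matters because a buffered reader may have read ahead past the block boundary.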