Browse Source

HADOOP-2993. Merge of -r 647526:647527 from trunk to branch 0.17.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17@647528 13f79535-47bb-0310-9956-ffa450edef68
Nigel Daley 17 years ago
parent
commit
bd40a6ec49

+ 3 - 0
CHANGES.txt

@@ -219,6 +219,9 @@ Release 0.17.0 - Unreleased
     HADOOP-3174. Illustrative example for MultipleFileInputFormat. (Enis
     Soztutar via acmurthy)  
 
+    HADOOP-2993. Clarify the usage of JAVA_HOME in the Quick Start guide.
+    (acmurthy via nigel)
+
   OPTIMIZATIONS
 
     HADOOP-2790.  Fixed inefficient method hasSpeculativeTask by removing

+ 122 - 21
docs/changes.html

@@ -36,7 +36,7 @@
     function collapse() {
       for (var i = 0; i < document.getElementsByTagName("ul").length; i++) {
         var list = document.getElementsByTagName("ul")[i];
-        if (list.id != 'trunk_(unreleased_changes)_' && list.id != 'release_0.16.2_-_2008-04-02_') {
+        if (list.id != 'release_0.17.0_-_unreleased_' && list.id != 'release_0.16.3_-_2008-04-16_') {
           list.style.display = "none";
         }
       }
@@ -52,12 +52,12 @@
 <a href="http://hadoop.apache.org/core/"><img class="logoImage" alt="Hadoop" src="images/hadoop-logo.jpg" title="Scalable Computing Platform"></a>
 <h1>Hadoop Change Log</h1>
 
-<h2><a href="javascript:toggleList('trunk_(unreleased_changes)_')">Trunk (unreleased changes)
+<h2><a href="javascript:toggleList('release_0.17.0_-_unreleased_')">Release 0.17.0 - Unreleased
 </a></h2>
-<ul id="trunk_(unreleased_changes)_">
-  <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._incompatible_changes_')">  INCOMPATIBLE CHANGES
-</a>&nbsp;&nbsp;&nbsp;(19)
-    <ol id="trunk_(unreleased_changes)_._incompatible_changes_">
+<ul id="release_0.17.0_-_unreleased_">
+  <li><a href="javascript:toggleList('release_0.17.0_-_unreleased_._incompatible_changes_')">  INCOMPATIBLE CHANGES
+</a>&nbsp;&nbsp;&nbsp;(23)
+    <ol id="release_0.17.0_-_unreleased_._incompatible_changes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2786">HADOOP-2786</a>.  Move hbase out of hadoop core
 </li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2345">HADOOP-2345</a>.  New HDFS transactions to support appending
@@ -95,11 +95,19 @@ org.apache.hadoop.record.compiler.generated.SimpleCharStream.<br />(Amareshwari
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2563">HADOOP-2563</a>. Remove deprecated FileSystem::listPaths.<br />(lohit vijayarenu via cdouglas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2818">HADOOP-2818</a>.  Remove deprecated methods in Counters.<br />(Amareshwari Sriramadasu via tomwhite)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2831">HADOOP-2831</a>. Remove deprecated o.a.h.dfs.INode::getAbsoluteName()<br />(lohit vijayarenu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2839">HADOOP-2839</a>. Remove deprecated FileSystem::globPaths.<br />(lohit vijayarenu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2634">HADOOP-2634</a>. Deprecate ClientProtocol::exists.<br />(lohit vijayarenu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2410">HADOOP-2410</a>.  Make EC2 cluster nodes more independent of each other.
+Multiple concurrent EC2 clusters are now supported, and nodes may be
+added to a cluster on the fly with new nodes starting in the same EC2
+availability zone as the cluster.  Ganglia monitoring and large
+instance sizes have also been added.<br />(Chris K Wensel via tomwhite)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2826">HADOOP-2826</a>. Deprecated FileSplit.getFile(), LineRecordReader.readLine().<br />(Amareshwari Sriramadasu via ddas)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._new_features_')">  NEW FEATURES
-</a>&nbsp;&nbsp;&nbsp;(9)
-    <ol id="trunk_(unreleased_changes)_._new_features_">
+  <li><a href="javascript:toggleList('release_0.17.0_-_unreleased_._new_features_')">  NEW FEATURES
+</a>&nbsp;&nbsp;&nbsp;(12)
+    <ol id="release_0.17.0_-_unreleased_._new_features_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-1398">HADOOP-1398</a>.  Add HBase in-memory block cache.<br />(tomwhite)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2178">HADOOP-2178</a>.  Job History on DFS.<br />(Amareshwari Sri Ramadasu via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2063">HADOOP-2063</a>. A new parameter to dfs -get command to fetch a file
@@ -116,11 +124,17 @@ DFSClient and DataNode sockets have 10min write timeout.<br />(rangadi)</li>
 build or update Lucene indexes using Map/Reduce.<br />(Ning Li via cutting)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-1622">HADOOP-1622</a>.  Allow multiple jar files for map reduce.<br />(Mahadev Konar via dhruba)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2055">HADOOP-2055</a>. Allows users to set PathFilter on the FileInputFormat.<br />(Alejandro Abdelnur via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2551">HADOOP-2551</a>. More environment variables like HADOOP_NAMENODE_OPTS
+for better control of HADOOP_OPTS for each component.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3001">HADOOP-3001</a>. Add job counters that measure the number of bytes
+read and written to HDFS, S3, KFS, and local file systems.<br />(omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3048">HADOOP-3048</a>.  A new Interface and a default implementation to convert
+and restore serializations of objects to/from strings.<br />(enis)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._improvements_')">  IMPROVEMENTS
-</a>&nbsp;&nbsp;&nbsp;(26)
-    <ol id="trunk_(unreleased_changes)_._improvements_">
+  <li><a href="javascript:toggleList('release_0.17.0_-_unreleased_._improvements_')">  IMPROVEMENTS
+</a>&nbsp;&nbsp;&nbsp;(32)
+    <ol id="release_0.17.0_-_unreleased_._improvements_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2655">HADOOP-2655</a>. Copy on write for data and metadata files in the
 presence of snapshots. Needed for supporting appends to HDFS
 files.<br />(dhruba)</li>
@@ -166,11 +180,23 @@ HOD's exit code.<br />(Hemanth Yamijala via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3093">HADOOP-3093</a>. Adds Configuration.getStrings(name, default-value) and
 the corresponding setStrings.<br />(Amareshwari Sriramadasu via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3106">HADOOP-3106</a>. Adds documentation in forrest for debugging.<br />(Amareshwari Sriramadasu via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3099">HADOOP-3099</a>. Add an option to distcp to preserve user, group, and
+permission information. (Tsz Wo (Nicholas), SZE via cdouglas)
+</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2841">HADOOP-2841</a>. Unwrap AccessControlException and FileNotFoundException
+from RemoteException for DFSClient.<br />(shv)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3152">HADOOP-3152</a>.  Make index interval configurable when using
+MapFileOutputFormat for map-reduce job.<br />(Rong-En Fan via cutting)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3143">HADOOP-3143</a>. Decrease number of slaves from 4 to 3 in TestMiniMRDFSSort,
+as Hudson generates false negatives under the current load.<br />(Nigel Daley via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3174">HADOOP-3174</a>. Illustrative example for MultipleFileInputFormat.<br />(Enis
+Soztutar via acmurthy)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2993">HADOOP-2993</a>. Clarify the usage of JAVA_HOME in the Quick Start guide.<br />(acmurthy via nigel)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._optimizations_')">  OPTIMIZATIONS
-</a>&nbsp;&nbsp;&nbsp;(10)
-    <ol id="trunk_(unreleased_changes)_._optimizations_">
+  <li><a href="javascript:toggleList('release_0.17.0_-_unreleased_._optimizations_')">  OPTIMIZATIONS
+</a>&nbsp;&nbsp;&nbsp;(12)
+    <ol id="release_0.17.0_-_unreleased_._optimizations_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2790">HADOOP-2790</a>.  Fixed inefficient method hasSpeculativeTask by removing
 repetitive calls to get the current time and late checking to see if
 we want speculation on at all.<br />(omalley)</li>
@@ -198,11 +224,16 @@ io.sort.spill.percent - the percentages of io.sort.mb that should
                         cause a spill (default 80%)
 io.sort.record.percent - the percent of io.sort.mb that should
                          hold key/value indexes (default 5%)<br />(cdouglas via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3140">HADOOP-3140</a>. Does not add a task to the commit queue if the task has not
+generated any output.<br />(Amar Kamat via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3168">HADOOP-3168</a>. Reduce the amount of logging in streaming to an
+exponentially increasing number of records (up to 10,000
+records/log).<br />(Zheng Shao via omalley)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(68)
-    <ol id="trunk_(unreleased_changes)_._bug_fixes_">
+  <li><a href="javascript:toggleList('release_0.17.0_-_unreleased_._bug_fixes_')">  BUG FIXES
+</a>&nbsp;&nbsp;&nbsp;(93)
+    <ol id="release_0.17.0_-_unreleased_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2195">HADOOP-2195</a>. '-mkdir' behaviour is now closer to Linux shell in case of
 errors.<br />(Mahadev Konar via rangadi)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2190">HADOOP-2190</a>. bring behaviour '-ls' and '-du' closer to Linux shell
@@ -327,11 +358,83 @@ cannot be determined.<br />(Devaraj Das via dhruba)</li>
      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2997">HADOOP-2997</a>. Adds test for non-writable serializer. Also fixes a problem
 introduced by <a href="http://issues.apache.org/jira/browse/HADOOP-2399">HADOOP-2399</a>.<br />(Tom White via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3114">HADOOP-3114</a>. Fix TestDFSShell on Windows.<br />(Lohit Vijaya Renu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3118">HADOOP-3118</a>.  Fix Namenode NPE while loading fsimage after a cluster
+upgrade from older disk format.<br />(dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3161">HADOOP-3161</a>. Fix FileUtil.HardLink.getLinkCount on Mac OS.<br />(nigel
+via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2927">HADOOP-2927</a>. Fix TestDU to accurately calculate the expected file size.<br />(shv via nigel)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3123">HADOOP-3123</a>. Fix the native library build scripts to work on Solaris.<br />(tomwhite via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3089">HADOOP-3089</a>.  Streaming should accept stderr from task before
+first key arrives.<br />(Rick Cox via tomwhite)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3146">HADOOP-3146</a>. A DFSOutputStream.flush method is renamed as
+DFSOutputStream.fsync.<br />(dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3165">HADOOP-3165</a>. -put/-copyFromLocal did not treat input file "-" as stdin.<br />(Lohit Vijayarenu via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3138">HADOOP-3138</a>. DFS mkdirs() should not throw an exception if the directory
+already exists.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3041">HADOOP-3041</a>. Deprecate JobConf.setOutputPath and JobConf.getOutputPath.
+Deprecate OutputFormatBase. Add FileOutputFormat. Existing output formats
+extending OutputFormatBase, now extend FileOutputFormat. Add the following
+APIs in FileOutputFormat: setOutputPath, getOutputPath, getWorkOutputPath.<br />(Amareshwari Sriramadasu via nigel)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3083">HADOOP-3083</a>. The fsimage does not store leases. This would have to be
+reworked in the next release to support appends.<br />(dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3166">HADOOP-3166</a>. Fix an ArrayIndexOutOfBoundsException in the spill thread
+and make exception handling more promiscuous to catch this condition.<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3050">HADOOP-3050</a>. DataNode sends one and only one block report after
+it registers with the namenode.<br />(Hairong Kuang)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3044">HADOOP-3044</a>. NNBench sets the right configuration for the mapper.<br />(Hairong Kuang)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3178">HADOOP-3178</a>. Fix GridMix scripts for small and medium jobs
+to handle input paths differently.<br />(Mukund Madhugiri via nigel)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-1911">HADOOP-1911</a>. Fix an infinite loop in DFSClient when all replicas of a
+block are bad<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3157">HADOOP-3157</a>. Fix path handling in DistributedCache and TestMiniMRLocalFS.<br />(Doug Cutting via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3018">HADOOP-3018</a>. Fix the eclipse plug-in contrib wrt removed deprecated
+methods<br />(taton)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3183">HADOOP-3183</a>. Fix TestJobShell to use 'ls' instead of java.io.File::exists
+since cygwin symlinks are unsupported.<br />(Mahadev konar via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3175">HADOOP-3175</a>. Fix FsShell.CommandFormat to handle "-" in arguments.<br />(Edward J. Yoon via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3220">HADOOP-3220</a>. Safemode message corrected.<br />(shv)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3208">HADOOP-3208</a>. Fix WritableDeserializer to set the Configuration on
+deserialized Writables.<br />(Enis Soztutar via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3224">HADOOP-3224</a>. 'dfs -du /dir' does not return correct size.<br />(Lohit Vijayarenu via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3223">HADOOP-3223</a>. Fix typo in help message for -chmod.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-1373">HADOOP-1373</a>. checkPath() should ignore case when it compares authoriy.<br />(Edward J. Yoon via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3204">HADOOP-3204</a>. Fixes a problem to do with ReduceTask's LocalFSMerger not
+catching Throwable.<br />(Amar Ramesh Kamat via ddas)</li>
     </ol>
   </li>
 </ul>
-<h2><a href="javascript:toggleList('release_0.16.2_-_2008-04-02_')">Release 0.16.2 - 2008-04-02
+<h2><a href="javascript:toggleList('release_0.16.3_-_2008-04-16_')">Release 0.16.3 - 2008-04-16
 </a></h2>
+<ul id="release_0.16.3_-_2008-04-16_">
+  <li><a href="javascript:toggleList('release_0.16.3_-_2008-04-16_._bug_fixes_')">  BUG FIXES
+</a>&nbsp;&nbsp;&nbsp;(7)
+    <ol id="release_0.16.3_-_2008-04-16_._bug_fixes_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3010">HADOOP-3010</a>. Fix ConcurrentModificationException in ipc.Server.Responder.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3154">HADOOP-3154</a>. Catch all Throwables from the SpillThread in MapTask, rather
+than IOExceptions only.<br />(ddas via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3159">HADOOP-3159</a>. Avoid file system cache being overwritten whenever
+configuration is modified. (Tsz Wo (Nicholas), SZE via hairong)
+</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3139">HADOOP-3139</a>. Remove the consistency check for the FileSystem cache in
+closeAll() that causes spurious warnings and a deadlock.
+(Tsz Wo (Nicholas), SZE via cdouglas)
+</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3195">HADOOP-3195</a>. Fix TestFileSystem to be deterministic.
+(Tsz Wo (Nicholas), SZE via cdouglas)
+</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3069">HADOOP-3069</a>. Primary name-node should not truncate image when transferring
+it from the secondary.<br />(shv)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3182">HADOOP-3182</a>. Change permissions of the job-submission directory to 777
+from 733 to ensure sharing of HOD clusters works correctly. (Tsz Wo
+(Nicholas), Sze and Amareshwari Sri Ramadasu via acmurthy)
+</li>
+    </ol>
+  </li>
+</ul>
+<h2><a href="javascript:toggleList('older')">Older Releases</a></h2>
+<ul id="older">
+<h3><a href="javascript:toggleList('release_0.16.2_-_2008-04-02_')">Release 0.16.2 - 2008-04-02
+</a></h3>
 <ul id="release_0.16.2_-_2008-04-02_">
   <li><a href="javascript:toggleList('release_0.16.2_-_2008-04-02_._bug_fixes_')">  BUG FIXES
 </a>&nbsp;&nbsp;&nbsp;(19)
@@ -380,8 +483,6 @@ DistributedFileSystem.<br />(shv via nigel)</li>
     </ol>
   </li>
 </ul>
-<h2><a href="javascript:toggleList('older')">Older Releases</a></h2>
-<ul id="older">
 <h3><a href="javascript:toggleList('release_0.16.1_-_2008-03-13_')">Release 0.16.1 - 2008-03-13
 </a></h3>
 <ul id="release_0.16.1_-_2008-03-13_">
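The HADOOP-3168 entry in this changelog reduces streaming's log volume by logging at exponentially increasing record counts, capped at 10,000 records per log line. A minimal sketch of that backoff pattern (an illustration of the idea only, not the actual Hadoop streaming code):

```python
def log_points(total_records, cap=10_000):
    """Return the record indices at which a log line would be emitted.

    The gap between log lines grows 10x each time, capped at `cap`
    records per log, as HADOOP-3168 describes. Illustrative sketch,
    not the real implementation.
    """
    points = []
    interval, next_log = 1, 1
    for n in range(1, total_records + 1):
        if n == next_log:
            points.append(n)                     # emit a log line here
            interval = min(interval * 10, cap)   # back off, up to the cap
            next_log = n + interval
    return points
```

With this schedule a job that streams millions of records emits only a handful of early log lines and then settles at one line per 10,000 records.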

+ 1 - 1
docs/cluster_setup.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 29 - 1
docs/hadoop-default.html

@@ -44,6 +44,18 @@ creations/deletions), or "all".</td>
   should minimize seeks.</td>
 </tr>
 <tr>
+<td><a name="io.sort.record.percent">io.sort.record.percent</a></td><td>0.05</td><td>The percentage of io.sort.mb dedicated to tracking record
+  boundaries. Let this value be r, io.sort.mb be x. The maximum number
+  of records collected before the collection thread must block is equal
+  to (r * x) / 4</td>
+</tr>
+<tr>
+<td><a name="io.sort.spill.percent">io.sort.spill.percent</a></td><td>0.80</td><td>The soft limit in either the buffer or record collection
+  buffers. Once reached, a thread will begin to spill the contents to disk
+  in the background. Note that this does not imply any chunking of data to
+  the spill. A value less than 0.5 is not recommended.</td>
+</tr>
+<tr>
 <td><a name="io.file.buffer.size">io.file.buffer.size</a></td><td>4096</td><td>The size of buffer for use in sequence files.
   The size of this buffer should probably be a multiple of hardware
   page size (4096 on Intel x86), and it determines how much data is
@@ -99,6 +111,9 @@ creations/deletions), or "all".</td>
 <td><a name="fs.hftp.impl">fs.hftp.impl</a></td><td>org.apache.hadoop.dfs.HftpFileSystem</td><td></td>
 </tr>
 <tr>
+<td><a name="fs.hsftp.impl">fs.hsftp.impl</a></td><td>org.apache.hadoop.dfs.HsftpFileSystem</td><td></td>
+</tr>
+<tr>
 <td><a name="fs.ramfs.impl">fs.ramfs.impl</a></td><td>org.apache.hadoop.fs.InMemoryFileSystem</td><td>The FileSystem for ramfs: uris.</td>
 </tr>
 <tr>
@@ -106,7 +121,9 @@ creations/deletions), or "all".</td>
 </tr>
 <tr>
 <td><a name="fs.checkpoint.dir">fs.checkpoint.dir</a></td><td>${hadoop.tmp.dir}/dfs/namesecondary</td><td>Determines where on the local filesystem the DFS secondary
-      name node should store the temporary images and edits to merge.  
+      name node should store the temporary images and edits to merge.
+      If this is a comma-delimited list of directories then the image is
+      replicated in all of the directories for redundancy.
   </td>
 </tr>
 <tr>
@@ -143,6 +160,17 @@ creations/deletions), or "all".</td>
   </td>
 </tr>
 <tr>
+<td><a name="dfs.datanode.https.address">dfs.datanode.https.address</a></td><td>0.0.0.0:50475</td><td></td>
+</tr>
+<tr>
+<td><a name="dfs.https.address">dfs.https.address</a></td><td>0.0.0.0:50470</td><td></td>
+</tr>
+<tr>
+<td><a name="https.keystore.info.rsrc">https.keystore.info.rsrc</a></td><td>sslinfo.xml</td><td>The name of the resource from which ssl keystore information
+  will be extracted
+  </td>
+</tr>
+<tr>
 <td><a name="dfs.datanode.dns.interface">dfs.datanode.dns.interface</a></td><td>default</td><td>The name of the Network Interface from which a data node should 
   report its IP address.
   </td>
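The io.sort.record.percent description above defines the blocking bound as (r * x) / 4 records. A quick numeric check with the documented defaults; io.sort.mb = 100 is an assumed value here, since this diff does not show it:

```python
# Sanity-check the io.sort.record.percent bound quoted above:
# the collection thread blocks after (r * x) / 4 records, where r is
# io.sort.record.percent and x is the sort buffer size in bytes.
io_sort_mb = 100                   # assumed io.sort.mb (not in this diff)
r = 0.05                           # io.sort.record.percent default
x = io_sort_mb * 1024 * 1024       # sort buffer size in bytes
max_records = int(r * x / 4)       # records collected before blocking
spill_threshold = int(0.80 * x)    # io.sort.spill.percent soft limit, bytes
```

So with a 100 MB sort buffer, roughly 1.3 million records can be collected before the thread blocks, and a background spill starts once about 80 MB of the buffer is used.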

+ 1 - 1
docs/hdfs_design.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/hdfs_permissions_guide.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 17 - 11
docs/hdfs_shell.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+
@@ -621,10 +621,10 @@ document.write("Last Published: " + document.lastModified);
 <div class="section">
 <p>
 				
-<span class="codefrag">Usage: hadoop dfs -put &lt;localsrc&gt; &lt;dst&gt;</span>
+<span class="codefrag">Usage: hadoop dfs -put &lt;localsrc&gt; ... &lt;dst&gt;</span>
 			
 </p>
-<p>Copy src from local file system to the destination filesystem. Also reads input from stdin and writes to destination filesystem.<br>
+<p>Copy single src, or multiple srcs from local file system to the destination filesystem. Also reads input from stdin and writes to destination filesystem.<br>
 	   
 </p>
 <ul>
@@ -637,6 +637,12 @@ document.write("Last Published: " + document.lastModified);
 				
 <li>
 					
+<span class="codefrag"> hadoop dfs -put localfile1 localfile2 /user/hadoop/hadoopdir</span>
+				
+</li>
+				
+<li>
+					
 <span class="codefrag"> hadoop dfs -put localfile hdfs://host:port/hadoop/hadoopfile</span>
 				
 </li>
@@ -654,7 +660,7 @@ document.write("Last Published: " + document.lastModified);
 </p>
 </div>
 		
-<a name="N1024E"></a><a name="rm"></a>
+<a name="N10254"></a><a name="rm"></a>
 <h2 class="h3"> rm </h2>
 <div class="section">
 <p>
@@ -683,7 +689,7 @@ document.write("Last Published: " + document.lastModified);
 </p>
 </div>
 		
-<a name="N10272"></a><a name="rmr"></a>
+<a name="N10278"></a><a name="rmr"></a>
 <h2 class="h3"> rmr </h2>
 <div class="section">
 <p>
@@ -717,7 +723,7 @@ document.write("Last Published: " + document.lastModified);
 </p>
 </div>
 		
-<a name="N1029C"></a><a name="setrep"></a>
+<a name="N102A2"></a><a name="setrep"></a>
 <h2 class="h3"> setrep </h2>
 <div class="section">
 <p>
@@ -746,7 +752,7 @@ document.write("Last Published: " + document.lastModified);
 </p>
 </div>
 		
-<a name="N102C1"></a><a name="stat"></a>
+<a name="N102C7"></a><a name="stat"></a>
 <h2 class="h3"> stat </h2>
 <div class="section">
 <p>
@@ -773,7 +779,7 @@ document.write("Last Published: " + document.lastModified);
 </p>
 </div>
 		
-<a name="N102E4"></a><a name="tail"></a>
+<a name="N102EA"></a><a name="tail"></a>
 <h2 class="h3"> tail </h2>
 <div class="section">
 <p>
@@ -800,7 +806,7 @@ document.write("Last Published: " + document.lastModified);
 </p>
 </div>
 		
-<a name="N10307"></a><a name="test"></a>
+<a name="N1030D"></a><a name="test"></a>
 <h2 class="h3"> test </h2>
 <div class="section">
 <p>
@@ -826,7 +832,7 @@ document.write("Last Published: " + document.lastModified);
 </ul>
 </div>
 		
-<a name="N1032A"></a><a name="text"></a>
+<a name="N10330"></a><a name="text"></a>
 <h2 class="h3"> text </h2>
 <div class="section">
 <p>
@@ -841,7 +847,7 @@ document.write("Last Published: " + document.lastModified);
 	  </p>
 </div>
 		
-<a name="N1033C"></a><a name="touchz"></a>
+<a name="N10342"></a><a name="touchz"></a>
 <h2 class="h3"> touchz </h2>
 <div class="section">
 <p>

+ 76 - 76
docs/hdfs_shell.pdf

@@ -499,10 +499,10 @@ endobj
 >>
 endobj
 85 0 obj
-<< /Length 1308 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 1358 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-Gb!#[95iQE&AJ$Cka3>VPE[If_4j2a-MmcY]:6C0[Kfg,D#og`-0_r8pIi3DOY-A@m%Npb)KLT(c#Deiq-c"I5B2_e#^_Kk%"##mnZ_s(0?//d%"(J@cs)akQQ$R2>ORTXHK(fL\b3[^Ma(g7MqPkrW&FQref+>bMSd^J)U1@\lTZ].n$@LU+)6S97qh4-k_p,gnCH3(SZKcCFVj&Wh)`ZRgc.;uC+%fe@!^f'oC#HL16B4Jbe?+-QV;)aXp\b=`$S;`A_NjS;n`P)Ej3cGJZ*,D,V?JT'i$[8IFjMN.P\K85Y;jY*3>IKZ:q:B%4"@cG<hkW(+.oQim8uJ8\k'q'SVbAWf)38.uQC(?*(Y\Sur*a7)NE`@Peu^=:(,Sn`[^igB9'k/!1aL?9q2"_XhM"RIMP&;BKetX#?[TK.s9i1o#&3+%9Jj)kkF0(m1Lhbn-YSdsV(UPW&R9q[+P-'uXo%@Fj.[V3LB?O8FIcW&*WViUXjXDm`F5<e"FgJt/4'B0ju`79>'6?(Y@3H>TSJkOZ+2k'eP?3<VN,:ChWG"$g,]NgfbkQF4\%c7qTr]./+(OW.A&%l+5u0L9UXVu+nf(1sTsO;.%FRkF!?oes2e%J3gnkVGfMXb9SMTfjI;mmB3g7>Ya!m4mh9c.5ROHf4dWqfGX%/4@l3j;q-Vr;ItPNnkX6HdXNWRM8r%mi[a"^uKk`];+^YDI*?^WcX-;)fJ)q!Hnq&N0E%BL3-r0;ZFJX?kF[t8>:C[VIKJ90j+ueD[PTn^\in]\Bi#ej[Mq*oQeNUq83^fjHK'?!i[br&+,73BXXkoQnc0fs&t+4Ql/9P3m?0,rBH[1.-Z'Q!C?6un1=h=Dh*?3-J6@ufqG(31S9FK5A6h`ofmb7q\[i\)i0tQJLes:J[m59ULDXRZFgMMURWTI)DK+iBM[DC7e@>QVERF7InX*;gbkPDYN;\HRhC<d&*'E5KFt[lf0[._Bthdp]KF)9UKVAnZbNLAS-<'%o\djt#Z-a)"-ra6lN-GaDuj%d@O]edhl7;_/>X78A.hE#`3u8H9'7ke#lA9K&6m(MDgU`nQ%U->LgInG[7/`[\\'smmqWSahX#g<r4.\.6/W+E<81>)6G\dE-hcdtj4=.;4qc!uMKqb+RnrA<6+%e[BO>Qo[UB'.DUDjP3:"0%hNV1!Hcs.ta=)_oDAhmSr,!Bki/Ij:"Lmo2'N#o*n;8UeJ;Q_Vg"e$?\BBrU=6h1V4DJ)^&LUG7\W[d\"h2AG9p.j=P?aP^U;4j6gfBhig3.(6WZ%T$Xar%im_619QVH7b6d"aN~>
+Gb!#\997gc&AJ$CkdUZ)1`ql//W'PMG./!TBY/7Z8IgVGgB&YJ-0a7P^G#788%)r4Bb/0(<^Y,F^)>4D6i-arZ$bURnIbIqlNQu?$fYUl8%]\n[ErQBYg?U>hPV;*Tf:g[%I+`gjmsG[MmrP"nZZ3pD2i;JT,4UAo2-3VT&45<&mkT:_Y^q>C[o,^(jgqE%Yr=X5sB48IFQA.gf\hEDVT)m9ZioiZaFtnD%3?Y5!Jr1KMM^71W@0H-iTJ>D;KGE/0BKN<*#E1W.()Z6G"5=o3rgl=ZT-t]h4!BNHELSj1IuX/h<%3eM[Y>94_6Q<TY'[U6%#nU"gG2h\G/',u@1Ko8>"(-fn>EpE8$0iWsb5^23C_$fE9K#.#&>dC@.fLeNk"rtX05i%qnVWKHH\Ur4TBY;VR^:JrM3nV4\k,>,0GP?LO6)E2:WlJgY,<T_;W%#o"V(knOC_mcGKq7f!j&/h[E^buc7>+<j&"pb+U4s2iM%/NH'^@7J9/_%)V?e1VfBAV$oTP9@Z=eM2KPNtMQmAmk\E\"Q.gIYt:JWIOC7m#0#+KfrZc.JH1g>X?DR)"bgPrsH;N@V&VR=R,\TJrLl6g6]>ZW@l4$?hJ)(Nn&MTA##(H7^B6R3^fn67Jn\!KZ:8+WfO8`3eRu(j+SOD^NT^B+/cLB9Tr'0k1),RE+_)S-*IA1//+Xd^O*'+TtsH$;tMVm,m$8gq<b%[=g=Y<L*RA,cT'%YY$;+6%aMT<PYg^X`:qbbdEK)^a?cGZlER0('Q,S6WYI_/=fjZY/^d9a`?<]0UlBJDSu\6?\Ws"M-(T<7`$$B<*SiU`#5u5Cf.YrA&R("CDAI0%K]&\@C?rf^VNT5f'(dP/E^;t_^?k((PcQ:p>=s1KV[dL"1<h:%mC080E_=AGJQISV[`k24o.#%d/b7mDV3?8%#q6%ORF#tPS7-"s+a`HI;H),;%(H?)AeGa-V#%c'ErJu(hGZroU8WHmBEDbi1)aKC,FOL<6ifD&k?hgs!.q[5[bWQPk@eIq)T>]3W[RSC84)?rhp/X&@:NJK0qI<E`)65<:1:e'8*$q"^F3I[/_Edc4*d*(Mg"i$CO<UWnHRNHJU0C5Dp3AUJ4!i&Vn'e[:aat0GI$!#,@9q7t0phqW\n\_+r@&ZNlT:V^dE5=1IGISpD">:Wth-LsWXhs.td0EfOTM;X`bc[_#ui<pn^(XG/</s2NE[_&W!<lTTZ>#jU8NpUs'd+iJQ9H&(<RgNWg=*8i3SNSi.cjaI2Yf-#7``aRfYas!"Z';3K,gEn(+C0NA7:8/SH-`pPZDM<;H,_bdo\Mc*t!\9=P?Y/Z2hjJVdC4sU9#-tJFqs:JY2KA`s[gFbT~>
 endstream
 endobj
 86 0 obj
@@ -940,7 +940,7 @@ endobj
 47 0 obj
 <<
 /S /GoTo
-/D [86 0 R /XYZ 85.0 269.932 null]
+/D [86 0 R /XYZ 85.0 256.732 null]
 >>
 endobj
 49 0 obj
@@ -1005,70 +1005,70 @@ endobj
 xref
 0 126
 0000000000 65535 f 
-0000023529 00000 n 
-0000023643 00000 n 
-0000023735 00000 n 
+0000023579 00000 n 
+0000023693 00000 n 
+0000023785 00000 n 
 0000000015 00000 n 
 0000000071 00000 n 
 0000001185 00000 n 
 0000001305 00000 n 
 0000001491 00000 n 
-0000023887 00000 n 
+0000023937 00000 n 
 0000001626 00000 n 
-0000023950 00000 n 
+0000024000 00000 n 
 0000001763 00000 n 
-0000024016 00000 n 
+0000024066 00000 n 
 0000001900 00000 n 
-0000024082 00000 n 
+0000024132 00000 n 
 0000002037 00000 n 
-0000024148 00000 n 
+0000024198 00000 n 
 0000002174 00000 n 
-0000024212 00000 n 
+0000024262 00000 n 
 0000002311 00000 n 
-0000024278 00000 n 
+0000024328 00000 n 
 0000002448 00000 n 
-0000024344 00000 n 
+0000024394 00000 n 
 0000002585 00000 n 
-0000024410 00000 n 
+0000024460 00000 n 
 0000002720 00000 n 
-0000024476 00000 n 
+0000024526 00000 n 
 0000002857 00000 n 
-0000024540 00000 n 
+0000024590 00000 n 
 0000002994 00000 n 
-0000024606 00000 n 
+0000024656 00000 n 
 0000003131 00000 n 
-0000024672 00000 n 
+0000024722 00000 n 
 0000003268 00000 n 
-0000024738 00000 n 
+0000024788 00000 n 
 0000003405 00000 n 
-0000024802 00000 n 
+0000024852 00000 n 
 0000003540 00000 n 
-0000024868 00000 n 
+0000024918 00000 n 
 0000003677 00000 n 
-0000024934 00000 n 
+0000024984 00000 n 
 0000003814 00000 n 
-0000025000 00000 n 
+0000025050 00000 n 
 0000003951 00000 n 
-0000025064 00000 n 
+0000025114 00000 n 
 0000004088 00000 n 
-0000025130 00000 n 
+0000025180 00000 n 
 0000004225 00000 n 
-0000025196 00000 n 
+0000025246 00000 n 
 0000004362 00000 n 
-0000025260 00000 n 
+0000025310 00000 n 
 0000004499 00000 n 
-0000025326 00000 n 
+0000025376 00000 n 
 0000004636 00000 n 
-0000025392 00000 n 
+0000025442 00000 n 
 0000004773 00000 n 
 0000005291 00000 n 
 0000005414 00000 n 
 0000005455 00000 n 
-0000025458 00000 n 
+0000025508 00000 n 
 0000005588 00000 n 
-0000025522 00000 n 
+0000025572 00000 n 
 0000005719 00000 n 
-0000025588 00000 n 
+0000025638 00000 n 
 0000005852 00000 n 
 0000008060 00000 n 
 0000008183 00000 n 
@@ -1080,9 +1080,9 @@ xref
 0000010594 00000 n 
 0000010774 00000 n 
 0000010952 00000 n 
-0000025654 00000 n 
+0000025704 00000 n 
 0000011091 00000 n 
-0000025713 00000 n 
+0000025763 00000 n 
 0000011228 00000 n 
 0000012817 00000 n 
 0000012940 00000 n 
@@ -1090,46 +1090,46 @@ xref
 0000013136 00000 n 
 0000014677 00000 n 
 0000014785 00000 n 
-0000016186 00000 n 
-0000016294 00000 n 
-0000017463 00000 n 
-0000017571 00000 n 
-0000018852 00000 n 
-0000025772 00000 n 
-0000018960 00000 n 
-0000019093 00000 n 
-0000019217 00000 n 
-0000019353 00000 n 
-0000019489 00000 n 
-0000019625 00000 n 
-0000019809 00000 n 
-0000019981 00000 n 
-0000020100 00000 n 
-0000020220 00000 n 
-0000020352 00000 n 
-0000020508 00000 n 
-0000020640 00000 n 
-0000020802 00000 n 
-0000020928 00000 n 
-0000021060 00000 n 
-0000021204 00000 n 
-0000021396 00000 n 
-0000021522 00000 n 
-0000021654 00000 n 
-0000021780 00000 n 
-0000021912 00000 n 
-0000022062 00000 n 
-0000022200 00000 n 
-0000022338 00000 n 
-0000022476 00000 n 
-0000022614 00000 n 
-0000022749 00000 n 
-0000022863 00000 n 
-0000022974 00000 n 
-0000023086 00000 n 
-0000023195 00000 n 
-0000023302 00000 n 
-0000023419 00000 n 
+0000016236 00000 n 
+0000016344 00000 n 
+0000017513 00000 n 
+0000017621 00000 n 
+0000018902 00000 n 
+0000025822 00000 n 
+0000019010 00000 n 
+0000019143 00000 n 
+0000019267 00000 n 
+0000019403 00000 n 
+0000019539 00000 n 
+0000019675 00000 n 
+0000019859 00000 n 
+0000020031 00000 n 
+0000020150 00000 n 
+0000020270 00000 n 
+0000020402 00000 n 
+0000020558 00000 n 
+0000020690 00000 n 
+0000020852 00000 n 
+0000020978 00000 n 
+0000021110 00000 n 
+0000021254 00000 n 
+0000021446 00000 n 
+0000021572 00000 n 
+0000021704 00000 n 
+0000021830 00000 n 
+0000021962 00000 n 
+0000022112 00000 n 
+0000022250 00000 n 
+0000022388 00000 n 
+0000022526 00000 n 
+0000022664 00000 n 
+0000022799 00000 n 
+0000022913 00000 n 
+0000023024 00000 n 
+0000023136 00000 n 
+0000023245 00000 n 
+0000023352 00000 n 
+0000023469 00000 n 
 trailer
 <<
 /Size 126
@@ -1137,5 +1137,5 @@ trailer
 /Info 4 0 R
 >>
 startxref
-25824
+25874
 %%EOF

+ 1 - 1
docs/hdfs_user_guide.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/hod.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/hod_admin_guide.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/hod_config_guide.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/hod_user_guide.html

@@ -69,7 +69,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/index.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 1 - 1
docs/linkmap.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 39 - 27
docs/mapred_tutorial.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+
@@ -292,7 +292,7 @@ document.write("Last Published: " + document.lastModified);
 <a href="#Example%3A+WordCount+v2.0">Example: WordCount v2.0</a>
 <ul class="minitoc">
 <li>
-<a href="#Source+Code-N10C63">Source Code</a>
+<a href="#Source+Code-N10C76">Source Code</a>
 </li>
 <li>
 <a href="#Sample+Runs">Sample Runs</a>
@@ -954,7 +954,7 @@ document.write("Last Published: " + document.lastModified);
 <td colspan="1" rowspan="1">53.</td>
             <td colspan="1" rowspan="1">
               &nbsp;&nbsp;&nbsp;&nbsp;
-              <span class="codefrag">conf.setOutputPath(new Path(args[1]));</span>
+              <span class="codefrag">FileOutputFormat.setOutputPath(conf, new Path(args[1]));</span>
             </td>
           
 </tr>
@@ -1383,7 +1383,7 @@ document.write("Last Published: " + document.lastModified);
             no reduction is desired.</p>
 <p>In this case the outputs of the map-tasks go directly to the
             <span class="codefrag">FileSystem</span>, into the output path set by 
-            <a href="api/org/apache/hadoop/mapred/JobConf.html#setOutputPath(org.apache.hadoop.fs.Path)">
+            <a href="api/org/apache/hadoop/mapred/FileInputFormat.html#setOutputPath(org.apache.hadoop.mapred.JobConf,%20org.apache.hadoop.fs.Path)">
             setOutputPath(Path)</a>. The framework does not sort the 
             map-outputs before writing them out to the <span class="codefrag">FileSystem</span>.
             </p>
@@ -1468,7 +1468,7 @@ document.write("Last Published: " + document.lastModified);
         indicates the set of input files 
         (<a href="api/org/apache/hadoop/mapred/JobConf.html#setInputPath(org.apache.hadoop.fs.Path)">setInputPath(Path)</a>/<a href="api/org/apache/hadoop/mapred/JobConf.html#addInputPath(org.apache.hadoop.fs.Path)">addInputPath(Path)</a>)
         and where the output files should be written
-        (<a href="api/org/apache/hadoop/mapred/JobConf.html#setOutputPath(org.apache.hadoop.fs.Path)">setOutputPath(Path)</a>).</p>
+        (<a href="api/org/apache/hadoop/mapred/FileInputFormat.html#setOutputPath(org.apache.hadoop.mapred.JobConf,%20org.apache.hadoop.fs.Path)">setOutputPath(Path)</a>).</p>
 <p>Optionally, <span class="codefrag">JobConf</span> is used to specify other advanced 
         facets of the job such as the <span class="codefrag">Comparator</span> to be used, files 
         to be put in the <span class="codefrag">DistributedCache</span>, whether intermediate 
@@ -1791,6 +1791,7 @@ document.write("Last Published: " + document.lastModified);
           not just per task.</p>
 <p>To avoid these issues the Map-Reduce framework maintains a special 
           <span class="codefrag">${mapred.output.dir}/_temporary/_${taskid}</span> sub-directory
+          accessible via <span class="codefrag">${mapred.work.output.dir}</span>
           for each task-attempt on the <span class="codefrag">FileSystem</span> where the output
           of the task-attempt is stored. On successful completion of the 
           task-attempt, the files in the 
@@ -1799,13 +1800,24 @@ document.write("Last Published: " + document.lastModified);
           the framework discards the sub-directory of unsuccessful task-attempts. 
           This process is completely transparent to the application.</p>
 <p>The application-writer can take advantage of this feature by 
-          creating any side-files required in <span class="codefrag">${mapred.output.dir}</span> 
+          creating any side-files required in <span class="codefrag">${mapred.work.output.dir}</span>
           during execution of a task via 
-          <a href="api/org/apache/hadoop/mapred/JobConf.html#getOutputPath()">
-          JobConf.getOutputPath()</a>, and the framework will promote them 
+          <a href="api/org/apache/hadoop/mapred/FileInputFormat.html#getWorkOutputPath(org.apache.hadoop.mapred.JobConf)">
+          FileOutputFormat.getWorkOutputPath()</a>, and the framework will promote them 
           similarly for succesful task-attempts, thus eliminating the need to 
           pick unique paths per task-attempt.</p>
-<a name="N10A84"></a><a name="RecordWriter"></a>
+<p>Note: The value of <span class="codefrag">${mapred.work.output.dir}</span> during 
+          execution of a particular task-attempt is actually 
+          <span class="codefrag">${mapred.output.dir}/_temporary/_{$taskid}</span>, and this value is 
+          set by the map-reduce framework. So, just create any side-files in the 
+          path  returned by
+          <a href="api/org/apache/hadoop/mapred/FileInputFormat.html#getWorkOutputPath(org.apache.hadoop.mapred.JobConf)">
+          FileOutputFormat.getWorkOutputPath() </a>from map/reduce 
+          task to take advantage of this feature.</p>
+<p>The entire discussion holds true for maps of jobs with 
+           reducer=NONE (i.e. 0 reduces) since output of the map, in that case, 
+           goes directly to HDFS.</p>
+<a name="N10A97"></a><a name="RecordWriter"></a>
 <h4>RecordWriter</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/RecordWriter.html">
@@ -1813,9 +1825,9 @@ document.write("Last Published: " + document.lastModified);
           pairs to an output file.</p>
 <p>RecordWriter implementations write the job outputs to the 
           <span class="codefrag">FileSystem</span>.</p>
-<a name="N10A9B"></a><a name="Other+Useful+Features"></a>
+<a name="N10AAE"></a><a name="Other+Useful+Features"></a>
 <h3 class="h4">Other Useful Features</h3>
-<a name="N10AA1"></a><a name="Counters"></a>
+<a name="N10AB4"></a><a name="Counters"></a>
 <h4>Counters</h4>
 <p>
 <span class="codefrag">Counters</span> represent global counters, defined either by 
@@ -1829,7 +1841,7 @@ document.write("Last Published: " + document.lastModified);
           Reporter.incrCounter(Enum, long)</a> in the <span class="codefrag">map</span> and/or 
           <span class="codefrag">reduce</span> methods. These counters are then globally 
           aggregated by the framework.</p>
-<a name="N10ACC"></a><a name="DistributedCache"></a>
+<a name="N10ADF"></a><a name="DistributedCache"></a>
 <h4>DistributedCache</h4>
 <p>
 <a href="api/org/apache/hadoop/filecache/DistributedCache.html">
@@ -1862,7 +1874,7 @@ document.write("Last Published: " + document.lastModified);
           <a href="api/org/apache/hadoop/filecache/DistributedCache.html#createSymlink(org.apache.hadoop.conf.Configuration)">
           DistributedCache.createSymlink(Configuration)</a> api. Files 
           have <em>execution permissions</em> set.</p>
-<a name="N10B0A"></a><a name="Tool"></a>
+<a name="N10B1D"></a><a name="Tool"></a>
 <h4>Tool</h4>
 <p>The <a href="api/org/apache/hadoop/util/Tool.html">Tool</a> 
           interface supports the handling of generic Hadoop command-line options.
@@ -1902,7 +1914,7 @@ document.write("Last Published: " + document.lastModified);
             </span>
           
 </p>
-<a name="N10B3C"></a><a name="IsolationRunner"></a>
+<a name="N10B4F"></a><a name="IsolationRunner"></a>
 <h4>IsolationRunner</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/IsolationRunner.html">
@@ -1926,7 +1938,7 @@ document.write("Last Published: " + document.lastModified);
 <p>
 <span class="codefrag">IsolationRunner</span> will run the failed task in a single 
           jvm, which can be in the debugger, over precisely the same input.</p>
-<a name="N10B6F"></a><a name="Debugging"></a>
+<a name="N10B82"></a><a name="Debugging"></a>
 <h4>Debugging</h4>
 <p>Map/Reduce framework provides a facility to run user-provided 
           scripts for debugging. When map/reduce task fails, user can run 
@@ -1937,7 +1949,7 @@ document.write("Last Published: " + document.lastModified);
 <p> In the following sections we discuss how to submit debug script
           along with the job. For submitting debug script, first it has to
           distributed. Then the script has to supplied in Configuration. </p>
-<a name="N10B7B"></a><a name="How+to+distribute+script+file%3A"></a>
+<a name="N10B8E"></a><a name="How+to+distribute+script+file%3A"></a>
 <h5> How to distribute script file: </h5>
 <p>
           To distribute  the debug script file, first copy the file to the dfs.
@@ -1960,7 +1972,7 @@ document.write("Last Published: " + document.lastModified);
           <a href="api/org/apache/hadoop/filecache/DistributedCache.html#createSymlink(org.apache.hadoop.conf.Configuration)">
           DistributedCache.createSymLink(Configuration) </a> api.
           </p>
-<a name="N10B94"></a><a name="How+to+submit+script%3A"></a>
+<a name="N10BA7"></a><a name="How+to+submit+script%3A"></a>
 <h5> How to submit script: </h5>
 <p> A quick way to submit debug script is to set values for the 
           properties "mapred.map.task.debug.script" and 
@@ -1984,17 +1996,17 @@ document.write("Last Published: " + document.lastModified);
 <span class="codefrag">$script $stdout $stderr $syslog $jobconf $program </span>  
           
 </p>
-<a name="N10BB6"></a><a name="Default+Behavior%3A"></a>
+<a name="N10BC9"></a><a name="Default+Behavior%3A"></a>
 <h5> Default Behavior: </h5>
 <p> For pipes, a default script is run to process core dumps under
           gdb, prints stack trace and gives info about running threads. </p>
-<a name="N10BC1"></a><a name="JobControl"></a>
+<a name="N10BD4"></a><a name="JobControl"></a>
 <h4>JobControl</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/jobcontrol/package-summary.html">
           JobControl</a> is a utility which encapsulates a set of Map-Reduce jobs
           and their dependencies.</p>
-<a name="N10BCE"></a><a name="Data+Compression"></a>
+<a name="N10BE1"></a><a name="Data+Compression"></a>
 <h4>Data Compression</h4>
 <p>Hadoop Map-Reduce provides facilities for the application-writer to
           specify compression for both intermediate map-outputs and the
@@ -2008,7 +2020,7 @@ document.write("Last Published: " + document.lastModified);
           codecs for reasons of both performance (zlib) and non-availability of
           Java libraries (lzo). More details on their usage and availability are
           available <a href="native_libraries.html">here</a>.</p>
-<a name="N10BEE"></a><a name="Intermediate+Outputs"></a>
+<a name="N10C01"></a><a name="Intermediate+Outputs"></a>
 <h5>Intermediate Outputs</h5>
 <p>Applications can control compression of intermediate map-outputs
             via the 
@@ -2029,7 +2041,7 @@ document.write("Last Published: " + document.lastModified);
             <a href="api/org/apache/hadoop/mapred/JobConf.html#setMapOutputCompressionType(org.apache.hadoop.io.SequenceFile.CompressionType)">
             JobConf.setMapOutputCompressionType(SequenceFile.CompressionType)</a> 
             api.</p>
-<a name="N10C1A"></a><a name="Job+Outputs"></a>
+<a name="N10C2D"></a><a name="Job+Outputs"></a>
 <h5>Job Outputs</h5>
 <p>Applications can control compression of job-outputs via the
             <a href="api/org/apache/hadoop/mapred/OutputFormatBase.html#setCompressOutput(org.apache.hadoop.mapred.JobConf,%20boolean)">
@@ -2049,7 +2061,7 @@ document.write("Last Published: " + document.lastModified);
 </div>
 
     
-<a name="N10C49"></a><a name="Example%3A+WordCount+v2.0"></a>
+<a name="N10C5C"></a><a name="Example%3A+WordCount+v2.0"></a>
 <h2 class="h3">Example: WordCount v2.0</h2>
 <div class="section">
 <p>Here is a more complete <span class="codefrag">WordCount</span> which uses many of the
@@ -2059,7 +2071,7 @@ document.write("Last Published: " + document.lastModified);
       <a href="quickstart.html#SingleNodeSetup">pseudo-distributed</a> or
       <a href="quickstart.html#Fully-Distributed+Operation">fully-distributed</a> 
       Hadoop installation.</p>
-<a name="N10C63"></a><a name="Source+Code-N10C63"></a>
+<a name="N10C76"></a><a name="Source+Code-N10C76"></a>
 <h3 class="h4">Source Code</h3>
 <table class="ForrestTable" cellspacing="1" cellpadding="4">
           
@@ -3158,7 +3170,7 @@ document.write("Last Published: " + document.lastModified);
 <td colspan="1" rowspan="1">112.</td>
             <td colspan="1" rowspan="1">
               &nbsp;&nbsp;&nbsp;&nbsp;
-              <span class="codefrag">conf.setOutputPath(new Path(other_args.get(1)));</span>
+              <span class="codefrag">FileOutputFormat.setOutputPath(conf, new Path(other_args.get(1)));</span>
             </td>
           
 </tr>
@@ -3269,7 +3281,7 @@ document.write("Last Published: " + document.lastModified);
 </tr>
         
 </table>
-<a name="N113C5"></a><a name="Sample+Runs"></a>
+<a name="N113D8"></a><a name="Sample+Runs"></a>
 <h3 class="h4">Sample Runs</h3>
 <p>Sample text-files as input:</p>
 <p>
@@ -3437,7 +3449,7 @@ document.write("Last Published: " + document.lastModified);
 <br>
         
 </p>
-<a name="N11499"></a><a name="Highlights"></a>
+<a name="N114AC"></a><a name="Highlights"></a>
 <h3 class="h4">Highlights</h3>
 <p>The second version of <span class="codefrag">WordCount</span> improves upon the 
         previous one by using some features offered by the Map-Reduce framework:

File diff suppressed because it is too large
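The mapred_tutorial.html hunks above document that `${mapred.work.output.dir}` resolves to `${mapred.output.dir}/_temporary/_${taskid}` during a task-attempt, and that side-files written there are promoted into `${mapred.output.dir}` on success. The path arithmetic can be sketched without any Hadoop dependency; this is an illustration of the layout described in the hunks, not the Hadoop API (class and method names here are invented for the sketch):

```java
// Dependency-free sketch of the task-attempt work-directory layout
// described in the tutorial hunks above. The real promotion is done
// by the Map-Reduce framework, transparently to the application.
public class WorkOutputPath {
    // ${mapred.work.output.dir} for a running attempt resolves to
    // ${mapred.output.dir}/_temporary/_${taskid}
    static String workOutputDir(String outputDir, String taskId) {
        return outputDir + "/_temporary/_" + taskId;
    }

    // On successful completion, files under the per-attempt work
    // directory are promoted into ${mapred.output.dir}, so the
    // application never needs to pick unique per-attempt paths.
    static String promote(String workPath, String outputDir, String taskId) {
        String prefix = workOutputDir(outputDir, taskId) + "/";
        return outputDir + "/" + workPath.substring(prefix.length());
    }

    public static void main(String[] args) {
        String out = "/user/joe/wordcount/output";
        String attempt = "task_200804250000_0001_m_000000_0";
        String side = workOutputDir(out, attempt) + "/side.txt";
        System.out.println(side);                      // where the task writes
        System.out.println(promote(side, out, attempt)); // where it lands on success
    }
}
```

As the added tutorial text notes, the same holds for map-only jobs (0 reduces), whose map output goes directly to HDFS.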
+ 3 - 3
docs/mapred_tutorial.pdf


+ 1 - 1
docs/native_libraries.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 42 - 20
docs/quickstart.html

@@ -67,7 +67,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+
@@ -201,10 +201,13 @@ document.write("Last Published: " + document.lastModified);
 <a href="#Download">Download</a>
 </li>
 <li>
-<a href="#Standalone+Operation">Standalone Operation</a>
+<a href="#Prepare+to+Start+the+Hadoop+Cluster">Prepare to Start the Hadoop Cluster</a>
 </li>
 <li>
-<a href="#SingleNodeSetup">Pseudo-Distributed Operation</a>
+<a href="#Local">Standalone Operation</a>
+</li>
+<li>
+<a href="#PseudoDistributed">Pseudo-Distributed Operation</a>
 <ul class="minitoc">
 <li>
 <a href="#Configuration">Configuration</a>
@@ -218,7 +221,7 @@ document.write("Last Published: " + document.lastModified);
 </ul>
 </li>
 <li>
-<a href="#Fully-Distributed+Operation">Fully-Distributed Operation</a>
+<a href="#FullyDistributed">Fully-Distributed Operation</a>
 </li>
 </ul>
 </div>
@@ -259,8 +262,7 @@ document.write("Last Published: " + document.lastModified);
 <ol>
           
 <li>
-            Java<sup>TM</sup> 1.5.x, preferably from Sun, must be installed. Set 
-            <span class="codefrag">JAVA_HOME</span> to the root of your Java installation.
+            Java<sup>TM</sup> 1.5.x, preferably from Sun, must be installed.
           </li>
           
 <li>
@@ -271,7 +273,7 @@ document.write("Last Published: " + document.lastModified);
           </li>
         
 </ol>
-<a name="N10056"></a><a name="Additional+requirements+for+Windows"></a>
+<a name="N10053"></a><a name="Additional+requirements+for+Windows"></a>
 <h4>Additional requirements for Windows</h4>
 <ol>
             
@@ -282,7 +284,7 @@ document.write("Last Published: " + document.lastModified);
             </li>
           
 </ol>
-<a name="N10068"></a><a name="Installing+Software"></a>
+<a name="N10065"></a><a name="Installing+Software"></a>
 <h3 class="h4">Installing Software</h3>
 <p>If your cluster doesn't have the requisite software you will need to
         install it.</p>
@@ -305,16 +307,24 @@ document.write("Last Published: " + document.lastModified);
 </div>
     
     
-<a name="N1008C"></a><a name="Download"></a>
+<a name="N10089"></a><a name="Download"></a>
 <h2 class="h3">Download</h2>
 <div class="section">
 <p>
-        First, you need to get a Hadoop distribution: download a recent 
-        <a href="http://hadoop.apache.org/core/releases.html">stable release</a> and unpack it.
+        To get a Hadoop distribution, download a recent 
+        <a href="http://hadoop.apache.org/core/releases.html">stable release</a> from one of the Apache Download
+        Mirrors.
       </p>
+</div>
+
+    
+<a name="N10097"></a><a name="Prepare+to+Start+the+Hadoop+Cluster"></a>
+<h2 class="h3">Prepare to Start the Hadoop Cluster</h2>
+<div class="section">
 <p>
-        Once done, in the distribution edit the file 
-        <span class="codefrag">conf/hadoop-env.sh</span> to define at least <span class="codefrag">JAVA_HOME</span>.
+        Unpack the downloaded Hadoop distribution. In the distribution, edit the
+        file <span class="codefrag">conf/hadoop-env.sh</span> to define at least 
+        <span class="codefrag">JAVA_HOME</span> to be the root of your Java installation.
       </p>
 <p>
 	    Try the following command:<br>
@@ -324,10 +334,22 @@ document.write("Last Published: " + document.lastModified);
         This will display the usage documentation for the <strong>hadoop</strong> 
         script.
       </p>
+<p>Now you are ready to start your Hadoop cluster in one of the three supported
+      modes:
+      </p>
+<ul>
+        
+<li>Local (Standalone) Mode</li>
+        
+<li>Pseudo-Distributed Mode</li>
+        
+<li>Fully-Distributed Mode</li>
+      
+</ul>
 </div>
     
     
-<a name="N100AF"></a><a name="Standalone+Operation"></a>
+<a name="N100C2"></a><a name="Local"></a>
 <h2 class="h3">Standalone Operation</h2>
 <div class="section">
 <p>By default, Hadoop is configured to run things in a non-distributed 
@@ -355,12 +377,12 @@ document.write("Last Published: " + document.lastModified);
 </div>
     
     
-<a name="N100D3"></a><a name="SingleNodeSetup"></a>
+<a name="N100E6"></a><a name="PseudoDistributed"></a>
 <h2 class="h3">Pseudo-Distributed Operation</h2>
 <div class="section">
 <p>Hadoop can also be run on a single-node in a pseudo-distributed mode 
 	  where each Hadoop daemon runs in a separate Java process.</p>
-<a name="N100DC"></a><a name="Configuration"></a>
+<a name="N100EF"></a><a name="Configuration"></a>
 <h3 class="h4">Configuration</h3>
 <p>Use the following <span class="codefrag">conf/hadoop-site.xml</span>:</p>
 <table class="ForrestTable" cellspacing="1" cellpadding="4">
@@ -426,7 +448,7 @@ document.write("Last Published: " + document.lastModified);
 </tr>
         
 </table>
-<a name="N10140"></a><a name="Setup+passphraseless"></a>
+<a name="N10153"></a><a name="Setup+passphraseless"></a>
 <h3 class="h4">Setup passphraseless ssh</h3>
 <p>
           Now check that you can ssh to the localhost without a passphrase:<br>
@@ -444,7 +466,7 @@ document.write("Last Published: " + document.lastModified);
 <span class="codefrag">$ cat ~/.ssh/id_dsa.pub &gt;&gt; ~/.ssh/authorized_keys</span>
 		
 </p>
-<a name="N1015D"></a><a name="Execution"></a>
+<a name="N10170"></a><a name="Execution"></a>
 <h3 class="h4">Execution</h3>
 <p>
           Format a new distributed-filesystem:<br>
@@ -521,10 +543,10 @@ document.write("Last Published: " + document.lastModified);
 </div>
     
     
-<a name="N101CA"></a><a name="Fully-Distributed+Operation"></a>
+<a name="N101DD"></a><a name="FullyDistributed"></a>
 <h2 class="h3">Fully-Distributed Operation</h2>
 <div class="section">
-<p>Information on setting up fully-distributed non-trivial clusters
+<p>Information on setting up fully-distributed, non-trivial clusters
 	  can be found <a href="cluster_setup.html">here</a>.</p>
 </div>
     

File diff suppressed because it is too large
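The quickstart hunks above move the `JAVA_HOME` instruction out of the requirements list and into the new "Prepare to Start the Hadoop Cluster" section; in practice the edit they describe is a single line in `conf/hadoop-env.sh`. A minimal sketch (the JDK path below is an example, not part of the commit):

```shell
# conf/hadoop-env.sh -- define at least JAVA_HOME to be the root of
# your Java installation (example path; adjust for your system)
export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
```

After this, `bin/hadoop` should display the usage documentation for the hadoop script, as the revised quickstart describes, and the cluster can be started in any of the three supported modes.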
+ 20 - 9
docs/quickstart.pdf


+ 1 - 1
docs/streaming.html

@@ -70,7 +70,7 @@
 <a class="unselected" href="http://wiki.apache.org/hadoop">Wiki</a>
 </li>
 <li class="current">
-<a class="selected" href="index.html">Hadoop 0.16 Documentation</a>
+<a class="selected" href="index.html">Hadoop 0.17 Documentation</a>
 </li>
 </ul>
 <!--+

+ 23 - 10
src/docs/src/documentation/content/xdocs/quickstart.xml

@@ -59,8 +59,7 @@
         
         <ol>
           <li>
-            Java<sup>TM</sup> 1.5.x, preferably from Sun, must be installed. Set 
-            <code>JAVA_HOME</code> to the root of your Java installation.
+            Java<sup>TM</sup> 1.5.x, preferably from Sun, must be installed.
           </li>
           <li>
             <strong>ssh</strong> must be installed and <strong>sshd</strong> must 
@@ -107,13 +106,18 @@
       <title>Download</title>
       
       <p>
-        First, you need to get a Hadoop distribution: download a recent 
-        <a href="ext:releases">stable release</a> and unpack it.
+        To get a Hadoop distribution, download a recent 
+        <a href="ext:releases">stable release</a> from one of the Apache Download
+        Mirrors.
       </p>
+    </section>
 
+    <section>
+      <title>Prepare to Start the Hadoop Cluster</title>
       <p>
-        Once done, in the distribution edit the file 
-        <code>conf/hadoop-env.sh</code> to define at least <code>JAVA_HOME</code>.
+        Unpack the downloaded Hadoop distribution. In the distribution, edit the
+        file <code>conf/hadoop-env.sh</code> to define at least 
+        <code>JAVA_HOME</code> to be the root of your Java installation.
       </p>
 
 	  <p>
@@ -122,9 +126,18 @@
         This will display the usage documentation for the <strong>hadoop</strong> 
         script.
       </p>
+      
+      <p>Now you are ready to start your Hadoop cluster in one of the three supported
+      modes:
+      </p>
+      <ul>
+        <li>Local (Standalone) Mode</li>
+        <li>Pseudo-Distributed Mode</li>
+        <li>Fully-Distributed Mode</li>
+      </ul>
     </section>
     
-    <section>
+    <section id="Local">
       <title>Standalone Operation</title>
       
       <p>By default, Hadoop is configured to run things in a non-distributed 
@@ -144,7 +157,7 @@
       </p>
     </section>
     
-    <section id="SingleNodeSetup">
+    <section id="PseudoDistributed">
       <title>Pseudo-Distributed Operation</title>
 
 	  <p>Hadoop can also be run on a single-node in a pseudo-distributed mode 
@@ -253,10 +266,10 @@
       </section>
     </section>
     
-    <section>
+    <section id="FullyDistributed">
       <title>Fully-Distributed Operation</title>
       
-	  <p>Information on setting up fully-distributed non-trivial clusters
+	  <p>Information on setting up fully-distributed, non-trivial clusters
 	  can be found <a href="cluster_setup.html">here</a>.</p>  
     </section>
     

Some files were not shown because too many files changed in this diff