HADOOP-4726. Fix documentation typos "the the". (Edward J. Yoon via szetszwo)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/core/trunk@722582 13f79535-47bb-0310-9956-ffa450edef68
Tsz-wo Sze 16 years ago
parent commit 9ce32cbf3d

+ 3 - 0
CHANGES.txt

@@ -1285,6 +1285,9 @@ Release 0.18.3 - Unreleased
     HADOOP-4714. Report status between merges and make the number of records
     between progress reports configurable. (Jothi Padmanabhan via cdouglas)
 
+    HADOOP-4726. Fix documentation typos "the the". (Edward J. Yoon via
+    szetszwo)
+
 Release 0.18.2 - 2008-11-03
 
   BUG FIXES

+ 1 - 1
conf/hadoop-default.xml

@@ -1208,7 +1208,7 @@ creations/deletions), or "all".</description>
     <value>false</value>
     <description>To set whether the system should collect profiler
      information for some of the tasks in this job? The information is stored
-     in the the user log directory. The value is "true" if task profiling
+     in the user log directory. The value is "true" if task profiling
      is enabled.</description>
   </property>
 

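Aside on the corrected mapred.task.profile description above: a minimal sketch of turning profiling on from job-submission code, assuming the 0.19-era org.apache.hadoop.mapred.JobConf API (the class name ProfilingJob is hypothetical).

import org.apache.hadoop.mapred.JobConf;

public class ProfilingJob {
  public static void main(String[] args) {
    JobConf conf = new JobConf(ProfilingJob.class);
    // Equivalent to setting mapred.task.profile=true in hadoop-site.xml.
    conf.setProfileEnabled(true);
    // Profile only the first three map task attempts.
    conf.setProfileTaskRange(true, "0-2");
    // Profiler output is written to the user log directory,
    // as the corrected description says.
  }
}
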
+ 3 - 0
docs/SLG_user_guide.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <div class="menupagetitle">HDFS Utilities</div>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/capacity_scheduler.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 250 - 43
docs/changes.html

@@ -36,7 +36,7 @@
     function collapse() {
       for (var i = 0; i < document.getElementsByTagName("ul").length; i++) {
         var list = document.getElementsByTagName("ul")[i];
-        if (list.id != 'trunk_(unreleased_changes)_' && list.id != 'release_0.19.0_-_unreleased_') {
+        if (list.id != 'trunk_(unreleased_changes)_' && list.id != 'release_0.19.1_-_unreleased_') {
           list.style.display = "none";
         }
       }
@@ -56,7 +56,7 @@
 </a></h2>
 <ul id="trunk_(unreleased_changes)_">
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._incompatible_changes_')">  INCOMPATIBLE CHANGES
-</a>&nbsp;&nbsp;&nbsp;(2)
+</a>&nbsp;&nbsp;&nbsp;(10)
     <ol id="trunk_(unreleased_changes)_._incompatible_changes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4210">HADOOP-4210</a>. Fix findbugs warnings for equals implementations of mapred ID
 classes. Removed public, static ID::read and ID::forName; made ID an
@@ -66,15 +66,38 @@ Following deprecated methods in RawLocalFileSystem are removed:
  public String getName()
  public void lock(Path p, boolean shared)
  public void release(Path p)<br />(Suresh Srinivas via johan)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4618">HADOOP-4618</a>. Move http server from FSNamesystem into NameNode.
+FSNamesystem.getNameNodeInfoPort() is removed.
+FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort()
+  replaced by FSNamesystem.getDFSNameNodeAddress().
+NameNode(bindAddress, conf) is removed.<br />(shv)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4567">HADOOP-4567</a>. GetFileBlockLocations returns the NetworkTopology
+information of the machines where the blocks reside.<br />(dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4435">HADOOP-4435</a>. The JobTracker WebUI displays the amount of heap memory
+in use.<br />(dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4628">HADOOP-4628</a>. Move Hive into a standalone subproject.<br />(omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4188">HADOOP-4188</a>. Removes task's dependency on concrete filesystems.<br />(Sharad Agarwal via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-1650">HADOOP-1650</a>. Upgrade to Jetty 6.<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3986">HADOOP-3986</a>. Remove static Configuration from JobClient. (Amareshwari
+Sriramadasu via cdouglas)
+  JobClient::setCommandLineConfig is removed
+  JobClient::getCommandLineConfig is removed
+  JobShell, TestJobShell classes are removed
+</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4422">HADOOP-4422</a>. S3 file systems should not create bucket.<br />(David Phillips via tomwhite)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._new_features_')">  NEW FEATURES
-</a>&nbsp;&nbsp;&nbsp;(none)
+</a>&nbsp;&nbsp;&nbsp;(2)
     <ol id="trunk_(unreleased_changes)_._new_features_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4575">HADOOP-4575</a>. Add a proxy service for relaying HsftpFileSystem requests.
+Includes client authentication via user certificates and config-based
+access control.<br />(Kan Zhang via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4661">HADOOP-4661</a>. Add DistCh, a new tool for distributed ch{mod,own,grp}.<br />(szetszwo)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._improvements_')">  IMPROVEMENTS
-</a>&nbsp;&nbsp;&nbsp;(9)
+</a>&nbsp;&nbsp;&nbsp;(34)
     <ol id="trunk_(unreleased_changes)_._improvements_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4234">HADOOP-4234</a>. Fix KFS "glue" layer to allow applications to interface
 with multiple KFS metaservers.<br />(Sriram Rao via lohit)</li>
@@ -92,15 +115,57 @@ understandable.<br />(Yuri Pradkin via cdouglas)</li>
 print NA instead of empty output.<br />(Sreekanth Ramakrishnan via johan)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4284">HADOOP-4284</a>. Support filters that apply to all requests, or global filters,
 to HttpServer.<br />(Kan Zhang via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4276">HADOOP-4276</a>. Improve the hashing functions and deserialization of the
+mapred ID classes.<br />(omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4485">HADOOP-4485</a>. Add a compile-native ant task, as a shorthand.<br />(enis)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4454">HADOOP-4454</a>. Allow # comments in slaves file.<br />(Rama Ramasamy via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3461">HADOOP-3461</a>. Remove hdfs.StringBytesWritable.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4437">HADOOP-4437</a>. Use Halton sequence instead of java.util.Random in
+PiEstimator.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4572">HADOOP-4572</a>. Change INode and its sub-classes to package private.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4187">HADOOP-4187</a>. Does a runtime lookup for JobConf/JobConfigurable, and if
+found, invokes the appropriate configure method.<br />(Sharad Agarwal via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4453">HADOOP-4453</a>. Improve ssl configuration and handling in HsftpFileSystem,
+particularly when used with DistCp.<br />(Kan Zhang via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4583">HADOOP-4583</a>. Several code optimizations in HDFS.<br />(Suresh Srinivas via
+szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3923">HADOOP-3923</a>. Remove org.apache.hadoop.mapred.StatusHttpServer.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4622">HADOOP-4622</a>. Explicitly specify interpretor for non-native
+pipes binaries.<br />(Fredrik Hedberg via johan)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4505">HADOOP-4505</a>. Add a unit test to test faulty setup task and cleanup
+task killing the job.<br />(Amareshwari Sriramadasu via johan)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4608">HADOOP-4608</a>. Don't print a stack trace when the example driver gets an
+unknown program to run.<br />(Edward Yoon via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4645">HADOOP-4645</a>. Package HdfsProxy contrib project without the extra level
+of directories.<br />(Kan Zhang via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4126">HADOOP-4126</a>. Allow access to HDFS web UI on EC2<br />(tomwhite via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4612">HADOOP-4612</a>. Removes RunJar's dependency on JobClient.<br />(Sharad Agarwal via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4185">HADOOP-4185</a>. Adds setVerifyChecksum() method to FileSystem.<br />(Sharad Agarwal via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4523">HADOOP-4523</a>. Prevent too many tasks scheduled on a node from bringing
+it down by monitoring for cumulative memory usage across tasks.<br />(Vinod Kumar Vavilapalli via yhemanth)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4640">HADOOP-4640</a>. Adds an input format that can split lzo compressed
+text files.<br />(johan)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4666">HADOOP-4666</a>. Launch reduces only after a few maps have run in the
+Fair Scheduler.<br />(Matei Zaharia via johan)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4339">HADOOP-4339</a>. Remove redundant calls from FileSystem/FsShell when
+generating/processing ContentSummary.<br />(David Phillips via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2774">HADOOP-2774</a>. Add counters tracking records spilled to disk in MapTask and
+ReduceTask.<br />(Ravi Gummadi via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4513">HADOOP-4513</a>. Initialize jobs asynchronously in the capacity scheduler.<br />(Sreekanth Ramakrishnan via yhemanth)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4649">HADOOP-4649</a>. Improve abstraction for spill indices.<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3770">HADOOP-3770</a>. Add gridmix2, an iteration on the gridmix benchmark.<br />(Runping
+Qi via cdouglas)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._optimizations_')">  OPTIMIZATIONS
-</a>&nbsp;&nbsp;&nbsp;(none)
+</a>&nbsp;&nbsp;&nbsp;(1)
     <ol id="trunk_(unreleased_changes)_._optimizations_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3293">HADOOP-3293</a>. Fixes FileInputFormat to do provide locations for splits
+based on the rack/host that has the most number of bytes.<br />(Jothi Padmanabhan via ddas)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(4)
+</a>&nbsp;&nbsp;&nbsp;(26)
     <ol id="trunk_(unreleased_changes)_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4204">HADOOP-4204</a>. Fix findbugs warnings related to unused variables, naive
 Number subclass instantiation, Map iteration, and badly scoped inner
@@ -108,15 +173,77 @@ classes.<br />(Suresh Srinivas via cdouglas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4207">HADOOP-4207</a>. Update derby jar file to release 10.4.2 release.<br />(Prasad Chakka via dhruba)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4325">HADOOP-4325</a>. SocketInputStream.read() should return -1 in case EOF.<br />(Raghu Angadi)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4408">HADOOP-4408</a>. FsAction functions need not create new objects.<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4440">HADOOP-4440</a>.  TestJobInProgressListener tests for jobs killed in queued
+state<br />(Amar Kamat via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4346">HADOOP-4346</a>. Implement blocking connect so that Hadoop is not affected
+by selector problem with JDK default implementation.<br />(Raghu Angadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4388">HADOOP-4388</a>. If there are invalid blocks in the transfer list, Datanode
+should handle them and keep transferring the remaining blocks.<br />(Suresh
+Srinivas via szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4587">HADOOP-4587</a>. Fix a typo in Mapper javadoc.<br />(Koji Noguchi via szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4530">HADOOP-4530</a>. In fsck, HttpServletResponse sendError fails with
+IllegalStateException.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4377">HADOOP-4377</a>. Fix a race condition in directory creation in
+NativeS3FileSystem.<br />(David Phillips via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4621">HADOOP-4621</a>. Fix javadoc warnings caused by duplicate jars.<br />(Kan Zhang via
+cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4566">HADOOP-4566</a>. Deploy new hive code to support more types.<br />(Zheng Shao via dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4571">HADOOP-4571</a>. Add chukwa conf files to svn:ignore list.<br />(Eric Yang via
+szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4589">HADOOP-4589</a>. Correct PiEstimator output messages and improve the code
+readability.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4650">HADOOP-4650</a>. Correct a mismatch between the default value of
+local.cache.size in the config and the source.<br />(Jeff Hammerbacher via
+cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4606">HADOOP-4606</a>. Fix cygpath error if the log directory does not exist.<br />(szetszwo via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4141">HADOOP-4141</a>. Fix bug in ScriptBasedMapping causing potential infinite
+loop on misconfigured hadoop-site.<br />(Aaron Kimball via tomwhite)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4691">HADOOP-4691</a>. Correct a link in the javadoc of IndexedSortable.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4598">HADOOP-4598</a>. '-setrep' command skips under-replicated blocks.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4429">HADOOP-4429</a>. Set defaults for user, group in UnixUserGroupInformation so
+login fails more predictably when misconfigured.<br />(Alex Loddengaard via
+cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4676">HADOOP-4676</a>. Fix broken URL in blacklisted tasktrackers page.<br />(Amareshwari
+Sriramadasu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3422">HADOOP-3422</a>  Ganglia counter metrics are all reported with the metric
+name "value", so the counter values can not be seen.<br />(Jason Attributor
+and Brian Bockelman via stack)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4704">HADOOP-4704</a>. Fix javadoc typos "the the".<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4677">HADOOP-4677</a>. Fix semantics of FileSystem::getBlockLocations to return
+meaningful values.<br />(Hong Tang via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4669">HADOOP-4669</a>. Use correct operator when evaluating whether access time is
+enabled<br />(Dhruba Borthakur via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4732">HADOOP-4732</a>. Pass connection and read timeouts in the correct order when
+setting up fetch in reduce.<br />(Amareshwari Sriramadasu via cdouglas)</li>
     </ol>
   </li>
 </ul>
-<h2><a href="javascript:toggleList('release_0.19.0_-_unreleased_')">Release 0.19.0 - Unreleased
+<h2><a href="javascript:toggleList('release_0.19.1_-_unreleased_')">Release 0.19.1 - Unreleased
 </a></h2>
-<ul id="release_0.19.0_-_unreleased_">
-  <li><a href="javascript:toggleList('release_0.19.0_-_unreleased_._incompatible_changes_')">  INCOMPATIBLE CHANGES
-</a>&nbsp;&nbsp;&nbsp;(21)
-    <ol id="release_0.19.0_-_unreleased_._incompatible_changes_">
+<ul id="release_0.19.1_-_unreleased_">
+  <li><a href="javascript:toggleList('release_0.19.1_-_unreleased_._improvements_')">  IMPROVEMENTS
+</a>&nbsp;&nbsp;&nbsp;(1)
+    <ol id="release_0.19.1_-_unreleased_._improvements_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4739">HADOOP-4739</a>. Fix spelling and grammar, improve phrasing of some sections in
+mapred tutorial.<br />(Vivek Ratan via cdouglas)</li>
+    </ol>
+  </li>
+  <li><a href="javascript:toggleList('release_0.19.1_-_unreleased_._bug_fixes_')">  BUG FIXES
+</a>&nbsp;&nbsp;&nbsp;(1)
+    <ol id="release_0.19.1_-_unreleased_._bug_fixes_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4697">HADOOP-4697</a>. Fix getBlockLocations in KosmosFileSystem to handle multiple
+blocks correctly.<br />(Sriram Rao via cdouglas)</li>
+    </ol>
+  </li>
+</ul>
+<h2><a href="javascript:toggleList('older')">Older Releases</a></h2>
+<ul id="older">
+<h3><a href="javascript:toggleList('release_0.19.0_-_2008-11-18_')">Release 0.19.0 - 2008-11-18
+</a></h3>
+<ul id="release_0.19.0_-_2008-11-18_">
+  <li><a href="javascript:toggleList('release_0.19.0_-_2008-11-18_._incompatible_changes_')">  INCOMPATIBLE CHANGES
+</a>&nbsp;&nbsp;&nbsp;(23)
+    <ol id="release_0.19.0_-_2008-11-18_._incompatible_changes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3595">HADOOP-3595</a>. Remove deprecated methods for mapred.combine.once
 functionality, which was necessary to providing backwards
 compatible combiner semantics for 0.18.<br />(cdouglas via omalley)</li>
@@ -183,11 +310,13 @@ changed in <a href="http://issues.apache.org/jira/browse/HADOOP-2816">HADOOP-281
 DFS command line report reflects the same change. Config parameter
 dfs.datanode.du.pct is no longer used and is removed from the
 hadoop-default.xml.<br />(Suresh Srinivas via hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4116">HADOOP-4116</a>. Balancer should provide better resource management.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4599">HADOOP-4599</a>. BlocksMap and BlockInfo made package private.<br />(shv)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('release_0.19.0_-_unreleased_._new_features_')">  NEW FEATURES
-</a>&nbsp;&nbsp;&nbsp;(40)
-    <ol id="release_0.19.0_-_unreleased_._new_features_">
+  <li><a href="javascript:toggleList('release_0.19.0_-_2008-11-18_._new_features_')">  NEW FEATURES
+</a>&nbsp;&nbsp;&nbsp;(39)
+    <ol id="release_0.19.0_-_2008-11-18_._new_features_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3341">HADOOP-3341</a>. Allow streaming jobs to specify the field separator for map
 and reduce input and output. The new configuration values are:
   stream.map.input.field.separator
@@ -268,13 +397,11 @@ Enis Soztutar via acmurthy)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3019">HADOOP-3019</a>. A new library to support total order partitions.<br />(cdouglas via omalley)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3924">HADOOP-3924</a>. Added a 'KILLED' job status.<br />(Subramaniam Krishnan via
 acmurthy)</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2421">HADOOP-2421</a>.  Add jdiff output to documentation, listing all API
-changes from the prior release.<br />(cutting)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('release_0.19.0_-_unreleased_._improvements_')">  IMPROVEMENTS
-</a>&nbsp;&nbsp;&nbsp;(77)
-    <ol id="release_0.19.0_-_unreleased_._improvements_">
+  <li><a href="javascript:toggleList('release_0.19.0_-_2008-11-18_._improvements_')">  IMPROVEMENTS
+</a>&nbsp;&nbsp;&nbsp;(78)
+    <ol id="release_0.19.0_-_2008-11-18_._improvements_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4205">HADOOP-4205</a>. hive: metastore and ql to use the refactored SerDe library.<br />(zshao)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4106">HADOOP-4106</a>. libhdfs: add time, permission and user attribute support
 (part 2).<br />(Pete Wyckoff through zshao)</li>
@@ -404,8 +531,6 @@ incrementing the task attempt numbers by 1000 when the job restarts.<br />(Amar
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4301">HADOOP-4301</a>. Adds forrest doc for the skip bad records feature.<br />(Sharad Agarwal via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4354">HADOOP-4354</a>. Separate TestDatanodeDeath.testDatanodeDeath() into 4 tests.<br />(szetszwo)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3790">HADOOP-3790</a>. Add more unit tests for testing HDFS file append.<br />(szetszwo)</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4150">HADOOP-4150</a>. Include librecordio in hadoop releases.<br />(Giridharan Kesavan
-via acmurthy)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4321">HADOOP-4321</a>. Include documentation for the capacity scheduler.<br />(Hemanth
 Yamijala via omalley)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4424">HADOOP-4424</a>. Change menu layout for Hadoop documentation (Boris Shkolnik
@@ -413,11 +538,13 @@ via cdouglas).
 </li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4438">HADOOP-4438</a>. Update forrest documentation to include missing FsShell
 commands.<br />(Suresh Srinivas via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4105">HADOOP-4105</a>.  Add forrest documentation for libhdfs.<br />(Pete Wyckoff via cutting)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4510">HADOOP-4510</a>. Make getTaskOutputPath public.<br />(Chris Wensel via omalley)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('release_0.19.0_-_unreleased_._optimizations_')">  OPTIMIZATIONS
+  <li><a href="javascript:toggleList('release_0.19.0_-_2008-11-18_._optimizations_')">  OPTIMIZATIONS
 </a>&nbsp;&nbsp;&nbsp;(11)
-    <ol id="release_0.19.0_-_unreleased_._optimizations_">
+    <ol id="release_0.19.0_-_2008-11-18_._optimizations_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3556">HADOOP-3556</a>. Removed lock contention in MD5Hash by changing the
 singleton MessageDigester by an instance per Thread using
 ThreadLocal.<br />(Iván de Prado via omalley)</li>
@@ -444,9 +571,9 @@ TaskTrackerInstrumentation, and TaskTrackerMetricsInst) in
 org.apache.hadoop.mapred  package private instead of public.<br />(omalley)</li>
     </ol>
   </li>
-  <li><a href="javascript:toggleList('release_0.19.0_-_unreleased_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(141)
-    <ol id="release_0.19.0_-_unreleased_._bug_fixes_">
+  <li><a href="javascript:toggleList('release_0.19.0_-_2008-11-18_._bug_fixes_')">  BUG FIXES
+</a>&nbsp;&nbsp;&nbsp;(152)
+    <ol id="release_0.19.0_-_2008-11-18_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3563">HADOOP-3563</a>.  Refactor the distributed upgrade code so that it is
 easier to identify datanode and namenode related code.<br />(dhruba)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3640">HADOOP-3640</a>. Fix the read method in the NativeS3InputStream.<br />(tomwhite via
@@ -643,9 +770,6 @@ free.<br />(ddas via acmurthy)</li>
 negatives.<br />(cdouglas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3942">HADOOP-3942</a>. Update distcp documentation to include features introduced in
 <a href="http://issues.apache.org/jira/browse/HADOOP-3873">HADOOP-3873</a>, <a href="http://issues.apache.org/jira/browse/HADOOP-3939">HADOOP-3939</a>. (Tsz Wo (Nicholas), SZE via cdouglas)
-</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4257">HADOOP-4257</a>. The DFS client should pick only one datanode as the candidate
-to initiate lease recovery.  (Tsz Wo (Nicholas), SZE via dhruba)
 </li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4319">HADOOP-4319</a>. fuse-dfs dfs_read function returns as many bytes as it is
 told to read unless end-of-file is reached.<br />(Pete Wyckoff via dhruba)</li>
@@ -666,8 +790,6 @@ but not rethrown.<br />(enis)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4018">HADOOP-4018</a>. The number of tasks for a single job cannot exceed a
 pre-configured maximum value.<br />(dhruba)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4288">HADOOP-4288</a>. Fixes a NPE problem in CapacityScheduler.<br />(Amar Kamat via ddas)</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3883">HADOOP-3883</a>. Limit namenode to assign at most one generation stamp for
-a particular block within a short period.<br />(szetszwo)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4014">HADOOP-4014</a>. Create hard links with 'fsutil hardlink' on Windows.<br />(shv)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4393">HADOOP-4393</a>. Merged org.apache.hadoop.fs.permission.AccessControlException
 and org.apache.hadoop.security.AccessControlIOException into a single
@@ -677,7 +799,6 @@ maps/reduces.<br />(Sreekanth Ramakrishnan via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4361">HADOOP-4361</a>. Makes sure that jobs killed from command line are killed
 fast (i.e., there is a slot to run the cleanup task soon).<br />(Amareshwari Sriramadasu via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4400">HADOOP-4400</a>. Add "hdfs://" to fs.default.name on quickstart.html.<br />(Jeff Hammerbacher via omalley)</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4403">HADOOP-4403</a>. Make TestLeaseRecovery and TestFileCreation more robust.<br />(szetszwo)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4378">HADOOP-4378</a>. Fix TestJobQueueInformation to use SleepJob rather than
 WordCount via TestMiniMRWithDFS.<br />(Sreekanth Ramakrishnan via acmurthy)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4376">HADOOP-4376</a>. Fix formatting in hadoop-default.xml for
@@ -714,33 +835,119 @@ append.<br />(szetszwo)</li>
 not correspond to its type.<br />(shv)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4149">HADOOP-4149</a>. Fix handling of updates to the job priority, by changing the
 list of jobs to be keyed by the priority, submit time, and job tracker id.<br />(Amar Kamat via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4296">HADOOP-4296</a>. Fix job client failures by not retiring a job as soon as it
+is finished.<br />(dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4439">HADOOP-4439</a>. Remove configuration variables that aren't usable yet, in
+particular mapred.tasktracker.tasks.maxmemory and mapred.task.max.memory.<br />(Hemanth Yamijala via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4230">HADOOP-4230</a>. Fix for serde2 interface, limit operator, select * operator,
+UDF trim functions and sampling.<br />(Ashish Thusoo via dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4358">HADOOP-4358</a>. No need to truncate access time in INode. Also fixes NPE
+in CreateEditsLog.<br />(Raghu Angadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4387">HADOOP-4387</a>. TestHDFSFileSystemContract fails on windows nightly builds.<br />(Raghu Angadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4466">HADOOP-4466</a>. Ensure that SequenceFileOutputFormat isn't tied to Writables
+and can be used with other Serialization frameworks.<br />(Chris Wensel via
+acmurthy)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4525">HADOOP-4525</a>. Fix ipc.server.ipcnodelay originally missed in in <a href="http://issues.apache.org/jira/browse/HADOOP-2232">HADOOP-2232</a>.<br />(cdouglas via Clint Morgan)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4498">HADOOP-4498</a>. Ensure that JobHistory correctly escapes the job name so that
+regex patterns work.<br />(Chris Wensel via acmurthy)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4446">HADOOP-4446</a>. Modify guaranteed capacity labels in capacity scheduler's UI
+to reflect the information being displayed.<br />(Sreekanth Ramakrishnan via
+yhemanth)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4282">HADOOP-4282</a>. Some user facing URLs are not filtered by user filters.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4595">HADOOP-4595</a>. Fixes two race conditions - one to do with updating free slot count,
+and another to do with starting the MapEventsFetcher thread.<br />(ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4552">HADOOP-4552</a>. Fix a deadlock in RPC server.<br />(Raghu Angadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4471">HADOOP-4471</a>. Sort running jobs by priority in the capacity scheduler.<br />(Amar Kamat via yhemanth)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4500">HADOOP-4500</a>. Fix MultiFileSplit to get the FileSystem from the relevant
+path rather than the JobClient.<br />(Joydeep Sen Sarma via cdouglas)</li>
     </ol>
   </li>
 </ul>
-<h2><a href="javascript:toggleList('older')">Older Releases</a></h2>
-<ul id="older">
-<h3><a href="javascript:toggleList('release_0.18.2_-_unreleased_')">Release 0.18.2 - Unreleased
+<h3><a href="javascript:toggleList('release_0.18.3_-_unreleased_')">Release 0.18.3 - Unreleased
 </a></h3>
-<ul id="release_0.18.2_-_unreleased_">
-  <li><a href="javascript:toggleList('release_0.18.2_-_unreleased_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(10)
-    <ol id="release_0.18.2_-_unreleased_._bug_fixes_">
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4116">HADOOP-4116</a>. Balancer should provide better resource management.<br />(hairong)</li>
+<ul id="release_0.18.3_-_unreleased_">
+  <li><a href="javascript:toggleList('release_0.18.3_-_unreleased_._improvements_')">  IMPROVEMENTS
+</a>&nbsp;&nbsp;&nbsp;(2)
+    <ol id="release_0.18.3_-_unreleased_._improvements_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4150">HADOOP-4150</a>. Include librecordio in hadoop releases.<br />(Giridharan Kesavan
+via acmurthy)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4668">HADOOP-4668</a>. Improve documentation for setCombinerClass to clarify the
+restrictions on combiners.<br />(omalley)</li>
+    </ol>
+  </li>
+  <li><a href="javascript:toggleList('release_0.18.3_-_unreleased_._bug_fixes_')">  BUG FIXES
+</a>&nbsp;&nbsp;&nbsp;(18)
+    <ol id="release_0.18.3_-_unreleased_._bug_fixes_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4499">HADOOP-4499</a>. DFSClient should invoke checksumOk only once.<br />(Raghu Angadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4597">HADOOP-4597</a>. Calculate mis-replicated blocks when safe-mode is turned
+off manually.<br />(shv)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3121">HADOOP-3121</a>. lsr should keep listing the remaining items but not
+terminate if there is any IOException.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4610">HADOOP-4610</a>. Always calculate mis-replicated blocks when safe-mode is
+turned off.<br />(shv)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3883">HADOOP-3883</a>. Limit namenode to assign at most one generation stamp for
+a particular block within a short period.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4556">HADOOP-4556</a>. Block went missing.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4643">HADOOP-4643</a>. NameNode should exclude excessive replicas when counting
+live replicas for a block.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4703">HADOOP-4703</a>. Should not wait for proxy forever in lease recovering.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4647">HADOOP-4647</a>. NamenodeFsck should close the DFSClient it has created.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4616">HADOOP-4616</a>. Fuse-dfs can handle bad values from FileSystem.read call.<br />(Pete Wyckoff via dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4061">HADOOP-4061</a>. Throttle Datanode decommission monitoring in Namenode.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4659">HADOOP-4659</a>. Root cause of connection failure is being ost to code that
+uses it for delaying startup.<br />(Steve Loughran and Hairong via hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4614">HADOOP-4614</a>. Lazily open segments when merging map spills to avoid using
+too many file descriptors.<br />(Yuri Pradkin via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4257">HADOOP-4257</a>. The DFS client should pick only one datanode as the candidate
+to initiate lease recovery.  (Tsz Wo (Nicholas), SZE via dhruba)
+</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4713">HADOOP-4713</a>. Fix librecordio to handle records larger than 64k.<br />(Christian
+Kunz via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4635">HADOOP-4635</a>. Fix a memory leak in fuse dfs.<br />(pete wyckoff via mahadev)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4714">HADOOP-4714</a>. Report status between merges and make the number of records
+between progress reports configurable.<br />(Jothi Padmanabhan via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4726">HADOOP-4726</a>. Fix documentation typos "the the".<br />(Edward J. Yoon via
+szetszwo)</li>
+    </ol>
+  </li>
+</ul>
+<h3><a href="javascript:toggleList('release_0.18.2_-_2008-11-03_')">Release 0.18.2 - 2008-11-03
+</a></h3>
+<ul id="release_0.18.2_-_2008-11-03_">
+  <li><a href="javascript:toggleList('release_0.18.2_-_2008-11-03_._bug_fixes_')">  BUG FIXES
+</a>&nbsp;&nbsp;&nbsp;(16)
+    <ol id="release_0.18.2_-_2008-11-03_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3614">HADOOP-3614</a>. Fix a bug that Datanode may use an old GenerationStamp to get
 meta file.<br />(szetszwo)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4314">HADOOP-4314</a>. Simulated datanodes should not include blocks that are still
 being written in their block report.<br />(Raghu Angadi)</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4228">HADOOP-4228</a>. dfs datanoe metrics, bytes_read and bytes_written, overflow
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4228">HADOOP-4228</a>. dfs datanode metrics, bytes_read and bytes_written, overflow
 due to incorrect type used.<br />(hairong)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4395">HADOOP-4395</a>. The FSEditLog loading is incorrect for the case OP_SET_OWNER.<br />(szetszwo)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4351">HADOOP-4351</a>. FSNamesystem.getBlockLocationsInternal throws
 ArrayIndexOutOfBoundsException.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4403">HADOOP-4403</a>. Make TestLeaseRecovery and TestFileCreation more robust.<br />(szetszwo)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4292">HADOOP-4292</a>. Do not support append() for LocalFileSystem.<br />(hairong)</li>
-      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4398">HADOOP-4398</a>. No need to truncate access time in INode. Also fixes NPE
-in CreateEditsLog.<br />(Raghu Angadi)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4399">HADOOP-4399</a>. Make fuse-dfs multi-thread access safe.<br />(Pete Wyckoff via dhruba)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-4369">HADOOP-4369</a>. Use setMetric(...) instead of incrMetric(...) for metrics
 averages.<br />(Brian Bockelman via szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4469">HADOOP-4469</a>. Rename and add the ant task jar file to the tar file.<br />(nigel)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3914">HADOOP-3914</a>. DFSClient sends Checksum Ok only once for a block.<br />(Christian Kunz via hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4467">HADOOP-4467</a>. SerializationFactory now uses the current context ClassLoader
+allowing for user supplied Serialization instances.<br />(Chris Wensel via
+acmurthy)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4517">HADOOP-4517</a>. Release FSDataset lock before joining ongoing create threads.<br />(szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4526">HADOOP-4526</a>. fsck failing with NullPointerException.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4483">HADOOP-4483</a> Honor the max parameter in DatanodeDescriptor.getBlockArray(..)<br />(Ahad Rana and Hairong Kuang via szetszwo)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-4340">HADOOP-4340</a>. Correctly set the exit code from JobShell.main so that the
+'hadoop jar' command returns the right code to the user.<br />(acmurthy)</li>
+    </ol>
+  </li>
+  <li><a href="javascript:toggleList('release_0.18.2_-_2008-11-03_._new_features_')">  NEW FEATURES
+</a>&nbsp;&nbsp;&nbsp;(1)
+    <ol id="release_0.18.2_-_2008-11-03_._new_features_">
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2421">HADOOP-2421</a>.  Add jdiff output to documentation, listing all API
+changes from the prior release.<br />(cutting)</li>
     </ol>
   </li>
 </ul>

+ 4 - 1
docs/cluster_setup.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">
@@ -334,7 +337,7 @@ document.write("Last Published: " + document.lastModified);
         values via the <span class="codefrag">conf/hadoop-env.sh</span>.</p>
 <a name="N10097"></a><a name="Site+Configuration"></a>
 <h3 class="h4">Site Configuration</h3>
-<p>To configure the the Hadoop cluster you will need to configure the
+<p>To configure the Hadoop cluster you will need to configure the
         <em>environment</em> in which the Hadoop daemons execute as well as
         the <em>configuration parameters</em> for the Hadoop daemons.</p>
 <p>The Hadoop daemons are <span class="codefrag">NameNode</span>/<span class="codefrag">DataNode</span> 

File diff suppressed because it is too large
+ 1 - 1
docs/cluster_setup.pdf


+ 3 - 0
docs/commands_manual.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/distcp.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 28 - 6
docs/hadoop-default.html

@@ -190,17 +190,30 @@ creations/deletions), or "all".</td>
   </td>
 </tr>
 <tr>
-<td><a name="dfs.datanode.https.address">dfs.datanode.https.address</a></td><td>0.0.0.0:50475</td><td></td>
+<td><a name="dfs.https.enable">dfs.https.enable</a></td><td>false</td><td>Decide if HTTPS(SSL) is supported on HDFS
+  </td>
 </tr>
 <tr>
-<td><a name="dfs.https.address">dfs.https.address</a></td><td>0.0.0.0:50470</td><td></td>
+<td><a name="dfs.https.need.client.auth">dfs.https.need.client.auth</a></td><td>false</td><td>Whether SSL client certificate authentication is required
+  </td>
 </tr>
 <tr>
-<td><a name="https.keystore.info.rsrc">https.keystore.info.rsrc</a></td><td>sslinfo.xml</td><td>The name of the resource from which ssl keystore information
-  will be extracted
+<td><a name="dfs.https.server.keystore.resource">dfs.https.server.keystore.resource</a></td><td>ssl-server.xml</td><td>Resource file from which ssl server keystore
+  information will be extracted
   </td>
 </tr>
 <tr>
+<td><a name="dfs.https.client.keystore.resource">dfs.https.client.keystore.resource</a></td><td>ssl-client.xml</td><td>Resource file from which ssl client keystore
+  information will be extracted
+  </td>
+</tr>
+<tr>
+<td><a name="dfs.datanode.https.address">dfs.datanode.https.address</a></td><td>0.0.0.0:50475</td><td></td>
+</tr>
+<tr>
+<td><a name="dfs.https.address">dfs.https.address</a></td><td>0.0.0.0:50470</td><td></td>
+</tr>
+<tr>
 <td><a name="dfs.datanode.dns.interface">dfs.datanode.dns.interface</a></td><td>default</td><td>The name of the Network Interface from which a data node should 
   report its IP address.
   </td>
@@ -338,10 +351,14 @@ creations/deletions), or "all".</td>
   </td>
 </tr>
 <tr>
-<td><a name="dfs.namenode.decommission.interval">dfs.namenode.decommission.interval</a></td><td>300</td><td>Namenode periodicity in seconds to check if decommission is 
+<td><a name="dfs.namenode.decommission.interval">dfs.namenode.decommission.interval</a></td><td>30</td><td>Namenode periodicity in seconds to check if decommission is 
   complete.</td>
 </tr>
 <tr>
+<td><a name="dfs.namenode.decommission.nodes.per.interval">dfs.namenode.decommission.nodes.per.interval</a></td><td>5</td><td>The number of nodes namenode checks if decommission is complete
+  in each dfs.namenode.decommission.interval.</td>
+</tr>
+<tr>
 <td><a name="dfs.replication.interval">dfs.replication.interval</a></td><td>3</td><td>The periodicity in seconds with which the namenode computes 
  replication work for datanodes. </td>
 </tr>
@@ -735,7 +752,7 @@ creations/deletions), or "all".</td>
 <tr>
 <td><a name="mapred.task.profile">mapred.task.profile</a></td><td>false</td><td>To set whether the system should collect profiler
      information for some of the tasks in this job? The information is stored
-     in the the user log directory. The value is "true" if task profiling
+     in the user log directory. The value is "true" if task profiling
      is enabled.</td>
 </tr>
 <tr>
@@ -957,6 +974,11 @@ creations/deletions), or "all".</td>
     index cache that is used when serving map outputs to reducers.
   </td>
 </tr>
+<tr>
+<td><a name="mapred.merge.recordsBeforeProgress">mapred.merge.recordsBeforeProgress</a></td><td>10000</td><td> The number of records to process during merge before
+   sending a progress notification to the TaskTracker.
+  </td>
+</tr>
 </table>
 </body>
 </html>

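A quick note on the new tunables documented above (the throttled decommission monitor and the merge progress interval): a hedged sketch of setting them programmatically with the stock Configuration API, mirroring the defaults shown in this table.

import org.apache.hadoop.conf.Configuration;

public class TuningDefaults {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Check for finished decommissions every 30 seconds (the new default)...
    conf.setInt("dfs.namenode.decommission.interval", 30);
    // ...examining only 5 nodes per check.
    conf.setInt("dfs.namenode.decommission.nodes.per.interval", 5);
    // Send a progress notification to the TaskTracker every 10000 merged records.
    conf.setInt("mapred.merge.recordsBeforeProgress", 10000);
  }
}
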
+ 3 - 0
docs/hadoop_archives.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/hdfs_design.html

@@ -152,6 +152,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 5 - 2
docs/hdfs_permissions_guide.html

@@ -152,6 +152,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">
@@ -256,12 +259,12 @@ document.write("Last Published: " + document.lastModified);
 		</li>
 		
 <li>
-		   Otherwise the the other permissions of <span class="codefrag">foo</span> are tested.
+		   Otherwise the other permissions of <span class="codefrag">foo</span> are tested.
 		</li>
 	
 </ul>
 <p>
-		If a permissions check fails, the the client operation fails.	
+		If a permissions check fails, the client operation fails.	
 </p>
 </div>
 

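The corrected sentences above describe HDFS's owner/group/other check order; the sketch below is a simplified illustration of that rule (checkAccess is a hypothetical helper, not the NameNode's actual code), using the real FsPermission and FsAction classes.

import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

final class PermissionCheckSketch {
  // Owner permissions are tested first, then group, then other;
  // if the selected set does not imply the request, the client operation fails.
  static boolean checkAccess(FsPermission perm, boolean isOwner,
                             boolean inGroup, FsAction requested) {
    FsAction granted = isOwner ? perm.getUserAction()
                     : inGroup ? perm.getGroupAction()
                     : perm.getOtherAction();
    return granted.implies(requested);
  }
}
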
File diff suppressed because it is too large
+ 1 - 1
docs/hdfs_permissions_guide.pdf


+ 3 - 0
docs/hdfs_quota_admin_guide.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/hdfs_shell.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/hdfs_user_guide.html

@@ -152,6 +152,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/hod_admin_guide.html

@@ -152,6 +152,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menupage">

+ 3 - 0
docs/hod_config_guide.html

@@ -152,6 +152,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/hod_user_guide.html

@@ -151,6 +151,9 @@ document.write("Last Published: " + document.lastModified);
 <div class="menuitem">
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
+<div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
 <div class="menupage">
 <div class="menupagetitle">HOD User Guide</div>
 </div>

+ 3 - 0
docs/index.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 6 - 6
docs/jdiff/hadoop_0.17.0.xml

@@ -551,7 +551,7 @@
       <doc>
      <![CDATA[Set the quietness-mode. 
 
- In the the quiet-mode error and informational messages might not be logged.
+ In the quiet-mode error and informational messages might not be logged.
  
  @param quietmode <code>true</code> to set quiet-mode on, <code>false</code>
               to turn it off.]]>
@@ -23538,7 +23538,7 @@ To add a new serialization framework write an implementation of
  helps to cut down the amount of data transferred from the {@link Mapper} to
  the {@link Reducer}, leading to better performance.</p>
   
- <p>Typically the combiner is same as the the <code>Reducer</code> for the  
+ <p>Typically the combiner is same as the <code>Reducer</code> for the  
  job i.e. {@link #setReducerClass(Class)}.</p>
  
  @param theClass the user-defined combiner class used to combine 
@@ -23958,7 +23958,7 @@ To add a new serialization framework write an implementation of
       <param name="newValue" type="boolean"/>
       <doc>
       <![CDATA[Set whether the system should collect profiler information for some of 
- the tasks in this job? The information is stored in the the user log 
+ the tasks in this job? The information is stored in the user log 
  directory.
  @param newValue true means it should be gathered]]>
       </doc>
@@ -29523,7 +29523,7 @@ Sun Microsystems, Inc. in the United States and other countries.</i></p>]]>
       <param name="aJob" type="org.apache.hadoop.mapred.jobcontrol.Job"/>
       <doc>
       <![CDATA[Add a new job.
- @param aJob the the new job]]>
+ @param aJob the new job]]>
       </doc>
     </method>
     <method name="addJobs"
@@ -33101,7 +33101,7 @@ public class WordCountAggregatorDescriptor extends ValueAggregatorBaseDescriptor
 </pre>
 </blockquote>
 In the above code, LONG_VALUE_SUM is a string denoting the aggregation type LongValueSum, which sums over long values.
-ONE denotes a string "1". Function generateEntry(LONG_VALUE_SUM, words[i], ONE) will interpret the first argument as an aggregation type, the second as an aggregation ID, and the the third argument as the value to be aggregated. The output will look like: "LongValueSum:xxxx", where XXXX is the string value of words[i]. The value will be "1". The mapper will call generateKeyValPairs(Object key, Object val)  for each input key/value pair to generate the desired aggregation id/value pairs. 
+ONE denotes a string "1". Function generateEntry(LONG_VALUE_SUM, words[i], ONE) will interpret the first argument as an aggregation type, the second as an aggregation ID, and the third argument as the value to be aggregated. The output will look like: "LongValueSum:xxxx", where XXXX is the string value of words[i]. The value will be "1". The mapper will call generateKeyValPairs(Object key, Object val)  for each input key/value pair to generate the desired aggregation id/value pairs. 
 The down stream combiner/reducer will interpret these pairs as adding one to the aggregator XXXX.
 <p />
 Class ValueAggregatorBaseDescriptor is a base class that user plugin classes can extend. Here is the XML fragment specifying the user plugin class:
@@ -41954,7 +41954,7 @@ is serialized as
     <doc>
     <![CDATA[A helper to load the native hadoop code i.e. libhadoop.so.
  This handles the fallback to either the bundled libhadoop-Linux-i386-32.so
- or the the default java implementations where appropriate.]]>
+ or the default java implementations where appropriate.]]>
     </doc>
   </class>
   <!-- end class org.apache.hadoop.util.NativeCodeLoader -->

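The setCombinerClass javadoc fixed above notes that the combiner is typically the same class as the reducer; a small job-setup sketch with the real LongSumReducer (all other job fields omitted).

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.LongSumReducer;

public class CombinerSetup {
  public static void main(String[] args) {
    JobConf conf = new JobConf(CombinerSetup.class);
    // The combiner runs map-side to cut the data shipped to reducers;
    // here it is the same class as the reducer, as the javadoc suggests.
    conf.setCombinerClass(LongSumReducer.class);
    conf.setReducerClass(LongSumReducer.class);
  }
}
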
+ 5 - 5
docs/jdiff/hadoop_0.18.1.xml

@@ -567,7 +567,7 @@
       <doc>
      <![CDATA[Set the quietness-mode. 
 
- In the the quiet-mode error and informational messages might not be logged.
+ In the quiet-mode error and informational messages might not be logged.
  
  @param quietmode <code>true</code> to set quiet-mode on, <code>false</code>
               to turn it off.]]>
@@ -25379,7 +25379,7 @@
  helps to cut down the amount of data transferred from the {@link Mapper} to
  the {@link Reducer}, leading to better performance.</p>
   
- <p>Typically the combiner is same as the the <code>Reducer</code> for the  
+ <p>Typically the combiner is same as the <code>Reducer</code> for the  
  job i.e. {@link #setReducerClass(Class)}.</p>
  
  @param theClass the user-defined combiner class used to combine 
@@ -25814,7 +25814,7 @@
       <param name="newValue" type="boolean"/>
       <doc>
       <![CDATA[Set whether the system should collect profiler information for some of 
- the tasks in this job? The information is stored in the the user log 
+ the tasks in this job? The information is stored in the user log 
  directory.
  @param newValue true means it should be gathered]]>
       </doc>
@@ -31855,7 +31855,7 @@
       <param name="aJob" type="org.apache.hadoop.mapred.jobcontrol.Job"/>
       <doc>
       <![CDATA[Add a new job.
- @param aJob the the new job]]>
+ @param aJob the new job]]>
       </doc>
     </method>
     <method name="addJobs"
@@ -43418,7 +43418,7 @@
     <doc>
     <![CDATA[A helper to load the native hadoop code i.e. libhadoop.so.
  This handles the fallback to either the bundled libhadoop-Linux-i386-32.so
- or the the default java implementations where appropriate.]]>
+ or the default java implementations where appropriate.]]>
     </doc>
   </class>
   <!-- end class org.apache.hadoop.util.NativeCodeLoader -->

+ 5 - 5
docs/jdiff/hadoop_0.18.2.xml

@@ -567,7 +567,7 @@
       <doc>
      <![CDATA[Set the quietness-mode. 
 
- In the the quiet-mode error and informational messages might not be logged.
+ In the quiet-mode error and informational messages might not be logged.
  
  @param quietmode <code>true</code> to set quiet-mode on, <code>false</code>
               to turn it off.]]>
@@ -19389,7 +19389,7 @@
  helps to cut down the amount of data transferred from the {@link Mapper} to
  the {@link Reducer}, leading to better performance.</p>
   
- <p>Typically the combiner is same as the the <code>Reducer</code> for the  
+ <p>Typically the combiner is same as the <code>Reducer</code> for the  
  job i.e. {@link #setReducerClass(Class)}.</p>
  
  @param theClass the user-defined combiner class used to combine 
@@ -19824,7 +19824,7 @@
       <param name="newValue" type="boolean"/>
       <doc>
       <![CDATA[Set whether the system should collect profiler information for some of 
- the tasks in this job? The information is stored in the the user log 
+ the tasks in this job? The information is stored in the user log 
  directory.
  @param newValue true means it should be gathered]]>
       </doc>
@@ -25865,7 +25865,7 @@
       <param name="aJob" type="org.apache.hadoop.mapred.jobcontrol.Job"/>
       <doc>
       <![CDATA[Add a new job.
- @param aJob the the new job]]>
+ @param aJob the new job]]>
       </doc>
     </method>
     <method name="addJobs"
@@ -37428,7 +37428,7 @@
     <doc>
     <![CDATA[A helper to load the native hadoop code i.e. libhadoop.so.
  This handles the fallback to either the bundled libhadoop-Linux-i386-32.so
- or the the default java implementations where appropriate.]]>
+ or the default java implementations where appropriate.]]>
     </doc>
   </class>
   <!-- end class org.apache.hadoop.util.NativeCodeLoader -->

+ 5 - 5
docs/jdiff/hadoop_0.19.0.xml

@@ -644,7 +644,7 @@
       <doc>
      <![CDATA[Set the quietness-mode. 
 
- In the the quiet-mode error and informational messages might not be logged.
+ In the quiet-mode error and informational messages might not be logged.
  
  @param quietmode <code>true</code> to set quiet-mode on, <code>false</code>
               to turn it off.]]>
@@ -21200,7 +21200,7 @@
  helps to cut down the amount of data transferred from the {@link Mapper} to
  the {@link Reducer}, leading to better performance.</p>
   
- <p>Typically the combiner is same as the the <code>Reducer</code> for the  
+ <p>Typically the combiner is same as the <code>Reducer</code> for the  
  job i.e. {@link #setReducerClass(Class)}.</p>
  
  @param theClass the user-defined combiner class used to combine 
@@ -21620,7 +21620,7 @@
       <param name="newValue" type="boolean"/>
       <doc>
       <![CDATA[Set whether the system should collect profiler information for some of 
- the tasks in this job? The information is stored in the the user log 
+ the tasks in this job? The information is stored in the user log 
  directory.
  @param newValue true means it should be gathered]]>
       </doc>
@@ -28444,7 +28444,7 @@
       <param name="aJob" type="org.apache.hadoop.mapred.jobcontrol.Job"/>
       <doc>
       <![CDATA[Add a new job.
- @param aJob the the new job]]>
+ @param aJob the new job]]>
       </doc>
     </method>
     <method name="addJobs"
@@ -42249,7 +42249,7 @@
     <doc>
     <![CDATA[A helper to load the native hadoop code i.e. libhadoop.so.
  This handles the fallback to either the bundled libhadoop-Linux-i386-32.so
- or the the default java implementations where appropriate.]]>
+ or the default java implementations where appropriate.]]>
     </doc>
   </class>
   <!-- end class org.apache.hadoop.util.NativeCodeLoader -->

+ 9 - 0
docs/linkmap.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">
@@ -306,6 +309,12 @@ document.write("Last Published: " + document.lastModified);
 </li>
 </ul>
     
+<ul>
+<li>
+<a href="libhdfs.html">HDFS C API</a>&nbsp;&nbsp;___________________&nbsp;&nbsp;<em>libhdfs</em>
+</li>
+</ul>
+    
 <ul>
 <li>
 <a href="hod_user_guide.html">HOD User Guide</a>&nbsp;&nbsp;___________________&nbsp;&nbsp;<em>hod-user-guide</em>

File diff suppressed because it is too large
+ 16 - 16
docs/linkmap.pdf

+ 1 - 1
docs/mapred_tutorial.html

@@ -2465,7 +2465,7 @@ document.write("Last Published: " + document.lastModified);
           <a href="api/org/apache/hadoop/mapred/JobConf.html#setProfileEnabled(boolean)">
           JobConf.setProfileEnabled(boolean)</a>. If the value is set 
           <span class="codefrag">true</span>, the task profiling is enabled. The profiler
-          information is stored in the the user log directory. By default, 
+          information is stored in the user log directory. By default, 
           profiling is not enabled for the job.  </p>
 <p>Once user configures that profiling is needed, she/he can use
           the configuration property 

File diff suppressed because it is too large
+ 1 - 1
docs/mapred_tutorial.pdf


+ 3 - 0
docs/native_libraries.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 3 - 0
docs/quickstart.html

@@ -150,6 +150,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">

+ 6 - 3
docs/streaming.html

@@ -153,6 +153,9 @@ document.write("Last Published: " + document.lastModified);
 <a href="SLG_user_guide.html">HDFS Utilities</a>
 </div>
 <div class="menuitem">
+<a href="libhdfs.html">HDFS C API</a>
+</div>
+<div class="menuitem">
 <a href="hod_user_guide.html">HOD User Guide</a>
 </div>
 <div class="menuitem">
@@ -322,11 +325,11 @@ In the above example, both the mapper and the reducer are executables that read
 </p>
 <p>
   When an executable is specified for mappers, each mapper task will launch the executable as a separate process when the mapper is initialized. As the mapper task runs, it converts its inputs into lines and feed the lines to the stdin of the process. In the meantime, the mapper collects the line oriented outputs from the stdout of the process and converts each line into a key/value pair, which is collected as the output of the mapper. By default, the 
-  <em>prefix of a line up to the first tab character</em> is the <strong>key</strong> and the the rest of the line (excluding the tab character) will be the <strong>value</strong>. 
+  <em>prefix of a line up to the first tab character</em> is the <strong>key</strong> and the rest of the line (excluding the tab character) will be the <strong>value</strong>. 
   If there is no tab character in the line, then entire line is considered as key and the value is null. However, this can be customized, as discussed later.
 </p>
 <p>
-When an executable is specified for reducers, each reducer task will launch the executable as a separate process then the reducer is initialized. As the reducer task runs, it converts its input key/values pairs into lines and feeds the lines to the stdin of the process. In the meantime, the reducer collects the line oriented outputs from the stdout of the process, converts each line into a key/value pair, which is collected as the output of the reducer. By default, the prefix of a line up to the first tab character is the key and the the rest of the line (excluding the tab character) is the value. However, this can be customized, as discussed later.
+When an executable is specified for reducers, each reducer task will launch the executable as a separate process then the reducer is initialized. As the reducer task runs, it converts its input key/values pairs into lines and feeds the lines to the stdin of the process. In the meantime, the reducer collects the line oriented outputs from the stdout of the process, converts each line into a key/value pair, which is collected as the output of the reducer. By default, the prefix of a line up to the first tab character is the key and the rest of the line (excluding the tab character) is the value. However, this can be customized, as discussed later.
 </p>
 <p>
 This is the basis for the communication protocol between the Map/Reduce framework and the streaming mapper/reducer.
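A hedged Java sketch of the default split rule described above; it mirrors the documented behavior, not the framework's actual implementation (class and method names are illustrative):

    public class TabSplit {                    // hypothetical class name
      // Default streaming rule: key = prefix up to the first tab, value =
      // rest of the line; with no tab the whole line is the key and the
      // value is null.
      static String[] splitLine(String line) {
        int tab = line.indexOf('\t');
        if (tab == -1) {
          return new String[] { line, null };
        }
        return new String[] { line.substring(0, tab), line.substring(tab + 1) };
      }

      public static void main(String[] args) {
        String[] kv = splitLine("user42\t17 clicks");
        System.out.println(kv[0] + " -> " + kv[1]);  // user42 -> 17 clicks
      }
    }

As the later "Customizing the Way to Split Lines into Key/Value Pairs" section notes, both the separator character and the split position are configurable.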
@@ -604,7 +607,7 @@ To set an environment variable in a streaming command use:
 <a name="N101C3"></a><a name="Customizing+the+Way+to+Split+Lines+into+Key%2FValue+Pairs"></a>
 <h3 class="h4">Customizing the Way to Split Lines into Key/Value Pairs </h3>
 <p>
-As noted earlier, when the Map/Reduce framework reads a line from the stdout of the mapper, it splits the line into a key/value pair. By default, the prefix of the line up to the first tab character is the key and the the rest of the line (excluding the tab character) is the value.
+As noted earlier, when the Map/Reduce framework reads a line from the stdout of the mapper, it splits the line into a key/value pair. By default, the prefix of the line up to the first tab character is the key and the rest of the line (excluding the tab character) is the value.
 </p>
 <p>
 However, you can customize this default. You can specify a field separator other than the tab character (the default), and you can specify the nth (n &gt;= 1) character rather than the first character in a line (the default) as the separator between the key and value. For example:

File diff suppressed because it is too large
+ 1 - 1
docs/streaming.pdf


+ 1 - 1
src/docs/src/documentation/content/xdocs/cluster_setup.xml

@@ -99,7 +99,7 @@
       <section>
         <title>Site Configuration</title>
         
-        <p>To configure the the Hadoop cluster you will need to configure the
+        <p>To configure the Hadoop cluster you will need to configure the
         <em>environment</em> in which the Hadoop daemons execute as well as
         the <em>configuration parameters</em> for the Hadoop daemons.</p>
         

+ 2 - 2
src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml

@@ -43,12 +43,12 @@
 		   Else if the group of <code>foo</code> matches any of member of the groups list, then the group permissions are tested;
 		</li>
 		<li>
-		   Otherwise the the other permissions of <code>foo</code> are tested.
+		   Otherwise the other permissions of <code>foo</code> are tested.
 		</li>
 	</ul>
 
 <p>
-		If a permissions check fails, the the client operation fails.	
+		If a permissions check fails, the client operation fails.	
 </p>
      </section>
 

+ 1 - 1
src/docs/src/documentation/content/xdocs/mapred_tutorial.xml

@@ -1871,7 +1871,7 @@
           <a href="ext:api/org/apache/hadoop/mapred/jobconf/setprofileenabled">
           JobConf.setProfileEnabled(boolean)</a>. If the value is set 
           <code>true</code>, the task profiling is enabled. The profiler
-          information is stored in the the user log directory. By default, 
+          information is stored in the user log directory. By default, 
           profiling is not enabled for the job.  </p>
           
           <p>Once user configures that profiling is needed, she/he can use

+ 3 - 3
src/docs/src/documentation/content/xdocs/streaming.xml

@@ -48,11 +48,11 @@ $HADOOP_HOME/bin/hadoop  jar $HADOOP_HOME/hadoop-streaming.jar \
 In the above example, both the mapper and the reducer are executables that read the input from stdin (line by line) and emit the output to stdout. The utility will create a Map/Reduce job, submit the job to an appropriate cluster, and monitor the progress of the job until it completes.
 </p><p>
   When an executable is specified for mappers, each mapper task will launch the executable as a separate process when the mapper is initialized. As the mapper task runs, it converts its inputs into lines and feed the lines to the stdin of the process. In the meantime, the mapper collects the line oriented outputs from the stdout of the process and converts each line into a key/value pair, which is collected as the output of the mapper. By default, the 
-  <em>prefix of a line up to the first tab character</em> is the <strong>key</strong> and the the rest of the line (excluding the tab character) will be the <strong>value</strong>. 
+  <em>prefix of a line up to the first tab character</em> is the <strong>key</strong> and the rest of the line (excluding the tab character) will be the <strong>value</strong>. 
   If there is no tab character in the line, then entire line is considered as key and the value is null. However, this can be customized, as discussed later.
 </p>
 <p>
-When an executable is specified for reducers, each reducer task will launch the executable as a separate process then the reducer is initialized. As the reducer task runs, it converts its input key/values pairs into lines and feeds the lines to the stdin of the process. In the meantime, the reducer collects the line oriented outputs from the stdout of the process, converts each line into a key/value pair, which is collected as the output of the reducer. By default, the prefix of a line up to the first tab character is the key and the the rest of the line (excluding the tab character) is the value. However, this can be customized, as discussed later.
+When an executable is specified for reducers, each reducer task will launch the executable as a separate process then the reducer is initialized. As the reducer task runs, it converts its input key/values pairs into lines and feeds the lines to the stdin of the process. In the meantime, the reducer collects the line oriented outputs from the stdout of the process, converts each line into a key/value pair, which is collected as the output of the reducer. By default, the prefix of a line up to the first tab character is the key and the rest of the line (excluding the tab character) is the value. However, this can be customized, as discussed later.
 </p><p>
 This is the basis for the communication protocol between the Map/Reduce framework and the streaming mapper/reducer.
 </p><p>
@@ -292,7 +292,7 @@ To set an environment variable in a streaming command use:
 <section>
 <title>Customizing the Way to Split Lines into Key/Value Pairs </title>
 <p>
-As noted earlier, when the Map/Reduce framework reads a line from the stdout of the mapper, it splits the line into a key/value pair. By default, the prefix of the line up to the first tab character is the key and the the rest of the line (excluding the tab character) is the value.
+As noted earlier, when the Map/Reduce framework reads a line from the stdout of the mapper, it splits the line into a key/value pair. By default, the prefix of the line up to the first tab character is the key and the rest of the line (excluding the tab character) is the value.
 </p>
 <p>
 However, you can customize this default. You can specify a field separator other than the tab character (the default), and you can specify the nth (n >= 1) character rather than the first character in a line (the default) as the separator between the key and value. For example:

+ 1 - 1
src/mapred/org/apache/hadoop/mapred/ReduceTask.java

@@ -2512,7 +2512,7 @@ class ReduceTask extends Task {
           //earlier when we invoked cloneFileAttributes
           localFileSys.delete(outputPath, true);
           throw (IOException)new IOException
-                  ("Intermedate merge failed").initCause(e);
+                  ("Intermediate merge failed").initCause(e);
         }
 
         // Note the output of the merge

Some files were not shown because too many files changed in this diff