
HADOOP-3693. Fix archives, distcp and native library documentation to
conform to style guidelines. Contributed by Amareshwari Sriramadasu.


git-svn-id: https://svn.apache.org/repos/asf/hadoop/core/trunk@674641 13f79535-47bb-0310-9956-ffa450edef68

Christopher Douglas, 17 years ago
commit 5325f86f5f

+ 3 - 0
CHANGES.txt

@@ -768,6 +768,9 @@ Release 0.18.0 - Unreleased
     input. Validation job still runs on default fs.
     (Jothi Padmanabhan via cdouglas)
 
+    HADOOP-3693. Fix archives, distcp and native library documentation to
+    conform to style guidelines. (Amareshwari Sriramadasu via cdouglas)
+
 Release 0.17.1 - Unreleased
 
   INCOMPATIBLE CHANGES

+ 10 - 4
docs/changes.html

@@ -56,7 +56,7 @@
 </a></h2>
 <ul id="trunk_(unreleased_changes)_">
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._incompatible_changes_')">  INCOMPATIBLE CHANGES
-</a>&nbsp;&nbsp;&nbsp;(4)
+</a>&nbsp;&nbsp;&nbsp;(5)
     <ol id="trunk_(unreleased_changes)_._incompatible_changes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3595">HADOOP-3595</a>. Remove deprecated methods for mapred.combine.once
 functionality, which was necessary to providing backwards
@@ -75,6 +75,7 @@ compatible combiner semantics for 0.18.<br />(cdouglas via omalley)</li>
 hadoop.hdfs that reflect whether they are client, server, protocol,
 etc. DistributedFileSystem and DFSClient have moved and are now
 considered package private.<br />(Sanjay Radia via omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2325">HADOOP-2325</a>.  Require Java 6.<br />(cutting)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._new_features_')">  NEW FEATURES
@@ -259,7 +260,7 @@ in hadoop user guide.<br />(shv)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('release_0.18.0_-_unreleased_._improvements_')">  IMPROVEMENTS
-</a>&nbsp;&nbsp;&nbsp;(45)
+</a>&nbsp;&nbsp;&nbsp;(46)
     <ol id="release_0.18.0_-_unreleased_._improvements_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2928">HADOOP-2928</a>. Remove deprecated FileSystem.getContentLength().<br />(Lohit Vjayarenu via rangadi)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3130">HADOOP-3130</a>. Make the connect timeout smaller for getFile.<br />(Amar Ramesh Kamat via ddas)</li>
@@ -350,6 +351,7 @@ via the DistributedCache.<br />(Amareshwari Sriramadasu via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3606">HADOOP-3606</a>. Updates the Streaming doc.<br />(Amareshwari Sriramadasu via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3532">HADOOP-3532</a>. Add jdiff reports to the build scripts.<br />(omalley)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3100">HADOOP-3100</a>. Develop tests to test the DFS command line interface.<br />(mukund)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3688">HADOOP-3688</a>. Fix up HDFS docs.<br />(Robert Chansler via hairong)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('release_0.18.0_-_unreleased_._optimizations_')">  OPTIMIZATIONS
@@ -376,7 +378,7 @@ InputFormat.validateInput.<br />(tomwhite via omalley)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('release_0.18.0_-_unreleased_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(110)
+</a>&nbsp;&nbsp;&nbsp;(111)
     <ol id="release_0.18.0_-_unreleased_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2905">HADOOP-2905</a>. 'fsck -move' triggers NPE in NameNode.<br />(Lohit Vjayarenu via rangadi)</li>
       <li>Increment ClientProtocol.versionID missed by <a href="http://issues.apache.org/jira/browse/HADOOP-2585">HADOOP-2585</a>.<br />(shv)</li>
@@ -595,6 +597,8 @@ a lock during this call.<br />(Arun C Murthy via cdouglas)</li>
 read from DFS.<br />(rangadi)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3683">HADOOP-3683</a>. Fix dfs metrics to count file listings rather than files
 listed.<br />(lohit vijayarenu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3597">HADOOP-3597</a>. Fix SortValidator to use filesystems other than the default as
+input. Validation job still runs on default fs.<br />(Jothi Padmanabhan via cdouglas)</li>
     </ol>
   </li>
 </ul>
@@ -620,7 +624,7 @@ therefore provides better resource management.<br />(hairong)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('release_0.17.1_-_unreleased_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(13)
+</a>&nbsp;&nbsp;&nbsp;(14)
     <ol id="release_0.17.1_-_unreleased_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2159">HADOOP-2159</a> Namenode stuck in safemode. The counter blockSafe should
 not be decremented for invalid blocks.<br />(hairong)</li>
@@ -648,6 +652,8 @@ network location is not resolved.<br />(hairong)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3571">HADOOP-3571</a>. Fix bug in block removal used in lease recovery.<br />(shv)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3645">HADOOP-3645</a>. MetricsTimeVaryingRate returns wrong value for
 metric_avg_time.<br />(Lohit Vijayarenu via hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3633">HADOOP-3633</a>. Correct exception handling in DataXceiveServer, and throttle
+the number of xceiver threads in a data-node.<br />(shv)</li>
     </ol>
   </li>
 </ul>

+ 6 - 6
docs/distcp.html

@@ -234,10 +234,10 @@ document.write("Last Published: " + document.lastModified);
 <h2 class="h3">Overview</h2>
 <div class="section">
 <p>DistCp (distributed copy) is a tool used for large inter/intra-cluster
-      copying. It uses map/reduce to effect its distribution, error
-      handling/recovery, and reporting. It expands a list of files and
+      copying. It uses Map/Reduce to effect its distribution, error
+      handling and recovery, and reporting. It expands a list of files and
       directories into input to map tasks, each of which will copy a partition
-      of the files specified in the source list. Its map/reduce pedigree has
+      of the files specified in the source list. Its Map/Reduce pedigree has
       endowed it with some quirks in both its semantics and execution. The
       purpose of this document is to offer guidance for common tasks and to
       elucidate its model.</p>
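The overview hunk above describes DistCp as a Map/Reduce-driven bulk copier. A minimal invocation, for orientation only (the namenode hosts, ports, and paths below are placeholders, not taken from this commit), looks like:

```shell
# Hypothetical DistCp run: copy one HDFS directory tree to another cluster.
# nn1/nn2 hostnames, ports, and paths are illustrative placeholders.
hadoop distcp \
  hdfs://nn1.example.com:8020/user/data/src \
  hdfs://nn2.example.com:8020/user/data/dst
```

Each map task then copies its partition of the expanded source list, as the paragraph above explains.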
@@ -303,13 +303,13 @@ document.write("Last Published: " + document.lastModified);
         copier failed for some subset of its files, but succeeded on a later
         attempt (see <a href="#etc">Appendix</a>).</p>
 <p>It is important that each TaskTracker can reach and communicate with
-        both the source and destination filesystems. For hdfs, both the source
+        both the source and destination file systems. For HDFS, both the source
         and destination must be running the same version of the protocol or use
         a backwards-compatible protocol (see <a href="#cpver">Copying Between
         Versions</a>).</p>
 <p>After a copy, it is recommended that one generates and cross-checks
         a listing of the source and destination to verify that the copy was
-        truly successful. Since DistCp employs both map/reduce and the
+        truly successful. Since DistCp employs both Map/Reduce and the
         FileSystem API, issues in or between any of the three could adversely
         and silently affect the copy. Some have had success running with
         <span class="codefrag">-update</span> enabled to perform a second pass, but users should
@@ -518,7 +518,7 @@ document.write("Last Published: " + document.lastModified);
           copiers (i.e. maps) may not always increase the number of
           simultaneous copies nor the overall throughput.</p>
 <p>If <span class="codefrag">-m</span> is not specified, DistCp will attempt to
-          schedule work for <span class="codefrag">min(total_bytes / bytes.per.map, 20 *
+          schedule work for <span class="codefrag">min (total_bytes / bytes.per.map, 20 *
           num_task_trackers)</span> where <span class="codefrag">bytes.per.map</span> defaults
           to 256MB.</p>
 <p>Tuning the number of maps to the size of the source and
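The default in the hunk above, min(total_bytes / bytes.per.map, 20 * num_task_trackers), is plain integer arithmetic; a quick sketch with assumed figures (50 GB of source data, 20 TaskTrackers — both invented for illustration):

```shell
# Reproduce DistCp's default map count:
#   min(total_bytes / bytes.per.map, 20 * num_task_trackers)
# The source size and TaskTracker count are assumptions for the example.
total_bytes=$((50 * 1024 * 1024 * 1024))   # 50 GB of source data (assumed)
bytes_per_map=$((256 * 1024 * 1024))       # bytes.per.map default: 256MB
num_task_trackers=20                       # assumed cluster size

by_size=$((total_bytes / bytes_per_map))   # 200 maps by data volume
by_cluster=$((20 * num_task_trackers))     # 400 maps by cluster cap
echo $(( by_size < by_cluster ? by_size : by_cluster ))   # prints 200
```

Here the data-volume term wins, so DistCp would schedule 200 maps.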

File diff suppressed because it is too large
+ 0 - 0
docs/distcp.pdf


+ 14 - 12
docs/hadoop_archives.html

@@ -207,7 +207,7 @@ document.write("Last Published: " + document.lastModified);
 <div class="section">
 <p>
         Hadoop archives are special format archives. A Hadoop archive
-        maps to a FileSystem directory. A Hadoop archive always has a *.har
+        maps to a file system directory. A Hadoop archive always has a *.har
         extension. A Hadoop archive directory contains metadata (in the form 
         of _index and _masterindex) and data (part-*) files. The _index file contains
         the name of the files that are part of the archive and the location
@@ -224,20 +224,21 @@ document.write("Last Published: " + document.lastModified);
         
 </p>
 <p>
-        -archiveName is the name of the archive you would like to create. An example would be 
-        foo.har. The name should have a *.har extension. The inputs are filesystem pathnames which 
-        work as usual with regular expressions. The destination directory would contain the archive.
-        Note that this is a Map Reduce job that creates the archives. You would need a map reduce cluster
-        to run this. The following is an example:</p>
+        -archiveName is the name of the archive you would like to create. 
+        An example would be foo.har. The name should have a *.har extension. 
+        The inputs are file system pathnames which work as usual with regular
+        expressions. The destination directory would contain the archive.
+        Note that this is a Map/Reduce job that creates the archives. You would
+        need a map reduce cluster to run this. The following is an example:</p>
 <p>
         
 <span class="codefrag">hadoop archive -archiveName foo.har /user/hadoop/dir1 /user/hadoop/dir2 /user/zoo/</span>
         
 </p>
 <p>
-        In the above example /user/hadoop/dir1 and /user/hadoop/dir2 will be archived in the following
-        filesystem directory -- /user/zoo/foo.har. The sources are not changed or removed when an archive
-        is created.
+        In the above example /user/hadoop/dir1 and /user/hadoop/dir2 will be
+        archived in the following file system directory -- /user/zoo/foo.har.
+        The sources are not changed or removed when an archive is created.
         </p>
 </div>
         
@@ -245,9 +246,10 @@ document.write("Last Published: " + document.lastModified);
 <h2 class="h3"> How to look up files in archives? </h2>
 <div class="section">
 <p>
-        The archive exposes itself as a filesystem layer. So all the fs shell commands in the archives work but 
-        with a different URI. Also, note that archives are immutable. So, rename's, deletes and creates return an error. 
-        URI for Hadoop Archives is 
+        The archive exposes itself as a file system layer. So all the fs shell
+        commands in the archives work but with a different URI. Also, note that
+        archives are immutable. So, rename's, deletes and creates return
+        an error. URI for Hadoop Archives is 
         </p>
 <p>
 <span class="codefrag">har://scheme-hostname:port/archivepath/fileinarchive</span>
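Given that URI scheme, archived files are read through the ordinary fs shell commands, as the hunk above says. An illustrative lookup against the foo.har archive created earlier (the bare har:/// form assumes the default file system; the inner file name is a placeholder):

```shell
# Hypothetical lookups inside the foo.har archive from the earlier example.
# Omitting scheme-hostname:port falls back to the default file system;
# dir1/somefile.txt is an invented path for illustration.
hadoop fs -ls  har:///user/zoo/foo.har
hadoop fs -cat har:///user/zoo/foo.har/dir1/somefile.txt
```

Writes would fail here: as the archive doc notes, renames, deletes, and creates return an error on the immutable archive.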

+ 22 - 22
docs/hadoop_archives.pdf

@@ -58,10 +58,10 @@ endobj
 >>
 endobj
 14 0 obj
-<< /Length 1800 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 1798 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-Gatm<?#SIU'Rf_Zd+a_VQ6khF_9a>GQ8JbFClO?]=GDBmBhPKPWh^?Lqt6^8ah3kZ36d2jKXA[Em`.?^;X[Ym`GK.r^?o%'mEDhY1Y6ZcT5-#!Yo!8.m<YJ6.]bie>l2s<US-GXpasIm>[^+_i]Z-p>/;;E1j^MG]1_HcC\?0CLXkaLYgQ_OERn:_Hohg*b<N]pO'4B!=R4YDZZe":^iE9W:^)VZgQ+C3-DLd(]HTq%F1JQPXAD(Fh?P[jS97\H'Y"4ACC2,Js5%?J\NfSP'RaLkn!r$"]m#s)Oeel@n4GZ"hr=e=P[^daFChA;aX)548]`bl^=)?S[a)Wq);%^Q?UDpSB-dOXO;`%V%Br!#A0hLq9%;qD(#@%4s2rInGleQ#%J[3D,5CO"aMXk5TeV?1Z[.'(+5_r4_E(i-cntaAXr[99Na]=caON*cMROlaLu)>n-nR"FRusqW9.r8D>#d7Ob+gskE%U?Emd.8@D=aZ^9,QDl*uF#M("NrXirMu_eAJrOFr]#&L-SYne%M?6#pZr\S=4\=(9RWhmDqYIE4JhV,=T//HK>Lhd.RZHrP\5d>RRU+e!%,9<POi>ml0D-cA/P6qV>%PIEPdap'Rl[K\(-<iA2m^00"6?qiR:1-J<l"b=NEhBEQD12E<1c4SqgD9,lA,O>l``M[D4SFR31(:gXcU3B4dM[+3#JXnC!1<'<Ib$;J6,?PPA"=Z*UdLE.8[[h$TK3c2-QGP5a:Ufr6OnY3kW4_:7:*UnNq)qZQC,0L]G]-^TbWYj]s:D6;^.B9>bX;pI,BGH8tor@e7=kQtKK>7s$7mBhkG#]926ik=aI?4:DK9l7oCmkKspGZBI%dtmu`#d19nbY,\-jeR?d1m35ld9E5iZfm;8`$h&:erq!P\P_BC#.3"Vu*RN)'V'%kOhE,$o7(*!Uh:(YE["taDte<E5ECHBa[d]Xu8I=,dh%X"r\TW&#G2*dd"fFAD<Z[N7Sb9a$e!*i3hBC#PgdG*7/soX[-pMKPo70H''JrV5XKd8nl@1T9$gt5s;![S2MoQcM/]>lq;?/K<A'Jmu.tQkdSRFd#g`LVkhgDoVAP(ddNR<3BOlu@[[S?\?k;1A9c3F"fgBXUTj%tH2^"i$/TsNpb^R?0>*Bl%[at2cZX`a-R`%-estf/BJA4<FQJbI"+`=M&$K3Af[]d7(uQiocR#qH"J#tke*=BEG0j,lCPQl-%<gQXg/M9+E-@F^[tKkV4T"C'*?9BG:>[H7DsV=R0#Umi1c8V&7#<-J>JLf]q@*[6)j6E=i4>;P_SL@`R3ld4Oh+s*=qSb&bcL:T2B5!@nQkd!iblc^[2UC$mLPtEls)1Z4@r+1jsfD7li)#q=hX#dg!3cZ[N<CA1R.Y*4TKb#)\!#>[P2]o(69g88l@D?Qu*OkL+28t.?"p0,HdOf58_<V"BfS:ap<V[0q>CWVB8hqAV!/s&?9ttU]-Ya3Ijq3JmP&X2tgd=^*h,9Em<A@RKsV/dfc]<<Q@sCYMoaPd@k9$\T+dDG#bdN<GhN40MRKFJ^PY5Ncp9:W9Z;D5ptkr^j<2L&Z'F<D[\qr"o2rTm.u6Js4cnmH)$cQ:N2C1gHJ$c8##/EfSWq[G0t@.\M=eMAeZGf=L'Hdhd\`d`&nqAI7&0o'X[7P41&LACblX2]\'RE"r-_WJ&+4@(dnA1lZ=:llki0?!'ULsipsGb"&$i</onS!^8KM4A\7S:cV`K;iW04c>Tq.Ejf5N4DK/V6"Kq4X[bgb\W8"kZ6E6't6Q<\hp#g44[*?FULdXAJ#tr5)?cBs>!W~>
+Gatm<D/\/e&H;*)Tl3S^=I,me#=F8g=I0FuZ*c3=@2T8QVCep5W01$un(3W-,jQk@G+&m9'%.E]T"B?@46l3J[WiTkY&?DQC--L<h9<8Hn`O(K8*krDB=hJ`XU=BT/(]5]bu#VXYC9,a4fg=^_r;/E.';Eg8X_dTrnT*j:M]XPm"a-XMm+BWrA\SpGeP2RU#dA'Qq,RD7]c:Lf@aBm?spODF\m2j\[ETf^6souI,l#617Sjo>KR9f;>WP-I!ZIbYYj<cR'%l_'\LWMkgG%6V!)9`icJWKFZ;;t,=tV_iJ!<NI#efR[1m\aDm/OBZXHFsC/XV]Q7k(AEa+;P=9c4_iCdiej/`AY'r8hg>Q`k)Y-JZC2^N2WLDAVTs8S@L)tF,:/2uM-!fQ!Cj!,)<$kJ>gW%b:8YMnLsTs/!,dPVKjfX9`Ln(95"-Do0ed$D0aHF.E`Ug*=I,BY<.B,in5.MsPpd@;Pn(3MG"m'Aa<q:!8.N:!j-MfOX=<ZgG`!DpJZ_n=WF*7UR75Z7%6?I-iK&R6iZ-A]nZ(8.KEH)R&`2i;FIB\64d@=]BO?8&OT38Tc:@5W-fEd"hTejJ6)NHr#)^+G"U[o?DcEgF@`%FuK8lSka2)77p1S3.,Xa@mD6lp$ftUl#Yh!G+Te*#)DEI4".*#'C75NBH0oJrCTg]@IVE(Be%;i4"qC/KE5C&!j-Yq'f?Xnl>OdR*!LNWD22BhadtFe>`'Q]Y.+I,h4(T'T@A!HceMSMr=Pg2m@R"8e3FAG"o,kWu0e.h*`5Yg+$hm.n<24RYDu6SN`Vu0O"oA#je2hqOU%1X846_$38EEh=>2`/O#7jPCR0BVJFYKD9871UDZ4D5JrGCOH%g?0n[9;[,,L@iM/\iP.Zp::sUuLP\OGsg"b"<Vu*RN)$2`Ckfl\[$p*X2=mr.V0@[t$ahu,miIif:dh\\3>]"kYBFogd)eaWO5EmcQBr`&*"3Ktje[q\T4!mRsEF.C[FY9;2RfaG0V_na?RBUKG.515r:2+J$]M*%l5+dmEaRELIA54)0(^L)(66<#6gd.#9<'DN14[q[=1irrElr-gICO=Lf+:23\fNaqV4>Brk@GKT-ShnFJPQKNU.Q%63T@1oL1f#gP>K3QI-[t20c#^ZC[Q6JMLdj\@iaL)'%d"#6A`\;JjnM)K[=!Y:0h;:eBEk>Y9&Oas/IFcu6R9'CY,`bR'Xup_0P[ds2kWdnmL!??ZL*r3;M&OGICP&r:sS=5;Z"HlGN$9+'h)']F\\/\K!U.M6>O?p?u&,M;B)9e!A!N1.r$qUNJu?r1D#L!7U]m4HJ!lNRE$,ngBlYo?YVktYhni>('8Ta=R[R^"du9Qf+S'W79R=f67KsXKrds&fCXcG^+UUIBqsP#r>$^ieDelO?;i=C+1$s$g+^L92hc8:BZ*&ujbJ\#[ql+3<Z@>[emM5]V^n2""rL';TQ#u"<Q<bgs$D;W5c?S,02I<AHUEYH&S;<ZAsNJb"A79[(tEf3j*\&^F+^.L]TN0K3+>\\X4W$(ephJ&-H;$1l&WE3>iKOA]nT;/M>+$fHIU*id^B!g6Vog0rYdQ(52b/p2G^bp1t1R.*G!-t&&%KX;=f6Q7BU:8BK?'\)2[<.$5$7=Tg-`H[4N:OUIIQ$RN3=NZ8$:p`pA_&#/ZZBH^8J5;.oG9]H`;""FPB`2M@)2UW<t]=(57dUR#=rM.saoDL9n-ZoYF-QQGHQMIe=[:uDg6j>c\6lDh3sgf?Ccl]6-ln%2WMm_#;]WfB#,)nabqkB:jPc,#j)6p(scQAFA3dR]@T~>
 endstream
 endobj
 15 0 obj
@@ -194,33 +194,33 @@ endobj
 xref
 0 28
 0000000000 65535 f 
-0000005335 00000 n 
-0000005407 00000 n 
-0000005499 00000 n 
+0000005333 00000 n 
+0000005405 00000 n 
+0000005497 00000 n 
 0000000015 00000 n 
 0000000071 00000 n 
 0000000642 00000 n 
 0000000762 00000 n 
 0000000801 00000 n 
-0000005633 00000 n 
+0000005631 00000 n 
 0000000936 00000 n 
-0000005696 00000 n 
+0000005694 00000 n 
 0000001072 00000 n 
-0000005762 00000 n 
+0000005760 00000 n 
 0000001209 00000 n 
-0000003102 00000 n 
-0000003210 00000 n 
-0000003794 00000 n 
-0000005828 00000 n 
-0000003902 00000 n 
-0000004139 00000 n 
-0000004390 00000 n 
-0000004673 00000 n 
-0000004786 00000 n 
-0000004896 00000 n 
-0000005004 00000 n 
-0000005110 00000 n 
-0000005226 00000 n 
+0000003100 00000 n 
+0000003208 00000 n 
+0000003792 00000 n 
+0000005826 00000 n 
+0000003900 00000 n 
+0000004137 00000 n 
+0000004388 00000 n 
+0000004671 00000 n 
+0000004784 00000 n 
+0000004894 00000 n 
+0000005002 00000 n 
+0000005108 00000 n 
+0000005224 00000 n 
 trailer
 <<
 /Size 28
@@ -228,5 +228,5 @@ trailer
 /Info 4 0 R
 >>
 startxref
-5879
+5877
 %%EOF

+ 1 - 1
docs/hod_admin_guide.html

@@ -460,7 +460,7 @@ in the HOD Configuration Guide.</p>
 <ul>
    
 <li>${JAVA_HOME}: Location of Java for Hadoop. Hadoop supports Sun JDK
-    1.5.x and above.</li>
+    1.6.x and above.</li>
    
 <li>${CLUSTER_NAME}: Name of the cluster which is specified in the
     'node property' as mentioned in resource manager configuration.</li>

File diff suppressed because it is too large
+ 0 - 0
docs/hod_admin_guide.pdf


+ 2 - 2
docs/mapred_tutorial.html

@@ -1731,9 +1731,9 @@ document.write("Last Published: " + document.lastModified);
         <em>current working directory</em> added to the
         <span class="codefrag">java.library.path</span> and <span class="codefrag">LD_LIBRARY_PATH</span>. 
         And hence the cached libraries can be loaded via 
-        <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#loadLibrary(java.lang.String)">
+        <a href="http://java.sun.com/javase/6/docs/api/java/lang/System.html#loadLibrary(java.lang.String)">
         System.loadLibrary</a> or 
-        <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#load(java.lang.String)">
+        <a href="http://java.sun.com/javase/6/docs/api/java/lang/System.html#load(java.lang.String)">
         System.load</a>. More details on how to load shared libraries through 
         distributed cache are documented at 
         <a href="native_libraries.html#Loading+native+libraries+through+DistributedCache">
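The tutorial passage above says cached libraries land in the task's working directory on java.library.path. One hedged sketch of getting a native library there with a streaming job (the -cacheFile option follows the streaming docs of this era; the jar name, HDFS path, and library name are all placeholders):

```shell
# Hypothetical: ship a native library to task working directories via the
# DistributedCache, so System.loadLibrary("foo") can resolve libfoo.so.
# All paths and the symlink name are illustrative assumptions.
hadoop jar hadoop-streaming.jar \
  -cacheFile hdfs://namenode:8020/user/me/libfoo.so#libfoo.so \
  -mapper my_mapper.py \
  -input  /user/me/in \
  -output /user/me/out
```

The #libfoo.so fragment names the symlink created in the working directory, which is what the loader then finds.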

+ 210 - 210
docs/mapred_tutorial.pdf

@@ -1236,7 +1236,7 @@ endobj
 /Rect [ 156.984 359.55 251.976 347.55 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
-/A << /URI (http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#loadLibrary(java.lang.String))
+/A << /URI (http://java.sun.com/javase/6/docs/api/java/lang/System.html#loadLibrary(java.lang.String))
 /S /URI >>
 /H /I
 >>
@@ -1247,7 +1247,7 @@ endobj
 /Rect [ 267.972 359.55 326.976 347.55 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
-/A << /URI (http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#load(java.lang.String))
+/A << /URI (http://java.sun.com/javase/6/docs/api/java/lang/System.html#load(java.lang.String))
 /S /URI >>
 /H /I
 >>
@@ -2960,53 +2960,53 @@ endobj
 xref
 0 332
 0000000000 65535 f 
-0000123392 00000 n 
-0000123697 00000 n 
-0000123790 00000 n 
+0000123388 00000 n 
+0000123693 00000 n 
+0000123786 00000 n 
 0000000015 00000 n 
 0000000071 00000 n 
 0000001300 00000 n 
 0000001420 00000 n 
 0000001578 00000 n 
-0000123942 00000 n 
+0000123938 00000 n 
 0000001713 00000 n 
-0000124005 00000 n 
+0000124001 00000 n 
 0000001850 00000 n 
-0000124071 00000 n 
+0000124067 00000 n 
 0000001987 00000 n 
-0000124137 00000 n 
+0000124133 00000 n 
 0000002124 00000 n 
-0000124201 00000 n 
+0000124197 00000 n 
 0000002260 00000 n 
-0000124267 00000 n 
+0000124263 00000 n 
 0000002397 00000 n 
-0000124333 00000 n 
+0000124329 00000 n 
 0000002534 00000 n 
-0000124397 00000 n 
+0000124393 00000 n 
 0000002670 00000 n 
-0000124461 00000 n 
+0000124457 00000 n 
 0000002807 00000 n 
-0000124525 00000 n 
+0000124521 00000 n 
 0000002944 00000 n 
-0000124589 00000 n 
+0000124585 00000 n 
 0000003079 00000 n 
-0000124656 00000 n 
+0000124652 00000 n 
 0000003216 00000 n 
-0000124721 00000 n 
+0000124717 00000 n 
 0000003353 00000 n 
-0000124787 00000 n 
+0000124783 00000 n 
 0000003488 00000 n 
-0000124854 00000 n 
+0000124850 00000 n 
 0000003625 00000 n 
-0000124921 00000 n 
+0000124917 00000 n 
 0000003762 00000 n 
-0000124986 00000 n 
+0000124982 00000 n 
 0000003898 00000 n 
-0000125053 00000 n 
+0000125049 00000 n 
 0000004035 00000 n 
-0000125120 00000 n 
+0000125116 00000 n 
 0000004172 00000 n 
-0000125186 00000 n 
+0000125182 00000 n 
 0000004309 00000 n 
 0000006940 00000 n 
 0000007063 00000 n 
@@ -3106,191 +3106,191 @@ xref
 0000058186 00000 n 
 0000058312 00000 n 
 0000058373 00000 n 
-0000125251 00000 n 
+0000125247 00000 n 
 0000058511 00000 n 
-0000058755 00000 n 
-0000058992 00000 n 
-0000059216 00000 n 
-0000059411 00000 n 
-0000062140 00000 n 
-0000062266 00000 n 
-0000062335 00000 n 
-0000062534 00000 n 
-0000062771 00000 n 
-0000063011 00000 n 
-0000063208 00000 n 
-0000063445 00000 n 
-0000063640 00000 n 
-0000066281 00000 n 
-0000066407 00000 n 
-0000066476 00000 n 
-0000066673 00000 n 
-0000066870 00000 n 
-0000067066 00000 n 
-0000067261 00000 n 
-0000067458 00000 n 
-0000067656 00000 n 
-0000070301 00000 n 
-0000070427 00000 n 
-0000070472 00000 n 
-0000070726 00000 n 
-0000070983 00000 n 
-0000071181 00000 n 
-0000073988 00000 n 
-0000074114 00000 n 
-0000074191 00000 n 
-0000074419 00000 n 
-0000074677 00000 n 
-0000074882 00000 n 
-0000075155 00000 n 
-0000075430 00000 n 
-0000075705 00000 n 
-0000075986 00000 n 
-0000078776 00000 n 
-0000078902 00000 n 
-0000078979 00000 n 
-0000079232 00000 n 
-0000079525 00000 n 
-0000079811 00000 n 
-0000080001 00000 n 
-0000080206 00000 n 
-0000080453 00000 n 
-0000080654 00000 n 
-0000083136 00000 n 
-0000083262 00000 n 
-0000083307 00000 n 
-0000083530 00000 n 
-0000083775 00000 n 
-0000084003 00000 n 
-0000086444 00000 n 
-0000086570 00000 n 
-0000086663 00000 n 
-0000086850 00000 n 
-0000087079 00000 n 
-0000087314 00000 n 
-0000087525 00000 n 
-0000087733 00000 n 
-0000087908 00000 n 
-0000088103 00000 n 
-0000088277 00000 n 
-0000088453 00000 n 
-0000091137 00000 n 
-0000091263 00000 n 
-0000091356 00000 n 
-0000091575 00000 n 
-0000091812 00000 n 
-0000092077 00000 n 
-0000092357 00000 n 
-0000092570 00000 n 
-0000092895 00000 n 
-0000093217 00000 n 
-0000093403 00000 n 
-0000093601 00000 n 
-0000095943 00000 n 
-0000096053 00000 n 
-0000098348 00000 n 
-0000098458 00000 n 
-0000100849 00000 n 
-0000100959 00000 n 
-0000103301 00000 n 
-0000103411 00000 n 
-0000105652 00000 n 
-0000105762 00000 n 
-0000108016 00000 n 
-0000108126 00000 n 
-0000109376 00000 n 
-0000109486 00000 n 
-0000111430 00000 n 
-0000111540 00000 n 
-0000111990 00000 n 
-0000125311 00000 n 
-0000112100 00000 n 
-0000112236 00000 n 
-0000112429 00000 n 
-0000112587 00000 n 
-0000112803 00000 n 
-0000113087 00000 n 
-0000113257 00000 n 
-0000113407 00000 n 
-0000113583 00000 n 
-0000113899 00000 n 
-0000125365 00000 n 
-0000114089 00000 n 
-0000125432 00000 n 
-0000114283 00000 n 
-0000125497 00000 n 
-0000114475 00000 n 
-0000125564 00000 n 
-0000114690 00000 n 
-0000125631 00000 n 
-0000114858 00000 n 
-0000125698 00000 n 
-0000115065 00000 n 
-0000125763 00000 n 
-0000115269 00000 n 
-0000125829 00000 n 
-0000115446 00000 n 
-0000125896 00000 n 
-0000115686 00000 n 
-0000125963 00000 n 
-0000115883 00000 n 
-0000126029 00000 n 
-0000116080 00000 n 
-0000126097 00000 n 
-0000116259 00000 n 
-0000116465 00000 n 
-0000116686 00000 n 
-0000116970 00000 n 
-0000126165 00000 n 
-0000117303 00000 n 
-0000117469 00000 n 
-0000126231 00000 n 
-0000117684 00000 n 
-0000126297 00000 n 
-0000117860 00000 n 
-0000118048 00000 n 
-0000126365 00000 n 
-0000118269 00000 n 
-0000126431 00000 n 
-0000118514 00000 n 
-0000118702 00000 n 
-0000126499 00000 n 
-0000118974 00000 n 
-0000126567 00000 n 
-0000119138 00000 n 
-0000126635 00000 n 
-0000119365 00000 n 
-0000126701 00000 n 
-0000119520 00000 n 
-0000126769 00000 n 
-0000119741 00000 n 
-0000126835 00000 n 
-0000119926 00000 n 
-0000126903 00000 n 
-0000120153 00000 n 
-0000126971 00000 n 
-0000120454 00000 n 
-0000127037 00000 n 
-0000120717 00000 n 
-0000127105 00000 n 
-0000120943 00000 n 
-0000127173 00000 n 
-0000121134 00000 n 
-0000127241 00000 n 
-0000121387 00000 n 
-0000127309 00000 n 
-0000121632 00000 n 
-0000121823 00000 n 
-0000122092 00000 n 
-0000122262 00000 n 
-0000122447 00000 n 
-0000122612 00000 n 
-0000122726 00000 n 
-0000122837 00000 n 
-0000122949 00000 n 
-0000123058 00000 n 
-0000123165 00000 n 
-0000123282 00000 n 
+0000058753 00000 n 
+0000058988 00000 n 
+0000059212 00000 n 
+0000059407 00000 n 
+0000062136 00000 n 
+0000062262 00000 n 
+0000062331 00000 n 
+0000062530 00000 n 
+0000062767 00000 n 
+0000063007 00000 n 
+0000063204 00000 n 
+0000063441 00000 n 
+0000063636 00000 n 
+0000066277 00000 n 
+0000066403 00000 n 
+0000066472 00000 n 
+0000066669 00000 n 
+0000066866 00000 n 
+0000067062 00000 n 
+0000067257 00000 n 
+0000067454 00000 n 
+0000067652 00000 n 
+0000070297 00000 n 
+0000070423 00000 n 
+0000070468 00000 n 
+0000070722 00000 n 
+0000070979 00000 n 
+0000071177 00000 n 
+0000073984 00000 n 
+0000074110 00000 n 
+0000074187 00000 n 
+0000074415 00000 n 
+0000074673 00000 n 
+0000074878 00000 n 
+0000075151 00000 n 
+0000075426 00000 n 
+0000075701 00000 n 
+0000075982 00000 n 
+0000078772 00000 n 
+0000078898 00000 n 
+0000078975 00000 n 
+0000079228 00000 n 
+0000079521 00000 n 
+0000079807 00000 n 
+0000079997 00000 n 
+0000080202 00000 n 
+0000080449 00000 n 
+0000080650 00000 n 
+0000083132 00000 n 
+0000083258 00000 n 
+0000083303 00000 n 
+0000083526 00000 n 
+0000083771 00000 n 
+0000083999 00000 n 
+0000086440 00000 n 
+0000086566 00000 n 
+0000086659 00000 n 
+0000086846 00000 n 
+0000087075 00000 n 
+0000087310 00000 n 
+0000087521 00000 n 
+0000087729 00000 n 
+0000087904 00000 n 
+0000088099 00000 n 
+0000088273 00000 n 
+0000088449 00000 n 
+0000091133 00000 n 
+0000091259 00000 n 
+0000091352 00000 n 
+0000091571 00000 n 
+0000091808 00000 n 
+0000092073 00000 n 
+0000092353 00000 n 
+0000092566 00000 n 
+0000092891 00000 n 
+0000093213 00000 n 
+0000093399 00000 n 
+0000093597 00000 n 
+0000095939 00000 n 
+0000096049 00000 n 
+0000098344 00000 n 
+0000098454 00000 n 
+0000100845 00000 n 
+0000100955 00000 n 
+0000103297 00000 n 
+0000103407 00000 n 
+0000105648 00000 n 
+0000105758 00000 n 
+0000108012 00000 n 
+0000108122 00000 n 
+0000109372 00000 n 
+0000109482 00000 n 
+0000111426 00000 n 
+0000111536 00000 n 
+0000111986 00000 n 
+0000125307 00000 n 
+0000112096 00000 n 
+0000112232 00000 n 
+0000112425 00000 n 
+0000112583 00000 n 
+0000112799 00000 n 
+0000113083 00000 n 
+0000113253 00000 n 
+0000113403 00000 n 
+0000113579 00000 n 
+0000113895 00000 n 
+0000125361 00000 n 
+0000114085 00000 n 
+0000125428 00000 n 
+0000114279 00000 n 
+0000125493 00000 n 
+0000114471 00000 n 
+0000125560 00000 n 
+0000114686 00000 n 
+0000125627 00000 n 
+0000114854 00000 n 
+0000125694 00000 n 
+0000115061 00000 n 
+0000125759 00000 n 
+0000115265 00000 n 
+0000125825 00000 n 
+0000115442 00000 n 
+0000125892 00000 n 
+0000115682 00000 n 
+0000125959 00000 n 
+0000115879 00000 n 
+0000126025 00000 n 
+0000116076 00000 n 
+0000126093 00000 n 
+0000116255 00000 n 
+0000116461 00000 n 
+0000116682 00000 n 
+0000116966 00000 n 
+0000126161 00000 n 
+0000117299 00000 n 
+0000117465 00000 n 
+0000126227 00000 n 
+0000117680 00000 n 
+0000126293 00000 n 
+0000117856 00000 n 
+0000118044 00000 n 
+0000126361 00000 n 
+0000118265 00000 n 
+0000126427 00000 n 
+0000118510 00000 n 
+0000118698 00000 n 
+0000126495 00000 n 
+0000118970 00000 n 
+0000126563 00000 n 
+0000119134 00000 n 
+0000126631 00000 n 
+0000119361 00000 n 
+0000126697 00000 n 
+0000119516 00000 n 
+0000126765 00000 n 
+0000119737 00000 n 
+0000126831 00000 n 
+0000119922 00000 n 
+0000126899 00000 n 
+0000120149 00000 n 
+0000126967 00000 n 
+0000120450 00000 n 
+0000127033 00000 n 
+0000120713 00000 n 
+0000127101 00000 n 
+0000120939 00000 n 
+0000127169 00000 n 
+0000121130 00000 n 
+0000127237 00000 n 
+0000121383 00000 n 
+0000127305 00000 n 
+0000121628 00000 n 
+0000121819 00000 n 
+0000122088 00000 n 
+0000122258 00000 n 
+0000122443 00000 n 
+0000122608 00000 n 
+0000122722 00000 n 
+0000122833 00000 n 
+0000122945 00000 n 
+0000123054 00000 n 
+0000123161 00000 n 
+0000123278 00000 n 
 trailer
 <<
 /Size 332
@@ -3298,5 +3298,5 @@ trailer
 /Info 4 0 R
 >>
 startxref
-127375
+127371
 %%EOF

+ 3 - 3
docs/native_libraries.html

@@ -221,10 +221,10 @@ document.write("Last Published: " + document.lastModified);
 <h2 class="h3">Purpose</h2>
 <div class="section">
 <p>Hadoop has native implementations of certain components for reasons of 
-      both performace &amp; non-availability of Java implementations. These 
+      both performance and non-availability of Java implementations. These 
       components are available in a single, dynamically-linked, native library. 
       On the *nix platform it is <em>libhadoop.so</em>. This document describes 
-      the usage &amp; details on how to build the native libraries.</p>
+      the usage and details on how to build the native libraries.</p>
 </div>
     
     
@@ -273,7 +273,7 @@ document.write("Last Published: " + document.lastModified);
         </li>
         
 <li>
-          Ensure you have either or both of <strong>&gt;zlib-1.2</strong> and 
+          Make sure you have either or both of <strong>&gt;zlib-1.2</strong> and 
           <strong>&gt;lzo2.0</strong> packages for your platform installed; 
           depending on your needs.
         </li>
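With the zlib/lzo prerequisites from the list above in place, the native library is built through the standard Ant build; a sketch following the native_libraries doc of this era (the target name is an assumption):

```shell
# Hedged sketch: build libhadoop.so from a Hadoop source checkout.
# The compile.native property is per the native_libraries doc;
# the chosen target is illustrative.
ant -Dcompile.native=true compile
```

The resulting libhadoop.so then ends up under the build tree's native output directory for your platform.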

File diff suppressed because it is too large
+ 0 - 0
docs/native_libraries.pdf


+ 1 - 1
docs/quickstart.html

@@ -277,7 +277,7 @@ document.write("Last Published: " + document.lastModified);
 <ol>
           
 <li>
-            Java<sup>TM</sup> 1.5.x, preferably from Sun, must be installed.
+            Java<sup>TM</sup> 1.6.x, preferably from Sun, must be installed.
           </li>
           
 <li>

File diff suppressed because it is too large
+ 0 - 0
docs/quickstart.pdf


+ 6 - 6
src/docs/src/documentation/content/xdocs/distcp.xml

@@ -29,10 +29,10 @@
       <title>Overview</title>
 
       <p>DistCp (distributed copy) is a tool used for large inter/intra-cluster
-      copying. It uses map/reduce to effect its distribution, error
-      handling/recovery, and reporting. It expands a list of files and
+      copying. It uses Map/Reduce to effect its distribution, error
+      handling and recovery, and reporting. It expands a list of files and
       directories into input to map tasks, each of which will copy a partition
-      of the files specified in the source list. Its map/reduce pedigree has
+      of the files specified in the source list. Its Map/Reduce pedigree has
       endowed it with some quirks in both its semantics and execution. The
       purpose of this document is to offer guidance for common tasks and to
       elucidate its model.</p>
@@ -85,14 +85,14 @@
         attempt (see <a href="#etc">Appendix</a>).</p>
 
         <p>It is important that each TaskTracker can reach and communicate with
-        both the source and destination filesystems. For hdfs, both the source
+        both the source and destination file systems. For HDFS, both the source
         and destination must be running the same version of the protocol or use
         a backwards-compatible protocol (see <a href="#cpver">Copying Between
         Versions</a>).</p>
 
         <p>After a copy, it is recommended that one generates and cross-checks
         a listing of the source and destination to verify that the copy was
-        truly successful. Since DistCp employs both map/reduce and the
+        truly successful. Since DistCp employs both Map/Reduce and the
         FileSystem API, issues in or between any of the three could adversely
         and silently affect the copy. Some have had success running with
         <code>-update</code> enabled to perform a second pass, but users should
@@ -253,7 +253,7 @@
           simultaneous copies nor the overall throughput.</p>
 
           <p>If <code>-m</code> is not specified, DistCp will attempt to
-          schedule work for <code>min(total_bytes / bytes.per.map, 20 *
+          schedule work for <code>min (total_bytes / bytes.per.map, 20 *
           num_task_trackers)</code> where <code>bytes.per.map</code> defaults
           to 256MB.</p>
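The default map-count rule quoted in this hunk can be sketched as a small Python function. This is illustrative only, not Hadoop's actual code: the function name is invented, and integer division is an assumption about rounding that the documentation does not specify.

```python
def distcp_default_maps(total_bytes, num_task_trackers,
                        bytes_per_map=256 * 1024 * 1024):
    """Sketch of DistCp's default when -m is not given:
    min(total_bytes / bytes.per.map, 20 * num_task_trackers),
    with bytes.per.map defaulting to 256MB."""
    # Integer division is an assumption; the doc gives only the formula.
    return min(total_bytes // bytes_per_map, 20 * num_task_trackers)

# Copying 1 GB on a 10-tracker cluster: min(4, 200) -> 4 maps.
# A very large copy on a 2-tracker cluster is capped at 20 * 2 = 40 maps.
```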
 

+ 14 - 12
src/docs/src/documentation/content/xdocs/hadoop_archives.xml

@@ -24,7 +24,7 @@
         <title> What are Hadoop archives? </title>
         <p>
         Hadoop archives are special format archives. A Hadoop archive
-        maps to a FileSystem directory. A Hadoop archive always has a *.har
+        maps to a file system directory. A Hadoop archive always has a *.har
         extension. A Hadoop archive directory contains metadata (in the form 
         of _index and _masterindex) and data (part-*) files. The _index file contains
         the name of the files that are part of the archive and the location
@@ -37,25 +37,27 @@
         <code>Usage: hadoop archive -archiveName name &lt;src&gt;* &lt;dest&gt;</code>
         </p>
         <p>
-        -archiveName is the name of the archive you would like to create. An example would be 
-        foo.har. The name should have a *.har extension. The inputs are filesystem pathnames which 
-        work as usual with regular expressions. The destination directory would contain the archive.
-        Note that this is a Map Reduce job that creates the archives. You would need a map reduce cluster
-        to run this. The following is an example:</p>
+        -archiveName is the name of the archive you would like to create. 
+        An example would be foo.har. The name should have a *.har extension. 
+        The inputs are file system pathnames which work as usual with regular
+        expressions. The destination directory would contain the archive.
+        Note that this is a Map/Reduce job that creates the archives. You would
+        need a map reduce cluster to run this. The following is an example:</p>
         <p>
         <code>hadoop archive -archiveName foo.har /user/hadoop/dir1 /user/hadoop/dir2 /user/zoo/</code>
         </p><p>
-        In the above example /user/hadoop/dir1 and /user/hadoop/dir2 will be archived in the following
-        filesystem directory -- /user/zoo/foo.har. The sources are not changed or removed when an archive
-        is created.
+        In the above example /user/hadoop/dir1 and /user/hadoop/dir2 will be
+        archived in the following file system directory -- /user/zoo/foo.har.
+        The sources are not changed or removed when an archive is created.
         </p>
         </section>
         <section>
         <title> How to look up files in archives? </title>
         <p>
-        The archive exposes itself as a filesystem layer. So all the fs shell commands in the archives work but 
-        with a different URI. Also, note that archives are immutable. So, rename's, deletes and creates return an error. 
-        URI for Hadoop Archives is 
+        The archive exposes itself as a file system layer. So all the fs shell
+        commands in the archives work but with a different URI. Also, note that
+        archives are immutable. So, rename's, deletes and creates return
+        an error. URI for Hadoop Archives is 
         </p><p><code>har://scheme-hostname:port/archivepath/fileinarchive</code></p><p>
         If no scheme is provided it assumes the underlying filesystem. 
         In that case the URI would look like 
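The `har://scheme-hostname:port/archivepath/fileinarchive` form described in this hunk can be illustrated with a small helper. The helper itself is hypothetical (it is not a Hadoop API); only the URI shape and the `hdfs-` scheme prefix for an archive stored on HDFS come from the documentation.

```python
def har_uri(namenode_host, port, archive_path, file_in_archive):
    """Hypothetical helper: build the documented
    har://scheme-hostname:port/archivepath/fileinarchive form,
    assuming the underlying file system is HDFS."""
    # archive_path is expected to be absolute (leading '/').
    return "har://hdfs-{0}:{1}{2}/{3}".format(
        namenode_host, port, archive_path, file_in_archive)

# e.g. a file archived by the example command above:
# har_uri("namenode", 8020, "/user/zoo/foo.har", "dir1/file1")
```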

+ 3 - 3
src/docs/src/documentation/content/xdocs/native_libraries.xml

@@ -29,10 +29,10 @@
       <title>Purpose</title>
       
       <p>Hadoop has native implementations of certain components for reasons of 
-      both performace &amp; non-availability of Java implementations. These 
+      both performance and non-availability of Java implementations. These 
       components are available in a single, dynamically-linked, native library. 
       On the *nix platform it is <em>libhadoop.so</em>. This document describes 
-      the usage &amp; details on how to build the native libraries.</p>
+      the usage and details on how to build the native libraries.</p>
     </section>
     
     <section>
@@ -68,7 +68,7 @@
           <a href="#Building+Native+Hadoop+Libraries">build</a> them yourself.
         </li>
         <li>
-          Ensure you have either or both of <strong>&gt;zlib-1.2</strong> and 
+          Make sure you have either or both of <strong>&gt;zlib-1.2</strong> and 
           <strong>&gt;lzo2.0</strong> packages for your platform installed; 
           depending on your needs.
         </li>

Some files were not shown because too many files changed in this diff.