
0.23.1 release notes

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23@1241736 13f79535-47bb-0310-9956-ffa450edef68
Matthew Foley 13 years ago
parent
commit
13b0acccfd

+ 2243 - 2
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html

@@ -2,7 +2,7 @@
 <html>
 <head>
 <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>Hadoop 0.23.0 Release Notes</title>
+<title>Hadoop 0.23.1 Release Notes</title>
 <STYLE type="text/css">
 		H1 {font-family: sans-serif}
 		H2 {font-family: sans-serif; margin-left: 7mm}
@@ -10,10 +10,2251 @@
 	</STYLE>
 </head>
 <body>
-<h1>Hadoop 0.23.0 Release Notes</h1>
+<h1>Hadoop 0.23.1 Release Notes</h1>
 		These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
 
 <a name="changes"/>
+<h2>Changes since Hadoop 0.23.0</h2>
+
+<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
+<ul>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7348">HADOOP-7348</a>.
+     Major improvement reported by xiexianshan and fixed by xiexianshan (fs)<br>
+     <b>Modify the option of FsShell getmerge from [addnl] to [-nl] for more comprehensive</b><br>
+     <blockquote>                                              The &#39;fs -getmerge&#39; tool now uses a -nl flag to indicate whether a newline should be added at the end of each file, replacing the &#39;addnl&#39; boolean argument that was used earlier.
+
+      
+</blockquote></li>
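As a sketch of the changed invocation (paths are hypothetical placeholders, not from the source):

```shell
# before (0.23.0): trailing boolean argument controlled the newline
hadoop fs -getmerge /user/alice/output merged.txt addnl
# after (0.23.1): use the -nl flag instead
hadoop fs -getmerge -nl /user/alice/output merged.txt
```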
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7802">HADOOP-7802</a>.
+     Major bug reported by bmahe and fixed by bmahe <br>
+     <b>Hadoop scripts unconditionally source &quot;$bin&quot;/../libexec/hadoop-config.sh.</b><br>
+     <blockquote>                    Here is a patch to enable this behavior<br/>
+
+
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7963">HADOOP-7963</a>.
+     Blocker bug reported by tgraves and fixed by sseth <br>
+     <b>test failures: TestViewFileSystemWithAuthorityLocalFileSystem and TestViewFileSystemLocalFileSystem</b><br>
+     <blockquote>                                              Fix ViewFS to catch a null canonical service-name and pass tests TestViewFileSystem*
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7986">HADOOP-7986</a>.
+     Major bug reported by mahadev and fixed by mahadev <br>
+     <b>Add config for History Server protocol in hadoop-policy for service level authorization.</b><br>
+     <blockquote>                                              Adding config for MapReduce History Server protocol in hadoop-policy.xml for service level authorization.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-1314">HDFS-1314</a>.
+     Minor bug reported by karims and fixed by sho.shimauchi <br>
+     <b>dfs.blocksize accepts only absolute value</b><br>
+     <blockquote>                                              The default blocksize property &#39;dfs.blocksize&#39; now accepts unit suffixes in addition to a plain byte length. Values such as &quot;10k&quot;, &quot;128m&quot;, and &quot;1g&quot; may now be provided instead of just the number of bytes, as before.
+
+      
+</blockquote></li>
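The suffix convention can be illustrated with a minimal sketch; this is not Hadoop's actual parser (the codebase handles it internally during configuration parsing), just the binary-prefix arithmetic the note describes:

```python
# Illustrative sketch of the dfs.blocksize unit-suffix convention:
# "10k" -> 10 * 2**10, "128m" -> 128 * 2**20, "1g" -> 1 * 2**30,
# and a plain digit string is still interpreted as raw bytes.
def parse_blocksize(value: str) -> int:
    multipliers = {"k": 2**10, "m": 2**20, "g": 2**30, "t": 2**40}
    v = value.strip().lower()
    if v and v[-1] in multipliers:
        return int(v[:-1]) * multipliers[v[-1]]
    return int(v)
```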
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2129">HDFS-2129</a>.
+     Major sub-task reported by tlipcon and fixed by tlipcon (hdfs client, performance)<br>
+     <b>Simplify BlockReader to not inherit from FSInputChecker</b><br>
+     <blockquote>                                              BlockReader has been reimplemented to use direct byte buffers. If you use a custom socket factory, it must generate sockets that have associated Channels.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2130">HDFS-2130</a>.
+     Major sub-task reported by tlipcon and fixed by tlipcon (hdfs client)<br>
+     <b>Switch default checksum to CRC32C</b><br>
+     <blockquote>                                              The default checksum algorithm used on HDFS is now CRC32C. Data from previous versions of Hadoop can still be read backwards-compatibly.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2246">HDFS-2246</a>.
+     Major improvement reported by sanjay.radia and fixed by jnp <br>
+     <b>Short-circuit local client reads to a Datanode&#39;s files directly</b><br>
+     <blockquote>                    1. New configurations:<br/>
+
+a. dfs.block.local-path-access.user is the key in the datanode configuration to specify the user allowed to do short-circuit reads.<br/>
+
+b. dfs.client.read.shortcircuit is the key to enable short-circuit reads in the client-side configuration.<br/>
+
+c. dfs.client.read.shortcircuit.skip.checksum is the key to bypass the checksum check on the client side.<br/>
+
+2. By default none of the above are enabled, so short-circuit reads will not kick in.<br/>
+
+3. If security is on, the feature can be used only by a user that has Kerberos credentials at the client; therefore MapReduce tasks cannot benefit from it in general.<br/>
+
+
+</blockquote></li>
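As a hedged sketch, the three keys above would be set in hdfs-site.xml roughly as follows (the user name is a placeholder; by default all of these are off):

```xml
<!-- datanode side: user permitted to read block files directly (placeholder name) -->
<property>
  <name>dfs.block.local-path-access.user</name>
  <value>hbase</value>
</property>
<!-- client side: enable short-circuit reads -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<!-- client side, optional: bypass checksum verification -->
<property>
  <name>dfs.client.read.shortcircuit.skip.checksum</name>
  <value>false</value>
</property>
```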
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2316">HDFS-2316</a>.
+     Major new feature reported by szetszwo and fixed by szetszwo <br>
+     <b>[umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP</b><br>
+     <blockquote>                    Provide WebHDFS as a complete FileSystem implementation for accessing HDFS over HTTP.<br/>
+
+The previous hftp feature was a read-only FileSystem and did not provide &quot;write&quot; access.
+</blockquote></li>
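WebHDFS exposes filesystem operations as plain HTTP requests under a /webhdfs/v1 prefix. A small sketch of how such a request URL is formed (host and port are placeholder assumptions, not from the source):

```python
# Sketch: build a WebHDFS REST URL of the shape
#   http://<host>:<port>/webhdfs/v1/<path>?op=<OPERATION>
# Host and port below are placeholders for a real NameNode address.
def webhdfs_url(host: str, port: int, path: str, op: str) -> str:
    return f"http://{host}:{port}/webhdfs/v1{path}?op={op}"
```

For example, listing a directory maps to the LISTSTATUS operation on that path.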
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-778">MAPREDUCE-778</a>.
+     Major new feature reported by hong.tang and fixed by amar_kamat (tools/rumen)<br>
+     <b>[Rumen] Need a standalone JobHistory log anonymizer</b><br>
+     <blockquote>                                              Added an anonymizer tool to Rumen. Anonymizer takes a Rumen trace file and/or topology as input. It supports persistence and plugins to override the default behavior.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2733">MAPREDUCE-2733</a>.
+     Major task reported by vinaythota and fixed by vinaythota <br>
+     <b>Gridmix v3 cpu emulation system tests.</b><br>
+     <blockquote>                                              Adds system tests for the CPU emulation feature in Gridmix3.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2765">MAPREDUCE-2765</a>.
+     Major new feature reported by mithun and fixed by mithun (distcp, mrv2)<br>
+     <b>DistCp Rewrite</b><br>
+     <blockquote>                                              DistCpV2 added to hadoop-tools.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2784">MAPREDUCE-2784</a>.
+     Major bug reported by amar_kamat and fixed by amar_kamat (contrib/gridmix)<br>
+     <b>[Gridmix] TestGridmixSummary fails with NPE when run in DEBUG mode.</b><br>
+     <blockquote>                                              Fixed bugs in ExecutionSummarizer and ResourceUsageMatcher.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2863">MAPREDUCE-2863</a>.
+     Blocker improvement reported by acmurthy and fixed by tgraves (mrv2, nodemanager, resourcemanager)<br>
+     <b>Support web-services for RM &amp; NM</b><br>
+     <blockquote>                                              Support for web-services in YARN and MR components.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2950">MAPREDUCE-2950</a>.
+     Major bug reported by amar_kamat and fixed by ravidotg (contrib/gridmix)<br>
+     <b>[Gridmix] TestUserResolve fails in trunk</b><br>
+     <blockquote>                                              Fixes bug in TestUserResolve.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3102">MAPREDUCE-3102</a>.
+     Major sub-task reported by vinodkv and fixed by hitesh (mrv2, security)<br>
+     <b>NodeManager should fail fast with wrong configuration or permissions for LinuxContainerExecutor</b><br>
+     <blockquote>                                              Changed NodeManager to fail fast when LinuxContainerExecutor has wrong configuration or permissions.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3215">MAPREDUCE-3215</a>.
+     Minor sub-task reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk</b><br>
+     <blockquote>                    Reenabled and fixed bugs in the failing test TestNoJobSetupCleanup.<br/>
+
+
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3217">MAPREDUCE-3217</a>.
+     Minor sub-task reported by hitesh and fixed by devaraj.k (mrv2, test)<br>
+     <b>ant test TestAuditLogger fails on trunk</b><br>
+     <blockquote>                    Reenabled and fixed bugs in the failing ant test TestAuditLogger.<br/>
+
+
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3219">MAPREDUCE-3219</a>.
+     Minor sub-task reported by hitesh and fixed by hitesh (mrv2, test)<br>
+     <b>ant test TestDelegationToken failing on trunk</b><br>
+     <blockquote>                    Reenabled and fixed bugs in the failing test TestDelegationToken.<br/>
+
+
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3221">MAPREDUCE-3221</a>.
+     Minor sub-task reported by hitesh and fixed by devaraj.k (mrv2, test)<br>
+     <b>ant test TestSubmitJob failing on trunk</b><br>
+     <blockquote>                                              Fixed a bug in TestSubmitJob.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3280">MAPREDUCE-3280</a>.
+     Major bug reported by vinodkv and fixed by vinodkv (applicationmaster, mrv2)<br>
+     <b>MR AM should not read the username from configuration</b><br>
+     <blockquote>                                              Removed the unnecessary job user-name configuration in mapred-site.xml.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3297">MAPREDUCE-3297</a>.
+     Major task reported by sseth and fixed by sseth (mrv2)<br>
+     <b>Move Log Related components from yarn-server-nodemanager to yarn-common</b><br>
+     <blockquote>                    Moved log related components into yarn-common so that HistoryServer and clients can use them without depending on the yarn-server-nodemanager module.<br/>
+
+
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3299">MAPREDUCE-3299</a>.
+     Minor improvement reported by sseth and fixed by jeagles (mrv2)<br>
+     <b>Add AMInfo table to the AM job page</b><br>
+     <blockquote>                                              Added AMInfo table to the MR AM job pages to list all the job-attempts when AM restarts and recovers.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3312">MAPREDUCE-3312</a>.
+     Major bug reported by revans2 and fixed by revans2 (mrv2)<br>
+     <b>Make MR AM not send a stopContainer w/o corresponding start container</b><br>
+     <blockquote>                                              Modified MR AM to not send a stop-container request for a container that isn&#39;t launched at all.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3325">MAPREDUCE-3325</a>.
+     Major improvement reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>Improvements to CapacityScheduler doc</b><br>
+     <blockquote>                                              Documentation changes only.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3333">MAPREDUCE-3333</a>.
+     Blocker bug reported by vinodkv and fixed by vinodkv (applicationmaster, mrv2)<br>
+     <b>MR AM for sort-job going out of memory</b><br>
+     <blockquote>                                              Fixed bugs in ContainerLauncher of MR AppMaster due to which per-container connections to NodeManager were lingering long enough to hit the ulimits on number of processes.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3339">MAPREDUCE-3339</a>.
+     Blocker bug reported by ramgopalnaali and fixed by sseth (mrv2)<br>
+     <b>Job hangs indefinitely if the child processes are killed on the NM; KILL_CONTAINER event type is continuously sent to non-existent containers</b><br>
+     <blockquote>                                              Fixed MR AM to stop considering node blacklisting after the number of nodes blacklisted crosses a threshold.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3342">MAPREDUCE-3342</a>.
+     Critical bug reported by tgraves and fixed by jeagles (jobhistoryserver, mrv2)<br>
+     <b>JobHistoryServer doesn&apos;t show job queue</b><br>
+     <blockquote>                                              Fixed JobHistoryServer to also show the job&#39;s queue name.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3345">MAPREDUCE-3345</a>.
+     Major bug reported by vinodkv and fixed by hitesh (mrv2, resourcemanager)<br>
+     <b>Race condition in ResourceManager causing TestContainerManagerSecurity to fail sometimes</b><br>
+     <blockquote>                                              Fixed a race condition in ResourceManager that was causing TestContainerManagerSecurity to fail sometimes.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3349">MAPREDUCE-3349</a>.
+     Blocker bug reported by vinodkv and fixed by amar_kamat (mrv2)<br>
+     <b>No rack-name logged in JobHistory for unsuccessful tasks</b><br>
+     <blockquote>                                              Unsuccessful tasks now log hostname and rackname to job history. 
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3355">MAPREDUCE-3355</a>.
+     Blocker bug reported by vinodkv and fixed by vinodkv (applicationmaster, mrv2)<br>
+     <b>AM scheduling hangs frequently with sort job on 350 nodes</b><br>
+     <blockquote>                                              Fixed MR AM&#39;s ContainerLauncher to handle node-command timeouts correctly.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3360">MAPREDUCE-3360</a>.
+     Critical improvement reported by kam_iitkgp and fixed by kamesh (mrv2)<br>
+     <b>Provide information about lost nodes in the UI.</b><br>
+     <blockquote>                                              Added information about lost/rebooted/decommissioned nodes on the webapps.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3368">MAPREDUCE-3368</a>.
+     Critical bug reported by rramya and fixed by hitesh (build, mrv2)<br>
+     <b>compile-mapred-test fails</b><br>
+     <blockquote>                                              Fixed ant test compilation.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3375">MAPREDUCE-3375</a>.
+     Major task reported by vinaythota and fixed by vinaythota <br>
+     <b>Memory Emulation system tests.</b><br>
+     <blockquote>                                              Added system tests to test the memory emulation feature in Gridmix.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3379">MAPREDUCE-3379</a>.
+     Major bug reported by sseth and fixed by sseth (mrv2, nodemanager)<br>
+     <b>LocalResourceTracker should not track deleted cache entries</b><br>
+     <blockquote>                                              Fixed LocalResourceTracker in NodeManager to remove deleted cache entries correctly.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3382">MAPREDUCE-3382</a>.
+     Critical bug reported by vinodkv and fixed by raviprak (applicationmaster, mrv2)<br>
+     <b>Network ACLs can prevent AMs to ping the Job-end notification URL</b><br>
+     <blockquote>                                              Enhanced MR AM to use a proxy to ping the job-end notification URL.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3387">MAPREDUCE-3387</a>.
+     Critical bug reported by revans2 and fixed by revans2 (mrv2)<br>
+     <b>A tracking URL of N/A before the app master is launched breaks oozie</b><br>
+     <blockquote>                                              Fixed AM&#39;s tracking URL to always go through the proxy, even before the job started, so that it works properly with oozie throughout the job execution.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3392">MAPREDUCE-3392</a>.
+     Blocker sub-task reported by johnvijoe and fixed by johnvijoe <br>
+     <b>Cluster.getDelegationToken() throws NPE if client.getDelegationToken() returns null.</b><br>
+     <blockquote>                                              Fixed Cluster&#39;s getDelegationToken&#39;s API to return null when there isn&#39;t a supported token.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3398">MAPREDUCE-3398</a>.
+     Blocker bug reported by sseth and fixed by sseth (mrv2, nodemanager)<br>
+     <b>Log Aggregation broken in Secure Mode</b><br>
+     <blockquote>                                              Fixed log aggregation to work correctly in secure mode. Contributed by Siddharth Seth.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3399">MAPREDUCE-3399</a>.
+     Blocker sub-task reported by sseth and fixed by sseth (mrv2, nodemanager)<br>
+     <b>ContainerLocalizer should request new resources after completing the current one</b><br>
+     <blockquote>                                              Modified ContainerLocalizer to send a heartbeat to NM immediately after downloading a resource instead of always waiting for a second.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3404">MAPREDUCE-3404</a>.
+     Critical bug reported by patwhitey2007 and fixed by eepayne (job submission, mrv2)<br>
+     <b>Speculative Execution: speculative map tasks launched even if -Dmapreduce.map.speculative=false</b><br>
+     <blockquote>                                              Corrected MR AM to honor speculative configuration and enable speculating either maps or reduces.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3407">MAPREDUCE-3407</a>.
+     Minor bug reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>Wrong jar getting used in TestMR*Jobs* for MiniMRYarnCluster</b><br>
+     <blockquote>                                              Fixed pom files to refer to the correct MR app-jar needed by the integration tests.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3412">MAPREDUCE-3412</a>.
+     Major bug reported by amar_kamat and fixed by amar_kamat <br>
+     <b>&apos;ant docs&apos; is broken</b><br>
+     <blockquote>                                              Fixes &#39;ant docs&#39; by removing stale references to capacity-scheduler docs.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3417">MAPREDUCE-3417</a>.
+     Blocker bug reported by tgraves and fixed by jeagles (mrv2)<br>
+     <b>job access controls not working app master and job history UI&apos;s</b><br>
+     <blockquote>                                              Fixed job-access-controls to work with MR AM and JobHistoryServer web-apps.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3426">MAPREDUCE-3426</a>.
+     Blocker sub-task reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>uber-jobs tried to write outputs into wrong dir</b><br>
+     <blockquote>                                              Fixed MR AM in uber mode to write map intermediate outputs in the correct directory to work properly in secure mode.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3462">MAPREDUCE-3462</a>.
+     Blocker bug reported by amar_kamat and fixed by raviprak (mrv2, test)<br>
+     <b>Job submission failing in JUnit tests</b><br>
+     <blockquote>                                              Fixed failing JUnit tests in Gridmix.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3481">MAPREDUCE-3481</a>.
+     Major improvement reported by amar_kamat and fixed by amar_kamat (contrib/gridmix)<br>
+     <b>[Gridmix] Improve STRESS mode locking</b><br>
+     <blockquote>                                              Modified Gridmix STRESS mode locking structure. The submitted thread and the polling thread now run simultaneously without blocking each other. 
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3484">MAPREDUCE-3484</a>.
+     Major bug reported by raviprak and fixed by raviprak (mr-am, mrv2)<br>
+     <b>JobEndNotifier is getting interrupted before completing all its retries.</b><br>
+     <blockquote>                                              Fixed JobEndNotifier to not get interrupted before completing all its retries.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3487">MAPREDUCE-3487</a>.
+     Critical bug reported by tgraves and fixed by jlowe (mrv2)<br>
+     <b>jobhistory web ui task counters no longer link to the single task counter page</b><br>
+     <blockquote>                                              Fixed JobHistory web-UI to display links to single task&#39;s counters&#39; page.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3490">MAPREDUCE-3490</a>.
+     Blocker bug reported by sseth and fixed by sharadag (mr-am, mrv2)<br>
+     <b>RMContainerAllocator counts failed maps towards Reduce ramp up</b><br>
+     <blockquote>                                              Fixed MapReduce AM to count failed maps also towards Reduce ramp up.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3511">MAPREDUCE-3511</a>.
+     Blocker sub-task reported by sseth and fixed by vinodkv (mr-am, mrv2)<br>
+     <b>Counters occupy a good part of AM heap</b><br>
+     <blockquote>                                              Removed a multitude of cloned/duplicate counters in the AM thereby reducing the AM heap size and preventing full GCs.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3512">MAPREDUCE-3512</a>.
+     Blocker sub-task reported by sseth and fixed by sseth (mr-am, mrv2)<br>
+     <b>Batch jobHistory disk flushes</b><br>
+     <blockquote>                                              Batched JobHistory flushes to DFS so that the AM is not slowed down by a flush for every event.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3519">MAPREDUCE-3519</a>.
+     Blocker sub-task reported by ravidotg and fixed by ravidotg (mrv2, nodemanager)<br>
+     <b>Deadlock in LocalDirsHandlerService and ShuffleHandler</b><br>
+     <blockquote>                                              Fixed a deadlock in NodeManager LocalDirectories&#39;s handling service.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3528">MAPREDUCE-3528</a>.
+     Major bug reported by sseth and fixed by sseth (mr-am, mrv2)<br>
+     <b>The task timeout check interval should be configurable independent of mapreduce.task.timeout</b><br>
+     <blockquote>                                              Fixed TaskHeartBeatHandler to use a new configuration for the thread loop interval separate from task-timeout configuration property.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3530">MAPREDUCE-3530</a>.
+     Blocker bug reported by karams and fixed by acmurthy (mrv2, resourcemanager, scheduler)<br>
+     <b>Sometimes NODE_UPDATE to the scheduler throws an NPE causing the scheduling to stop</b><br>
+     <blockquote>                                              Fixed an NPE occurring during scheduling in the ResourceManager.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3532">MAPREDUCE-3532</a>.
+     Critical bug reported by karams and fixed by kamesh (mrv2, nodemanager)<br>
+     <b>When 0 is provided as port number in yarn.nodemanager.webapp.address, NMs webserver component picks up random port, NM keeps on Reporting 0 port to RM</b><br>
+     <blockquote>                                              Modified NM to report correct http address when an ephemeral web port is configured.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3549">MAPREDUCE-3549</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>write api documentation for web service apis for RM, NM, mapreduce app master, and job history server</b><br>
+     <blockquote>                    new files added: A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebServicesIntro.apt.vm<br/>
+
+A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm<br/>
+
+A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm<br/>
+
+A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/MapredAppMasterRest.apt.vm<br/>
+
+A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/HistoryServerRest.apt.vm<br/>
+
+<br/>
+
+The hadoop-project/src/site/site.xml change is split into a separate patch.
+</blockquote></li>
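The documented web services live under a common ws/v1 prefix on each daemon. A hedged sketch of the endpoint roots described by the new pages (hosts and ports are placeholder assumptions; the exact resource paths are defined in the .apt.vm docs above):

```python
# Sketch: REST roots for the four documented daemons.
# Host/port values passed in are placeholders for real cluster addresses.
REST_ROOTS = {
    "resourcemanager": "/ws/v1/cluster",
    "nodemanager": "/ws/v1/node",
    "mapreduce_am": "/ws/v1/mapreduce",
    "historyserver": "/ws/v1/history",
}

def rest_url(daemon: str, host: str, port: int, suffix: str = "") -> str:
    # e.g. rest_url("resourcemanager", "rm.example.com", 8088, "/info")
    return f"http://{host}:{port}{REST_ROOTS[daemon]}{suffix}"
```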
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3564">MAPREDUCE-3564</a>.
+     Blocker bug reported by mahadev and fixed by sseth (mrv2)<br>
+     <b>TestStagingCleanup and TestJobEndNotifier are failing on trunk.</b><br>
+     <blockquote>                                              Fixed failures in TestStagingCleanup and TestJobEndNotifier tests.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3568">MAPREDUCE-3568</a>.
+     Critical sub-task reported by vinodkv and fixed by vinodkv (mr-am, mrv2, performance)<br>
+     <b>Optimize Job&apos;s progress calculations in MR AM</b><br>
+     <blockquote>                                              Optimized Job&#39;s progress calculations in MR AM.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3586">MAPREDUCE-3586</a>.
+     Blocker bug reported by vinodkv and fixed by vinodkv (mr-am, mrv2)<br>
+     <b>Lots of AMs hanging around in PIG testing</b><br>
+     <blockquote>                                              Modified CompositeService to avoid duplicate stop operations thereby solving race conditions in MR AM shutdown.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3597">MAPREDUCE-3597</a>.
+     Major improvement reported by ravidotg and fixed by ravidotg (tools/rumen)<br>
+     <b>Provide a way to access other info of history file from Rumentool</b><br>
+     <blockquote>                                              Rumen now provides {{Parsed*}} objects. These objects provide extra information that is not provided by {{Logged*}} objects.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3618">MAPREDUCE-3618</a>.
+     Major sub-task reported by sseth and fixed by sseth (mrv2, performance)<br>
+     <b>TaskHeartbeatHandler holds a global lock for all task-updates</b><br>
+     <blockquote>                                              Fixed TaskHeartbeatHandler to not hold a global lock for all task-updates.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3630">MAPREDUCE-3630</a>.
+     Critical task reported by amolkekre and fixed by mahadev (mrv2)<br>
+     <b>NullPointerException running teragen</b><br>
+     <blockquote>                                              Fixed a NullPointerException when running teragen.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3639">MAPREDUCE-3639</a>.
+     Blocker bug reported by sseth and fixed by sseth (mrv2)<br>
+     <b>TokenCache likely broken for FileSystems which don&apos;t issue delegation tokens</b><br>
+     <blockquote>                                              Fixed TokenCache to work with absent FileSystem canonical service-names.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3641">MAPREDUCE-3641</a>.
+     Blocker sub-task reported by acmurthy and fixed by acmurthy (mrv2, scheduler)<br>
+     <b>CapacityScheduler should be more conservative assigning off-switch requests</b><br>
+     <blockquote>                                              Made the CapacityScheduler more conservative so as to assign only one off-switch container per scheduling iteration.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3656">MAPREDUCE-3656</a>.
+     Blocker bug reported by karams and fixed by sseth (applicationmaster, mrv2, resourcemanager)<br>
+     <b>Sort job on 350 scale is consistently failing with latest MRV2 code </b><br>
+     <blockquote>                                              Fixed a race condition in the MR AM that was consistently failing the sort benchmark.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3699">MAPREDUCE-3699</a>.
+     Major bug reported by vinodkv and fixed by hitesh (mrv2)<br>
+     <b>Default RPC handlers are very low for YARN servers</b><br>
+     <blockquote>                                              Increased RPC handlers for all YARN servers to reasonable values for working at scale.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3703">MAPREDUCE-3703</a>.
+     Critical bug reported by eepayne and fixed by eepayne (mrv2, resourcemanager)<br>
+     <b>ResourceManager should provide node lists in JMX output</b><br>
+     <blockquote>                    New JMX bean in the ResourceManager provides a list of live NodeManagers:<br/>
+
+<br/>
+
+Hadoop:service=ResourceManager,name=RMNMInfo LiveNodeManagers
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3710">MAPREDUCE-3710</a>.
+     Major bug reported by sseth and fixed by sseth (mrv1, mrv2)<br>
+     <b>last split generated by FileInputFormat.getSplits may not have the best locality</b><br>
+     <blockquote>                                              Improved FileInputFormat to return better locality for the last split.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3711">MAPREDUCE-3711</a>.
+     Blocker sub-task reported by sseth and fixed by revans2 (mrv2)<br>
+     <b>AppMaster recovery for Medium to large jobs take long time</b><br>
+     <blockquote>                                              Fixed MR AM recovery so that only the output of a single selected task is recovered, reducing the unnecessarily bloated recovery time.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3713">MAPREDUCE-3713</a>.
+     Blocker bug reported by sseth and fixed by acmurthy (mrv2, resourcemanager)<br>
+     <b>Incorrect headroom reported to jobs</b><br>
+     <blockquote>                                              Fixed the way headroom is allocated to applications by the CapacityScheduler so that it deducts current usage per user rather than per application.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3714">MAPREDUCE-3714</a>.
+     Blocker bug reported by vinodkv and fixed by vinodkv (mrv2, task)<br>
+     <b>Reduce hangs in a corner case</b><br>
+     <blockquote>                                              Fixed EventFetcher and Fetcher threads to shut-down properly so that reducers don&#39;t hang in corner cases.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3716">MAPREDUCE-3716</a>.
+     Blocker bug reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>java.io.File.createTempFile fails in map/reduce tasks</b><br>
+     <blockquote>                                              Fixed YARN+MR so that MR jobs can use java.io.File.createTempFile to create temporary files as part of their tasks.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3720">MAPREDUCE-3720</a>.
+     Major bug reported by vinodkv and fixed by vinodkv (client, mrv2)<br>
+     <b>Command line listJobs should not visit each AM</b><br>
+     <blockquote>                    Changed bin/mapred job -list so that it no longer prints job-specific information that is not available at the RM.
+<br/>
+
+
+<br/>
+
+Very minor incompatibility in the command-line output, inevitable due to the MRv2 architecture.
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3732">MAPREDUCE-3732</a>.
+     Blocker bug reported by acmurthy and fixed by acmurthy (mrv2, resourcemanager, scheduler)<br>
+     <b>CS should only use &apos;activeUsers with pending requests&apos; for computing user-limits</b><br>
+     <blockquote>                                              Modified CapacityScheduler to use only users with pending requests for computing user-limits.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3752">MAPREDUCE-3752</a>.
+     Blocker bug reported by acmurthy and fixed by acmurthy (mrv2)<br>
+     <b>Headroom should be capped by queue max-cap</b><br>
+     <blockquote>                                              Modified application limits to include queue max-capacities besides the usual user limits.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3754">MAPREDUCE-3754</a>.
+     Major bug reported by vinodkv and fixed by vinodkv (mrv2, webapps)<br>
+     <b>RM webapp should have pages filtered based on App-state</b><br>
+     <blockquote>                                              Modified RM UI to filter applications based on state of the applications.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3760">MAPREDUCE-3760</a>.
+     Major bug reported by rramya and fixed by vinodkv (mrv2)<br>
+     <b>Blacklisted NMs should not appear in Active nodes list</b><br>
+     <blockquote>                                              Changed the active nodes list on the web UI and in metrics to exclude unhealthy nodes.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3774">MAPREDUCE-3774</a>.
+     Major bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>yarn-default.xml should be moved to hadoop-yarn-common.</b><br>
+     <blockquote>      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3784">MAPREDUCE-3784</a>.
+     Major bug reported by rramya and fixed by acmurthy (mrv2)<br>
+     <b>maxActiveApplications(|PerUser) per queue is too low for small clusters</b><br>
+     <blockquote>                                              Fixed CapacityScheduler so that maxActiveApplications and maxActiveApplicationsPerUser per queue are not too low for small clusters.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3804">MAPREDUCE-3804</a>.
+     Major bug reported by davet and fixed by davet (jobhistoryserver, mrv2, resourcemanager)<br>
+     <b>yarn webapp interface vulnerable to cross scripting attacks</b><br>
+     <blockquote>                                              Fixed a cross-site scripting (XSS) vulnerability in the webapp interface.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3808">MAPREDUCE-3808</a>.
+     Blocker bug reported by sseth and fixed by revans2 (mrv2)<br>
+     <b>NPE in FileOutputCommitter when running a 0 reduce job</b><br>
+     <blockquote>                                              Fixed an NPE in FileOutputCommitter for jobs with maps but no reduces.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3815">MAPREDUCE-3815</a>.
+     Critical sub-task reported by sseth and fixed by sseth (mrv2)<br>
+     <b>Data Locality suffers if the AM asks for containers using IPs instead of hostnames</b><br>
+     <blockquote>                                              Fixed MR AM to always use hostnames and never IPs when requesting containers so that the scheduler can correctly assign data-local containers.
+
+      
+</blockquote></li>
+
+</ul>
+
+<h3>Other Jiras (describe bug fixes and minor changes)</h3>
+<ul>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4515">HADOOP-4515</a>.
+     Minor improvement reported by abagri and fixed by sho.shimauchi <br>
+     <b>conf.getBoolean must be case insensitive</b><br>
+     <blockquote>Currently, if xx is set to &quot;TRUE&quot;, conf.getBoolean(&quot;xx&quot;, false) would return false. <br><br>conf.getBoolean should do an equalsIgnoreCase() instead of equals()<br><br>I am marking the change as incompatible because it does change semantics as pointed by Steve in HADOOP-4416</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6490">HADOOP-6490</a>.
+     Minor bug reported by zshao and fixed by umamaheswararao (fs)<br>
+     <b>Path.normalize should use StringUtils.replace in favor of String.replace</b><br>
+     <blockquote>in our environment, we are seeing that the JobClient is going out of memory because Path.normalizePath(String) is called several tens of thousands of times, and each time it calls &quot;String.replace&quot; twice.<br><br>java.lang.String.replace compiles a regex to do the job which is very costly.<br>We should use org.apache.commons.lang.StringUtils.replace which is much faster and consumes almost no extra memory.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6614">HADOOP-6614</a>.
+     Minor improvement reported by stevel@apache.org and fixed by jmhsieh (util)<br>
+     <b>RunJar should provide more diags when it can&apos;t create a temp file</b><br>
+     <blockquote>When you see a stack trace about permissions, it is better if the trace included the file/directory at fault:<br>{code}<br>Exception in thread &quot;main&quot; java.io.IOException: Permission denied<br>	at java.io.UnixFileSystem.createFileExclusively(Native Method)<br>	at java.io.File.checkAndCreate(File.java:1704)<br>	at java.io.File.createTempFile(File.java:1792)<br>	at org.apache.hadoop.util.RunJar.main(RunJar.java:147)<br>{code}<br><br>As it is, you need to go into the code, discover that it&apos;s {{${hadoop.tmp.dir}/hadoop-unja...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6840">HADOOP-6840</a>.
+     Minor improvement reported by nspiegelberg and fixed by nspiegelberg (fs, io)<br>
+     <b>Support non-recursive create() in FileSystem &amp; SequenceFile.Writer</b><br>
+     <blockquote>The proposed solution for HBASE-2312 requires the sequence file to handle a non-recursive create.  This is already supported by HDFS, but needs to have an equivalent FileSystem &amp; SequenceFile.Writer API.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6886">HADOOP-6886</a>.
+     Minor improvement reported by nspiegelberg and fixed by nspiegelberg (fs)<br>
+     <b>LocalFileSystem Needs createNonRecursive API</b><br>
+     <blockquote>While running sanity check tests for HBASE-2312, I noticed that HDFS-617 did not include createNonRecursive() support for the LocalFileSystem.  This is a problem for HBase, which allows the user to run over the LocalFS instead of HDFS for local cluster testing.  I think this only affects 0.20-append, but may affect the trunk based upon how exactly FileContext handles non-recursive creates.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7424">HADOOP-7424</a>.
+     Major improvement reported by eli and fixed by umamaheswararao <br>
+     <b>Log an error if the topology script doesn&apos;t handle multiple args</b><br>
+     <blockquote>ScriptBasedMapping#resolve currently warns and returns null if it passes n arguments to the topology script and gets back a different number of resolutions. This indicates a bug in the topology script (or its input) and therefore should be an error.<br><br>{code}<br>// invalid number of entries returned by the script<br>LOG.warn(&quot;Script &quot; + scriptName + &quot; returned &quot;<br>   + Integer.toString(m.size()) + &quot; values when &quot;<br>   + Integer.toString(names.size()) + &quot; were expected.&quot;);<br>return null;<br>{code}<br><br>There&apos;s on...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7470">HADOOP-7470</a>.
+     Minor improvement reported by stevel@apache.org and fixed by enis (util)<br>
+     <b>move up to Jackson 1.8.8</b><br>
+     <blockquote>I see that hadoop-core still depends on Jackson 1.0.1 -but that project is now up to 1.8.2 in releases. Upgrading will make it easier for other Jackson-using apps that are more up to date to keep their classpath consistent.<br><br>The patch would be updating the ivy file to pull in the later version; no test</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7504">HADOOP-7504</a>.
+     Trivial improvement reported by eli and fixed by qwertymaniac (metrics)<br>
+     <b>hadoop-metrics.properties missing some Ganglia31 options </b><br>
+     <blockquote>The &quot;jvm&quot;, &quot;rpc&quot;, and &quot;ugi&quot; sections of hadoop-metrics.properties should have Ganglia31 options like &quot;dfs&quot; and &quot;mapred&quot;</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7574">HADOOP-7574</a>.
+     Trivial improvement reported by xiexianshan and fixed by xiexianshan (fs)<br>
+     <b>Improvement for FSshell -stat</b><br>
+     <blockquote>Add two optional formats for FsShell -stat: %G for the group name of the owner and %U for the user name.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7590">HADOOP-7590</a>.
+     Major sub-task reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Mavenize streaming and MR examples</b><br>
+     <blockquote>MR1 code is still available in MR2 for testing contribs.<br><br>This is a temporary measure until the contrib tests are ported to MR2.<br><br>As a follow-up, the contrib projects themselves should be mavenized.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7657">HADOOP-7657</a>.
+     Major improvement reported by mrbsd and fixed by decster <br>
+     <b>Add support for LZ4 compression</b><br>
+     <blockquote>According to several benchmark sites, LZ4 seems to overtake other fast compression algorithms, especially in the decompression speed area. The interface is also trivial to integrate (http://code.google.com/p/lz4/source/browse/trunk/lz4.h) and there is no license issue.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7736">HADOOP-7736</a>.
+     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (fs)<br>
+     <b>Remove duplicate call of Path#normalizePath during initialization.</b><br>
+     <blockquote>Found during code reading on HADOOP-6490, there seems to be an unnecessary call of {{normalizePath(...)}} being made in the constructor {{Path(Path, Path)}}. Since {{initialize(...)}} normalizes its received path string already, it&apos;s unnecessary to do the same to the path parameter in the constructor&apos;s call.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7758">HADOOP-7758</a>.
+     Major improvement reported by tucu00 and fixed by tucu00 (fs)<br>
+     <b>Make GlobFilter class public</b><br>
+     <blockquote>Currently the GlobFilter class is package private.<br><br>As a generic filter it is quite useful (and I&apos;ve found myself doing cut&amp;paste of it a few times)</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7761">HADOOP-7761</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon (io, performance, util)<br>
+     <b>Improve performance of raw comparisons</b><br>
+     <blockquote>Guava has a nice implementation of lexicographical byte-array comparison that uses sun.misc.Unsafe to compare unsigned byte arrays long-at-a-time. Their benchmarks show it as being 2x more CPU-efficient than the equivalent pure-Java implementation. We can easily integrate this into WritableComparator.compareBytes to improve CPU performance in the shuffle.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7777">HADOOP-7777</a>.
+     Major improvement reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
+     <b>Implement a base class for DNSToSwitchMapping implementations that can offer extra topology information</b><br>
+     <blockquote>HDFS-2492 has identified a need for DNSToSwitchMapping implementations to provide a bit more topology information (e.g. whether or not there are multiple switches). This could be done by writing an extended interface, querying its methods if present and coming up with a default action if there is no extended interface. <br><br>Alternatively, we have a base class that all the standard mappings implement, with a boolean isMultiRack() method; all the standard subclasses would extend this, as could any...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7787">HADOOP-7787</a>.
+     Major bug reported by bmahe and fixed by bmahe (build)<br>
+     <b>Make source tarball use conventional name.</b><br>
+     <blockquote>When building binary and source tarballs, I get the following artifacts:<br>Binary tarball: hadoop-0.23.0-SNAPSHOT.tar.gz <br>Source tarball: hadoop-dist-0.23.0-SNAPSHOT-src.tar.gz<br><br>Notice the &quot;-dist&quot; right between &quot;hadoop&quot; and the version in the source tarball name.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7801">HADOOP-7801</a>.
+     Major bug reported by bmahe and fixed by bmahe (build)<br>
+     <b>HADOOP_PREFIX cannot be overriden</b><br>
+     <blockquote>hadoop-config.sh forces HADOOP_PREFIX to a specific value:<br>export HADOOP_PREFIX=`dirname &quot;$this&quot;`/..<br><br>It would be nice to make this overridable.<br></blockquote></li>
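The requested override behavior can be sketched with the POSIX `${VAR:-default}` idiom. This is a minimal illustration, not the actual hadoop-config.sh patch, and the paths shown are made up:

```shell
# Minimal sketch of an overridable HADOOP_PREFIX (illustrative paths, not
# the actual hadoop-config.sh change): fall back to the computed default
# only when the caller has not already set a value.
HADOOP_PREFIX="/opt/custom-hadoop"                 # user-provided override
HADOOP_PREFIX="${HADOOP_PREFIX:-/usr/lib/hadoop}"  # default is ignored here
export HADOOP_PREFIX
echo "$HADOOP_PREFIX"                              # prints: /opt/custom-hadoop
```

With the unconditional `export HADOOP_PREFIX=...` form quoted above, a user-supplied value is silently overwritten; the `:-` form preserves it.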
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7804">HADOOP-7804</a>.
+     Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
+     <b>enable hadoop config generator to set dfs.block.local-path-access.user to enable short circuit read</b><br>
+     <blockquote>We have a new config that allows selecting which user has access to short-circuit read. We should make that configurable through the config generator scripts.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7808">HADOOP-7808</a>.
+     Major new feature reported by daryn and fixed by daryn (fs, security)<br>
+     <b>Port token service changes from 205</b><br>
+     <blockquote>Need to merge the 205 token bug fixes and the feature to enable hostname-based tokens.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7810">HADOOP-7810</a>.
+     Blocker bug reported by johnvijoe and fixed by johnvijoe <br>
+     <b>move hadoop archive to core from tools</b><br>
+     <blockquote>&quot;The HadoopArchieves classes are included in the $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop classpath`.<br><br>A Pig script using HCatalog&apos;s dynamic partitioning with HAR enabled will therefore fail if a jar with HAR is not included in the pig call&apos;s &apos;-cp&apos; and &apos;-Dpig.additional.jars&apos; arguments.&quot;<br><br>I am not aware of any reason to not include hadoop-tools.jar in &apos;hadoop classpath&apos;. Will attach a patch soon.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7811">HADOOP-7811</a>.
+     Major bug reported by jeagles and fixed by jeagles (security, test)<br>
+     <b>TestUserGroupInformation#testGetServerSideGroups test fails in chroot</b><br>
+     <blockquote>It is common when running in chroot to have root&apos;s group vector preserved when running as your self.<br><br>For example<br><br># Enter chroot<br>$ sudo chroot /myroot<br><br># still root<br>$ whoami<br>root<br><br># switch to user preserving root&apos;s group vector<br>$ sudo -u user -P -s<br><br># root&apos;s groups<br>$ groups root<br>a b c<br><br># user&apos;s real groups<br>$ groups user<br>d e f<br><br># user&apos;s effective groups<br>$ groups<br>a b c d e f<br>-------------------------------<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7837">HADOOP-7837</a>.
+     Major bug reported by stevel@apache.org and fixed by eli (conf)<br>
+     <b>no NullAppender in the log4j config</b><br>
+     <blockquote>running sbin/start-dfs.sh gives me a telling off about no null appender -should one be in the log4j config file.<br><br>Full trace (failure expected, but full output not as expected)<br>{code}<br>./start-dfs.sh <br>log4j:ERROR Could not find value for key log4j.appender.NullAppender<br>log4j:ERROR Could not instantiate appender named &quot;NullAppender&quot;.<br>Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.<br>Starting namenodes on []<br>cat: /Users/slo/J...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7843">HADOOP-7843</a>.
+     Major bug reported by johnvijoe and fixed by johnvijoe <br>
+     <b>compilation failing because workDir not initialized in RunJar.java</b><br>
+     <blockquote>Compilation is failing on 0.23 and trunk because workDir is not initialized in RunJar.java</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7853">HADOOP-7853</a>.
+     Blocker bug reported by daryn and fixed by daryn (security)<br>
+     <b>multiple javax security configurations cause conflicts</b><br>
+     <blockquote>Both UGI and the SPNEGO KerberosAuthenticator set the global javax security configuration.  SPNEGO stomps on UGI&apos;s security config which leads to kerberos/SASL authentication errors.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7854">HADOOP-7854</a>.
+     Critical bug reported by daryn and fixed by daryn (security)<br>
+     <b>UGI getCurrentUser is not synchronized</b><br>
+     <blockquote>Sporadic {{ConcurrentModificationExceptions}} are originating from {{UGI.getCurrentUser}} when it needs to create a new instance.  The problem was specifically observed in a JT under heavy load when a post-job cleanup is accessing the UGI while a new job is being processed.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7858">HADOOP-7858</a>.
+     Trivial improvement reported by tlipcon and fixed by tlipcon <br>
+     <b>Drop some info logging to DEBUG level in IPC, metrics, and HTTP</b><br>
+     <blockquote>Our info level logs have gotten noisier and noisier over time, which is annoying both for users and when looking at unit tests. I&apos;d like to drop a few of the less useful INFO level messages down to DEBUG.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7859">HADOOP-7859</a>.
+     Major bug reported by eli and fixed by eli (fs)<br>
+     <b>TestViewFsHdfs.testgetFileLinkStatus is failing an assert</b><br>
+     <blockquote>Probably introduced by HADOOP-7783. I&apos;ll fix it.<br><br>{noformat}<br>java.lang.AssertionError<br>	at org.apache.hadoop.fs.FileContext.qualifySymlinkTarget(FileContext.java:1111)<br>	at org.apache.hadoop.fs.FileContext.access$000(FileContext.java:170)<br>	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1142)<br>	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1137)<br>	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2327)<br>	at org.apache.hadoop.fs.FileContext.getF...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7864">HADOOP-7864</a>.
+     Major bug reported by abayer and fixed by abayer (build)<br>
+     <b>Building mvn site with Maven &lt; 3.0.2 causes OOM errors</b><br>
+     <blockquote>If you try to run mvn site with Maven 3.0.0 (and possibly 3.0.1 - haven&apos;t actually tested that), you get hit with unavoidable OOM errors. Switching to Maven 3.0.2 or later fixes this. The enforcer should require 3.0.2 for builds.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7870">HADOOP-7870</a>.
+     Major bug reported by jmhsieh and fixed by jmhsieh <br>
+     <b>fix SequenceFile#createWriter with boolean createParent arg to respect createParent.</b><br>
+     <blockquote>After HADOOP-6840, one set of calls to createNonRecursive(...) seems fishy - the new boolean createParent variable from the signature isn&apos;t used at all.  <br><br>{code}<br>+  public static Writer<br>+    createWriter(FileSystem fs, Configuration conf, Path name,<br>+                 Class keyClass, Class valClass, int bufferSize,<br>+                 short replication, long blockSize, boolean createParent,<br>+                 CompressionType compressionType, CompressionCodec codec,<br>+                 Metadata meta...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7874">HADOOP-7874</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>native libs should be under lib/native/ dir</b><br>
+     <blockquote>Currently common and hdfs SO files end up under the lib/ dir with all the JARs; they should end up under lib/native.<br><br>In addition, the hadoop-config.sh script needs some cleanup when it comes to native lib handling:<br><br>* it is using lib/native/${JAVA_PLATFORM} for the java.library.path, when it should use lib/native.<br>* it is looking for build/lib/native; this is from the old ant build and not applicable anymore.<br>* it is looking for libhdfs.a and adding it to the java.library.path; this is not correct.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7877">HADOOP-7877</a>.
+     Major task reported by szetszwo and fixed by szetszwo (documentation)<br>
+     <b>Federation: update Balancer documentation</b><br>
+     <blockquote>Update Balancer documentation for the new balancing policy and CLI.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7878">HADOOP-7878</a>.
+     Minor bug reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
+     <b>Regression HADOOP-7777 switch changes break HDFS tests when the isSingleSwitch() predicate is used</b><br>
+     <blockquote>This doesn&apos;t show up until you apply the HDFS-2492 patch, but the attempt to make the {{StaticMapping}} topology clever by deciding if it is single rack or multi rack based on its rack-&gt;node mapping breaks the HDFS {{TestBlocksWithNotEnoughRacks}} test. Why? Because the racks go in after the switch topology is cached by the {{BlockManager}}, which assumes the system is always single-switch.<br><br>Fix: default to assuming multi-switch; remove the intelligence, add a setter for anyone who really wan...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7887">HADOOP-7887</a>.
+     Critical bug reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>KerberosAuthenticatorHandler is not setting KerberosName name rules from configuration</b><br>
+     <blockquote>While the KerberosAuthenticatorHandler defines the name rules property, it does not set it in KerberosName.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7890">HADOOP-7890</a>.
+     Trivial improvement reported by knoguchi and fixed by knoguchi (scripts)<br>
+     <b>Redirect hadoop script&apos;s deprecation message to stderr</b><br>
+     <blockquote>$ hadoop dfs -ls<br>DEPRECATED: Use of this script to execute hdfs command is deprecated.<br>Instead use the hdfs command for it.<br>...<br><br>If we&apos;re still letting the command run, I think we should redirect the deprecation message to stderr in case users have a script taking the output from stdout.<br></blockquote></li>
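The stderr redirection proposed above can be sketched as follows. The wrapper function name is illustrative; only the `1>&2` redirect mirrors the suggested change:

```shell
# Sketch of the proposed fix (function name is illustrative): send the
# deprecation notice to stderr so that stdout stays parseable by scripts
# that consume the command's output.
deprecated_wrapper() {
  echo "DEPRECATED: Use of this script to execute hdfs command is deprecated." 1>&2
  echo "Instead use the hdfs command for it." 1>&2
  echo "real-command-output"          # actual output still goes to stdout
}

deprecated_wrapper 2>/dev/null        # prints only: real-command-output
```

A downstream script capturing stdout (for example with `$(hadoop dfs -ls ...)`) then sees only the command output, never the deprecation banner.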
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7898">HADOOP-7898</a>.
+     Minor bug reported by sureshms and fixed by sureshms (security)<br>
+     <b>Fix javadoc warnings in AuthenticationToken.java</b><br>
+     <blockquote>Fix the following javadoc warning:<br>[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java:33: warning - Tag @link: reference not found: HttpServletRequest<br>[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7902">HADOOP-7902</a>.
+     Major bug reported by szetszwo and fixed by tucu00 <br>
+     <b>skipping name rules setting (if already set) should be done on UGI initialization only </b><br>
+     <blockquote>Both TestDelegationToken and TestOfflineEditsViewer are currently failing.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7907">HADOOP-7907</a>.
+     Blocker bug reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>hadoop-tools JARs are not part of the distro</b><br>
+     <blockquote>After mavenizing streaming, the hadoop-streaming JAR is not part of the final tar.<br><br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7910">HADOOP-7910</a>.
+     Minor improvement reported by sho.shimauchi and fixed by sho.shimauchi (conf)<br>
+     <b>add configuration methods to handle human readable size values</b><br>
+     <blockquote>It would be better to have new configuration methods that handle human-readable size values.<br>For example, see HDFS-1314.<br><br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7912">HADOOP-7912</a>.
+     Major bug reported by revans2 and fixed by revans2 (build)<br>
+     <b>test-patch should run eclipse:eclipse to verify that it does not break again</b><br>
+     <blockquote>Recently the eclipse:eclipse build was broken.  If we are going to document this on the wiki and have many developers use it we should verify that it always works.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7914">HADOOP-7914</a>.
+     Major bug reported by szetszwo and fixed by szetszwo (build)<br>
+     <b>duplicate declaration of hadoop-hdfs test-jar</b><br>
+     <blockquote>[WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-common-project:pom:0.24.0-SNAPSHOT<br>[WARNING] &apos;dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)&apos; must be unique: org.apache.hadoop:hadoop-hdfs:test-jar -&gt; duplicate declaration of version ${project.version} @ org.apache.hadoop:hadoop-project:0.24.0-SNAPSHOT, /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-project/pom.xml, line 140, ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7917">HADOOP-7917</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>compilation of protobuf files fails in windows/cygwin</b><br>
+     <blockquote>HADOOP-7899 &amp; HDFS-2511 introduced compilation of proto files as part of the build.<br><br>Such compilation is failing in windows/cygwin</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7919">HADOOP-7919</a>.
+     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (documentation)<br>
+     <b>[Doc] Remove hadoop.logfile.* properties.</b><br>
+     <blockquote>The following only resides in core-default.xml and doesn&apos;t look like its used anywhere at all. At least a grep of the prop name and parts of it does not give me back anything at all.<br><br>These settings are now configurable via generic Log4J opts, via the shipped log4j.properties file in the distributions.<br><br>{code}<br>137 &lt;!--- logging properties --&gt;<br>138 <br>139 &lt;property&gt;<br>140   &lt;name&gt;hadoop.logfile.size&lt;/name&gt;<br>141   &lt;value&gt;10000000&lt;/value&gt;<br>142   &lt;description&gt;The max size of each log file&lt;/description&gt;<br>...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7933">HADOOP-7933</a>.
+     Critical bug reported by sseth and fixed by sseth <br>
+     <b>Viewfs changes for MAPREDUCE-3529</b><br>
+     <blockquote>ViewFs.getDelegationTokens returns a list of tokens for the associated namenodes. Credentials serializes these tokens using the service name for the actual namenodes. Effectively, tokens are not cached for viewfs (some more details in MR 3529). Affects any job which uses the TokenCache in tasks along with viewfs (some Pig jobs).<br><br>Talk to Jitendra about this, some options<br>1. Change Credentials.getAllTokens to return the key, instead of just a token list (associate the viewfs canonical name wit...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7934">HADOOP-7934</a>.
+     Critical improvement reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Normalize dependencies versions across all modules</b><br>
+     <blockquote>Move all dependencies versions to the dependencyManagement section in the hadoop-project POM<br><br>Move all plugin versions to the dependencyManagement section in the hadoop-project POM</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7936">HADOOP-7936</a>.
+     Major bug reported by eli and fixed by tucu00 (build)<br>
+     <b>There&apos;s a Hoop README in the root dir of the tarball</b><br>
+     <blockquote>The Hoop README.txt is now in the root dir of the tarball.<br><br>{noformat}<br>hadoop-trunk1 $ tar xvzf hadoop-dist/target/hadoop-0.24.0-SNAPSHOT.tar.gz  -C /tmp/<br>..<br>hadoop-trunk1 $ head -n3 /tmp/hadoop-0.24.0-SNAPSHOT/README.txt <br>-----------------------------------------------------------------------------<br>HttpFS - Hadoop HDFS over HTTP<br>{noformat}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7939">HADOOP-7939</a>.
+     Major improvement reported by rvs and fixed by rvs (build, conf, documentation, scripts)<br>
+     <b>Improve Hadoop subcomponent integration in Hadoop 0.23</b><br>
+     <blockquote>h1. Introduction<br><br>For the rest of this proposal it is assumed that the current set<br>of Hadoop subcomponents is:<br> * hadoop-common<br> * hadoop-hdfs<br> * hadoop-yarn<br> * hadoop-mapreduce<br><br>It must be noted that this is an open ended list, though. For example,<br>implementations of additional frameworks on top of yarn (e.g. MPI) would<br>also be considered a subcomponent.<br><br>h1. Problem statement<br><br>Currently there&apos;s an unfortunate coupling and hard-coding present at the<br>level of launcher scripts, configuration s...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7948">HADOOP-7948</a>.
+     Minor bug reported by cim_michajlomatijkiw and fixed by cim_michajlomatijkiw (build)<br>
+     <b>Shell scripts created by hadoop-dist/pom.xml to build tar do not properly propagate failure</b><br>
+     <blockquote>The run() function, as defined in dist-layout-stitching.sh and dist-tar-stitching, created in hadoop-dist/pom.xml, does not properly propagate the error code of a failing command.  See the following:<br>{code}<br>    ...<br>    &quot;${@}&quot;                 # call fails with non-zero exit code<br>    if [ $? != 0 ]; then   <br>        echo               <br>        echo &quot;Failed!&quot;     <br>        echo               <br>        exit $?            # $?=result of echo above, likely 0, thus exit with code 0<br>    ...<br>{code}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7949">HADOOP-7949</a>.
+     Trivial bug reported by eli and fixed by eli (ipc)<br>
+     <b>Updated maxIdleTime default in the code to match core-default.xml</b><br>
+     <blockquote>HADOOP-2909 intended to set the server max idle time for a connection to twice the client value. (&quot;The server-side max idle time should be greater than the client-side max idle time, for example, twice of the client-side max idle time.&quot;) This way when a server times out a connection it&apos;s due to a crashed client and not an inactive client, so we don&apos;t close client connections with outstanding requests (by setting 2x the client value on the server side the client should time out the connection firs...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7964">HADOOP-7964</a>.
+     Blocker bug reported by kihwal and fixed by daryn (security, util)<br>
+     <b>Deadlock in class init.</b><br>
+     <blockquote>After HADOOP-7808, client-side commands hang occasionally. There are cyclic dependencies in NetUtils and SecurityUtil class initialization. Upon initial look at the stack trace, two threads deadlock when they hit either of the class initializers at the same time.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7971">HADOOP-7971</a>.
+     Blocker bug reported by tgraves and fixed by prashant_ <br>
+     <b>hadoop &lt;job/queue/pipes&gt; removed - should be added back, but deprecated</b><br>
+     <blockquote>The mapred subcommands (mradmin|jobtracker|tasktracker|pipes|job|queue)<br> were removed from the /bin/hadoop command. I believe that, for backwards compatibility, at least some of these should have stayed, along with deprecation warnings.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7974">HADOOP-7974</a>.
+     Major bug reported by eli and fixed by qwertymaniac (fs)<br>
+     <b>TestViewFsTrash incorrectly determines the user&apos;s home directory</b><br>
+     <blockquote>HADOOP-7284 added a test called TestViewFsTrash which contains the following code to determine the user&apos;s home directory. It only works if the user&apos;s directory is one level deep, and breaks if the home directory is more than one level deep (e.g. user hudson, whose home dir might be /usr/lib/hudson instead of /home/hudson).<br><br>{code}<br>    // create a link for home directory so that trash path works<br>    // set up viewfs&apos;s home dir root to point to home dir root on target<br>    // But home dir is diffe...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7975">HADOOP-7975</a>.
+     Minor bug reported by qwertymaniac and fixed by qwertymaniac <br>
+     <b>Add entry to XML defaults for new LZ4 codec</b><br>
+     <blockquote>HADOOP-7657 added a new LZ4 codec, but failed to extend the io.compression.codecs list that MR and other frameworks use to load codecs.<br><br>We should add an entry to the core-default XML for this new codec, just as we did with Snappy.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7981">HADOOP-7981</a>.
+     Major bug reported by jeagles and fixed by jeagles (io)<br>
+     <b>Improve documentation for org.apache.hadoop.io.compress.Decompressor.getRemaining</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7982">HADOOP-7982</a>.
+     Major bug reported by tlipcon and fixed by tlipcon (security)<br>
+     <b>UserGroupInformation fails to login if thread&apos;s context classloader can&apos;t load HadoopLoginModule</b><br>
+     <blockquote>In a few hard-to-reproduce situations, we&apos;ve seen a problem where the UGI login call causes a failure to login exception with the following cause:<br><br>Caused by: javax.security.auth.login.LoginException: unable to find <br>LoginModule class: org.apache.hadoop.security.UserGroupInformation <br>$HadoopLoginModule<br><br>After a bunch of debugging, I determined that this happens when the login occurs in a thread whose Context ClassLoader has been set to null.</blockquote></li>
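The defensive pattern behind this kind of fix can be sketched as follows. This is an illustrative, simplified version, not the actual HADOOP-7982 patch; the class and method names are hypothetical:

```java
// Hedged sketch: pin a known-good context classloader around a JAAS login so
// the login module class can be resolved even when the calling thread's
// context classloader has been set to null. Names here are illustrative.
import java.util.concurrent.Callable;

public class LoginClassLoaderGuard {

    /**
     * Runs {@code action} with this class's own loader installed as the
     * thread context classloader, restoring the previous loader afterwards.
     */
    public static <T> T withOwnClassLoader(Callable<T> action) throws Exception {
        Thread t = Thread.currentThread();
        ClassLoader saved = t.getContextClassLoader();
        try {
            t.setContextClassLoader(LoginClassLoaderGuard.class.getClassLoader());
            return action.call();  // e.g. the JAAS LoginContext.login() call
        } finally {
            t.setContextClassLoader(saved);  // restore even if login throws
        }
    }
}
```

The try/finally restore matters: leaking the swapped-in loader would change behavior for unrelated code running later on the same thread.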
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7987">HADOOP-7987</a>.
+     Major improvement reported by devaraj and fixed by jnp (security)<br>
+     <b>Support setting the run-as user in unsecure mode</b><br>
+     <blockquote>Some applications need to be able to perform actions (such as launch MR jobs) from map or reduce tasks. In earlier unsecure versions of hadoop (20.x), it was possible to do this by setting user.name in the configuration. But in 20.205 and 1.0, when running in unsecure mode, this does not work. (In secure mode, you can do this using the kerberos credentials).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7988">HADOOP-7988</a>.
+     Major bug reported by jnp and fixed by jnp <br>
+     <b>Upper case in hostname part of the principals doesn&apos;t work with kerberos.</b><br>
+     <blockquote>Kerberos doesn&apos;t like upper case in the hostname part of the principals.<br>This issue has been seen in 23 as well as 1.0.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7993">HADOOP-7993</a>.
+     Major bug reported by anupamseth and fixed by anupamseth (conf)<br>
+     <b>Hadoop ignores old-style config options for enabling compressed output</b><br>
+     <blockquote>Hadoop seems to ignore the config options even though they are printed as deprecation warnings in the log: mapred.output.compress and<br>mapred.output.compression.codec<br><br>- settings that work on 0.20 but not on 0.23<br>mapred.output.compress=true<br>mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec<br><br>- settings that work on 0.23<br>mapreduce.output.fileoutputformat.compress=true<br>mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec<br><br>This breaks bac...</blockquote></li>
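The intended behavior can be sketched with a generic old-to-new key aliasing scheme. This is a toy model, not Hadoop's actual Configuration deprecation machinery; only the two key pairs named above come from the issue:

```java
// Toy sketch of deprecated-key aliasing: values set under an old key are
// stored under the canonical new name, so reads via either spelling agree.
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeys {
    private static final Map<String, String> OLD_TO_NEW = new HashMap<String, String>();
    static {
        // the two pairs named in this issue
        OLD_TO_NEW.put("mapred.output.compress",
                       "mapreduce.output.fileoutputformat.compress");
        OLD_TO_NEW.put("mapred.output.compression.codec",
                       "mapreduce.output.fileoutputformat.compress.codec");
    }

    private final Map<String, String> props = new HashMap<String, String>();

    private static String canonical(String key) {
        String mapped = OLD_TO_NEW.get(key);
        return mapped != null ? mapped : key;
    }

    public void set(String key, String value) { props.put(canonical(key), value); }

    public String get(String key) { return props.get(canonical(key)); }
}
```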
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7997">HADOOP-7997</a>.
+     Major bug reported by gchanan and fixed by gchanan (io)<br>
+     <b>SequenceFile.createWriter(...createParent...) no longer works on existing file</b><br>
+     <blockquote>SequenceFile.createWriter no longer works on an existing file, because the old version specified OVERWRITE by default and the new version does not. This breaks some HBase tests.<br><br>Tested against trunk.<br><br>Patch with test to follow.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7998">HADOOP-7998</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>CheckFileSystem does not correctly honor setVerifyChecksum</b><br>
+     <blockquote>Regardless of the verify checksum flag, {{ChecksumFileSystem#open}} will instantiate a {{ChecksumFSInputChecker}} instead of a normal stream.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7999">HADOOP-7999</a>.
+     Critical bug reported by jlowe and fixed by jlowe (scripts)<br>
+     <b>&quot;hadoop archive&quot; fails with ClassNotFoundException</b><br>
+     <blockquote>Running &quot;hadoop archive&quot; from a command prompt results in this error:<br><br>Exception in thread &quot;main&quot; java.lang.NoClassDefFoundError: org/apache/hadoop/tools/HadoopArchives<br>Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.tools.HadoopArchives<br>	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)<br>	at java.security.AccessController.doPrivileged(Native Method)<br>	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)<br>	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)<br>	...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8000">HADOOP-8000</a>.
+     Critical bug reported by arpitgupta and fixed by arpitgupta <br>
+     <b>fetchdt command not available in bin/hadoop</b><br>
+     <blockquote>fetchdt command needs to be added to bin/hadoop to allow for backwards compatibility.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8001">HADOOP-8001</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>ChecksumFileSystem&apos;s rename doesn&apos;t correctly handle checksum files</b><br>
+     <blockquote>Rename will move the src file and its crc *if present* to the destination.  If the src file has no crc, but the destination already exists with a crc, then src will be associated with the old file&apos;s crc.  Subsequent access to the file will fail with checksum errors.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8002">HADOOP-8002</a>.
+     Major bug reported by arpitgupta and fixed by arpitgupta <br>
+     <b>SecurityUtil acquired token message should be a debug rather than info</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8006">HADOOP-8006</a>.
+     Major bug reported by umamaheswararao and fixed by daryn (fs)<br>
+     <b>TestFSInputChecker is failing in trunk.</b><br>
+     <blockquote>Trunk build number 939 failed with TestFSInputChecker.<br>https://builds.apache.org/job/Hadoop-Hdfs-trunk/939/<br><br>junit.framework.AssertionFailedError: expected:&lt;10&gt; but was:&lt;0&gt;<br>	at junit.framework.Assert.fail(Assert.java:47)<br>	at junit.framework.Assert.failNotEquals(Assert.java:283)<br>	at junit.framework.Assert.assertEquals(Assert.java:64)<br>	at junit.framework.Assert.assertEquals(Assert.java:130)<br>	at junit.framework.Assert.assertEquals(Assert.java:136)<br>	at org.apache.hadoop.hdfs.TestFSInputChecker.ch...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8009">HADOOP-8009</a>.
+     Critical improvement reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Create hadoop-client and hadoop-minicluster artifacts for downstream projects </b><br>
+     <blockquote>Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house system that interacts with Hadoop is quite challenging for the following reasons:<br><br>* *Different versions of Hadoop produce different artifacts:* Before Hadoop 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there are several (common, hdfs, mapred*, yarn*)<br><br>* *There are no &apos;client&apos; artifacts:* Current artifacts include all JARs needed to run the services, thus bringing into clients several JARs t...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8012">HADOOP-8012</a>.
+     Minor bug reported by rvs and fixed by rvs (scripts)<br>
+     <b>hadoop-daemon.sh and yarn-daemon.sh are trying to mkdir and chown log/pid dirs which can fail</b><br>
+     <blockquote>Here&apos;s what I see when using Hadoop in Bigtop:<br><br>{noformat}<br>$ sudo /sbin/service hadoop-hdfs-namenode start<br>Starting Hadoop namenode daemon (hadoop-namenode): chown: changing ownership of `/var/log/hadoop&apos;: Operation not permitted<br>starting namenode, logging to /var/log/hadoop/hadoop-hdfs-namenode-centos5.out<br>{noformat}<br><br>This is a cosmetic issue, but it would be nice to fix it.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8013">HADOOP-8013</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>ViewFileSystem does not honor setVerifyChecksum</b><br>
+     <blockquote>{{ViewFileSystem#setVerifyChecksum}} is a no-op.  It should call {{setVerifyChecksum}} on the mount points.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8015">HADOOP-8015</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>ChRootFileSystem should extend FilterFileSystem</b><br>
+     <blockquote>{{ChRootFileSystem}} simply extends {{FileSystem}}, and attempts to delegate some methods to the underlying mount point.  It is essentially the same as {{FilterFileSystem}} but it mangles the paths to include the chroot path.  Unfortunately {{ChRootFileSystem}} is not delegating some methods that should be delegated.  Changing the inheritance will prevent a copy-n-paste of code for HADOOP-8013 and HADOOP-8014 into both {{ChRootFileSystem}} and {{FilterFileSystem}}.</blockquote></li>
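The inheritance change described above can be illustrated with a toy delegating hierarchy. The Fs interface below is a stand-in, not Hadoop's FileSystem API:

```java
// Toy illustration: a delegating base class forwards every call to the
// wrapped filesystem (analogue of FilterFileSystem), and the chroot variant
// inherits the delegation and only overrides path handling.
public class FilterDemo {
    public interface Fs { String open(String path); }

    /** Analogue of FilterFileSystem: pure delegation. */
    public static class FilterFs implements Fs {
        protected final Fs underlying;
        public FilterFs(Fs underlying) { this.underlying = underlying; }
        public String open(String path) { return underlying.open(path); }
    }

    /** Analogue of ChRootFileSystem: inherits delegation, rewrites paths. */
    public static class ChRootFs extends FilterFs {
        private final String root;
        public ChRootFs(Fs underlying, String root) {
            super(underlying);
            this.root = root;
        }
        @Override
        public String open(String path) { return super.open(root + path); }
    }
}
```

With this shape, a fix applied to the base class (such as delegating setVerifyChecksum) is picked up by the chroot variant for free.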
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8018">HADOOP-8018</a>.
+     Major bug reported by mattf and fixed by jeagles (build, test)<br>
+     <b>Hudson auto test for HDFS has started throwing javadoc: warning - Error fetching URL: http://java.sun.com/javase/6/docs/api/package-list</b><br>
+     <blockquote>Hudson automated testing has started failing with one javadoc warning message, consisting of<br>javadoc: warning - Error fetching URL: http://java.sun.com/javase/6/docs/api/package-list<br><br>This may be due to Oracle&apos;s decommissioning of the sun.com domain.  If one tries to access it manually, it is redirected to <br>http://download.oracle.com/javase/6/docs/api/package-list<br><br>So it looks like a build script needs to be updated.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8027">HADOOP-8027</a>.
+     Minor improvement reported by qwertymaniac and fixed by atm (metrics)<br>
+     <b>Visiting /jmx on the daemon web interfaces may print unnecessary error in logs</b><br>
+     <blockquote>Logs that follow a {{/jmx}} servlet visit:<br><br>{code}<br>11/11/22 12:09:52 ERROR jmx.JMXJsonServlet: getting attribute UsageThreshold of java.lang:type=MemoryPool,name=Par Eden Space threw an exception<br>javax.management.RuntimeMBeanException: java.lang.UnsupportedOperationException: Usage threshold is not supported<br>	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:856)<br>...<br>{code}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-69">HDFS-69</a>.
+     Minor bug reported by raviphulari and fixed by qwertymaniac <br>
+     <b>Improve dfsadmin command line help </b><br>
+     <blockquote>Enhance the dfsadmin command line help to explain that &quot;A quota of one forces a directory to remain empty&quot;.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-362">HDFS-362</a>.
+     Major improvement reported by szetszwo and fixed by umamaheswararao (name-node)<br>
+     <b>FSEditLog should not write long and short as UTF8 and should not use ArrayWritable for writing non-array items</b><br>
+     <blockquote>In FSEditLog, <br><br>- long and short are first converted to String and then further converted to UTF8<br><br>- For some non-array items, it first creates an ArrayWritable object to hold all the items and then writes the ArrayWritable object.<br><br>These result in the creation of many intermediate objects, which affects Namenode CPU performance and Namenode restart.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-442">HDFS-442</a>.
+     Minor bug reported by rramya and fixed by qwertymaniac (test)<br>
+     <b>dfsthroughput in test.jar throws NPE</b><br>
+     <blockquote>On running hadoop jar hadoop-test.jar dfsthroughput OR hadoop org.apache.hadoop.hdfs.BenchmarkThroughput, we get NullPointerException. Below is the stacktrace:<br>{noformat}<br>Exception in thread &quot;main&quot; java.lang.NullPointerException<br>        at java.util.Hashtable.put(Hashtable.java:394)<br>        at java.util.Properties.setProperty(Properties.java:143)<br>        at java.lang.System.setProperty(System.java:731)<br>        at org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)<br>   ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-554">HDFS-554</a>.
+     Minor improvement reported by stevel@apache.org and fixed by qwertymaniac (name-node)<br>
+     <b>BlockInfo.ensureCapacity may get a speedup from System.arraycopy()</b><br>
+     <blockquote>BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into the expanded array.  {{System.arraycopy()}} is generally much faster for this, as it can do a bulk memory copy. There is also the typesafe Java6 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.</blockquote></li>
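The speedup described above can be sketched as follows. The field name and sizing policy here are illustrative, not BlockInfo's actual layout:

```java
// Minimal sketch: grow an array with System.arraycopy (one bulk native copy)
// instead of an element-by-element for() loop.
public class EnsureCapacityDemo {
    private Object[] triplets = new Object[3];

    /** Ensures the backing array holds at least {@code capacity} slots. */
    public Object[] ensureCapacity(int capacity) {
        if (triplets.length >= capacity) {
            return triplets;  // already large enough, nothing to do
        }
        Object[] old = triplets;
        triplets = new Object[capacity];
        System.arraycopy(old, 0, triplets, 0, old.length);  // bulk copy
        return triplets;
    }
}
```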
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2178">HDFS-2178</a>.
+     Major improvement reported by tucu00 and fixed by tucu00 <br>
+     <b>HttpFS - a read/write Hadoop file system proxy</b><br>
+     <blockquote>We&apos;d like to contribute Hoop to Hadoop HDFS as a replacement (an improvement) for HDFS Proxy.<br><br>Hoop provides access to all Hadoop Distributed File System (HDFS) operations (read and write) over HTTP/S.<br><br>The Hoop server component is a REST HTTP gateway to HDFS supporting all file system operations. It can be accessed using standard HTTP tools (e.g. curl and wget), HTTP libraries from different programming languages (e.g. Perl, JavaScript) as well as using the Hoop client. The Hoop server compo...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2335">HDFS-2335</a>.
+     Major improvement reported by eli and fixed by umamaheswararao (data-node, name-node)<br>
+     <b>DataNodeCluster and NNStorage always pull fresh entropy</b><br>
+     <blockquote>Jira for giving DataNodeCluster and NNStorage the same treatment as HDFS-1835. They&apos;re not truly cryptographic uses either. We should also factor this out to a utility method; the three uses are slightly different, e.g. one uses DFSUtil.getRandom and the other creates a new Random object.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2349">HDFS-2349</a>.
+     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
+     <b>DN should log a WARN, not an INFO when it detects a corruption during block transfer</b><br>
+     <blockquote>Currently, in DataNode.java, we have:<br><br>{code}<br><br>      LOG.info(&quot;Can&apos;t replicate block &quot; + block<br>          + &quot; because on-disk length &quot; + onDiskLength <br>          + &quot; is shorter than NameNode recorded length &quot; + block.getNumBytes());<br><br>{code}<br><br>This log is better off as a WARN as it indicates (and also reports) a corruption.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2397">HDFS-2397</a>.
+     Major improvement reported by tlipcon and fixed by eli (name-node)<br>
+     <b>Undeprecate SecondaryNameNode</b><br>
+     <blockquote>I would like to consider un-deprecating the SecondaryNameNode for 0.23, and amending the documentation to indicate that it is still the most trustworthy way to run checkpoints, and while CN/BN may have some advantages, they&apos;re not battle hardened as of yet. The test coverage for the 2NN is far superior to the CheckpointNode or BackupNode, and people have a lot more production experience. Indicating that it is deprecated before we have expanded test coverage of the CN/BN won&apos;t send the right ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2454">HDFS-2454</a>.
+     Minor improvement reported by umamaheswararao and fixed by qwertymaniac (data-node)<br>
+     <b>Move maxXceiverCount check to before starting the thread in dataXceiver</b><br>
+     <blockquote>We can hoist the maxXceiverCount check out of DataXceiverServer#run; there&apos;s no need to check each time we accept a connection, we can check when we create the thread.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2502">HDFS-2502</a>.
+     Minor improvement reported by eli and fixed by qwertymaniac (documentation)<br>
+     <b>hdfs-default.xml should include dfs.name.dir.restore</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2511">HDFS-2511</a>.
+     Minor improvement reported by tlipcon and fixed by tucu00 (build)<br>
+     <b>Add dev script to generate HDFS protobufs</b><br>
+     <blockquote>Would like to add a simple shell script to re-generate the protobuf code in HDFS -- just easier than remembering the right syntax.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2533">HDFS-2533</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (data-node, performance)<br>
+     <b>Remove needless synchronization on FSDataSet.getBlockFile</b><br>
+     <blockquote>HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2536">HDFS-2536</a>.
+     Trivial improvement reported by atm and fixed by qwertymaniac (name-node)<br>
+     <b>Remove unused imports</b><br>
+     <blockquote>Looks like it has 11 unused imports by my count.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2541">HDFS-2541</a>.
+     Major bug reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
+     <b>For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.</b><br>
+     <blockquote>Running off 0.20-security, I noticed that one could get the following exception when scanners are used:<br><br>{code}<br>DataXceiver <br>java.lang.IllegalArgumentException: n must be positive <br>at java.util.Random.nextInt(Random.java:250) <br>at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251) <br>at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268) <br>at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(Da...</blockquote></li>
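The class of overflow described above, and one way to guard against it, can be sketched as follows. The method name and clamping policy are illustrative, not the DataBlockScanner internals:

```java
// Illustration: for a large enough period, a long-to-int cast goes negative
// and Random.nextInt(n) throws "n must be positive". Clamping in long
// arithmetic before the cast keeps the argument positive.
import java.util.Random;

public class ScanTimeDemo {
    private static final Random RANDOM = new Random();

    /** Picks a random offset in [0, periodMs), clamped to a positive int. */
    public static int safeScanOffset(long periodMs) {
        long bounded = Math.min(Math.max(periodMs, 1L), Integer.MAX_VALUE);
        return RANDOM.nextInt((int) bounded);
    }
}
```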
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2543">HDFS-2543</a>.
+     Major bug reported by bmahe and fixed by bmahe (scripts)<br>
+     <b>HADOOP_PREFIX cannot be overridden</b><br>
+     <blockquote>hadoop-config.sh forces HADOOP_PREFIX to a specific value:<br>export HADOOP_PREFIX=`dirname &quot;$this&quot;`/..<br><br>It would be nice to make this overridable.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2544">HDFS-2544</a>.
+     Major bug reported by bmahe and fixed by bmahe (scripts)<br>
+     <b>Hadoop scripts unconditionally source &quot;$bin&quot;/../libexec/hadoop-config.sh.</b><br>
+     <blockquote>It would be nice to be able to specify some other location for hadoop-config.sh</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2545">HDFS-2545</a>.
+     Major bug reported by szetszwo and fixed by szetszwo <br>
+     <b>Webhdfs: Support multiple namenodes in federation</b><br>
+     <blockquote>DatanodeWebHdfsMethods only talks to the default namenode.  It won&apos;t work if there are multiple namenodes in federation.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2552">HDFS-2552</a>.
+     Major task reported by szetszwo and fixed by szetszwo (documentation)<br>
+     <b>Add WebHdfs Forrest doc</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2553">HDFS-2553</a>.
+     Critical bug reported by tlipcon and fixed by umamaheswararao (data-node)<br>
+     <b>BlockPoolSliceScanner spinning in loop</b><br>
+     <blockquote>Playing with trunk, I managed to get a DataNode in a situation where the BlockPoolSliceScanner is spinning in the following loop, using 100% CPU:<br>        at org.apache.hadoop.hdfs.server.datanode.DataNode$BPOfferService.isAlive(DataNode.java:820)<br>        at org.apache.hadoop.hdfs.server.datanode.DataNode.isBPServiceAlive(DataNode.java:2962)<br>        at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:625)<br>        at org.apache.hadoop.hdfs.server.data...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2560">HDFS-2560</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon (data-node)<br>
+     <b>Refactor BPOfferService to be a static inner class</b><br>
+     <blockquote>Currently BPOfferService is a non-static inner class of DataNode. For HA we are adding another inner class inside of this, which makes the scope very hard to understand when reading the code (and has resulted in subtle bugs like HDFS-2529 where a variable is referenced from the wrong scope). Making it a static inner class with a reference to the DN has two advantages: a) scope is now explicit, and b) enables unit testing of the BPOS against a mocked-out DN.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2562">HDFS-2562</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (data-node)<br>
+     <b>Refactor DN configuration variables out of DataNode class</b><br>
+     <blockquote>Right now there are many member variables in DataNode.java which are just read from configuration when the DN is started. Similar to what we did with DFSClient, we should refactor them into a new DNConf class which can be passed around - the motivation is to remove the many references we have throughout the code that read package-protected members of DataNode and reduce the number of members in DataNode itself.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2563">HDFS-2563</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon (data-node)<br>
+     <b>Some cleanup in BPOfferService</b><br>
+     <blockquote>BPOfferService is currently rather difficult to follow and not really commented. This JIRA is to clean up the code a bit, add javadocs/comments where necessary, and improve the formatting of the log messages.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2566">HDFS-2566</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (data-node)<br>
+     <b>Move BPOfferService to be a non-inner class</b><br>
+     <blockquote>Rounding out the cleanup of BPOfferService, it would be good to move it to its own file, so it&apos;s no longer an inner class. DataNode.java is really large and hard to navigate. BPOfferService itself is ~700 lines, so seems like a large enough unit to merit its own file.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2567">HDFS-2567</a>.
+     Major bug reported by qwertymaniac and fixed by qwertymaniac (name-node)<br>
+     <b>When 0 DNs are available, show a proper error when trying to browse DFS via web UI</b><br>
+     <blockquote>Trace:<br><br>{code}<br>HTTP ERROR 500<br><br>Problem accessing /nn_browsedfscontent.jsp. Reason:<br><br>    n must be positive<br>Caused by:<br><br>java.lang.IllegalArgumentException: n must be positive<br>	at java.util.Random.nextInt(Random.java:250)<br>	at org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)<br>	at org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)<br>	at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)<br>	at org....</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2568">HDFS-2568</a>.
+     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
+     <b>Use a set to manage child sockets in XceiverServer</b><br>
+     <blockquote>Found while reading up for HDFS-2454, currently we maintain childSockets in a DataXceiverServer as a Map&lt;Socket,Socket&gt;. This can very well be a Set&lt;Socket&gt; data structure -- since the goal is easy removals.</blockquote></li>
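The data-structure change can be sketched as below. Names are illustrative; a real DataXceiverServer would need the thread-safety guarantees of its surrounding code:

```java
// Small sketch: when entries are only added on accept and removed on close,
// a synchronized Set states the intent more directly than a Map<Socket,Socket>.
import java.net.Socket;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ChildSockets {
    private final Set<Socket> children =
            Collections.synchronizedSet(new HashSet<Socket>());

    public void onAccept(Socket s) { children.add(s); }
    public void onClose(Socket s)  { children.remove(s); }
    public int openCount()         { return children.size(); }
}
```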
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2570">HDFS-2570</a>.
+     Trivial improvement reported by eli and fixed by eli (documentation)<br>
+     <b>Add descriptions for dfs.*.https.address in hdfs-default.xml</b><br>
+     <blockquote>Let&apos;s add descriptions for dfs.*.https.address in hdfs-default.xml.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2572">HDFS-2572</a>.
+     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
+     <b>Unnecessary double-check in DN#getHostName</b><br>
+     <blockquote>We do a double config.get unnecessarily inside DN#getHostName(...). Can be removed by this patch.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2574">HDFS-2574</a>.
+     Trivial task reported by joecrobak and fixed by joecrobak (documentation)<br>
+     <b>remove references to deprecated properties in hdfs-site.xml template and hdfs-default.xml</b><br>
+     <blockquote>Some examples: hadoop-hdfs/src/main/packages/templates/conf/hdfs-site.xml contains an entry for dfs.name.dir rather than dfs.namenode.name.dir and hdfs-default.xml references dfs.name.dir twice in &lt;description&gt; tags rather than using dfs.namenode.name.dir.<br><br>List of deprecated properties is here: http://hadoop.apache.org/common/docs/r0.23.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2575">HDFS-2575</a>.
+     Minor bug reported by tlipcon and fixed by tlipcon (test)<br>
+     <b>DFSTestUtil may create empty files</b><br>
+     <blockquote>DFSTestUtil creates files with random sizes, but there is no minimum size. So, sometimes, it can make a file with length 0. This will cause tests that use this functionality to fail - for example, TestListCorruptFileBlocks assumes that each of the created files has at least one block. We should add a minSize parameter to prevent this.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2587">HDFS-2587</a>.
+     Major task reported by szetszwo and fixed by szetszwo (documentation)<br>
+     <b>Add WebHDFS apt doc</b><br>
+     <blockquote>This issue is to add a WebHDFS doc in apt format in additional to the forrest doc (HDFS-2552).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2588">HDFS-2588</a>.
+     Trivial bug reported by davevr and fixed by davevr (scripts)<br>
+     <b>hdfs jsp pages missing DOCTYPE [post-split branches]</b><br>
+     <blockquote>Some jsp pages in the UI are missing a DOCTYPE declaration. This causes the pages to render incorrectly on some browsers, such as IE9.  Please see parent bug HADOOP-7827 for details and patch.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2590">HDFS-2590</a>.
+     Major bug reported by szetszwo and fixed by szetszwo (documentation)<br>
+     <b>Some links in WebHDFS forrest doc do not work</b><br>
+     <blockquote>Some links are pointing to DistributedFileSystem javadoc but the javadoc of DistributedFileSystem is not generated by default.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2594">HDFS-2594</a>.
+     Critical bug reported by tucu00 and fixed by szetszwo <br>
+     <b>webhdfs HTTP API should implement getDelegationTokens() instead of getDelegationToken()</b><br>
+     <blockquote>The current API returns a single delegation token; that method of the FileSystem API is deprecated in favor of the one that returns a list of tokens. The HTTP API should implement the new, undeprecated signature getDelegationTokens().</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2596">HDFS-2596</a>.
+     Major bug reported by eli and fixed by eli (data-node, test)<br>
+     <b>TestDirectoryScanner doesn&apos;t test parallel scans</b><br>
+     <blockquote>The code from HDFS-854 below doesn&apos;t run the test with parallel scanning. They probably intended &quot;parallelism &lt; 3&quot;.<br><br>{code}<br>+  public void testDirectoryScanner() throws Exception {<br>+    // Run the test with and without parallel scanning<br>+    for (int parallelism = 1; parallelism &lt; 2; parallelism++) {<br>+      runTest(parallelism);<br>+    }<br>+  }<br>{code}</blockquote></li>
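The quoted loop bound means runTest() only ever runs with parallelism = 1, so the parallel path is never exercised. A minimal sketch of the intended behavior (names are stand-ins, not the actual TestDirectoryScanner code):

```java
public class ParallelScanSketch {

    // Stand-in for TestDirectoryScanner#runTest(int): we only count how
    // many parallelism settings the loop actually exercises.
    static int runScans() {
        int runs = 0;
        // An upper bound of 3 exercises both the serial (parallelism = 1)
        // and parallel (parallelism = 2) cases; the committed
        // "parallelism < 2" only ever ran the serial case.
        for (int parallelism = 1; parallelism < 3; parallelism++) {
            runs++;  // runTest(parallelism) in the real test
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(runScans()); // 2
    }
}
```

With the original `parallelism < 2` bound the method would return 1, which is exactly the missed-coverage bug the issue describes.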
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2604">HDFS-2604</a>.
+     Minor improvement reported by szetszwo and fixed by szetszwo (data-node, documentation, name-node)<br>
+     <b>Add a log message to show if WebHDFS is enabled</b><br>
+     <blockquote>WebHDFS can be enabled/disabled by the conf key {{dfs.webhdfs.enabled}}.  Let&apos;s add a log message to show if it is enabled.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2606">HDFS-2606</a>.
+     Critical bug reported by tucu00 and fixed by tucu00 (hdfs client)<br>
+     <b>webhdfs client filesystem impl must set the content-type header for create/append</b><br>
+     <blockquote>Currently the content-type header is not being set, and for append it is for some reason being set to the form-encoded content type, making Jersey parameter parsing fail.<br><br>For this reason, and to avoid any kind of proxy transcoding, the content-type should be set to binary:<br><br>{code}<br>  conn.setRequestProperty(&quot;Content-Type&quot;, &quot;application/octet-stream&quot;);<br>{code}<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2614">HDFS-2614</a>.
+     Major bug reported by bmahe and fixed by tucu00 (build)<br>
+     <b>hadoop dist tarball is missing hdfs headers</b><br>
+     <blockquote>It would be nice to provide the hdfs headers so one could easily write programs to be linked against that library and access HDFS.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2640">HDFS-2640</a>.
+     Major bug reported by tomwhite and fixed by tomwhite <br>
+     <b>Javadoc generation hangs</b><br>
+     <blockquote>Typing &apos;mvn javadoc:javadoc&apos; causes the build to hang.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2646">HDFS-2646</a>.
+     Major bug reported by umamaheswararao and fixed by tucu00 <br>
+     <b>Hadoop HttpFS introduced 4 findbug warnings.</b><br>
+     <blockquote>https://builds.apache.org/job/PreCommit-HDFS-Build/1665//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2649">HDFS-2649</a>.
+     Major bug reported by jlowe and fixed by jlowe (build)<br>
+     <b>eclipse:eclipse build fails for hadoop-hdfs-httpfs</b><br>
+     <blockquote>Building the eclipse:eclipse target fails in the hadoop-hdfs-httpfs project with this error:<br><br>[ERROR] Failed to execute goal org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on project hadoop-hdfs-httpfs: Request to merge when &apos;filtering&apos; is not identical. Original=resource src/main/resources: output=target/classes, include=[httpfs.properties], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], e...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2653">HDFS-2653</a>.
+     Major improvement reported by eli and fixed by eli (data-node)<br>
+     <b>DFSClient should cache whether addrs are non-local when short-circuiting is enabled</b><br>
+     <blockquote>Something Todd mentioned to me off-line: currently DFSClient doesn&apos;t cache the fact that non-local reads are non-local, so if short-circuiting is enabled, every time we create a block reader we&apos;ll go through the isLocalAddress code path. We should cache the fact that an addr is non-local as well.</blockquote></li>
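A minimal sketch of the caching idea (class and method names hypothetical; the real DFSClient code differs): addresses already found to be non-local are remembered, so the comparatively expensive local-address check runs at most once per address.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class NonLocalAddrCache {
    // Negative-result cache: addresses known to be non-local.
    private static final Set<String> nonLocalAddrs =
        ConcurrentHashMap.newKeySet();

    // Stand-in for the real per-interface isLocalAddress() lookup.
    private static boolean checkLocal(String addr) {
        return addr.startsWith("127.");
    }

    static boolean isLocalAddress(String addr) {
        if (nonLocalAddrs.contains(addr)) {
            return false;                 // cached negative result, no re-check
        }
        boolean local = checkLocal(addr);
        if (!local) {
            nonLocalAddrs.add(addr);      // remember the negative result
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println(isLocalAddress("10.1.2.3"));          // false
        System.out.println(nonLocalAddrs.contains("10.1.2.3"));  // true (cached)
    }
}
```

Only negative results are cached here, matching the issue text: local reads already short-circuit, so the repeated cost is in re-discovering that an address is not local.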
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2654">HDFS-2654</a>.
+     Major improvement reported by eli and fixed by eli (data-node)<br>
+     <b>Make BlockReaderLocal not extend RemoteBlockReader2</b><br>
+     <blockquote>The BlockReaderLocal code paths are easier to understand (especially true on branch-1, where BlockReaderLocal inherits code from BlockReader and FSInputChecker) if the local and remote block reader implementations are independent, and they&apos;re not really sharing much code anyway. If for some reason they start to share significant code, we can make the BlockReader interface an abstract class.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2657">HDFS-2657</a>.
+     Major bug reported by eli and fixed by tucu00 <br>
+     <b>TestHttpFSServer and TestServerWebApp are failing on trunk</b><br>
+     <blockquote>&gt;&gt;&gt; org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation<br>&gt;&gt;&gt; org.apache.hadoop.lib.servlet.TestServerWebApp.lifecycle</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2658">HDFS-2658</a>.
+     Major bug reported by eli and fixed by tucu00 <br>
+     <b>HttpFS introduced 70 javadoc warnings</b><br>
+     <blockquote>{noformat}<br>hadoop1 (trunk)$ grep warning javadoc.txt |grep -c httpfs<br>70<br>{noformat}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2675">HDFS-2675</a>.
+     Trivial improvement reported by tlipcon and fixed by tlipcon (name-node)<br>
+     <b>Reduce verbosity when double-closing edit logs</b><br>
+     <blockquote>Currently the edit logs log at WARN level when they&apos;re double-closed. But this happens in the normal flow of things, so we may as well reduce it to DEBUG to reduce log spam in unit tests, etc.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2705">HDFS-2705</a>.
+     Major bug reported by tucu00 and fixed by tucu00 <br>
+     <b>HttpFS server should check that upload requests have correct content-type</b><br>
+     <blockquote>The append/create requests should require &apos;application/octet-stream&apos; as the content-type when uploading data. This prevents the use of the form-encoded content-type (the default in some HTTP libraries) or of text-based content-types.<br><br>If the form-encoded content-type is used, Jersey tries to process the upload stream as parameters.<br>If a text-based content-type is used, HTTP proxies/gateways could attempt some transcoding on the stream, thus corrupting the data.</blockquote></li>
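As a sketch only (class and method names hypothetical, not the actual HttpFS server code), the guard described above amounts to rejecting uploads whose declared Content-Type is not the raw-bytes media type:

```java
public class UploadContentTypeCheck {
    static final String OCTET_STREAM = "application/octet-stream";

    // Returns true only when the client declared the body as raw bytes.
    static boolean isAcceptableUpload(String contentType) {
        // Media type names are case-insensitive and may carry parameters
        // (e.g. "; charset=..."), hence lower-casing and startsWith
        // rather than a strict equals.
        return contentType != null
            && contentType.toLowerCase().startsWith(OCTET_STREAM);
    }

    public static void main(String[] args) {
        System.out.println(isAcceptableUpload("application/octet-stream"));           // true
        System.out.println(isAcceptableUpload("application/x-www-form-urlencoded"));  // false
        System.out.println(isAcceptableUpload("text/plain"));                         // false
    }
}
```

A request failing this check would be answered with an HTTP 4xx error before any bytes are consumed, so a misconfigured client cannot silently corrupt data.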
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2706">HDFS-2706</a>.
+     Major bug reported by szetszwo and fixed by szetszwo (name-node)<br>
+     <b>Use configuration for blockInvalidateLimit if it is set</b><br>
+     <blockquote>HDFS-2191 accidentally removed the following code.<br>{code}<br>- this.blockInvalidateLimit = conf.getInt(<br>-        DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY, this.blockInvalidateLimit);<br>{code}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2707">HDFS-2707</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>HttpFS should read the hadoop-auth secret from a file instead of inline from the configuration</b><br>
+     <blockquote>Similar to HADOOP-7621, the secret should be in a file other than the configuration file.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2710">HDFS-2710</a>.
+     Critical bug reported by sseth and fixed by  <br>
+     <b>HDFS part of MAPREDUCE-3529, HADOOP-7933</b><br>
+     <blockquote>viewfs implementation of getDelegationTokens(String, Credentials)</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2722">HDFS-2722</a>.
+     Major bug reported by qwertymaniac and fixed by qwertymaniac (hdfs client)<br>
+     <b>HttpFs shouldn&apos;t be using an int for block size</b><br>
+     <blockquote>{{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java: blockSize = fs.getConf().getInt(&quot;dfs.block.size&quot;, 67108864);}}<br><br>It should instead use dfs.blocksize, and the value should be a long.<br><br>I&apos;ll post a patch for this after HDFS-1314 is resolved -- which changes the internal behavior a bit (should be getLongBytes, and not just getLong, to gain formatting advantages).</blockquote></li>
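To illustrate why the int accessor is a bug independent of the key name: any block size of 2 GB or more no longer fits in a Java int, and the narrowing conversion wraps silently. A small arithmetic sketch:

```java
public class BlockSizeOverflow {
    public static void main(String[] args) {
        // A 4 GB block size, as a long: fine.
        long fourGb = 4L * 1024 * 1024 * 1024;  // 4294967296 bytes

        // The same value forced through an int, as getInt() would
        // effectively do: the high bits are discarded and the value
        // silently wraps to 0.
        int truncated = (int) fourGb;

        System.out.println(fourGb);     // 4294967296
        System.out.println(truncated);  // 0
    }
}
```

This is why the issue asks for the long-returning Configuration accessors: the overflow produces no error, just a nonsensical block size.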
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2726">HDFS-2726</a>.
+     Major improvement reported by bien and fixed by qwertymaniac <br>
+     <b>&quot;Exception in createBlockOutputStream&quot; shouldn&apos;t delete exception stack trace</b><br>
+     <blockquote>I&apos;m occasionally (1/5000 times) getting this error after upgrading everything to hadoop-0.18:<br><br>08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream<br>08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block blk_624229997631234952_8205908<br><br>DFSClient contains the logging code:<br><br>        LOG.info(&quot;Exception in createBlockOutputStream &quot; + ie);<br><br>This would be better written with ie as the second argument to LOG.info, so that the stac...</blockquote></li>
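The fix the report suggests is to pass the exception as a separate argument rather than concatenating it into the message. A self-contained sketch of the difference, using java.util.logging (DFSClient itself uses commons-logging, whose Log.info(Object, Throwable) overload behaves the same way):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ExceptionLogging {
    private static final Logger LOG =
        Logger.getLogger(ExceptionLogging.class.getName());

    // Renders a throwable's full stack trace as a string.
    static String stackTraceOf(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    public static void main(String[] args) {
        IOException ie = new IOException("Could not read from stream");

        // Lossy: string concatenation keeps only ie.toString(), i.e.
        // "java.io.IOException: Could not read from stream".
        LOG.info("Exception in createBlockOutputStream " + ie);

        // Lossless: the throwable travels with the log record, so the
        // framework can emit its full stack trace.
        LOG.log(Level.INFO, "Exception in createBlockOutputStream", ie);

        // The stack trace names the frame that created the exception.
        System.out.println(stackTraceOf(ie).contains("ExceptionLogging.main"));
    }
}
```

The one-character-looking change is the whole fix: with the stack trace preserved, an occasional createBlockOutputStream failure becomes diagnosable.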
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2729">HDFS-2729</a>.
+     Minor improvement reported by qwertymaniac and fixed by qwertymaniac (name-node)<br>
+     <b>Update BlockManager&apos;s comments regarding the invalid block set</b><br>
+     <blockquote>It looks like, after HDFS-82 was addressed at some point, the comments and logs still imply the presence of two sets when there is really just one.<br><br>This patch changes the logs and comments to be more accurate about that.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2751">HDFS-2751</a>.
+     Major bug reported by tlipcon and fixed by tlipcon (data-node)<br>
+     <b>Datanode drops OS cache behind reads even for short reads</b><br>
+     <blockquote>HDFS-2465 has some code which attempts to disable the &quot;drop cache behind reads&quot; functionality when the reads are &lt;256KB (eg HBase random access). But this check was missing in the {{close()}} function, so it always drops cache behind reads regardless of the size of the read. This hurts HBase random read performance when this patch is enabled.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2784">HDFS-2784</a>.
+     Major sub-task reported by daryn and fixed by kihwal (hdfs client, name-node, security)<br>
+     <b>Update hftp and hdfs for host-based token support</b><br>
+     <blockquote>Need to port 205 token changes and update any new related code dealing with tokens in these filesystems.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2785">HDFS-2785</a>.
+     Major sub-task reported by daryn and fixed by revans2 (name-node, security)<br>
+     <b>Update webhdfs and httpfs for host-based token support</b><br>
+     <blockquote>Need to port 205 tokens into these filesystems.  Will mainly involve ensuring code duplicated from hftp is updated accordingly.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2788">HDFS-2788</a>.
+     Major improvement reported by eli and fixed by eli (data-node)<br>
+     <b>HdfsServerConstants#DN_KEEPALIVE_TIMEOUT is dead code</b><br>
+     <blockquote>HDFS-941 introduced HdfsServerConstants#DN_KEEPALIVE_TIMEOUT but it&apos;s never used. Perhaps it was renamed to DFSConfigKeys#DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_DEFAULT while the patch was being written and the old constant wasn&apos;t deleted.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2790">HDFS-2790</a>.
+     Minor bug reported by arpitgupta and fixed by arpitgupta <br>
+     <b>FSNamesystem.setTimes throws exception with wrong configuration name in the message</b><br>
+     <blockquote>The API throws this message when HDFS is not configured for accessTime:<br><br>&quot;Access time for hdfs is not configured.  Please set dfs.support.accessTime configuration parameter.&quot;<br><br><br>The property name should be dfs.access.time.precision.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2791">HDFS-2791</a>.
+     Major bug reported by tlipcon and fixed by tlipcon (data-node, name-node)<br>
+     <b>If block report races with closing of file, replica is incorrectly marked corrupt</b><br>
+     <blockquote>The following sequence of events results in a replica mistakenly marked corrupt:<br>1. Pipeline is open with 2 replicas<br>2. DN1 generates a block report but is slow in sending to the NN (eg some flaky network). It gets &quot;stuck&quot; right before the block report RPC.<br>3. Client closes the file.<br>4. DN2 is fast and sends blockReceived to the NN. NN marks the block as COMPLETE<br>5. DN1&apos;s block report proceeds, and includes the block in an RBW state.<br>6. (x) NN incorrectly marks the replica as corrupt, since i...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2803">HDFS-2803</a>.
+     Minor improvement reported by jxiang and fixed by jxiang (name-node)<br>
+     <b>Adding logging to LeaseRenewer for better lease expiration triage.</b><br>
+     <blockquote>It will be helpful to add some logging to LeaseRenewer when the daemon is terminated (Info level)<br>and when the lease is renewed (Debug level).  Without such logging, it is hard to know<br>whether a DFS client fails to renew the lease because it hangs, or because the lease renewer daemon is somehow gone.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2810">HDFS-2810</a>.
+     Critical bug reported by tlipcon and fixed by tlipcon (hdfs client)<br>
+     <b>Leases not properly getting renewed by clients</b><br>
+     <blockquote>We&apos;ve been testing HBase on clusters running trunk and seen an issue where they seem to lose their HDFS leases after a couple of hours of runtime. We don&apos;t quite have enough data to understand what&apos;s happening, but the NN is expiring them, claiming the hard lease period has elapsed. The clients report no error until their output stream gets killed underneath them.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2814">HDFS-2814</a>.
+     Minor improvement reported by hitesh and fixed by hitesh <br>
+     <b>NamenodeMXBean does not account for svn revision in the version information</b><br>
+     <blockquote>Unlike the jobtracker, where both the UI and jmx information report the version as &quot;x.y.z, r&lt;svn revision&gt;&quot;, in the case of the namenode the UI displays the x.y.z and svn revision info but the jmx output only contains the x.y.z version.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2816">HDFS-2816</a>.
+     Trivial bug reported by hitesh and fixed by hitesh <br>
+     <b>Fix missing license header in hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support/findbugsExcludeFile.xml</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2817">HDFS-2817</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (test)<br>
+     <b>Combine the two TestSafeMode test suites</b><br>
+     <blockquote>We currently have two tests by the same name. We should combine them. Also adding a new test for safemode extension, which wasn&apos;t previously covered.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2818">HDFS-2818</a>.
+     Trivial bug reported by tlipcon and fixed by  (name-node)<br>
+     <b>dfshealth.jsp missing space between role and node name</b><br>
+     <blockquote>There seems to be a missing space in the titles of our webpages. EG: &lt;title&gt;Hadoop NameNodestyx01.sf.cloudera.com:8021&lt;/title&gt;. It seems like the JSP compiler is doing something to the space which is in the .jsp. Probably a simple fix if you know something about JSP :)</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2822">HDFS-2822</a>.
+     Major bug reported by tlipcon and fixed by tlipcon (ha, name-node)<br>
+     <b>processMisReplicatedBlock incorrectly identifies under-construction blocks as under-replicated</b><br>
+     <blockquote>When the NN processes mis-replicated blocks while exiting safemode, it considers under-construction blocks as under-replicated, inserting them into the neededReplicationsQueue. This makes them show up as corrupt in the metrics and UI momentarily, until they&apos;re pulled off the queue. At that point, it realizes that they aren&apos;t in fact under-replicated, correctly. This affects both the HA branch and trunk/23, best I can tell.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2825">HDFS-2825</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (name-node)<br>
+     <b>Add test hook to turn off the writer preferring its local DN</b><br>
+     <blockquote>Currently, the default block placement policy always places the first replica in the pipeline on the local node if there is a valid DN running there. In some network designs, within-rack bandwidth is never constrained so this doesn&apos;t give much of an advantage. It would also be really useful to disable this for MiniDFSCluster tests, since currently if you start a multi-DN cluster and write with replication level 1, all of the replicas go to the same DN.<br>_[per discussion below, this was changed...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2826">HDFS-2826</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (name-node, test)<br>
+     <b>Test case for HDFS-1476 (safemode can initialize repl queues before exiting)</b><br>
+     <blockquote>HDFS-1476 introduced a feature whereby SafeMode can trigger the initialization of replication queues before the safemode exit threshold has been reached. But, it didn&apos;t include a test for this new behavior. This JIRA is to contribute such a test</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2827">HDFS-2827</a>.
+     Major bug reported by umamaheswararao and fixed by umamaheswararao (name-node)<br>
+     <b>Cannot save namespace after renaming a directory above a file with an open lease</b><br>
+     <blockquote>When I execute the following operations and wait for the checkpoint to complete:<br><br>fs.mkdirs(new Path(&quot;/test1&quot;));<br>FSDataOutputStream create = fs.create(new Path(&quot;/test/abc.txt&quot;)); // don&apos;t close<br>fs.rename(new Path(&quot;/test/&quot;), new Path(&quot;/test1/&quot;));<br><br>checkpointing fails with the following exception:<br><br>2012-01-23 15:03:14,204 ERROR namenode.FSImage (FSImage.java:run(795)) - Unable to save image for E:\HDFS-1623\hadoop-hdfs-project\hadoop-hdfs\build\test\data\dfs\name3<br>java.io.IOException: saveLease...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2835">HDFS-2835</a>.
+     Major bug reported by revans2 and fixed by revans2 (tools)<br>
+     <b>Fix org.apache.hadoop.hdfs.tools.GetConf$Command Findbug issue</b><br>
+     <blockquote>https://builds.apache.org/job/PreCommit-HDFS-Build/1804//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html shows a findbugs warning.  It is unrelated to the patch being tested, and has shown up on a few other JIRAS as well.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2836">HDFS-2836</a>.
+     Major bug reported by revans2 and fixed by revans2 <br>
+     <b>HttpFSServer still has 2 javadoc warnings in trunk</b><br>
+     <blockquote>{noformat}<br>[WARNING] hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java:241: warning - @param argument &quot;override,&quot; is not a parameter name.<br>[WARNING] hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java:450: warning - @param argument &quot;override,&quot; is not a parameter name.<br>{noformat}<br><br>These are causing other patches to get a -1 in automated testing.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2837">HDFS-2837</a>.
+     Major bug reported by revans2 and fixed by revans2 <br>
+     <b>mvn javadoc:javadoc not seeing LimitedPrivate class </b><br>
+     <blockquote>mvn javadoc:javadoc not seeing LimitedPrivate class <br>{noformat}<br>[WARNING] org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): warning: Cannot find annotation method &apos;value()&apos; in type &apos;org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate&apos;: class file for org.apache.hadoop.classification.InterfaceAudience not found<br>[WARNING] org/apache/hadoop/fs/FileSystem.class(org/apache/hadoop/fs:FileSystem.class): warning: Cannot find annotation method &apos;value()&apos; in typ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2840">HDFS-2840</a>.
+     Major bug reported by eli and fixed by tucu00 (test)<br>
+     <b>TestHostnameFilter should work with localhost or localhost.localdomain </b><br>
+     <blockquote>TestHostnameFilter may currently fail with the following:<br><br>{noformat}<br>Error Message<br><br>null expected:&lt;localhost[.localdomain]&gt; but was:&lt;localhost[]&gt;<br>Stacktrace<br><br>junit.framework.ComparisonFailure: null expected:&lt;localhost[.localdomain]&gt; but was:&lt;localhost[]&gt;<br>	at junit.framework.Assert.assertEquals(Assert.java:81)<br>	at junit.framework.Assert.assertEquals(Assert.java:87)<br>	at org.apache.hadoop.lib.servlet.TestHostnameFilter$1.doFilter(TestHostnameFilter.java:50)<br>	at org.apache.hadoop.lib.servlet.Hos...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2864">HDFS-2864</a>.
+     Major sub-task reported by szetszwo and fixed by szetszwo (data-node)<br>
+     <b>Remove redundant methods and a constant from FSDataset</b><br>
+     <blockquote>- METADATA_VERSION is declared in both FSDataset and BlockMetadataHeader.<br><br>- In FSDataset, the methods findBlockFile(..), getBlockFile(..) and getFile(..) are very similar. </blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2868">HDFS-2868</a>.
+     Minor improvement reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
+     <b>Add number of active transfer threads to the DataNode status</b><br>
+     <blockquote>Presently, we do not provide any stats from the DN that specifically indicate the total number of active transfer threads (xceivers). Having such a metric can be very helpful, beyond the plain num-ops(type) form of metrics, which already exist.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2879">HDFS-2879</a>.
+     Major sub-task reported by szetszwo and fixed by szetszwo (data-node)<br>
+     <b>Change FSDataset to package private</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2889">HDFS-2889</a>.
+     Major bug reported by gchanan and fixed by gchanan (hdfs client)<br>
+     <b>getNumCurrentReplicas is package private but should be public on 0.23 (see HDFS-2408)</b><br>
+     <blockquote>See https://issues.apache.org/jira/browse/HDFS-2408<br>HDFS-2408 was not committed to 0.23 (or trunk it looks like).<br><br>This is breaking HBase unit tests with &quot;-Dhadoop.profile=23&quot;</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2893">HDFS-2893</a>.
+     Minor bug reported by eli2 and fixed by eli2 <br>
+     <b>The start/stop scripts don&apos;t start/stop the 2NN when using the default configuration</b><br>
+     <blockquote>HDFS-1703 changed the behavior of the start/stop scripts so that the masters file is no longer used to indicate which hosts to start the 2NN on. The 2NN is now started, when using start-dfs.sh, only on hosts where dfs.namenode.secondary.http-address is configured with a non-wildcard IP. This means you cannot start a 2NN using an http-address specified with a wildcard IP. We should allow a 2NN to be started with the default config, i.e. start-dfs.sh should start a NN, 2NN and DN. The packaging a...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1744">MAPREDUCE-1744</a>.
+     Major bug reported by dking and fixed by dking <br>
+     <b>DistributedCache creates its own FileSytem instance when adding a file/archive to the path</b><br>
+     <blockquote>According to the contract of {{UserGroupInformation.doAs()}} the only required operations within the {{doAs()}} block are the<br>creation of a {{JobClient}} or getting a {{FileSystem}} .<br><br>The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a {{FileSystem}} instance outside of the {{doAs()}} block,<br>this {{FileSystem}} instance is not in the scope of the proxy user but of the superuser and permissions may make the method<br>fail.<br><br>One option is to overload the methods above to rece...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2450">MAPREDUCE-2450</a>.
+     Major bug reported by matei and fixed by rajesh.balamohan <br>
+     <b>Calls from running tasks to TaskTracker methods sometimes fail and incur a 60s timeout</b><br>
+     <blockquote>I&apos;m seeing some map tasks in my jobs take 1 minute to commit after they finish the map computation. On the map side, the output looks like this:<br><br>&lt;code&gt;<br>2009-03-02 21:30:54,384 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=MAP, sessionId= - already initialized<br>2009-03-02 21:30:54,437 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 800<br>2009-03-02 21:30:54,437 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 300<br>2009-03-02 21:30:55,493 I...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3045">MAPREDUCE-3045</a>.
+     Minor bug reported by rramya and fixed by jeagles (jobhistoryserver, mrv2)<br>
+     <b>Elapsed time filter on jobhistory server displays incorrect table entries</b><br>
+     <blockquote>The elapsed time filter on the jobhistory server filters incorrect information. <br>For example, on a cluster where the elapsed time of all the tasks is either 7 or 8 sec, the filter displays non-null table entries for 1 sec or 3 sec.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3121">MAPREDUCE-3121</a>.
+     Blocker bug reported by vinodkv and fixed by ravidotg (mrv2, nodemanager)<br>
+     <b>DFIP aka &apos;NodeManager should handle Disk-Failures In Place&apos;</b><br>
+     <blockquote>This is akin to MAPREDUCE-2413 but for YARN&apos;s NodeManager. We want to minimize the impact of transient/permanent disk failures on containers. With larger number of disks per node, the ability to continue to run containers on other disks is crucial.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3147">MAPREDUCE-3147</a>.
+     Major improvement reported by raviprak and fixed by raviprak (mrv2)<br>
+     <b>Handle leaf queues with the same name properly</b><br>
+     <blockquote>If there are two leaf queues with the same name, there is ambiguity while submitting jobs, displaying queue info. When such ambiguity exists, the system should ask for clarification / show disambiguated information.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3169">MAPREDUCE-3169</a>.
+     Major improvement reported by tlipcon and fixed by ahmed.radwan (mrv1, mrv2, test)<br>
+     <b>Create a new MiniMRCluster equivalent which only provides client APIs cross MR1 and MR2</b><br>
+     <blockquote>Many dependent projects like HBase, Hive, Pig, etc, depend on MiniMRCluster for writing tests. Many users do as well. MiniMRCluster, however, exposes MR implementation details like the existence of TaskTrackers, JobTrackers, etc, since it was used by MR1 for testing the server implementations as well.<br><br>This JIRA is to create a new interface which could be implemented either by MR1 or MR2 that exposes only the client-side portions of the MR framework. Ideally it would be &quot;recompile-compatible&quot;...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3194">MAPREDUCE-3194</a>.
+     Major bug reported by sseth and fixed by jlowe (mrv2)<br>
+     <b>&quot;mapred mradmin&quot; command is broken in mrv2</b><br>
+     <blockquote>$mapred  mradmin  <br>Exception in thread &quot;main&quot; java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/tools/MRAdmin<br>Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.tools.MRAdmin<br>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)<br>        at java.security.AccessController.doPrivileged(Native Method)<br>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)<br>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)<br>        at sun.misc.Launc...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3238">MAPREDUCE-3238</a>.
+     Trivial improvement reported by tlipcon and fixed by tlipcon (mrv2)<br>
+     <b>Small cleanup in SchedulerApp</b><br>
+     <blockquote>While reading this code, I did a little bit of cleanup:<br>- added some javadoc<br>- rather than using a Map&lt;Priority, Integer&gt; for keeping counts, switched to Guava&apos;s HashMultiset, which makes for a simpler API.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3243">MAPREDUCE-3243</a>.
+     Major bug reported by rramya and fixed by jeagles (contrib/streaming, mrv2)<br>
+     <b>Invalid tracking URL for streaming jobs</b><br>
+     <blockquote>The tracking URL for streaming jobs currently display &quot;http://N/A&quot;<br><br>{noformat}<br>INFO streaming.StreamJob: To kill this job, run:<br>INFO streaming.StreamJob: hadoop job -kill &lt;jobID&gt;<br>INFO streaming.StreamJob: Tracking URL: http://N/A<br>INFO mapreduce.Job: Running job: &lt;jobID&gt;<br>INFO mapreduce.Job:  map 0% reduce 0%<br>INFO mapred.ClientServiceDelegate: Tracking Url of JOB is &lt;host:port&gt;<br><br>{noformat}<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3251">MAPREDUCE-3251</a>.
+     Critical task reported by anupamseth and fixed by anupamseth (mrv2)<br>
+     <b>Network ACLs can prevent some clients to talk to MR ApplicationMaster</b><br>
+     <blockquote>In 0.20.xxx, the JobClient while polling goes to JT to get the job status. With YARN, AM can be launched on any port and the client will have to have ACL open to that port to talk to AM and get the job status. When the client is within the same grid network access to AM is not a problem. But some applications may have one installation per set of clusters and may launch jobs even across such sets (on job trackers in another set of clusters). For that to work only the JT port needs to be open c...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3265">MAPREDUCE-3265</a>.
+     Blocker improvement reported by tlipcon and fixed by acmurthy (mrv2)<br>
+     <b>Reduce log level on MR2 IPC construction, etc</b><br>
+     <blockquote>Currently MR&apos;s IPC logging is very verbose. For example, I see a lot of:<br><br>11/10/25 12:14:06 INFO ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC<br>11/10/25 12:14:06 INFO mapred.ResourceMgrDelegate: Connecting to ResourceManager at c0309.hal.cloudera.com/172.29.81.91:40012<br>11/10/25 12:14:06 INFO ipc.HadoopYarnRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ClientRMProtocol<br>11/10/25 12:14:07 INFO mapred.ResourceMgrDelegate...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3291">MAPREDUCE-3291</a>.
+     Blocker bug reported by rramya and fixed by revans2 (mrv2)<br>
+     <b>App fail to launch due to delegation token not found in cache</b><br>
+     <blockquote>In secure mode, saw an app failure due to &quot;org.apache.hadoop.security.token.SecretManager$InvalidToken: token (HDFS_DELEGATION_TOKEN token &lt;id&gt; for &lt;user&gt;) can&apos;t be found in cache&quot; Exception in the next comment.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3324">MAPREDUCE-3324</a>.
+     Critical bug reported by jeagles and fixed by jeagles (jobhistoryserver, mrv2, nodemanager)<br>
+     <b>Not All HttpServer tools links (stacks,logs,config,metrics) are accessible through all UI servers</b><br>
+     <blockquote>Nodemanager has no tools listed under tools UI.<br>Jobhistory server has no logs tool listed under tools UI.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3326">MAPREDUCE-3326</a>.
+     Critical bug reported by tgraves and fixed by jlowe (mrv2)<br>
+     <b>RM web UI scheduler link not as useful as should be</b><br>
+     <blockquote>The resource manager web ui page for scheduler doesn&apos;t have all the information about the configuration like the jobtracker page used to have.  The things it seems to show you are the current queues - each queues used, set, and max percent and then what apps are running in that queue.  <br><br>It doesn&apos;t list any of yarn.scheduler.capacity.maximum-applications, yarn.scheduler.capacity.maximum-am-resource-percent, yarn.scheduler.capacity.&lt;queue-path&gt;.user-limit-factor, yarn.scheduler.capacity.&lt;queue...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3327">MAPREDUCE-3327</a>.
+     Critical bug reported by tgraves and fixed by anupamseth (mrv2)<br>
+     <b>RM web ui scheduler link doesn&apos;t show correct max value for queues</b><br>
+     <blockquote>Configure a cluster to use the capacity scheduler and then specifying a maximum-capacity &lt; 100% for a queue.  If you go to the RM Web UI and hover over the queue, it always shows the max at 100%.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3328">MAPREDUCE-3328</a>.
+     Critical bug reported by tgraves and fixed by raviprak (mrv2)<br>
+     <b>mapred queue -list output inconsistent and missing child queues</b><br>
+     <blockquote>When running mapred queue -list on a 0.23.0 cluster with the capacity scheduler configured with child queues, two problems appear.  In my case I have queues default, test1, and test2; test1 has subqueues a1 and a2, and test2 has subqueues a3 and a4.<br><br>- the child queues do not show up<br>- The output of maximum capacity doesn&apos;t match the format of the current capacity and capacity.  The latter two use float while the maximum is specified as int:<br><br>Queue Name : default <br>Queue State : running <br>Scheduling Info : queueName: ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3329">MAPREDUCE-3329</a>.
+     Blocker bug reported by tgraves and fixed by acmurthy (mrv2)<br>
+     <b>capacity scheduler maximum-capacity allowed to be less than capacity</b><br>
+     <blockquote>When configuring the capacity scheduler capacity and maximum-capacity, it allows the maximum-capacity to be less than the capacity.  I did not test to see what the true limit is; I assume maximum capacity.<br><br>output from mapred queue -list where capacity = 10%, max capacity = 5%.<br><br>Queue Name : test2 <br>Queue State : running <br>Scheduling Info : queueName: &quot;test2&quot;, capacity: 0.1, maximumCapacity: 5.0, currentCapacity: 0.0, state: Q_RUNNING,  <br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3331">MAPREDUCE-3331</a>.
+     Minor improvement reported by anupamseth and fixed by anupamseth (mrv2)<br>
+     <b>Improvement to single node cluster setup documentation for 0.23</b><br>
+     <blockquote>This JIRA is to track some minor corrections and suggestions for improvement for the documentation for the setup of a single node cluster using 0.23 currently available at http://people.apache.org/~acmurthy/hadoop-0.23/hadoop-yarn/hadoop-yarn-site/SingleCluster.html<br><br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3336">MAPREDUCE-3336</a>.
+     Critical bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>com.google.inject.internal.Preconditions not public api - shouldn&apos;t be using it</b><br>
+     <blockquote>com.google.inject.internal.Preconditions does not exist in guice 3.0, and in guice 2.0 it was an internal api that shouldn&apos;t have been used.   We should use com.google.common.base.Preconditions instead.<br><br>This is currently being used in hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java.<br><br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3341">MAPREDUCE-3341</a>.
+     Major improvement reported by anupamseth and fixed by anupamseth (mrv2)<br>
+     <b>Enhance logging of initialized queue limit values</b><br>
+     <blockquote>Currently the RM log shows only a partial set of the limits that are configured when a queue is initialized / reinitialized.<br><br>For example, this is what is currently shown in the RM log for an initialized queue:<br># &lt;datestamp&gt; INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing<br>default, capacity=0.25, asboluteCapacity=0.25, maxCapacity=25.0, asboluteMaxCapacity=0.25, userLimit=100,<br>userLimitFactor=20.0, maxApplications=2500, maxApplicationsPerUser=50000...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3344">MAPREDUCE-3344</a>.
+     Major bug reported by brocknoland and fixed by brocknoland <br>
+     <b>o.a.h.mapreduce.Reducer since 0.21 blindly casts to ReduceContext.ValueIterator</b><br>
+     <blockquote>0.21 mapreduce.Reducer introduced a blind cast to ReduceContext.ValueIterator. There should be an instanceof check around this block to ensure we don&apos;t throw a ClassCastException:<br>{code}<br>       // If a back up store is used, reset it<br>      ((ReduceContext.ValueIterator)<br>          (context.getValues().iterator())).resetBackupStore();<br>{code}</blockquote></li>
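The guard suggested in MAPREDUCE-3344 can be sketched in plain Java. This is a minimal, self-contained illustration: the ValueIterator interface below is a stand-in for Hadoop's ReduceContext.ValueIterator, not the real class.

```java
// Sketch of an instanceof guard before a downcast, so a plain Iterator
// never triggers a ClassCastException. ValueIterator here is a stand-in
// for ReduceContext.ValueIterator (assumption, not Hadoop's actual type).
import java.util.Arrays;
import java.util.Iterator;

public class GuardedCast {
    // Hypothetical iterator type that supports resetting a backup store.
    interface ValueIterator<T> extends Iterator<T> {
        void resetBackupStore();
    }

    static boolean maybeReset(Iterator<?> it) {
        if (it instanceof ValueIterator) {         // guard before casting
            ((ValueIterator<?>) it).resetBackupStore();
            return true;
        }
        return false;                              // plain iterator: skip safely
    }

    public static void main(String[] args) {
        Iterator<Integer> plain = Arrays.asList(1, 2, 3).iterator();
        System.out.println(maybeReset(plain));     // false, and no exception
    }
}
```

With the guard in place, callers that supply an ordinary iterator simply skip the reset instead of crashing.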
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3346">MAPREDUCE-3346</a>.
+     Blocker bug reported by karams and fixed by amar_kamat (tools/rumen)<br>
+     <b>Rumen LoggedTaskAttempt  getHostName call returns hostname as null</b><br>
+     <blockquote>After MAPREDUCE-3035 and MAPREDUCE-3317, MRV2 job history now contains hostName and rackName.<br>When the Rumen trace builder is run on the job history, the generated trace contains the hostname in the form<br>hostName : /rackname/hostname<br><br>But getHostName for LoggedTaskAttempt returns the hostname as null.<br>It seems that TraceBuilder is setting hostName properly but JobTraceReader is not able to read it.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3354">MAPREDUCE-3354</a>.
+     Blocker bug reported by vinodkv and fixed by jeagles (jobhistoryserver, mrv2)<br>
+     <b>JobHistoryServer should be started by bin/mapred and not by bin/yarn</b><br>
+     <blockquote>JobHistoryServer belongs to mapreduce land.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3366">MAPREDUCE-3366</a>.
+     Major bug reported by eyang and fixed by eyang (mrv2)<br>
+     <b>Mapreduce component should use consistent directory structure layout as HDFS/common</b><br>
+     <blockquote>Directory structure for MRv2 layout looks like:<br><br>{noformat}<br>hadoop-mapreduce-0.23.0-SNAPSHOT/bin<br>                                /conf<br>                                /lib<br>                                /modules<br>{noformat}<br><br>The directory structure layout should be updated to reflect changes implemented in HADOOP-6255.<br><br>{noformat}<br>hadoop-mapreduce-0.23.0-SNAPSHOT/bin<br>                                /etc/hadoop<br>                                /lib<br>                                /libexec<br>     ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3369">MAPREDUCE-3369</a>.
+     Major improvement reported by ahmed.radwan and fixed by ahmed.radwan (mrv1, mrv2, test)<br>
+     <b>Migrate MR1 tests to run on MR2 using the new interfaces introduced in MAPREDUCE-3169</b><br>
+     <blockquote>This ticket tracks the migration of MR1 tests (currently residing in &quot;hadoop-mapreduce-project/src/test/&quot;) to run on MR2. The migration is using the new interfaces introduced in MAPREDUCE-3169.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3370">MAPREDUCE-3370</a>.
+     Major bug reported by ahmed.radwan and fixed by ahmed.radwan (mrv2, test)<br>
+     <b>MiniMRYarnCluster uses a hard coded path location for the MapReduce application jar</b><br>
+     <blockquote>MiniMRYarnCluster uses a hard coded relative path location for the MapReduce application jar. It is better to have this location as a system property so tests can pick the application jar regardless of their working directory.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3371">MAPREDUCE-3371</a>.
+     Minor improvement reported by raviprak and fixed by raviprak (documentation, mrv2)<br>
+     <b>Review and improve the yarn-api javadocs.</b><br>
+     <blockquote>Review and improve the yarn-api javadocs.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3372">MAPREDUCE-3372</a>.
+     Major bug reported by bmahe and fixed by bmahe <br>
+     <b>HADOOP_PREFIX cannot be overridden</b><br>
+     <blockquote>hadoop-config.sh forces HADOOP_PREFIX to a specific value:<br>export HADOOP_PREFIX=`dirname &quot;$this&quot;`/..<br><br>It would be nice to make this overridable.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3373">MAPREDUCE-3373</a>.
+     Major bug reported by bmahe and fixed by bmahe <br>
+     <b>Hadoop scripts unconditionally source &quot;$bin&quot;/../libexec/hadoop-config.sh.</b><br>
+     <blockquote>It would be nice to be able to specify some other location for hadoop-config.sh</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3376">MAPREDUCE-3376</a>.
+     Major bug reported by revans2 and fixed by subrotosanyal (mrv1, mrv2)<br>
+     <b>Old mapred API combiner uses NULL reporter</b><br>
+     <blockquote>The OldCombinerRunner class inside Task.java uses a NULL Reporter.  If the combiner code runs for an extended period of time, even with reporting progress as it should, the map task can timeout and be killed.  It appears that the NewCombinerRunner class uses a valid reporter and as such is not impacted by this bug.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3380">MAPREDUCE-3380</a>.
+     Blocker sub-task reported by tucu00 and fixed by mahadev (mr-am, mrv2)<br>
+     <b>Token infrastructure for running clients which are not kerberos authenticated</b><br>
+     <blockquote>The JobClient.getDelegationToken() method is returning NULL, which makes Oozie fail when trying to get the delegation token to use it for starting a job.<br><br>What seems to be happening is that Jobclient.getDelegationToken() calls Cluster.getDelegationToken() that calls YarnRunner.getDelegationToken() that calls ResourceMgrDelegate.getDelegationToken(). And the last one is not implemented. (Thanks Ahmed for tracing this in MR2 code)<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3389">MAPREDUCE-3389</a>.
+     Critical bug reported by tucu00 and fixed by tucu00 (mrv2)<br>
+     <b>MRApps loads the &apos;mrapp-generated-classpath&apos; file with classpath from the build machine</b><br>
+     <blockquote>The &apos;mrapp-generated-classpath&apos; file contains the classpath from where Hadoop was built. This classpath is not useful under any circumstances.<br><br>For example the content of the &apos;mrapp-generated-classpath&apos; in my dev environment is:<br><br>/Users/tucu/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar:/Users/tucu/.m2/repository/asm/asm/3.2/asm-3.2.jar:/Users/tucu/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar:/Users/tucu/.m2/repository/com/google/guava/guava/r09/guava-r09.ja...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3391">MAPREDUCE-3391</a>.
+     Minor bug reported by subrotosanyal and fixed by subrotosanyal (applicationmaster)<br>
+     <b>Connecting to CM is logged as Connecting to RM</b><br>
+     <blockquote>In class *org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster*<br>{code}<br>private void connectToCM() {<br>      String cmIpPortStr = container.getNodeId().getHost() + &quot;:&quot; <br>          + container.getNodeId().getPort();		<br>      InetSocketAddress cmAddress = NetUtils.createSocketAddr(cmIpPortStr);		<br>      LOG.info(&quot;Connecting to ResourceManager at &quot; + cmIpPortStr);<br>      this.cm = ((ContainerManager) rpc.getProxy(ContainerManager.class, cmAddress, conf));<br>    }<br>{code}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3408">MAPREDUCE-3408</a>.
+     Major bug reported by bmahe and fixed by bmahe (mrv2, nodemanager, resourcemanager)<br>
+     <b>yarn-daemon.sh unconditionally sets yarn.root.logger</b><br>
+     <blockquote>yarn-daemon.sh unconditionally sets yarn.root.logger, which then prevents any override from happening.<br>From ./hadoop-mapreduce-project/hadoop-yarn/bin/yarn-daemon.sh:<br>&gt; export YARN_ROOT_LOGGER=&quot;INFO,DRFA&quot;<br>&gt; export YARN_JHS_LOGGER=&quot;INFO,JSA&quot;<br><br>and then yarn-daemon.sh will call &quot;$YARN_HOME&quot;/bin/yarn which does the following:<br>&gt; YARN_OPTS=&quot;$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}&quot;<br>&gt; YARN_OPTS=&quot;$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}&quot;<br><br>This has at leas...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3411">MAPREDUCE-3411</a>.
+     Minor improvement reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>Performance Upgrade for jQuery</b><br>
+     <blockquote>jQuery 1.6.4 is almost twice as fast as current version 1.4.4 on modern browsers on some operations. There are also many modern browser compatibility fixes<br><br>http://jsperf.com/jquery-15-unique-traversal/15</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3413">MAPREDUCE-3413</a>.
+     Minor bug reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>RM web ui applications not sorted in any order by default</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3422">MAPREDUCE-3422</a>.
+     Major bug reported by tomwhite and fixed by jeagles (mrv2)<br>
+     <b>Counter display names are not being picked up</b><br>
+     <blockquote>When running a job I see &quot;MAP_INPUT_RECORDS&quot; rather than &quot;Map input records&quot; for the counter name. To fix this the resource bundle properties files need to be moved to the src/main/resources tree. </blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3427">MAPREDUCE-3427</a>.
+     Blocker bug reported by tucu00 and fixed by hitesh (contrib/streaming, mrv2)<br>
+     <b>streaming tests fail with MR2</b><br>
+     <blockquote>After Mavenizing streaming and getting its testcases to use the MiniMRCluster wrapper (MAPREDUCE-3169), 4 testcases fail to pass.<br><br>Following is an assessment of those failures. Note that the testcases have been tweaked only to set the streaming JAR and yarn as the  framework.<br> <br>(If these issues are unrelated we should create sub-tasks for each one of them).<br><br>*TestStreamingCombiner*, fails because returned counters don&apos;t match assertion. However, counters printed in the test output indicate va...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3433">MAPREDUCE-3433</a>.
+     Major sub-task reported by tomwhite and fixed by tomwhite (client, mrv2)<br>
+     <b>Finding counters by legacy group name returns empty counters</b><br>
+     <blockquote>Attempting to find counters with a legacy group name (e.g. org.apache.hadoop.mapred.Task$Counter rather than the new org.apache.hadoop.mapreduce.TaskCounter) returns empty counters. This causes TestStreamingCombiner to fail when run with YARN.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3434">MAPREDUCE-3434</a>.
+     Blocker bug reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>Nightly build broken </b><br>
+     <blockquote>https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Mapreduce-trunk/901/<br><br>Results :<br><br>Failed tests:   testSleepJob(org.apache.hadoop.mapreduce.v2.TestMRJobs)<br>  testRandomWriter(org.apache.hadoop.mapreduce.v2.TestMRJobs)<br>  testDistributedCache(org.apache.hadoop.mapreduce.v2.TestMRJobs)<br><br>Tests in error: <br>  org.apache.hadoop.mapreduce.v2.TestMROldApiJobs: Failed to Start org.apache.hadoop.mapreduce.v2.TestMROldApiJobs<br>  org.apache.hadoop.mapreduce.v2.TestUberAM: Failed to Start org.apache.h...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3436">MAPREDUCE-3436</a>.
+     Major bug reported by bmahe and fixed by ahmed.radwan (mrv2, webapps)<br>
+     <b>JobHistory webapp address should use the host from the jobhistory address</b><br>
+     <blockquote>On the following page : http://&lt;RESOURCE_MANAGER&gt;:8088/cluster/apps<br>There are links to the history for each application. None of them can be reached since they all point to the ip 0.0.0.0. For instance:<br>http://0.0.0.0:8088/proxy/application_1321658790349_0002/jobhistory/job/job_1321658790349_2_2<br><br>Am I missing something?<br><br>[root@bigtop-fedora-15 ~]# jps<br>9968 ResourceManager<br>1495 NameNode<br>1645 DataNode<br>12935 Jps<br>11140 -- process information unavailable<br>5309 JobHistoryServer<br>10237 NodeManager<br><br>[r...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3437">MAPREDUCE-3437</a>.
+     Blocker bug reported by jeagles and fixed by jeagles (build, mrv2)<br>
+     <b>Branch 23 fails to build with Failure to find org.apache.hadoop:hadoop-project:pom:0.24.0-SNAPSHOT</b><br>
+     <blockquote>[INFO] Scanning for projects...<br>[ERROR] The build could not read 1 project -&gt; [Help 1]<br>[ERROR]   <br>[ERROR]   The project org.apache.hadoop:hadoop-mapreduce-examples:0.24.0-SNAPSHOT (/home/jeagles/hadoop/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml) has 1 error<br>[ERROR]     Non-resolvable parent POM: Failure to find org.apache.hadoop:hadoop-project:pom:0.24.0-SNAPSHOT in http://stormwalk.champ.corp.yahoo.com:8081/nexus/content/groups/public was cached in the local repository,...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3443">MAPREDUCE-3443</a>.
+     Blocker bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>Oozie jobs are running as oozie user even though they create the jobclient as doAs.</b><br>
+     <blockquote>Oozie is having issues with job submission, since it does the following:<br><br>{code}<br>doAs(userwhosubmittedjob) {<br> jobclient = new JobClient(jobconf);<br>}<br><br>jobclient.submitjob()<br><br>{code}<br><br>In 0.20.2** this works because the JT proxy is created as soon as we call new JobClient(). But in 0.23 this is no longer true since the client has to talk to multiple servers (AM/RM/JHS). To keep this behavior we will have to store the ugi in new JobClient() and make sure all the calls are run with a doAs() inside t...</blockquote></li>
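The fix described in MAPREDUCE-3443 can be illustrated with a minimal, Hadoop-free sketch: capture the caller's identity when the client object is constructed, then re-enter that identity for every later call, so submitJob() issued outside the doAs block still runs as the original user. All names here (doAs, JobClient, CURRENT_USER) are illustrative stand-ins, not Hadoop's real API.

```java
// Sketch of "store the ugi at construction, wrap later calls in doAs".
import java.util.concurrent.Callable;

public class DoAsSketch {
    // Stand-in for the process identity ("oozie" is the service user).
    static final ThreadLocal<String> CURRENT_USER =
        ThreadLocal.withInitial(() -> "oozie");

    static <T> T doAs(String user, Callable<T> action) throws Exception {
        String prev = CURRENT_USER.get();
        CURRENT_USER.set(user);
        try { return action.call(); } finally { CURRENT_USER.set(prev); }
    }

    static class JobClient {
        private final String ugi = CURRENT_USER.get();  // captured at construction
        String submitJob() throws Exception {
            // every later call re-enters the captured identity
            return doAs(ugi, CURRENT_USER::get);
        }
    }

    public static void main(String[] args) throws Exception {
        JobClient client = doAs("alice", JobClient::new); // built inside doAs
        System.out.println(client.submitJob());           // runs as alice
    }
}
```

Without the captured identity, the submit call issued outside the doAs block would run as the service user, which is the misbehavior reported above.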
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3444">MAPREDUCE-3444</a>.
+     Blocker bug reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>trunk/0.23 builds broken </b><br>
+     <blockquote>https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/208/ <br>https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1310/<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3447">MAPREDUCE-3447</a>.
+     Blocker bug reported by tgraves and fixed by mahadev (mrv2)<br>
+     <b>mapreduce examples not working</b><br>
+     <blockquote>Since the mavenization went in, the mapreduce examples jar no longer works.  <br><br>$ hadoop jar ./hadoop-0.23.0-SNAPSHOT/modules/hadoop-mapreduce-examples-0.23.0-SNAPSHOT.jar  wordcount input output<br>Exception in thread &quot;main&quot; java.lang.ClassNotFoundException: wordcount<br>        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)<br>        at java.security.AccessController.doPrivileged(Native Method)<br>        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)<br>        at java.lang.Class...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3448">MAPREDUCE-3448</a>.
+     Minor bug reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>TestCombineOutputCollector javac unchecked warning on mocked generics</b><br>
+     <blockquote>  [javac] found   : org.apache.hadoop.mapred.IFile.Writer<br>    [javac] required: org.apache.hadoop.mapred.IFile.Writer&lt;java.lang.String,java.lang.Integer&gt;<br>    [javac]     Writer&lt;String, Integer&gt; mockWriter = mock(Writer.class);<br>    [javac]                                              ^<br>    [javac] /home/jeagles/hadoop/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/mapred/TestCombineOutputCollector.java:125: warning: [unchecked] unchecked conversion<br>    [javac] found   : org.a...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3450">MAPREDUCE-3450</a>.
+     Major bug reported by sseth and fixed by sseth (mr-am, mrv2)<br>
+     <b>NM port info no longer available in JobHistory</b><br>
+     <blockquote>The NM RPC port used to be part of the hostname field in JobHistory. That seems to have gone missing. Required for the task log link on the history server.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3452">MAPREDUCE-3452</a>.
+     Major bug reported by tgraves and fixed by jeagles (mrv2)<br>
+     <b>fifoscheduler web ui page always shows 0% used for the queue</b><br>
+     <blockquote>When the fifo scheduler is configured to be on, go to the RM web ui page and click the scheduler link.  Hover over the default queue to see the used%.  It always shows used% as 0.0% even when jobs are running.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3453">MAPREDUCE-3453</a>.
+     Major bug reported by tgraves and fixed by jeagles (mrv2)<br>
+     <b>RM web ui application details page shows RM cluster about information</b><br>
+     <blockquote>Go to the RM Web ui page.  Click on the Applications link, then click on a particular application. The applications details page inadvertently includes the RM about page information after the application details:<br><br>Cluster ID: 	1321943597242<br>ResourceManager state: 	STARTED<br>ResourceManager started on: 	22-Nov-2011 06:33:17<br>ResourceManager version: 	0.23.0-SNAPSHOT from 1203458 by user source checksum 0c288fc0971ed28c970272a62f547eae on Tue Nov 22 06:31:09 UTC 2011<br>Hadoop version: 	0.23.0-SNAPSH...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3454">MAPREDUCE-3454</a>.
+     Major bug reported by amar_kamat and fixed by hitesh (contrib/gridmix)<br>
+     <b>[Gridmix] TestDistCacheEmulation is broken</b><br>
+     <blockquote>TestDistCacheEmulation is broken as &apos;MapReduceTestUtil&apos; no longer exists.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3456">MAPREDUCE-3456</a>.
+     Blocker bug reported by eepayne and fixed by eepayne (mrv2)<br>
+     <b>$HADOOP_PREFIX/bin/yarn should set defaults for $HADOOP_*_HOME</b><br>
+     <blockquote>If the $HADOOP_PREFIX/hadoop-dist/target/hadoop-0.23.0-SNAPSHOT.tar.gz tarball is used to distribute hadoop, all of the HADOOP components (HDFS, MAPRED, COMMON) are all under one directory. In this use case, HADOOP_PREFIX should be set and should point to the root directory for all components, and it should not be necessary to set HADOOP_HDFS_HOME, HADOOP_COMMON_HOME, and HADOOP_MAPRED_HOME. However, the $HADOOP_PREFIX/bin/yarn script requires these 3 to be set explicitly in the calling envir...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3458">MAPREDUCE-3458</a>.
+     Major bug reported by acmurthy and fixed by devaraj.k (mrv2)<br>
+     <b>Fix findbugs warnings in hadoop-examples</b><br>
+     <blockquote>I see 12 findbugs warnings in hadoop-examples: <br>https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1336//artifact/trunk/hadoop-mapreduce-project/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-examples.html</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3460">MAPREDUCE-3460</a>.
+     Blocker bug reported by sseth and fixed by revans2 (mr-am, mrv2)<br>
+     <b>MR AM can hang if containers are allocated on a node blacklisted by the AM</b><br>
+     <blockquote>When an AM is assigned a FAILED_MAP (priority = 5) container on a nodemanager which it has blacklisted - it tries to<br>find a corresponding container request.<br>This uses the hostname to find the matching container request - and can end up returning any of the ContainerRequests which may have requested a container on this node. This container request is cleaned to remove the bad node - and then added back to the RM &apos;ask&apos; list.<br>The AM cleans the &apos;ask&apos; list after each heartbeat - The RM Allocator i...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3463">MAPREDUCE-3463</a>.
+     Blocker bug reported by karams and fixed by sseth (applicationmaster, mrv2)<br>
+     <b>Second AM fails to recover properly when first AM is killed with java.lang.IllegalArgumentException causing lost job</b><br>
+     <blockquote>Set yarn.resourcemanager.am.max-retries=5 in yarn-site.xml. Started a 4-node YARN cluster.<br>First ran RandomWriter/Sort/Sort-validate successfully.<br>Then ran sort again; when the job was 50% complete,<br>logged in to the node running the AppMaster and killed the AppMaster with kill -9.<br>On the client side it failed with the following:<br>{code}<br>11/11/23 10:57:27 INFO mapreduce.Job:  map 58% reduce 8%<br>11/11/23 10:57:27 INFO mapred.ClientServiceDelegate: Failed to contact AM/History for job job_1322040898409_0005 retrying..<br>11/11/23 10:57:28 INF...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3464">MAPREDUCE-3464</a>.
+     Trivial bug reported by davevr and fixed by davevr <br>
+     <b>mapreduce jsp pages missing DOCTYPE [post-split branches]</b><br>
+     <blockquote>Some jsp pages in the UI are missing a DOCTYPE declaration. This causes the pages to render incorrectly on some browsers, such as IE9. Please see parent bug HADOOP-7827 for details and patch.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3465">MAPREDUCE-3465</a>.
+     Minor bug reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>org.apache.hadoop.yarn.util.TestLinuxResourceCalculatorPlugin fails on 0.23 </b><br>
+     <blockquote>Running org.apache.hadoop.yarn.util.TestLinuxResourceCalculatorPlugin<br>Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.121 sec &lt;&lt;&lt; FAILURE!<br>Tests in error: <br>  testParsingProcStatAndCpuFile(org.apache.hadoop.yarn.util.TestLinuxResourceCalculatorPlugin): /homes/hortonhi/dev/hadoop-common/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/CPUINFO_943711651 (No such file or directory)<br>  testParsingProcMemFile(org.apache.hadoop.yarn.util.TestLinuxResourceCalcu...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3468">MAPREDUCE-3468</a>.
+     Major task reported by sseth and fixed by sseth <br>
+     <b>Change version to 0.23.1 for ant builds on the 23 branch</b><br>
+     <blockquote>Maven version has been changed to 0.23.1-SNAPSHOT. The ant build files need to change as well.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3477">MAPREDUCE-3477</a>.
+     Major bug reported by bmahe and fixed by jeagles (documentation, mrv2)<br>
+     <b>Hadoop site documentation cannot be built anymore on trunk and branch-0.23</b><br>
+     <blockquote>Maven fails and here is the issue I get:<br><br>[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project hadoop-yarn-site: Error during page generation: Error parsing &apos;/home/bruno/freesoftware/bigtop/build/hadoop/rpm/BUILD/apache-hadoop-common-e127450/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/SingleCluster.apt.vm&apos;: line [23] Unable to execute macro in the APT document: ParseException: expected SECTION2, found SECTION3 -&gt; [...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3478">MAPREDUCE-3478</a>.
+     Minor bug reported by abayer and fixed by tomwhite (mrv2)<br>
+     <b>Cannot build against ZooKeeper 3.4.0</b><br>
+     <blockquote>I tried to see if one could build Hadoop 0.23.0 against ZooKeeper 3.4.0, rather than 3.3.1 (3.3.3 does work, fwiw) and hit compilation errors:<br><br>{quote}<br>[INFO] ------------------------------------------------------------------------<br>[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:testCompile (default-testCompile) on project hadoop-yarn-server-common: Compilation failure: Compilation failure:<br>[ERROR] /Volumes/EssEssDee/abayer/src/asf-git/hadoop-common/hadoop-...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3479">MAPREDUCE-3479</a>.
+     Major bug reported by tomwhite and fixed by tomwhite (client)<br>
+     <b>JobClient#getJob cannot find local jobs</b><br>
+     <blockquote>The problem is that JobClient#submitJob doesn&apos;t pass the Cluster object to Job for the submission process, which means that two Cluster objects and two LocalJobRunner objects are created. LocalJobRunner keeps an instance map of job IDs to Jobs, and when JobClient#getJob is called the LocalJobRunner with the unpopulated map is used which results in the job not being found.</blockquote></li>
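The failure mode in MAPREDUCE-3479 can be shown with a self-contained sketch: each runner keeps its own id-to-job map, so a lookup through a second, freshly created instance misses jobs submitted through the first. The Cluster class below is an illustrative stand-in, not Hadoop's real API; the fix in the issue is to reuse the submitting Cluster for lookups.

```java
// Sketch: two independent registries cannot see each other's jobs.
import java.util.HashMap;
import java.util.Map;

public class LocalRunnerSketch {
    static class Cluster {
        private final Map<String, String> jobs = new HashMap<>();
        String submit(String name) {
            String id = "job_local_" + (jobs.size() + 1);
            jobs.put(id, name);
            return id;
        }
        String getJob(String id) { return jobs.get(id); } // null if unknown
    }

    public static void main(String[] args) {
        Cluster submitCluster = new Cluster();
        String id = submitCluster.submit("wordcount");

        Cluster freshCluster = new Cluster();        // the buggy path: new map
        System.out.println(freshCluster.getJob(id)); // job appears "lost"
        System.out.println(submitCluster.getJob(id));
    }
}
```

Passing the original Cluster through to the lookup path, as the fix does, makes the second query succeed.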
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3485">MAPREDUCE-3485</a>.
+     Major sub-task reported by hitesh and fixed by ravidotg (mrv2)<br>
+     <b>DISKS_FAILED -101 error code should be defined in same location as ABORTED_CONTAINER_EXIT_STATUS</b><br>
+     <blockquote>With MAPREDUCE-3121, it is defined in ContainerExecutor as part of yarn-nodemanager which would be a problem for client-side code if it needs to understand the exit code. <br><br>A short term fix would be to move it into YarnConfiguration where ABORTED_CONTAINER_EXIT_STATUS is defined. A longer term fix would be to find a more formal and extensible approach for new yarn framework error codes to be added and be easily accessible to client-side code or other AMs. </blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3488">MAPREDUCE-3488</a>.
+     Blocker bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>Streaming jobs are failing because the main class isn&apos;t set in the pom files.</b><br>
+     <blockquote>Streaming jobs are failing since the main MANIFEST file isn&apos;t being set in the pom files.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3496">MAPREDUCE-3496</a>.
+     Major bug reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>Yarn initializes ACL operations from capacity scheduler config in a non-deterministic order</b><br>
+     <blockquote>&apos;mapred queue -showacls&apos; does not output acls in a predictable manner. This is a regression from previous versions.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3499">MAPREDUCE-3499</a>.
+     Blocker bug reported by tucu00 and fixed by johnvijoe (mrv2, test)<br>
+     <b>New MiniMR does not setup proxyuser configuration correctly, thus tests using doAs do not work</b><br>
+     <blockquote>The new MiniMR implementation does not pick up proxyuser settings.<br><br>Because of this, testcases using/testing doAs functionality fail.<br><br>This affects all Oozie testcases that use MiniMR.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3500">MAPREDUCE-3500</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (mrv2)<br>
+     <b>MRJobConfig creates an LD_LIBRARY_PATH using the platform ARCH</b><br>
+     <blockquote>With HADOOP-7874 we are removing the arch from the java.library.path.<br><br>The LD_LIBRARY_PATH being set should not include the ARCH.<br><br>{code}<br>  public static final String DEFAULT_MAPRED_ADMIN_USER_ENV =<br>      &quot;LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native/&quot; + PlatformName.getPlatformName();<br>{code}<br><br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3505">MAPREDUCE-3505</a>.
+     Major bug reported by bmahe and fixed by ahmed.radwan (mrv2)<br>
+     <b>yarn APPLICATION_CLASSPATH needs to be overridable</b><br>
+     <blockquote>Right now MRApps sets the classpath to just mrapp-generated-classpath, its contents, and a hardcoded list of directories.<br>If I understand correctly, mrapp-generated-classpath is only there for testing and may change or disappear at any time.<br><br>The list of hardcoded directories is defined in hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java at line 92.<br>For convenience, here is its current content:<br>{noformat}<br>  /**<br>   * Clas...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3510">MAPREDUCE-3510</a>.
+     Major bug reported by jeagles and fixed by jeagles (contrib/capacity-sched, mrv2)<br>
+     <b>Capacity Scheduler inherited ACLs not displayed by mapred queue -showacls</b><br>
+     <blockquote>mapred queue -showacls does not show inherited acls</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3513">MAPREDUCE-3513</a>.
+     Trivial bug reported by mahadev and fixed by chaku88 (mrv2)<br>
+     <b>Capacity Scheduler web UI has a spelling mistake for Memory.</b><br>
+     <blockquote>The web page for capacity scheduler has a column named &quot;Memopry Total&quot;, a spelling mistake which needs to be fixed.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3518">MAPREDUCE-3518</a>.
+     Critical bug reported by jeagles and fixed by jeagles (client, mrv2)<br>
+     <b>mapred queue -info &lt;queue&gt; -showJobs throws NPE</b><br>
+     <blockquote>mapred queue -info default -showJobs<br><br>Exception in thread &quot;main&quot; java.lang.NullPointerException<br>        at org.apache.hadoop.mapreduce.tools.CLI.displayJobList(CLI.java:572)<br>        at org.apache.hadoop.mapred.JobQueueClient.displayQueueInfo(JobQueueClient.java:190)<br>        at org.apache.hadoop.mapred.JobQueueClient.run(JobQueueClient.java:103)<br>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)<br>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)<br>        at o...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3521">MAPREDUCE-3521</a>.
+     Minor bug reported by revans2 and fixed by revans2 (mrv2)<br>
+     <b>Hadoop Streaming ignores unknown parameters</b><br>
+     <blockquote>The hadoop streaming command will ignore any command line arguments to it.<br><br>{code}<br>hadoop jar streaming.jar -input input -output output -mapper cat -reducer cat ThisIsABadArgument<br>{code}<br><br>Works just fine.  This can mask issues where quotes were mistakenly missed like<br><br>{code}<br>hadoop jar streaming.jar -input input -output output -mapper xargs cat -reducer cat -archive someArchive.tgz<br>{code}<br><br>Streaming should fail if it encounters an unexpected command line parameter</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3522">MAPREDUCE-3522</a>.
+     Major bug reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>Capacity Scheduler ACLs not inherited by default</b><br>
+     <blockquote>Hierarchical Queues do not inherit parent ACLs correctly by default. Instead, if no value is specified for submit or administer acls, then all access is granted.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3527">MAPREDUCE-3527</a>.
+     Major bug reported by tomwhite and fixed by tomwhite <br>
+     <b>Fix minor API incompatibilities between 1.0 and 0.23</b><br>
+     <blockquote>There are a few minor incompatibilities that were found in HADOOP-7738 and are straightforward to fix.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3529">MAPREDUCE-3529</a>.
+     Critical bug reported by sseth and fixed by sseth (mrv2)<br>
+     <b>TokenCache does not cache viewfs credentials correctly</b><br>
+     <blockquote>viewfs returns a list of delegation tokens for the actual namenodes. TokenCache caches these based on the actual service name - subsequent calls to TokenCache end up trying to get a new set of tokens.<br><br>Tasks which happen to access TokenCache fail when using viewfs - since they end up trying to get a new set of tokens even though the tokens are already available.<br><br>{noformat}<br>Error: java.io.IOException: Delegation Token can be issued only with kerberos or web authentication<br>        at org.apach...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3531">MAPREDUCE-3531</a>.
+     Blocker bug reported by karams and fixed by revans2 (mrv2, resourcemanager, scheduler)<br>
+     <b>Sometimes java.lang.IllegalArgumentException: Invalid key to HMAC computation in NODE_UPDATE also causing RM to stop scheduling</b><br>
+     <blockquote>Filing this Jira a bit late.<br>Started a 350-node cluster.<br>Submitted a large sleep job.<br>Found that the job was not running, as the RM had not allocated resources to it.<br>{code}<br>2011-12-01 11:56:25,200 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: nodeUpdate: &lt;NMHost&gt;:48490 clusterResources: memory: 3225600<br>2011-12-01 11:56:25,202 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling event<br>type NODE_UPDATE to the scheduler<br>java.lang.IllegalAr...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3537">MAPREDUCE-3537</a>.
+     Blocker bug reported by acmurthy and fixed by acmurthy <br>
+     <b>DefaultContainerExecutor has a race condition with multiple concurrent containers</b><br>
+     <blockquote>DCE relies on the cwd before calling ContainerLocalizer.runLocalization. However, with multiple containers, setting the cwd on the same localFS reference leads to a race.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3541">MAPREDUCE-3541</a>.
+     Blocker bug reported by raviprak and fixed by raviprak (mrv2)<br>
+     <b>Fix broken TestJobQueueClient test</b><br>
+     <blockquote>Ant build complains <br>    [javac] /hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/mapred/TestJobQueueClient.java&gt;:80: printJobQueueInfo(org.apache.hadoop.mapred.JobQueueInfo,java.io.Writer,java.lang.String) in org.apache.hadoop.mapred.JobQueueClient cannot be applied to (org.apache.hadoop.mapred.JobQueueInfo,java.io.StringWriter)<br>    [javac]     client.printJobQueueInfo(root, writer);<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3542">MAPREDUCE-3542</a>.
+     Major bug reported by tomwhite and fixed by tomwhite <br>
+     <b>Support &quot;FileSystemCounter&quot; legacy counter group name for compatibility</b><br>
+     <blockquote>The group name changed from &quot;FileSystemCounter&quot; to &quot;org.apache.hadoop.mapreduce.FileSystemCounter&quot;, but we should support the old one for compatibility&apos;s sake. This came up in PIG-2347. </blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3544">MAPREDUCE-3544</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (build, tools/rumen)<br>
+     <b>gridmix build is broken, requires hadoop-archives to be added as ivy dependency</b><br>
+     <blockquote>Having moved HAR/HadoopArchives to common/tools makes gridmix to fail as HadoopArchives is not in the mr1 classpath anymore.<br><br>hadoop-archives artifact should be added to gridmix dependencies<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3547">MAPREDUCE-3547</a>.
+     Critical sub-task reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>finish unit tests for web services for RM and NM</b><br>
+     <blockquote>Write more unit tests for the web services added for rm and nm.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3548">MAPREDUCE-3548</a>.
+     Critical sub-task reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>write unit tests for web services for mapreduce app master and job history server</b><br>
+     <blockquote>write more unit tests for mapreduce application master and job history server web services added in MAPREDUCE-2863</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3553">MAPREDUCE-3553</a>.
+     Minor sub-task reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>Add support for data returned when exceptions thrown from web service apis to be in either xml or in JSON</b><br>
+     <blockquote>When the web services apis for rm, nm, app master, and job history server throw an exception - like bad request, not found, they always return the data in JSON format.  It would be nice to return based on what they requested - xml or JSON.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3557">MAPREDUCE-3557</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>MR1 test fail to compile because of missing hadoop-archives dependency</b><br>
+     <blockquote>MAPREDUCE-3544 added hadoop-archives as a dependency to gridmix and raid, but failed to add it to the main ivy.xml for the MR1 testcases, so the ant target &apos;compile-mapred-test&apos; fails.<br><br>I was under the impression that this stuff was not used anymore but trunk is failing on that target.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3560">MAPREDUCE-3560</a>.
+     Blocker bug reported by vinodkv and fixed by sseth (mrv2, resourcemanager, test)<br>
+     <b>TestRMNodeTransitions is failing on trunk</b><br>
+     <blockquote>Apparently Jenkins is screwed up. It is happily blessing patches, even though tests are failing.<br><br>Link to logs: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1454//testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMNodeTransitions/testExpiredContainer/</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3563">MAPREDUCE-3563</a>.
+     Major bug reported by acmurthy and fixed by acmurthy (mrv2)<br>
+     <b>LocalJobRunner doesn&apos;t handle Jobs using o.a.h.mapreduce.OutputCommitter</b><br>
+     <blockquote>LocalJobRunner doesn&apos;t handle Jobs using o.a.h.mapreduce.OutputCommitter, ran into this debugging PIG-2347.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3566">MAPREDUCE-3566</a>.
+     Critical sub-task reported by vinodkv and fixed by vinodkv (mr-am, mrv2)<br>
+     <b>MR AM slows down due to repeatedly constructing ContainerLaunchContext</b><br>
+     <blockquote>The construction of the context is expensive; it includes per-task trips to the NameNode to obtain information about job.jar, job splits, etc., which is redundant across all tasks.<br><br>We should have a common job-level context and a task-specific context inheriting from the common job-level context.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3567">MAPREDUCE-3567</a>.
+     Major sub-task reported by vinodkv and fixed by vinodkv (mr-am, mrv2, performance)<br>
+     <b>Extraneous JobConf objects in AM heap</b><br>
+     <blockquote>MR AM creates new JobConf objects unnecessarily in a couple of places in JobImpl and TaskImpl, which occupy a non-trivial amount of heap.<br><br>While working with a 64-bit JVM on 100K-map jobs with uncompressed pointers, removing those extraneous objects helped address OOM with a 2GB AM heap size.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3569">MAPREDUCE-3569</a>.
+     Critical sub-task reported by vinodkv and fixed by vinodkv (mr-am, mrv2, performance)<br>
+     <b>TaskAttemptListener holds a global lock for all task-updates</b><br>
+     <blockquote>This got added via MAPREDUCE-3274. We really don&apos;t need the lock if we just implement what I mentioned on that ticket [here|https://issues.apache.org/jira/browse/MAPREDUCE-3274?focusedCommentId=13137214&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13137214].<br><br>This has performance implications on MR AM with lots of tasks.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3572">MAPREDUCE-3572</a>.
+     Critical sub-task reported by vinodkv and fixed by vinodkv (mr-am, mrv2, performance)<br>
+     <b>MR AM&apos;s dispatcher is blocked by heartbeats to ResourceManager</b><br>
+     <blockquote>All the heartbeat processing is done in {{RMContainerAllocator}} locking the object. The event processing is also locked on this, causing the dispatcher to be blocked and the rest of the AM getting stalled.<br><br>The event processing should be in a separate thread.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3579">MAPREDUCE-3579</a>.
+     Major bug reported by atm and fixed by atm (mrv2)<br>
+     <b>ConverterUtils should not include a port in a path for a URL with no port</b><br>
+     <blockquote>In {{ConverterUtils#getPathFromYarnURL}}, it&apos;s incorrectly assumed that if a URL includes a valid host it must also include a valid port.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3582">MAPREDUCE-3582</a>.
+     Major bug reported by ahmed.radwan and fixed by ahmed.radwan (mrv2, test)<br>
+     <b>Move successfully passing MR1 tests to MR2 maven tree.</b><br>
+     <blockquote>This ticket will track moving mr1 tests that are passing successfully to mr2 maven tree.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3588">MAPREDUCE-3588</a>.
+     Blocker bug reported by acmurthy and fixed by acmurthy <br>
+     <b>bin/yarn broken after MAPREDUCE-3366</b><br>
+     <blockquote>bin/yarn broken after MAPREDUCE-3366, doesn&apos;t add yarn jars to classpath. As a result no servers can be started.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3595">MAPREDUCE-3595</a>.
+     Major test reported by tomwhite and fixed by tomwhite (test)<br>
+     <b>Add missing TestCounters#testCounterValue test from branch 1 to 0.23</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3596">MAPREDUCE-3596</a>.
+     Blocker bug reported by raviprak and fixed by vinodkv (applicationmaster, mrv2)<br>
+     <b>Sort benchmark hangs after completion of 99% of the map phase</b><br>
+     <blockquote>Courtesy [~vinaythota]<br>{quote}<br>Ran the sort benchmark a couple of times, and every time the job hung after completing 99% of the map phase. Some map tasks failed, and some of the pending map tasks were never scheduled.<br>Cluster size is 350 nodes.<br><br>Build Details:<br>==============<br><br>Compiled:       Fri Dec 9 16:25:27 PST 2011 by someone from branches/branch-0.23/hadoop-common-project/hadoop-common <br>ResourceManager version:        revision 1212681 by someone source checksum on Fri Dec 9 16:52:07 PST ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3604">MAPREDUCE-3604</a>.
+     Blocker bug reported by acmurthy and fixed by acmurthy (contrib/streaming)<br>
+     <b>Streaming&apos;s check for local mode is broken</b><br>
+     <blockquote>Streaming isn&apos;t checking for mapreduce.framework.name as part of its check for &apos;local&apos; mode.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3608">MAPREDUCE-3608</a>.
+     Major bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>MAPREDUCE-3522 commit causes compilation to fail</b><br>
+     <blockquote>There are compilation errors after MAPREDUCE-3522 was committed. Some more changes were needed in the webapps to fix the compilation issue.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3610">MAPREDUCE-3610</a>.
+     Minor improvement reported by sho.shimauchi and fixed by sho.shimauchi <br>
+     <b>Some parts in MR use old property dfs.block.size</b><br>
+     <blockquote>Some parts of MR use the old property dfs.block.size.<br>dfs.blocksize should be used instead.</blockquote></li>
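+
+For reference, a minimal sketch of the current property in hdfs-site.xml (the value shown is illustrative, not taken from this release):
+<pre>
+&lt;property&gt;
+    &lt;name&gt;dfs.blocksize&lt;/name&gt;
+    &lt;value&gt;134217728&lt;/value&gt;
+&lt;/property&gt;
+</pre>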
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3615">MAPREDUCE-3615</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>mapred ant test failures</b><br>
+     <blockquote>The following mapred ant tests are failing.  This started on December 22nd.<br><br><br>    [junit] Running org.apache.hadoop.mapred.TestTrackerBlacklistAcrossJobs<br>    [junit] Running org.apache.hadoop.mapred.TestMiniMRDFSSort<br>    [junit] Running org.apache.hadoop.mapred.TestBadRecords<br>    [junit] Running org.apache.hadoop.mapred.TestClusterMRNotification<br>    [junit] Running org.apache.hadoop.mapred.TestDebugScript<br>    [junit] Running org.apache.hadoop.mapred.TestJobCleanup<br>    [junit] Running org.apac...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3616">MAPREDUCE-3616</a>.
+     Major sub-task reported by vinodkv and fixed by vinodkv (mr-am, performance)<br>
+     <b>Thread pool for launching containers in MR AM not expanding as expected</b><br>
+     <blockquote>Found this while running some benchmarks on 350 nodes. The thread pool stays at 60 for a long time and only expands to 350 towards the tail end of the job.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3617">MAPREDUCE-3617</a>.
+     Major bug reported by jeagles and fixed by jeagles (mrv2)<br>
+     <b>Remove yarn default values for resource manager and nodemanager principal</b><br>
+     <blockquote>Default values should be empty since no use can be made of them without correct values defined.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3624">MAPREDUCE-3624</a>.
+     Major bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>bin/yarn script adds jdk tools.jar to the classpath.</b><br>
+     <blockquote>Thanks to Roman for pointing it out. Looks like we have the following lines in bin/yarn:<br><br>{code}<br>CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar<br>{code}<br><br>We don&apos;t really have a dependency on the tools jar. We should remove this.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3625">MAPREDUCE-3625</a>.
+     Critical bug reported by acmurthy and fixed by jlowe (mrv2)<br>
+     <b>CapacityScheduler web-ui display of queue&apos;s used capacity is broken</b><br>
+     <blockquote>The display of the queue&apos;s used capacity at runtime is broken because it displays &apos;used&apos; relative to the queue&apos;s capacity and not the parent&apos;s capacity, as shown in the above attachment.<br><br>The display should be relative to the parent&apos;s capacity, not the leaf queue&apos;s, as everything else in the display is relative to the parent&apos;s capacity.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3640">MAPREDUCE-3640</a>.
+     Blocker sub-task reported by sseth and fixed by acmurthy (mrv2)<br>
+     <b>AMRecovery should pick completed tasks from partial JobHistory files</b><br>
+     <blockquote>Currently, if the JobHistory file has a partial record, AMRecovery will start from scratch. This will become more relevant after MAPREDUCE-3512.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3645">MAPREDUCE-3645</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv1)<br>
+     <b>TestJobHistory fails</b><br>
+     <blockquote>TestJobHistory fails.<br><br>&gt;&gt;&gt; org.apache.hadoop.mapred.TestJobHistory.testDoneFolderOnHDFS 	<br>&gt;&gt;&gt; org.apache.hadoop.mapred.TestJobHistory.testDoneFolderNotOnDefaultFileSystem 	<br>&gt;&gt;&gt; org.apache.hadoop.mapred.TestJobHistory.testHistoryFolderOnHDFS 	<br>&gt;&gt;&gt; org.apache.hadoop.mapred.TestJobHistory.testJobHistoryFile <br><br>It looks like this was introduced by MAPREDUCE-3349 and the issue is that the test expects the hostname to be in the format rackname/hostname, but with 3349 it split those apart into 2 diff...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3646">MAPREDUCE-3646</a>.
+     Major bug reported by rramya and fixed by jeagles (client, mrv2)<br>
+     <b>Remove redundant URL info from &quot;mapred job&quot; output</b><br>
+     <blockquote>The URL information to track the job is printed for all the &quot;mapred job&quot; mrv2 commands. This information is redundant and has to be removed.<br><br>E.g.:<br>{noformat}<br>-bash-3.2$ mapred job -list <br><br>Total jobs:3<br>JobId   State   StartTime       UserName        Queue   Priority        Maps    Reduces UsedContainers  RsvdContainers  UsedMem RsvdMem NeededMem       AM info<br>12/01/09 22:20:15 INFO mapred.ClientServiceDelegate: The url to track the job: &lt;RM host&gt;:8088/proxy/&lt;application ID 1&gt;/<br>&lt;job ID 1&gt;  RUNNI...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3648">MAPREDUCE-3648</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>TestJobConf failing</b><br>
+     <blockquote>TestJobConf is failing:<br><br><br><br>testFindContainingJar <br>testFindContainingJarWithPlus <br><br>java.lang.ClassNotFoundException: ClassWithNoPackage<br>	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)<br>	at java.security.AccessController.doPrivileged(Native Method)<br>	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)<br>	at java.lang.ClassLoader.loadClass(ClassLoader.java:307)<br>	at java.lang.ClassLoader.loadClass(ClassLoader.java:248)<br>	at java.lang.Class.forName0(Native Method)<br>	at java.lang.Cla...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3649">MAPREDUCE-3649</a>.
+     Blocker bug reported by mahadev and fixed by raviprak (mrv2)<br>
+     <b>Job End notification gives an error on calling back.</b><br>
+     <blockquote>When calling job end notification for oozie the AM fails with the following trace:<br><br>{noformat}<br>2012-01-09 23:45:41,732 WARN [AsyncDispatcher event handler] org.mortbay.log: Job end notification to http://HOST:11000/oozie/v0/callback?id=0000000-120109234442311-oozie-oozi-W@mr-node&amp;status=SUCCEEDED&amp; failed<br>java.net.UnknownServiceException: no content-type<br>	at java.net.URLConnection.getContentHandler(URLConnection.java:1192)<br>	at java.net.URLConnection.getContent(URLConnection.java:689)<br>	at org.a...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3651">MAPREDUCE-3651</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>TestQueueManagerRefresh fails</b><br>
+     <blockquote>The following tests fail:<br>org.apache.hadoop.mapred.TestQueueManagerRefresh.testRefreshWithRemovedQueues <br>org.apache.hadoop.mapred.TestQueueManagerRefresh.testRefreshOfSchedulerProperties <br><br>It looks like it is simply trying to remove one of the queues but the remove is failing. It looks like MAPREDUCE-3328 (mapred queue -list output inconsistent and missing child queues) changed the getChildren routine to do a new JobQueueInfo on each one when returning it, which is making the remove routine fail s...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3652">MAPREDUCE-3652</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>org.apache.hadoop.mapred.TestWebUIAuthorization.testWebUIAuthorization fails</b><br>
+     <blockquote>org.apache.hadoop.mapred.TestWebUIAuthorization.testWebUIAuthorization fails.<br><br>This is testing the old jsp web interfaces.  I think this test should just be removed.<br><br>Any objections?</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3657">MAPREDUCE-3657</a>.
+     Minor bug reported by jlowe and fixed by jlowe (build, mrv2)<br>
+     <b>State machine visualize build fails</b><br>
+     <blockquote>Attempting to build the state machine graphs with {{mvn -Pvisualize compile}} fails for the resourcemanager and nodemanager projects.  The build fails because org.apache.commons.logging.LogFactory isn&apos;t in the classpath.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3664">MAPREDUCE-3664</a>.
+     Minor bug reported by praveensripati and fixed by brandonli (documentation)<br>
+     <b>HDFS Federation Documentation has incorrect configuration example</b><br>
+     <blockquote>HDFS Federation documentation example (1) has the following<br><br>&lt;property&gt;<br>    &lt;name&gt;dfs.namenode.rpc-address.ns1&lt;/name&gt;<br>    &lt;value&gt;hdfs://nn-host1:rpc-port&lt;/value&gt;<br>&lt;/property&gt;<br><br>dfs.namenode.rpc-address.* should be set to hostname:port, hdfs:// should not be there.<br><br>(1) - http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html</blockquote></li>
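+
+The corrected form of the property from that example (the scheme dropped, the placeholder port kept as in the original documentation) is:
+<pre>
+&lt;property&gt;
+    &lt;name&gt;dfs.namenode.rpc-address.ns1&lt;/name&gt;
+    &lt;value&gt;nn-host1:rpc-port&lt;/value&gt;
+&lt;/property&gt;
+</pre>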
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3669">MAPREDUCE-3669</a>.
+     Blocker bug reported by tgraves and fixed by mahadev (mrv2)<br>
+     <b>Getting a lot of PriviledgedActionException / SaslException when running a job</b><br>
+     <blockquote>On a secure cluster, when running a job we are seeing a lot of PriviledgedActionException / SaslExceptions.  The job runs fine; it&apos;s just that the jobclient can&apos;t connect to the AM to get the progress information.<br><br>It&apos;s in a very tight retry loop, getting the exceptions.<br><br>A snip of the client log:<br>12/01/13 15:33:45 INFO security.SecurityUtil: Acquired token Ident: 00 1c 68 61 64 6f 6f 70 71 61 40 44 45 56 2e 59 47<br>52 49 44 2e 59 41 48 4f 4f 2e 43 4f 4d 08 6d 61 70 72 65 64 71 61 00 8a 01 34...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3679">MAPREDUCE-3679</a>.
+     Major improvement reported by mahadev and fixed by vinodkv (mrv2)<br>
+     <b>AM logs and others should not automatically refresh after every 1 second.</b><br>
+     <blockquote>If you are looking through the logs for the AM or containers, the page automatically refreshes after 1 second or so, which makes it problematic to search through the page or debug using its content. We should not auto-refresh the logs page; there should be a button to refresh manually if the user needs to.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3681">MAPREDUCE-3681</a>.
+     Critical bug reported by tgraves and fixed by acmurthy (mrv2)<br>
+     <b>capacity scheduler LeafQueues calculate used capacity wrong</b><br>
+     <blockquote>In the Capacity scheduler, if you configure the queues to be hierarchical, where you have root -&gt; parent queue -&gt; leaf queue, the leaf queue doesn&apos;t calculate the used capacity properly. It seems to be using the entire cluster memory rather than its parent&apos;s memory capacity. <br><br>In updateResource in LeafQueue:<br>    setUsedCapacity(<br>        usedResources.getMemory() / (clusterResource.getMemory() * capacity));<br><br>I think the clusterResource.getMemory() should be something like getParentsMemory().</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3683">MAPREDUCE-3683</a>.
+     Blocker bug reported by tgraves and fixed by acmurthy (mrv2)<br>
+     <b>Capacity scheduler LeafQueues maximum capacity calculation issues</b><br>
+     <blockquote>In the Capacity scheduler, if you configure the queues to be hierarchical, where you have root -&gt; parent queue -&gt; leaf queue, the leaf queue doesn&apos;t take its parent&apos;s maximum capacity into account when calculating its own maximum capacity; instead it seems to use the parent&apos;s capacity. Looking at the code, it is using the parent&apos;s absoluteCapacity and I think it should be using the parent&apos;s absoluteMaximumCapacity.<br><br>It also seems to only use the parent&apos;s capacity in the leaf queue&apos;s max capacity calculat...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3684">MAPREDUCE-3684</a>.
+     Major bug reported by tomwhite and fixed by tomwhite (client)<br>
+     <b>LocalDistributedCacheManager does not shut down its thread pool</b><br>
+     <blockquote>This was observed by running a Hive job in local mode. The job completed but the client process did not exit for 60 seconds.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3689">MAPREDUCE-3689</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>RM web UI doesn&apos;t handle newline in job name</b><br>
+     <blockquote>a user submitted a mapreduce job with a newline (\n) in the job name. This caused the resource manager web ui to get a javascript exception when loading the application and scheduler pages and the pages were pretty well useless after that since they didn&apos;t load everything.  Note that this only happens when the data is returned in the JS_ARRAY, which is when you get over 100 applications.<br><br>errors:<br>Uncaught SyntaxError: Unexpected token ILLEGAL<br>Uncaught ReferenceError: appsData is not defined<br><br>...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3691">MAPREDUCE-3691</a>.
+     Critical bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>webservices add support to compress response</b><br>
+     <blockquote> The web services currently don&apos;t support header &apos;Accept-Encoding: gzip&apos;<br><br>Given that the responses have a lot of duplicate data like the property names in JSON or the tag names in XML, it should<br>compress very well, and would save on bandwidth and download time when fetching a potentially large response, like the<br>ones from ws/v1/cluster/apps and ws/v1/history/mapreduce/jobs</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3692">MAPREDUCE-3692</a>.
+     Blocker improvement reported by eli and fixed by eli (mrv2)<br>
+     <b>yarn-resourcemanager out and log files can get big</b><br>
+     <blockquote>I&apos;m seeing 8 GB resourcemanager out files and big log files, with lots of repeated logs (e.g. every RPC call or event); it looks like we&apos;re being too verbose in a couple of places.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3693">MAPREDUCE-3693</a>.
+     Minor improvement reported by rvs and fixed by rvs (mrv2)<br>
+     <b>Add admin env to mapred-default.xml</b><br>
+     <blockquote>I have noticed that org.apache.hadoop.mapred.MapReduceChildJVM doesn&apos;t forward the value of -Djava.library.path= from the parent JVM to the child JVM. Thus if one wants to use native libraries for compression the only option seems to be to manually include relevant java.library.path settings into the mapred-site.xml (as mapred.[map|reduce].child.java.opts).<br><br>This seems to be a change in behavior compared to MR1 where TaskRunner.java used to do that:<br><br>{noformat}<br>String libraryPath = System.get...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3696">MAPREDUCE-3696</a>.
+     Blocker bug reported by johnvijoe and fixed by johnvijoe (mrv2)<br>
+     <b>MR job via oozie does not work on hadoop 23</b><br>
+     <blockquote>The NM throws an error when submitting an MR job via Oozie on the latest Hadoop 0.23.<br>*Courtesy: Mona Chitnis (Oozie)</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3697">MAPREDUCE-3697</a>.
+     Blocker bug reported by johnvijoe and fixed by mahadev (mrv2)<br>
+     <b>Hadoop Counters API limits Oozie&apos;s working across different hadoop versions</b><br>
+     <blockquote>Oozie uses Hadoop Counters API, by invoking Counters.getGroup(). However, in<br>hadoop 23, org.apache.hadoop.mapred.Counters does not implement getGroup(). Its<br>parent class AbstractCounters implements it. This is different from hadoop20X.<br>As a result, Oozie compiled with either hadoop version does not work with the<br>other version.<br>A specific scenario, Oozie compiled with .23 and run against 205, does not<br>update job status owing to a Counters API exception.<br><br>Will explicit re-compilation against th...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3698">MAPREDUCE-3698</a>.
+     Blocker sub-task reported by sseth and fixed by mahadev (mrv2)<br>
+     <b>Client cannot talk to the history server in secure mode</b><br>
+     <blockquote>{noformat}<br>12/01/19 02:56:22 ERROR security.UserGroupInformation: PriviledgedActionException as:XXX@XXX(auth:KERBEROS) cause:java.io.IOException: Failed to specify server&apos;s Kerberos principal name<br>12/01/19 02:56:22 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.<br>{noformat}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3701">MAPREDUCE-3701</a>.
+     Major bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>Delete HadoopYarnRPC from 0.23 branch.</b><br>
+     <blockquote>HadoopYarnRPC file exists in 0.23 (should have been removed with the new HadoopYarnProtoRPC). Trunk does not have this issue.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3702">MAPREDUCE-3702</a>.
+     Critical bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>internal server error trying access application master via proxy with filter enabled</b><br>
+     <blockquote>I had a hadoop.http.filter.initializers in place to do user authentication, but was purposely trying to let it bypass authentication on certain pages.  One of those was the proxy and the application master main page. When I then tried to go to the application master through the proxy, it threw an internal server error:<br><br>Problem accessing /mapreduce. Reason:<br><br>    INTERNAL_SERVER_ERROR<br>Caused by:<br><br>java.lang.NullPointerException<br>	at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFi...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3705">MAPREDUCE-3705</a>.
+     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
+     <b>ant build fails on 0.23 branch </b><br>
+     <blockquote>Running the ant build in mapreduce on the latest 0.23 branch fails. It looks like the ivy properties file still has 0.24.0, and the gridmix dependencies need rumen as a dependency.<br><br>The gridmix errors look like:<br>   [javac] /home/tgraves/anttest/hadoop-mapreduce-project/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/DistributedCacheEmulator.java:249: cannot find symbol<br>    [javac] symbol  : class JobStoryProducer<br>    [javac] location: class org.apache.hadoop.mapred.gridmix...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3708">MAPREDUCE-3708</a>.
+     Major bug reported by kam_iitkgp and fixed by kamesh (mrv2)<br>
+     <b>Metrics: Incorrect Apps Submitted Count</b><br>
+     <blockquote>Submitted an application with the following configuration:<br>{code:xml}<br>&lt;property&gt;<br> &lt;name&gt;yarn.resourcemanager.am.max-retries&lt;/name&gt;<br> &lt;value&gt;2&lt;/value&gt;<br>&lt;/property&gt;<br>{code}<br>In this case, the application failed the first time, so the AM attempted the same application again. <br>On the retry, the *Apps Submitted* counter was incremented again.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3709">MAPREDUCE-3709</a>.
+     Major bug reported by eli and fixed by hitesh (mrv2, test)<br>
+     <b>TestDistributedShell is failing</b><br>
+     <blockquote>TestDistributedShell#testDSShell is failing the assert on line 90 on branch-23.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3712">MAPREDUCE-3712</a>.
+     Blocker bug reported by raviprak and fixed by mahadev (mrv2)<br>
+     <b>The mapreduce tar does not contain the hadoop-mapreduce-client-jobclient-tests.jar. </b><br>
+     <blockquote>Working MRv1 tests were moved into the maven build as part of MAPREDUCE-3582. Some classes like MRBench, SleepJob and FailJob, which are essential for QE, were moved to the jobclient-tests.jar. However, the tar.gz file does not contain this jar.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3717">MAPREDUCE-3717</a>.
+     Blocker bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>JobClient test jar has missing files to run all the test programs.</b><br>
+     <blockquote>Looks like MAPREDUCE-3582 forgot to move a couple of files from the ant builds. The current test jar from jobclient does not work.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3718">MAPREDUCE-3718</a>.
+     Major sub-task reported by vinodkv and fixed by hitesh (mrv2, performance)<br>
+     <b>Default AM heartbeat interval should be one second</b><br>
+     <blockquote>Helps in improving app performance. RM should be able to handle this, as the heartbeats aren&apos;t really costly.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3721">MAPREDUCE-3721</a>.
+     Blocker bug reported by sseth and fixed by sseth (mrv2)<br>
+     <b>Race in shuffle can cause it to hang</b><br>
+     <blockquote>If all current {{Fetcher}}s complete while an in-memory merge is in progress - shuffle could hang. <br>Specifically - if the memory freed by an in-memory merge does not bring {{MergeManager.usedMemory}} below {{MergeManager.memoryLimit}} and all current Fetchers complete before the in-memory merge completes, another in-memory merge will not be triggered - and shuffle will hang. (All new fetchers are asked to WAIT).<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3723">MAPREDUCE-3723</a>.
+     Major bug reported by kam_iitkgp and fixed by kamesh (mrv2, test, webapps)<br>
+     <b>TestAMWebServicesJobs &amp; TestHSWebServicesJobs incorrectly asserting tests</b><br>
+     <blockquote>While testing a patch for one of the MR issues, I found TestAMWebServicesJobs &amp; TestHSWebServicesJobs incorrectly asserting tests. <br>Moreover, tests may fail if<br>{noformat}<br>	index of counterGroups &gt; #counters in a particular counterGroup<br>{noformat}<br>{code:title=TestAMWebServicesJobs.java|borderStyle=solid}<br>for (int j = 0; j &lt; counters.length(); j++) {<br> JSONObject counter = counters.getJSONObject(i);<br>{code}<br><br>where *i* is the index of the outer loop. It should be *j* instead of *i*.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3727">MAPREDUCE-3727</a>.
+     Critical bug reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>jobtoken location property in jobconf refers to wrong jobtoken file</b><br>
+     <blockquote>The Oozie launcher job (for an MR/Pig/Hive/Sqoop action) reads the location of the jobtoken file from the *HADOOP_TOKEN_FILE_LOCATION* ENV var and seeds it as the *mapreduce.job.credentials.binary* property in the jobconf that will be used to launch the real (MR/Pig/Hive/Sqoop) job.<br><br>The MR/Pig/Hive/Sqoop submission code (via Hadoop job submission) correctly uses the injected *mapreduce.job.credentials.binary* property to load the credentials and submit their MR jobs.<br><br>The problem is that the *mapre...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3733">MAPREDUCE-3733</a>.
+     Major bug reported by mahadev and fixed by mahadev <br>
+     <b>Add Apache License Header to hadoop-distcp/pom.xml</b><br>
+     <blockquote>Looks like I missed the Apache Headers in the review. Adding it now.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3735">MAPREDUCE-3735</a>.
+     Blocker bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>Add distcp jar to the distribution (tar)</b><br>
+     <blockquote>The distcp jar isn&apos;t getting added to the tarball as of now. We need to add it along with archives/streaming and the others.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3737">MAPREDUCE-3737</a>.
+     Critical bug reported by revans2 and fixed by revans2 (mrv2)<br>
+     <b>The Web Application Proxy is not documented very well</b><br>
+     <blockquote>The Web Application Proxy is a security feature, but there is no documentation for what it does, why it does it, and, more importantly, what attacks it is known not to protect against. This is so that anyone adopting Hadoop can know exactly what potential security issues they may encounter.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3742">MAPREDUCE-3742</a>.
+     Blocker bug reported by jlowe and fixed by jlowe (mrv2)<br>
+     <b>&quot;yarn logs&quot; command fails with ClassNotFoundException</b><br>
+     <blockquote>Executing &quot;yarn logs&quot; at a shell prompt fails with this error:<br><br>Exception in thread &quot;main&quot; java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogDumper<br>Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogDumper<br>	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)<br>	at java.security.AccessController.doPrivileged(Native Method)<br>	at java.net.URLClassLoader.findClass(U...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3744">MAPREDUCE-3744</a>.
+     Blocker bug reported by jlowe and fixed by jlowe (mrv2)<br>
+     <b>Unable to retrieve application logs via &quot;yarn logs&quot; or &quot;mapred job -logs&quot;</b><br>
+     <blockquote>Trying to retrieve application logs via the &quot;yarn logs&quot; shell command results in an error similar to this:<br><br>Exception in thread &quot;main&quot; java.io.FileNotFoundException: File /tmp/logs/application_1327694122989_0001 does not exist.<br>	at org.apache.hadoop.fs.Hdfs$DirListingIterator.&lt;init&gt;(Hdfs.java:226)<br>	at org.apache.hadoop.fs.Hdfs$DirListingIterator.&lt;init&gt;(Hdfs.java:217)<br>	at org.apache.hadoop.fs.Hdfs$2.&lt;init&gt;(Hdfs.java:192)<br>	at org.apache.hadoop.fs.Hdfs.listStatusIterator(Hdfs.java:192)<br>	at org.a...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3747">MAPREDUCE-3747</a>.
+     Major bug reported by rramya and fixed by acmurthy (mrv2)<br>
+     <b>Memory Total is not refreshed until an app is launched</b><br>
+     <blockquote>Memory Total on the RM UI is not refreshed until an application is launched. This is a problem when the cluster is started for the first time or when there are any lost/decommissioned NMs.<br>When the cluster is started for the first time, Active Nodes is &gt; 0 but the Memory Total=0. Also, when there are any lost/decommissioned nodes, Memory Total has the wrong value.<br>This is a useful tool for cluster admins and has to be updated correctly without needing to submit an app each time.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3748">MAPREDUCE-3748</a>.
+     Minor bug reported by rramya and fixed by rramya (mrv2)<br>
+     <b>Move CS related nodeUpdate log messages to DEBUG</b><br>
+     <blockquote>Currently, the RM has nodeUpdate logs per NM per second such as the following:<br>2012-01-27 21:51:32,429 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: nodeUpdate: &lt;nodemanager1&gt;:&lt;port1&gt; clusterResources: memory: 57344<br>2012-01-27 21:51:32,510 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: nodeUpdate: &lt;nodemanager2&gt;:&lt;port2&gt; clusterResources: memory: 57344<br>2012-01-27 21:51:33,094 INFO org.apache.hadoop.yarn.server...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3749">MAPREDUCE-3749</a>.
+     Blocker bug reported by tomwhite and fixed by tomwhite (mrv2)<br>
+     <b>ConcurrentModificationException in counter groups</b><br>
+     <blockquote>Iterating over a counter&apos;s groups while adding more groups will cause a ConcurrentModificationException.<br><br>This was found while running Hive unit tests against a recent 0.23 version.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3756">MAPREDUCE-3756</a>.
+     Major improvement reported by acmurthy and fixed by hitesh (mrv2)<br>
+     <b>Make single shuffle limit configurable</b><br>
+     <blockquote>Make single shuffle limit configurable, currently it&apos;s hard-coded.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3759">MAPREDUCE-3759</a>.
+     Major bug reported by rramya and fixed by vinodkv (mrv2)<br>
+     <b>ClassCastException thrown in -list-active-trackers when there are a few unhealthy nodes</b><br>
+     <blockquote>When there are a few blacklisted nodes in the cluster, &quot;bin/mapred job -list-active-trackers&quot; throws &quot;java.lang.ClassCastException: org.apache.hadoop.yarn.server.resourcemanager.resource.Resources$1 cannot be cast to org.apache.hadoop.yarn.api.records.impl.pb.ResourcePBImpl&quot;</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3762">MAPREDUCE-3762</a>.
+     Critical bug reported by mahadev and fixed by mahadev (mrv2)<br>
+     <b>Resource Manager fails to come up with default capacity scheduler configs.</b><br>
+     <blockquote>Thanks to [~harip] for pointing out the issue. This is the stack trace for bringing up RM with default CS configs:<br><br>{code}<br>java.lang.IllegalArgumentException: Illegal value  of maximumCapacity -0.01 used in call to setMaxCapacity for queue default<br>        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.checkMaxCapacity(CSQueueUtils.java:28)<br>        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:21...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3764">MAPREDUCE-3764</a>.
+     Critical bug reported by sseth and fixed by acmurthy (mrv2)<br>
+     <b>AllocatedGB etc metrics incorrect if min-allocation-mb isn&apos;t a multiple of 1GB</b><br>
+     <blockquote>MutableGaugeInt is incremented as {{allocatedGB.incr(res.getMemory() / GB * containers);}}<br><br>With yarn.scheduler.capacity.minimum-allocation-mb set to 1536, each increment is counted as only 1 GB.<br>When analyzing the metrics, the cluster appears never to be over 67-68% utilized, depending on high-RAM requests.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3765">MAPREDUCE-3765</a>.
+     Minor bug reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>FifoScheduler does not respect yarn.scheduler.fifo.minimum-allocation-mb setting</b><br>
+     <blockquote>FifoScheduler uses default min 1 GB regardless of the configuration value set for minimum memory allocation.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3771">MAPREDUCE-3771</a>.
+     Major improvement reported by acmurthy and fixed by acmurthy <br>
+     <b>Port MAPREDUCE-1735 to trunk/0.23</b><br>
+     <blockquote>Per discussion in general@, we should port MAPREDUCE-1735 to 0.23 &amp; trunk to &apos;undeprecate&apos; old mapred api:<br>http://s.apache.org/undeprecate-mapred-apis</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3775">MAPREDUCE-3775</a>.
+     Minor bug reported by hitesh and fixed by hitesh (mrv2)<br>
+     <b>Change MiniYarnCluster to escape special chars in testname</b><br>
+     <blockquote>When using MiniYarnCluster with the testname set to a nested classname, the &quot;$&quot; within the class name creates issues with the container launch scripts as they try to expand the $... within the paths/variables in use.  </blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3780">MAPREDUCE-3780</a>.
+     Blocker bug reported by rramya and fixed by hitesh (mrv2)<br>
+     <b>RM assigns containers to killed applications</b><br>
+     <blockquote>RM attempts to assign containers to killed applications. The applications were killed when they were inactive and waiting for AM allocation.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3791">MAPREDUCE-3791</a>.
+     Major bug reported by rvs and fixed by mahadev (documentation, mrv2)<br>
+     <b>can&apos;t build site in hadoop-yarn-server-common</b><br>
+     <blockquote>Here&apos;s how to reproduce:<br><br>{noformat}<br>$ mvn site site:stage -DskipTests -DskipTest -DskipITs<br>....<br>main:<br>[INFO] ------------------------------------------------------------------------<br>[INFO] Reactor Summary:<br>[INFO] <br>[INFO] Apache Hadoop Main ................................ SUCCESS [49.017s]<br>[INFO] Apache Hadoop Project POM ......................... SUCCESS [5.152s]<br>[INFO] Apache Hadoop Annotations ......................... SUCCESS [4.973s]<br>[INFO] Apache Hadoop Project Dist POM ..................</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3794">MAPREDUCE-3794</a>.
+     Major bug reported by tomwhite and fixed by tomwhite (mrv2)<br>
+     <b>Support mapred.Task.Counter and mapred.JobInProgress.Counter enums for compatibility</b><br>
+     <blockquote>The new counters are mapreduce.TaskCounter and mapreduce.JobCounter, but we should support the old ones too since they are public in Hadoop 1.x.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3795">MAPREDUCE-3795</a>.
+     Major bug reported by vinodkv and fixed by vinodkv (mrv2)<br>
+     <b>&quot;job -status&quot; command line output is malformed</b><br>
+     <blockquote>Misses new lines after numMaps and numReduces. Caused by MAPREDUCE-3720.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3803">MAPREDUCE-3803</a>.
+     Major test reported by raviprak and fixed by raviprak (build)<br>
+     <b>HDFS-2864 broke ant compilation</b><br>
+     <blockquote>compile:<br>     [echo] contrib: raid<br>    [javac] &lt;somePath&gt;/hadoop-mapreduce-project/src/contrib/build-contrib.xml:194: warning: &apos;includeantruntime&apos; was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds<br>    [javac] Compiling 28 source files to &lt;somepath&gt;/hadoop-mapreduce-project/build/contrib/raid/classes<br>    [javac] &lt;somepath&gt;/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java:111: cannot find symbol<br> ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3809">MAPREDUCE-3809</a>.
+     Blocker sub-task reported by sseth and fixed by sseth (mrv2)<br>
+     <b>Tasks may take up to 3 seconds to exit after completion</b><br>
+     <blockquote>Task.TaskReporter.stopCommunicationThread can end up waiting for a thread.sleep(3000) before stopping the thread.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3810">MAPREDUCE-3810</a>.
+     Blocker sub-task reported by vinodkv and fixed by vinodkv (mrv2, performance)<br>
+     <b>MR AM&apos;s ContainerAllocator is assigning the allocated containers very slowly</b><br>
+     <blockquote>This is mostly due to logging and other not-so-cheap operations we are doing as part of the AM-&gt;RM heartbeat cycle.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3811">MAPREDUCE-3811</a>.
+     Critical task reported by sseth and fixed by sseth (mrv2)<br>
+     <b>Make the Client-AM IPC retry count configurable</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3813">MAPREDUCE-3813</a>.
+     Major sub-task reported by vinodkv and fixed by vinodkv (mrv2, performance)<br>
+     <b>RackResolver should maintain a cache to avoid repetitive lookups.</b><br>
+     <blockquote>With the current code, during task creation, we repeatedly resolve hosts and RackResolver doesn&apos;t cache any of the results. Caching will improve performance.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3814">MAPREDUCE-3814</a>.
+     Major bug reported by acmurthy and fixed by acmurthy (mrv1, mrv2)<br>
+     <b>MR1 compile fails</b><br>
+     <blockquote>$ ant veryclean all-jars -Dversion=0.23.1 -Dresolvers=internal<br><br><br>BUILD FAILED<br>/grid/0/dev/acm/hadoop-0.23/hadoop-mapreduce-project/build.xml:537: srcdir &quot;/grid/0/dev/acm/hadoop-0.23/hadoop-mapreduce-project/src/test/mapred/testjar&quot; does not exist!<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3817">MAPREDUCE-3817</a>.
+     Major bug reported by arpitgupta and fixed by arpitgupta (mrv2)<br>
+     <b>bin/mapred command cannot run distcp and archive jobs</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3826">MAPREDUCE-3826</a>.
+     Major bug reported by arpitgupta and fixed by jeagles (mrv2)<br>
+     <b>RM UI when loaded throws a message stating Data Tables warning and then the column sorting stops working</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3833">MAPREDUCE-3833</a>.
+     Major bug reported by jlowe and fixed by jlowe (mrv2)<br>
+     <b>Capacity scheduler queue refresh doesn&apos;t recompute queue capacities properly</b><br>
+     <blockquote>Refreshing the capacity scheduler configuration (e.g.: via yarn rmadmin -refreshQueues) can fail to compute the proper absolute capacity for leaf queues.</blockquote></li>
+
+
+</ul>
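Several of the scheduler-metrics fixes above (MAPREDUCE-3764 in particular) come down to the order of integer operations. The following is a minimal, hypothetical illustration (not Hadoop's actual code; the class and method names are made up) of why dividing by GB before multiplying under-counts allocations whenever the container size is not a multiple of 1 GB:

```java
// Hypothetical sketch of the integer-division rounding behind MAPREDUCE-3764.
// With a 1536 MB minimum allocation, memory / GB truncates each container
// to 1 GB, so the aggregate AllocatedGB metric under-reports real usage.
public class AllocatedGBDemo {
    static final int GB = 1024; // megabytes per gigabyte

    // Buggy pattern: divide before multiplying, losing the fractional GB
    // of every container (1536 / 1024 == 1 in integer arithmetic).
    static int buggyAllocatedGB(int containerMemoryMB, int containers) {
        return containerMemoryMB / GB * containers;
    }

    // Safer pattern: aggregate in MB first, then convert to GB once.
    static int fixedAllocatedGB(int containerMemoryMB, int containers) {
        return containerMemoryMB * containers / GB;
    }

    public static void main(String[] args) {
        // 10 containers of 1536 MB = 15360 MB = 15 GB total.
        System.out.println(buggyAllocatedGB(1536, 10)); // prints 10
        System.out.println(fixedAllocatedGB(1536, 10)); // prints 15
    }
}
```

The general lesson is to accumulate in the smallest unit (MB here) and perform the lossy unit conversion exactly once, at reporting time.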
+
+
+
 <h2>Changes since Hadoop 0.22</h2>
 
 <ul>