
Hadoop Change Log

Trunk (unreleased)

 1. HADOOP-208. Enhance MapReduce web interface, adding new pages
    for failed tasks, and tasktrackers. (omalley via cutting)

 2. HADOOP-204. Tweaks to metrics package. (David Bowen via cutting)

 3. HADOOP-209. Add a MapReduce-based file copier. This will
    copy files within or between file systems in parallel.
    (Milind Bhandarkar via cutting)

 4. HADOOP-146. Fix DFS to check when randomly generating a new block
    id that no existing blocks already have that id.
    (Milind Bhandarkar via cutting)

 5. HADOOP-180. Make a daemon thread that does the actual task
    clean-ups, so that the main offerService thread in the TaskTracker
    doesn't get stuck and miss its heartbeat window. This was killing
    many task trackers as big jobs finished (300+ tasks / node).
    (omalley via cutting)

 6. HADOOP-200. Avoid transmitting entire list of map task names to
    reduce tasks. Instead just transmit the number of map tasks and
    henceforth refer to them by number when collecting map output.
    (omalley via cutting)

 7. HADOOP-219. Fix a NullPointerException when handling a checksum
    exception under SequenceFile.Sorter.sort(). (cutting & stack)

 8. HADOOP-212. Permit alteration of the file block size in DFS. The
    default block size for new files may now be specified in the
    configuration with the dfs.block.size property. The block size
    may also be specified when files are opened. (omalley via cutting)

 9. HADOOP-218. Avoid accessing configuration while looping through
    tasks in JobTracker. (Mahadev Konar via cutting)

10. HADOOP-161. Add hashCode() method to DFS's Block.
    (Milind Bhandarkar via cutting)

11. HADOOP-115. Map output types may now be specified. These are also
    used as reduce input types, thus permitting reduce input types to
    differ from reduce output types. (Runping Qi via cutting)

12. HADOOP-216. Add task progress to task status page.
    (Bryan Pendelton via cutting)

13. HADOOP-233. Add web server to task tracker that shows running
    tasks and logs. Also add log access to job tracker web interface.
    (omalley via cutting)

14. HADOOP-205. Incorporate pending tasks into tasktracker load
    calculations. (Mahadev Konar via cutting)

15. HADOOP-247. Fix sort progress to better handle exceptions.
    (Mahadev Konar via cutting)

16. HADOOP-195. Improve performance of the transfer of map outputs to
    reduce nodes by performing multiple transfers in parallel, each on
    a separate socket. (Sameer Paranjpye via cutting)

17. HADOOP-251. Fix task processes to be tolerant of failed progress
    reports to their parent process. (omalley via cutting)

18. HADOOP-325. Improve the FileNotFound exceptions thrown by
    LocalFileSystem to include the name of the file.
    (Benjamin Reed via cutting)

19. HADOOP-254. Use HTTP to transfer map output data to reduce
    nodes. This, together with HADOOP-195, greatly improves the
    performance of these transfers. (omalley via cutting)

20. HADOOP-163. Cause datanodes that are unable to either read or
    write data to exit, so that the namenode will no longer target
    them for new blocks and will replicate their data on other nodes.
    (Hairong Kuang via cutting)

Release 0.2.1 - 2006-05-12

 1. HADOOP-199. Fix reduce progress (broken by HADOOP-182).
    (omalley via cutting)

 2. HADOOP-201. Fix 'bin/hadoop dfs -report'. (cutting)

 3. HADOOP-207. Fix JDK 1.4 incompatibility introduced by HADOOP-96.
    System.getenv() does not work in JDK 1.4. (Hairong Kuang via cutting)

Release 0.2.0 - 2006-05-05

 1. Fix HADOOP-126. 'bin/hadoop dfs -cp' now correctly copies .crc
    files. (Konstantin Shvachko via cutting)

 2. Fix HADOOP-51. Change DFS to support per-file replication counts.
    (Konstantin Shvachko via cutting)

 3. Fix HADOOP-131. Add scripts to start/stop dfs and mapred daemons.
    Use these in start/stop-all scripts. (Chris Mattmann via cutting)

 4. Stop using ssh options by default that are not yet in widely used
    versions of ssh. Folks can still enable their use by uncommenting
    a line in conf/hadoop-env.sh. (cutting)

 5. Fix HADOOP-92. Show information about all attempts to run each
    task in the web ui. (Mahadev Konar via cutting)

 6. Fix HADOOP-128. Improved DFS error handling. (Owen O'Malley via cutting)

 7. Fix HADOOP-129. Replace uses of java.io.File with new class named
    Path. This fixes bugs where java.io.File methods were called
    directly when FileSystem methods were desired, and reduces the
    likelihood of such bugs in the future. It also makes the handling
    of pathnames more consistent between local and dfs FileSystems and
    between Windows and Unix. java.io.File-based methods are still
    available for back-compatibility, but are deprecated and will be
    removed once 0.2 is released. (cutting)

 8. Change dfs.data.dir and mapred.local.dir to be comma-separated
    lists of directories, no longer space-separated. This fixes
    several bugs on Windows. (cutting)

 9. Fix HADOOP-144. Use mapred task id for dfs client id, to
    facilitate debugging. (omalley via cutting)

10. Fix HADOOP-143. Do not line-wrap stack-traces in web ui.
    (omalley via cutting)

11. Fix HADOOP-118. In DFS, improve clean up of abandoned file
    creations. (omalley via cutting)

12. Fix HADOOP-138. Stop multiple tasks in a single heartbeat, rather
    than one per heartbeat. (Stefan via cutting)

13. Fix HADOOP-139. Remove a potential deadlock in
    LocalFileSystem.lock(). (Igor Bolotin via cutting)

14. Fix HADOOP-134. Don't hang jobs when the tasktracker is
    misconfigured to use an un-writable local directory.
    (omalley via cutting)

15. Fix HADOOP-115. Correct an error message. (Stack via cutting)
16. Fix HADOOP-133. Retry pings from child to parent, in case of
    (local) communication problems. Also log exit status, so that one
    can distinguish patricide from other deaths. (omalley via cutting)

17. Fix HADOOP-142. Avoid re-running a task on a host where it has
    previously failed. (omalley via cutting)

18. Fix HADOOP-148. Maintain a task failure count for each
    tasktracker and display it in the web ui. (omalley via cutting)

19. Fix HADOOP-151. Close a potential socket leak, where new IPC
    connection pools were created per configuration instance that RPCs
    use. Now a global RPC connection pool is used again, as
    originally intended. (cutting)

20. Fix HADOOP-69. Don't throw a NullPointerException when getting
    hints for a non-existing file split. (Bryan Pendelton via cutting)

21. Fix HADOOP-157. When a task that writes dfs files (e.g., a reduce
    task) failed and was retried, it would fail again and again,
    eventually failing the job. The problem was that dfs did not yet
    know that the failed task had abandoned the files, and would not
    yet let another task create files with the same names. Dfs now
    retries when creating a file long enough for locks on abandoned
    files to expire. (omalley via cutting)

22. Fix HADOOP-150. Improved task names that include job
    names. (omalley via cutting)

23. Fix HADOOP-162. Fix ConcurrentModificationException when
    releasing file locks. (omalley via cutting)

24. Fix HADOOP-132. Initial check-in of new Metrics API, including
    implementations for writing metric data to a file and for sending
    it to Ganglia. (David Bowen via cutting)

25. Fix HADOOP-160. Remove some unneeded synchronization around
    time-consuming operations in the TaskTracker. (omalley via cutting)

26. Fix HADOOP-166. RPCs failed when passed subclasses of a declared
    parameter type. This is fixed by changing ObjectWritable to store
    both the declared type and the instance type for Writables. Note
    that this incompatibly changes the format of ObjectWritable and
    will render unreadable any ObjectWritables stored in files.
    Nutch only uses ObjectWritable in intermediate files, so this
    should not be a problem for Nutch. (Stefan & cutting)

27. Fix HADOOP-168. MapReduce RPC protocol methods should all declare
    IOException, so that timeouts are handled appropriately.
    (omalley via cutting)

28. Fix HADOOP-169. Don't fail a reduce task if a call to the
    jobtracker to locate map outputs fails. (omalley via cutting)

29. Fix HADOOP-170. Permit FileSystem clients to examine and modify
    the replication count of individual files. Also fix a few
    replication-related bugs. (Konstantin Shvachko via cutting)

30. Permit specification of higher replication levels for job
    submission files (job.xml and job.jar). This helps with large
    clusters, since these files are read by every node. (cutting)
31. HADOOP-173. Optimize allocation of tasks with local data. (cutting)

32. HADOOP-167. Reduce number of Configurations and JobConf's
    created. (omalley via cutting)

33. NUTCH-256. Change FileSystem#createNewFile() to create a .crc
    file. The lack of a .crc file was causing warnings. (cutting)

34. HADOOP-174. Change JobClient to not abort job until it has failed
    to contact the job tracker for five attempts, not just one as
    before. (omalley via cutting)

35. HADOOP-177. Change MapReduce web interface to page through tasks.
    Previously, when jobs had more than a few thousand tasks they
    could crash web browsers. (Mahadev Konar via cutting)

36. HADOOP-178. In DFS, piggyback blockwork requests from datanodes
    on heartbeat responses from namenode. This reduces the volume of
    RPC traffic. Also move startup delay in blockwork from datanode
    to namenode. This fixes a problem where restarting the namenode
    triggered a lot of unneeded replication. (Hairong Kuang via cutting)

37. HADOOP-183. If the DFS namenode is restarted with different
    minimum and/or maximum replication counts, existing files'
    replication counts are now automatically adjusted to be within the
    newly configured bounds. (Hairong Kuang via cutting)

38. HADOOP-186. Better error handling in TaskTracker's top-level
    loop. Also improve calculation of time to send next heartbeat.
    (omalley via cutting)

39. HADOOP-187. Add two MapReduce examples/benchmarks. One creates
    files containing random data. The second sorts the output of the
    first. (omalley via cutting)

40. HADOOP-185. Fix so that, when a task tracker times out making the
    RPC asking for a new task to run, the job tracker does not think
    that it is actually running the task returned. (omalley via cutting)

41. HADOOP-190. If a child process hangs after it has reported
    completion, its output should not be lost. (Stack via cutting)

42. HADOOP-184. Re-structure some test code to better support testing
    on a cluster. (Mahadev Konar via cutting)

43. HADOOP-191. Add streaming package, Hadoop's first contrib module.
    This permits folks to easily submit MapReduce jobs whose map and
    reduce functions are implemented by shell commands. Use
    'bin/hadoop jar build/hadoop-streaming.jar' to get details.
    (Michel Tourn via cutting)

44. HADOOP-189. Fix MapReduce in standalone configuration to
    correctly handle job jar files that contain a lib directory with
    nested jar files. (cutting)

45. HADOOP-65. Initial version of record I/O framework that enables
    the specification of record types and generates marshalling code
    in both Java and C++. Generated Java code implements
    WritableComparable, but is not yet otherwise used by
    Hadoop. (Milind Bhandarkar via cutting)

46. HADOOP-193. Add a MapReduce-based FileSystem benchmark.
    (Konstantin Shvachko via cutting)

47. HADOOP-194. Add a MapReduce-based FileSystem checker. This reads
    every block in every file in the filesystem. (Konstantin Shvachko
    via cutting)

48. HADOOP-182. Fix so that lost task trackers do not change the
    status of reduce tasks or completed jobs. Also fix the progress
    meter so that failed tasks are subtracted. (omalley via cutting)

49. HADOOP-96. Logging improvements. Log files are now separate from
    standard output and standard error files. Logs are now rolled.
    Logging of all DFS state changes can be enabled, to facilitate
    debugging. (Hairong Kuang via cutting)

Release 0.1.1 - 2006-04-08

 1. Added CHANGES.txt, logging all significant changes to Hadoop. (cutting)

 2. Fix MapReduceBase.close() to throw IOException, as declared in the
    Closeable interface. This permits subclasses which override this
    method to throw that exception. (cutting)

 3. Fix HADOOP-117. Pathnames were mistakenly transposed in
    JobConf.getLocalFile(), causing many mapred temporary files to not
    be removed. (Raghavendra Prabhu via cutting)

 4. Fix HADOOP-116. Clean up job submission files when jobs complete.
    (cutting)

 5. Fix HADOOP-125. Fix handling of absolute paths on Windows. (cutting)

Release 0.1.0 - 2006-04-01

 1. The first release of Hadoop.