
Hadoop Change Log


Trunk (unreleased changes)

 1. HADOOP-352. Fix shell scripts to use /bin/sh instead of
    /bin/bash, for better portability.
    (Jean-Baptiste Quenot via cutting)

 2. HADOOP-313. Permit task state to be saved so that single tasks
    may be manually re-executed when debugging. (omalley via cutting)

 3. HADOOP-339. Add method to JobClient API listing jobs that are
    not yet complete, i.e., that are queued or running.
    (Mahadev Konar via cutting)

 4. HADOOP-355. Updates to the streaming contrib module, including
    API fixes, making reduce optional, and adding an input type for
    StreamSequenceRecordReader. (Michel Tourn via cutting)

 5. HADOOP-358. Fix an NPE bug in Path.equals().
    (Frédéric Bertin via cutting)

 6. HADOOP-327. Fix ToolBase to not call System.exit() when
    exceptions are thrown. (Hairong Kuang via cutting)

 7. HADOOP-359. Permit map output to be compressed.
    (omalley via cutting)

 8. HADOOP-341. Permit input URI to CopyFiles to use the HTTP
    protocol. This lets one, e.g., more easily copy log files into
    DFS. (Arun C Murthy via cutting)

 9. HADOOP-361. Remove unix dependencies from streaming contrib
    module tests, making them pure java. (Michel Tourn via cutting)

10. HADOOP-354. Make public methods to stop DFS daemons.
    (Barry Kaplan via cutting)

11. HADOOP-252. Add versioning to RPC protocols.
    (Milind Bhandarkar via cutting)

12. HADOOP-356. Add contrib to "compile" and "test" build targets, so
    that this code is better maintained. (Michel Tourn via cutting)

13. HADOOP-307. Add smallJobsBenchmark contrib module. This runs
    lots of small jobs, in order to determine per-task overheads.
    (Sanjay Dahiya via cutting)

14. HADOOP-342. Add a tool for log analysis: Logalyzer.
    (Arun C Murthy via cutting)

15. HADOOP-347. Add web-based browsing of DFS content. The namenode
    redirects browsing requests to datanodes. Content requests are
    redirected to datanodes where the data is local when possible.
    (Devaraj Das via cutting)

16. HADOOP-351. Make Hadoop IPC kernel independent of Jetty.
    (Devaraj Das via cutting)


Release 0.4.0 - 2006-06-28

 1. HADOOP-298. Improved progress reports for CopyFiles utility, the
    distributed file copier. (omalley via cutting)

 2. HADOOP-299. Fix the task tracker, permitting multiple jobs to
    more easily execute at the same time. (omalley via cutting)

 3. HADOOP-250. Add an HTTP user interface to the namenode, running
    on port 50070. (Devaraj Das via cutting)

 4. HADOOP-123. Add MapReduce unit tests that run a jobtracker and
    tasktracker, greatly increasing code coverage.
    (Milind Bhandarkar via cutting)

 5. HADOOP-271. Add links from jobtracker's web ui to tasktracker's
    web ui. Also attempt to log a thread dump of child processes
    before they're killed. (omalley via cutting)

 6. HADOOP-210. Change RPC server to use a selector instead of a
    thread per connection. This should make it easier to scale to
    larger clusters. Note that this incompatibly changes the RPC
    protocol: clients and servers must both be upgraded to the new
    version to ensure correct operation. (Devaraj Das via cutting)

 7. HADOOP-311. Change DFS client to retry failed reads, so that a
    single read failure will not alone cause failure of a task.
    (omalley via cutting)

 8. HADOOP-314. Remove the "append" phase when reducing. Map output
    files are now directly passed to the sorter, without first
    appending them into a single file. Now, the first third of reduce
    progress is "copy" (transferring map output to reduce nodes), the
    middle third is "sort" (sorting map output) and the last third is
    "reduce" (generating output). Long-term, the "sort" phase will
    also be removed. (omalley via cutting)

 9. HADOOP-316. Fix a potential deadlock in the jobtracker.
    (omalley via cutting)

10. HADOOP-319. Fix FileSystem.close() to remove the FileSystem
    instance from the cache. (Hairong Kuang via cutting)

11. HADOOP-135. Fix potential deadlock in JobTracker by acquiring
    locks in a consistent order. (omalley via cutting)

12. HADOOP-278. Check for existence of input directories before
    starting MapReduce jobs, making it easier to debug this common
    error. (omalley via cutting)

13. HADOOP-304. Improve error message for
    UnregisterdDatanodeException to include expected node name.
    (Konstantin Shvachko via cutting)

14. HADOOP-305. Fix TaskTracker to ask for new tasks as soon as a
    task is finished, rather than waiting for the next heartbeat.
    This improves performance when tasks are short.
    (Mahadev Konar via cutting)

15. HADOOP-59. Add support for generic command line options. One may
    now specify the filesystem (-fs), the MapReduce jobtracker (-jt),
    a config file (-conf) or any configuration property (-D). The
    "dfs", "fsck", "job", and "distcp" commands currently support
    this, with more to be added. (Hairong Kuang via cutting)
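    A sketch of combining these generic options on one command line;
    the host name, port, and property value below are illustrative,
    not defaults:

    ```shell
    bin/hadoop dfs -fs namenode.example.com:8020 \
        -D dfs.replication=2 -ls /user/logs
    ```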
16. HADOOP-296. Permit specification of the amount of reserved space
    on a DFS datanode. One may specify both the percentage free and
    the number of bytes. (Johan Oskarson via cutting)
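    A sketch of the corresponding hadoop-site.xml entries; the
    property names and values here are assumptions and should be
    checked against hadoop-default.xml for the release in use:

    ```xml
    <property>
      <name>dfs.datanode.du.reserved</name>
      <value>1073741824</value> <!-- keep at least 1 GB free, in bytes -->
    </property>
    <property>
      <name>dfs.datanode.du.pct</name>
      <value>0.98</value> <!-- use at most 98% of available space -->
    </property>
    ```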
17. HADOOP-325. Fix a problem initializing RPC parameter classes, and
    remove the workaround used to initialize classes.
    (omalley via cutting)

18. HADOOP-328. Add an option to the "distcp" command to ignore read
    errors while copying. (omalley via cutting)

19. HADOOP-27. Don't allocate tasks to trackers whose local free
    space is too low. (Johan Oskarson via cutting)

20. HADOOP-318. Keep slow DFS output from causing task timeouts.
    This incompatibly changes some public interfaces, adding a
    parameter to OutputFormat.getRecordWriter() and the new method
    Reporter.progress(), but it makes lots of tasks succeed that were
    previously failing. (Milind Bhandarkar via cutting)


Release 0.3.2 - 2006-06-09

 1. HADOOP-275. Update the streaming contrib module to use log4j for
    its logging. (Michel Tourn via cutting)

 2. HADOOP-279. Provide defaults for log4j logging parameters, so
    that things still work reasonably when Hadoop-specific system
    properties are not provided. (omalley via cutting)

 3. HADOOP-280. Fix a typo in AllTestDriver which caused the wrong
    test to be run when "DistributedFSCheck" was specified.
    (Konstantin Shvachko via cutting)

 4. HADOOP-240. DFS's mkdirs() implementation no longer logs a warning
    when the directory already exists. (Hairong Kuang via cutting)

 5. HADOOP-285. Fix DFS datanodes to be able to re-join the cluster
    after the connection to the namenode is lost. (omalley via cutting)

 6. HADOOP-277. Fix a race condition when creating directories.
    (Sameer Paranjpye via cutting)

 7. HADOOP-289. Improved exception handling in DFS datanode.
    (Konstantin Shvachko via cutting)

 8. HADOOP-292. Fix client-side logging to go to standard error
    rather than standard output, so that it can be distinguished from
    application output. (omalley via cutting)

 9. HADOOP-294. Fixed bug where conditions for retrying after errors
    in the DFS client were reversed. (omalley via cutting)


Release 0.3.1 - 2006-06-05

 1. HADOOP-272. Fix a bug in bin/hadoop setting log
    parameters. (omalley & cutting)

 2. HADOOP-274. Change applications to log to standard output rather
    than to a rolling log file like daemons. (omalley via cutting)

 3. HADOOP-262. Fix reduce tasks to report progress while they're
    waiting for map outputs, so that they do not time out.
    (Mahadev Konar via cutting)

 4. HADOOP-245 and HADOOP-246. Improvements to record io package.
    (Mahadev Konar via cutting)

 5. HADOOP-276. Add logging config files to jar file so that they're
    always found. (omalley via cutting)


Release 0.3.0 - 2006-06-02

 1. HADOOP-208. Enhance MapReduce web interface, adding new pages
    for failed tasks, and tasktrackers. (omalley via cutting)

 2. HADOOP-204. Tweaks to metrics package. (David Bowen via cutting)

 3. HADOOP-209. Add a MapReduce-based file copier. This will
    copy files within or between file systems in parallel.
    (Milind Bhandarkar via cutting)

 4. HADOOP-146. Fix DFS to check when randomly generating a new block
    id that no existing blocks already have that id.
    (Milind Bhandarkar via cutting)

 5. HADOOP-180. Make a daemon thread that does the actual task clean ups, so
    that the main offerService thread in the taskTracker doesn't get stuck
    and miss its heartbeat window. This was killing many task trackers as
    big jobs finished (300+ tasks / node). (omalley via cutting)

 6. HADOOP-200. Avoid transmitting entire list of map task names to
    reduce tasks. Instead just transmit the number of map tasks and
    henceforth refer to them by number when collecting map output.
    (omalley via cutting)

 7. HADOOP-219. Fix a NullPointerException when handling a checksum
    exception under SequenceFile.Sorter.sort(). (cutting & stack)

 8. HADOOP-212. Permit alteration of the file block size in DFS. The
    default block size for new files may now be specified in the
    configuration with the dfs.block.size property. The block size
    may also be specified when files are opened.
    (omalley via cutting)
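    A minimal sketch of overriding the default block size in
    hadoop-site.xml; the 64 MB value is illustrative:

    ```xml
    <property>
      <name>dfs.block.size</name>
      <value>67108864</value> <!-- default block size for new files, in bytes -->
    </property>
    ```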
 9. HADOOP-218. Avoid accessing configuration while looping through
    tasks in JobTracker. (Mahadev Konar via cutting)

10. HADOOP-161. Add hashCode() method to DFS's Block.
    (Milind Bhandarkar via cutting)

11. HADOOP-115. Map output types may now be specified. These are also
    used as reduce input types, thus permitting reduce input types to
    differ from reduce output types. (Runping Qi via cutting)

12. HADOOP-216. Add task progress to task status page.
    (Bryan Pendelton via cutting)

13. HADOOP-233. Add web server to task tracker that shows running
    tasks and logs. Also add log access to job tracker web interface.
    (omalley via cutting)

14. HADOOP-205. Incorporate pending tasks into tasktracker load
    calculations. (Mahadev Konar via cutting)

15. HADOOP-247. Fix sort progress to better handle exceptions.
    (Mahadev Konar via cutting)

16. HADOOP-195. Improve performance of the transfer of map outputs to
    reduce nodes by performing multiple transfers in parallel, each on
    a separate socket. (Sameer Paranjpye via cutting)

17. HADOOP-251. Fix task processes to be tolerant of failed progress
    reports to their parent process. (omalley via cutting)

18. HADOOP-325. Improve the FileNotFound exceptions thrown by
    LocalFileSystem to include the name of the file.
    (Benjamin Reed via cutting)

19. HADOOP-254. Use HTTP to transfer map output data to reduce
    nodes. This, together with HADOOP-195, greatly improves the
    performance of these transfers. (omalley via cutting)

20. HADOOP-163. Cause datanodes that are unable to either read or
    write data to exit, so that the namenode will no longer target
    them for new blocks and will replicate their data on other nodes.
    (Hairong Kuang via cutting)

21. HADOOP-222. Add a -setrep option to the dfs commands that alters
    file replication levels. (Johan Oskarson via cutting)
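    For example, the new option can be invoked as below; the
    replication level and path are illustrative:

    ```shell
    bin/hadoop dfs -setrep 3 /user/data/part-00000
    ```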
22. HADOOP-75. In DFS, only check for a complete file when the file
    is closed, rather than as each block is written.
    (Milind Bhandarkar via cutting)

23. HADOOP-124. Change DFS so that datanodes are identified by a
    persistent ID rather than by host and port. This solves a number
    of filesystem integrity problems, when, e.g., datanodes are
    restarted. (Konstantin Shvachko via cutting)

24. HADOOP-256. Add a C API for DFS. (Arun C Murthy via cutting)

25. HADOOP-211. Switch to use the Jakarta Commons logging internally,
    configured to use log4j by default. (Arun C Murthy and cutting)

26. HADOOP-265. Tasktracker now fails to start if it does not have a
    writable local directory for temporary files. In this case, it
    logs a message to the JobTracker and exits. (Hairong Kuang via cutting)

27. HADOOP-270. Fix potential deadlock in datanode shutdown.
    (Hairong Kuang via cutting)


Release 0.2.1 - 2006-05-12

 1. HADOOP-199. Fix reduce progress (broken by HADOOP-182).
    (omalley via cutting)

 2. HADOOP-201. Fix 'bin/hadoop dfs -report'. (cutting)

 3. HADOOP-207. Fix JDK 1.4 incompatibility introduced by HADOOP-96.
    System.getenv() does not work in JDK 1.4. (Hairong Kuang via cutting)


Release 0.2.0 - 2006-05-05

 1. Fix HADOOP-126. 'bin/hadoop dfs -cp' now correctly copies .crc
    files. (Konstantin Shvachko via cutting)

 2. Fix HADOOP-51. Change DFS to support per-file replication counts.
    (Konstantin Shvachko via cutting)

 3. Fix HADOOP-131. Add scripts to start/stop dfs and mapred daemons.
    Use these in start/stop-all scripts. (Chris Mattmann via cutting)

 4. Stop using ssh options by default that are not yet in widely used
    versions of ssh. Folks can still enable their use by uncommenting
    a line in conf/hadoop-env.sh. (cutting)

 5. Fix HADOOP-92. Show information about all attempts to run each
    task in the web ui. (Mahadev Konar via cutting)

 6. Fix HADOOP-128. Improved DFS error handling. (Owen O'Malley via cutting)

 7. Fix HADOOP-129. Replace uses of java.io.File with new class named
    Path. This fixes bugs where java.io.File methods were called
    directly when FileSystem methods were desired, and reduces the
    likelihood of such bugs in the future. It also makes the handling
    of pathnames more consistent between local and dfs FileSystems and
    between Windows and Unix. java.io.File-based methods are still
    available for back-compatibility, but are deprecated and will be
    removed once 0.2 is released. (cutting)

 8. Change dfs.data.dir and mapred.local.dir to be comma-separated
    lists of directories, no longer space-separated. This fixes
    several bugs on Windows. (cutting)
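    A sketch of the new comma-separated form in hadoop-site.xml; the
    directory paths are illustrative:

    ```xml
    <property>
      <name>dfs.data.dir</name>
      <value>/disk1/dfs/data,/disk2/dfs/data</value>
    </property>
    <property>
      <name>mapred.local.dir</name>
      <value>/disk1/mapred/local,/disk2/mapred/local</value>
    </property>
    ```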
 9. Fix HADOOP-144. Use mapred task id for dfs client id, to
    facilitate debugging. (omalley via cutting)

10. Fix HADOOP-143. Do not line-wrap stack-traces in web ui.
    (omalley via cutting)

11. Fix HADOOP-118. In DFS, improve clean up of abandoned file
    creations. (omalley via cutting)

12. Fix HADOOP-138. Stop multiple tasks in a single heartbeat, rather
    than one per heartbeat. (Stefan via cutting)

13. Fix HADOOP-139. Remove a potential deadlock in
    LocalFileSystem.lock(). (Igor Bolotin via cutting)

14. Fix HADOOP-134. Don't hang jobs when the tasktracker is
    misconfigured to use an un-writable local directory. (omalley via cutting)

15. Fix HADOOP-115. Correct an error message. (Stack via cutting)

16. Fix HADOOP-133. Retry pings from child to parent, in case of
    (local) communication problems. Also log exit status, so that one
    can distinguish patricide from other deaths. (omalley via cutting)

17. Fix HADOOP-142. Avoid re-running a task on a host where it has
    previously failed. (omalley via cutting)

18. Fix HADOOP-148. Maintain a task failure count for each
    tasktracker and display it in the web ui. (omalley via cutting)

19. Fix HADOOP-151. Close a potential socket leak, where new IPC
    connection pools were created per configuration instance that RPCs
    use. Now a global RPC connection pool is used again, as
    originally intended. (cutting)

20. Fix HADOOP-69. Don't throw a NullPointerException when getting
    hints for a non-existing file split. (Bryan Pendelton via cutting)

21. Fix HADOOP-157. When a task that writes dfs files (e.g., a reduce
    task) failed and was retried, it would fail again and again,
    eventually failing the job. The problem was that dfs did not yet
    know that the failed task had abandoned the files, and would not
    yet let another task create files with the same names. Dfs now
    retries when creating a file long enough for locks on abandoned
    files to expire. (omalley via cutting)

22. Fix HADOOP-150. Improved task names that include job
    names. (omalley via cutting)

23. Fix HADOOP-162. Fix ConcurrentModificationException when
    releasing file locks. (omalley via cutting)

24. Fix HADOOP-132. Initial check-in of new Metrics API, including
    implementations for writing metric data to a file and for sending
    it to Ganglia. (David Bowen via cutting)

25. Fix HADOOP-160. Remove some unneeded synchronization around
    time-consuming operations in the TaskTracker. (omalley via cutting)

26. Fix HADOOP-166. RPCs failed when passed subclasses of a declared
    parameter type. This is fixed by changing ObjectWritable to store
    both the declared type and the instance type for Writables. Note
    that this incompatibly changes the format of ObjectWritable and
    will render unreadable any ObjectWritables stored in files.
    Nutch only uses ObjectWritable in intermediate files, so this
    should not be a problem for Nutch. (Stefan & cutting)

27. Fix HADOOP-168. MapReduce RPC protocol methods should all declare
    IOException, so that timeouts are handled appropriately.
    (omalley via cutting)

28. Fix HADOOP-169. Don't fail a reduce task if a call to the
    jobtracker to locate map outputs fails. (omalley via cutting)

29. Fix HADOOP-170. Permit FileSystem clients to examine and modify
    the replication count of individual files. Also fix a few
    replication-related bugs. (Konstantin Shvachko via cutting)

30. Permit specification of higher replication levels for job
    submission files (job.xml and job.jar). This helps with large
    clusters, since these files are read by every node. (cutting)

31. HADOOP-173. Optimize allocation of tasks with local data. (cutting)

32. HADOOP-167. Reduce number of Configurations and JobConf's
    created. (omalley via cutting)

33. NUTCH-256. Change FileSystem#createNewFile() to create a .crc
    file. The lack of a .crc file was causing warnings. (cutting)

34. HADOOP-174. Change JobClient to not abort job until it has failed
    to contact the job tracker for five attempts, not just one as
    before. (omalley via cutting)

35. HADOOP-177. Change MapReduce web interface to page through tasks.
    Previously, when jobs had more than a few thousand tasks they
    could crash web browsers. (Mahadev Konar via cutting)

36. HADOOP-178. In DFS, piggyback blockwork requests from datanodes
    on heartbeat responses from namenode. This reduces the volume of
    RPC traffic. Also move startup delay in blockwork from datanode
    to namenode. This fixes a problem where restarting the namenode
    triggered a lot of unneeded replication. (Hairong Kuang via cutting)

37. HADOOP-183. If the DFS namenode is restarted with different
    minimum and/or maximum replication counts, existing files'
    replication counts are now automatically adjusted to be within the
    newly configured bounds. (Hairong Kuang via cutting)

38. HADOOP-186. Better error handling in TaskTracker's top-level
    loop. Also improve calculation of time to send next heartbeat.
    (omalley via cutting)

39. HADOOP-187. Add two MapReduce examples/benchmarks. One creates
    files containing random data. The second sorts the output of the
    first. (omalley via cutting)

40. HADOOP-185. Fix so that, when a task tracker times out making the
    RPC asking for a new task to run, the job tracker does not think
    that it is actually running the task returned. (omalley via cutting)

41. HADOOP-190. If a child process hangs after it has reported
    completion, its output should not be lost. (Stack via cutting)

42. HADOOP-184. Re-structure some test code to better support testing
    on a cluster. (Mahadev Konar via cutting)

43. HADOOP-191. Add streaming package, Hadoop's first contrib module.
    This permits folks to easily submit MapReduce jobs whose map and
    reduce functions are implemented by shell commands. Use
    'bin/hadoop jar build/hadoop-streaming.jar' to get details.
    (Michel Tourn via cutting)
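    A sketch of a streaming job submission; the input and output paths
    and the choice of standard Unix commands as mapper and reducer are
    illustrative:

    ```shell
    bin/hadoop jar build/hadoop-streaming.jar \
        -input /user/logs/in -output /user/logs/out \
        -mapper /bin/cat -reducer /usr/bin/wc
    ```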
44. HADOOP-189. Fix MapReduce in standalone configuration to
    correctly handle job jar files that contain a lib directory with
    nested jar files. (cutting)

45. HADOOP-65. Initial version of record I/O framework that enables
    the specification of record types and generates marshalling code
    in both Java and C++. Generated Java code implements
    WritableComparable, but is not yet otherwise used by
    Hadoop. (Milind Bhandarkar via cutting)

46. HADOOP-193. Add a MapReduce-based FileSystem benchmark.
    (Konstantin Shvachko via cutting)

47. HADOOP-194. Add a MapReduce-based FileSystem checker. This reads
    every block in every file in the filesystem. (Konstantin Shvachko
    via cutting)

48. HADOOP-182. Fix so that lost task trackers do not change the
    status of reduce tasks or completed jobs. Also fixes the progress
    meter so that failed tasks are subtracted. (omalley via cutting)

49. HADOOP-96. Logging improvements. Log files are now separate from
    standard output and standard error files. Logs are now rolled.
    Logging of all DFS state changes can be enabled, to facilitate
    debugging. (Hairong Kuang via cutting)


Release 0.1.1 - 2006-04-08

 1. Added CHANGES.txt, logging all significant changes to Hadoop. (cutting)

 2. Fix MapReduceBase.close() to throw IOException, as declared in the
    Closeable interface. This permits subclasses which override this
    method to throw that exception. (cutting)

 3. Fix HADOOP-117. Pathnames were mistakenly transposed in
    JobConf.getLocalFile() causing many mapred temporary files to not
    be removed. (Raghavendra Prabhu via cutting)

 4. Fix HADOOP-116. Clean up job submission files when jobs complete.
    (cutting)

 5. Fix HADOOP-125. Fix handling of absolute paths on Windows. (cutting)


Release 0.1.0 - 2006-04-01

 1. The first release of Hadoop.