Hadoop Change Log

Trunk (unreleased changes)

 1. HADOOP-243. Fix rounding in the display of task and job progress
    so that things are not shown to be 100% complete until they are in
    fact finished. (omalley via cutting)
 2. HADOOP-438. Limit the length of absolute paths in DFS, since the
    file format used to store pathnames has some limitations.
    (Wendy Chien via cutting)
 3. HADOOP-530. Improve error messages in SequenceFile when keys or
    values are of the wrong type. (Hairong Kuang via cutting)
 4. HADOOP-288. Add a file caching system and use it in MapReduce to
    cache job jar files on slave nodes. (Mahadev Konar via cutting)
 5. HADOOP-533. Fix unit test to not modify conf directory.
    (Hairong Kuang via cutting)
 6. HADOOP-527. Permit specification of the local address that various
    Hadoop daemons should bind to. (Philippe Gassmann via cutting)
 7. HADOOP-542. Updates to contrib/streaming: reformatted source code,
    on-the-fly merge sort, a fix for HADOOP-540, etc.
    (Michel Tourn via cutting)
 8. HADOOP-545. Remove an unused config file parameter.
    (Philippe Gassmann via cutting)
 9. HADOOP-548. Add an Ant property "test.output" to build.xml that
    causes test output to be logged to the console. (omalley via cutting)
10. HADOOP-261. Record an error message when map output is lost.
    (omalley via cutting)
11. HADOOP-293. Report the full list of task error messages in the
    web ui, not just the most recent. (omalley via cutting)
12. HADOOP-551. Restore JobClient's console printouts to only include
    a maximum of one update per one percent of progress.
    (omalley via cutting)
13. HADOOP-306. Add a "safe" mode to DFS. The name node enters this
    when less than a specified percentage of file data is complete.
    Currently safe mode is only used on startup, but eventually it
    will also be entered when datanodes disconnect and file data
    becomes incomplete. While in safe mode no filesystem
    modifications are permitted and block replication is inhibited.
    (Konstantin Shvachko via cutting)
14. HADOOP-431. Change 'dfs -rm' to not operate recursively and add a
    new command, 'dfs -rmr' which operates recursively.
    (Sameer Paranjpye via cutting)
15. HADOOP-263. Include timestamps for job transitions. The web
    interface now displays the start and end times of tasks and the
    start times of sorting and reducing for reduce tasks. Also,
    extend ObjectWritable to handle enums, so that they can be passed
    as RPC parameters. (Sanjay Dahiya via cutting)
16. HADOOP-556. Contrib/streaming: send keep-alive reports to task
    tracker every 10 seconds rather than every 100 records, to avoid
    task timeouts. (Michel Tourn via cutting)
17. HADOOP-547. Fix reduce tasks to ping tasktracker while copying
    data, rather than only between copies, avoiding task timeouts.
    (Sanjay Dahiya via cutting)
18. HADOOP-537. Fix src/c++/libhdfs build process to create files in
    build/, no longer modifying the source tree.
    (Arun C Murthy via cutting)
19. HADOOP-487. Throw a more informative exception for unknown RPC
    hosts. (Sameer Paranjpye via cutting)
20. HADOOP-559. Add file name globbing (pattern matching) support to
    the FileSystem API, and use it in DFSShell ('bin/hadoop dfs')
    commands. (Hairong Kuang via cutting)
21. HADOOP-508. Fix a bug in FSDataInputStream. Incorrect data was
    returned after seeking to a random location.
    (Milind Bhandarkar via cutting)
22. HADOOP-560. Add a "killed" task state. This can be used to
    distinguish kills from other failures. Task state has also been
    converted to use an enum type instead of an int, uncovering a bug
    elsewhere. The web interface is also updated to display killed
    tasks. (omalley via cutting)
23. HADOOP-423. Normalize Paths containing directories named "." and
    "..", using the standard, unix interpretation. Also add checks in
    DFS, prohibiting the use of "." or ".." as directory or file
    names. (Wendy Chien via cutting)
24. HADOOP-513. Replace map output handling with a servlet, rather
    than a JSP page. This fixes an issue where
    IllegalStateException's were logged, sets content-length
    correctly, and better handles some errors. (omalley via cutting)
25. HADOOP-552. Improved error checking when copying map output files
    to reduce nodes. (omalley via cutting)
26. HADOOP-566. Fix scripts to work correctly when accessed through
    relative symbolic links. (Lee Faris via cutting)
27. HADOOP-519. Add positioned read methods to FSInputStream. These
    permit one to read from a stream without moving its position, and
    can hence be performed by multiple threads at once on a single
    stream. Implement an optimized version for DFS and local FS.
    (Milind Bhandarkar via cutting)
28. HADOOP-522. Permit block compression with MapFile and SetFile.
    Since these formats are always sorted, block compression can
    provide a big advantage. (cutting)
29. HADOOP-567. Record version and revision information in builds. A
    package manifest is added to the generated jar file containing
    version information, and a VersionInfo utility is added that
    includes further information, including the build date and user,
    and the subversion revision and repository. A 'bin/hadoop
    version' command is added to show this information, and it is also
    added to various web interfaces. (omalley via cutting)
30. HADOOP-568. Fix so that errors while initializing tasks on a
    tasktracker correctly report the task as failed to the jobtracker,
    so that it will be rescheduled. (omalley via cutting)
31. HADOOP-550. Disable automatic UTF-8 validation in Text. This
    permits, e.g., TextInputFormat to again operate on non-UTF-8 data.
    (Hairong and Mahadev via cutting)
32. HADOOP-343. Fix mapred copying so that a failed tasktracker
    doesn't cause other copies to slow. (Sameer Paranjpye via cutting)
33. HADOOP-239. Add a persistent job history mechanism, so that basic
    job statistics are not lost after 24 hours and/or when the
    jobtracker is restarted. (Sanjay Dahiya via cutting)
34. HADOOP-506. Ignore heartbeats from stale task trackers.
    (Sanjay Dahiya via cutting)

Release 0.6.2 - 2006-09-18

 1. HADOOP-532. Fix a bug reading value-compressed sequence files,
    where an exception was thrown reporting that the full value had not
    been read. (omalley via cutting)
 2. HADOOP-534. Change the default value class in JobConf to be Text
    instead of the now-deprecated UTF8. This fixes the Grep example
    program, which was updated to use Text, but relies on this
    default. (Hairong Kuang via cutting)

Release 0.6.1 - 2006-09-13

 1. HADOOP-520. Fix a bug in libhdfs, where write failures were not
    correctly returning error codes. (Arun C Murthy via cutting)
 2. HADOOP-523. Fix a NullPointerException when TextInputFormat is
    explicitly specified. Also add a test case for this.
    (omalley via cutting)
 3. HADOOP-521. Fix another NullPointerException finding the
    ClassLoader when using libhdfs. (omalley via cutting)
 4. HADOOP-526. Fix a NullPointerException when attempting to start
    two datanodes in the same directory. (Milind Bhandarkar via cutting)
 5. HADOOP-529. Fix a NullPointerException when opening
    value-compressed sequence files generated by pre-0.6.0 Hadoop.
    (omalley via cutting)

Release 0.6.0 - 2006-09-08

 1. HADOOP-427. Replace some uses of DatanodeDescriptor in the DFS
    web UI code with DatanodeInfo, the preferred public class.
    (Devaraj Das via cutting)
 2. HADOOP-426. Fix streaming contrib module to work correctly on
    Solaris. This was causing nightly builds to fail.
    (Michel Tourn via cutting)
 3. HADOOP-400. Improvements to task assignment. Tasks are no longer
    re-run on nodes where they have failed (unless no other node is
    available). Also, tasks are better load-balanced among nodes.
    (omalley via cutting)
 4. HADOOP-324. Fix datanode to not exit when a disk is full, but
    rather simply to fail writes. (Wendy Chien via cutting)
 5. HADOOP-434. Change smallJobsBenchmark to use standard Hadoop
    scripts. (Sanjay Dahiya via cutting)
 6. HADOOP-453. Fix a bug in Text.setCapacity(). (siren via cutting)
 7. HADOOP-450. Change so that input types are determined by the
    RecordReader rather than specified directly in the JobConf. This
    facilitates jobs with a variety of input types.
    WARNING: This contains incompatible API changes! The RecordReader
    interface has two new methods that all user-defined InputFormats
    must now define. Also, the values returned by TextInputFormat are
    no longer of class UTF8, but now of class Text.
 8. HADOOP-436. Fix an error-handling bug in the web ui.
    (Devaraj Das via cutting)
 9. HADOOP-455. Fix a bug in Text, where DEL was not permitted.
    (Hairong Kuang via cutting)
10. HADOOP-456. Change the DFS namenode to keep a persistent record
    of the set of known datanodes. This will be used to implement a
    "safe mode" where filesystem changes are prohibited when a
    critical percentage of the datanodes are unavailable.
    (Konstantin Shvachko via cutting)
11. HADOOP-322. Add a job control utility. This permits one to
    specify job interdependencies. Each job is submitted only after
    the jobs it depends on have successfully completed.
    (Runping Qi via cutting)
12. HADOOP-176. Fix a bug in IntWritable.Comparator.
    (Dick King via cutting)
13. HADOOP-421. Replace uses of String in recordio package with Text
    class, for improved handling of UTF-8 data.
    (Milind Bhandarkar via cutting)
14. HADOOP-464. Improved error message when job jar not found.
    (Michel Tourn via cutting)
15. HADOOP-469. Fix /bin/bash specifics that have crept into our
    /bin/sh scripts since HADOOP-352.
    (Jean-Baptiste Quenot via cutting)
16. HADOOP-468. Add HADOOP_NICENESS environment variable to set
    scheduling priority for daemons. (Vetle Roeim via cutting)
17. HADOOP-473. Fix TextInputFormat to correctly handle more EOL
    formats. Things now work correctly with CR, LF or CRLF.
    (Dennis Kubes & James White via cutting)
18. HADOOP-461. Make Java 1.5 an explicit requirement. (cutting)
19. HADOOP-54. Add block compression to SequenceFile. One may now
    specify that blocks of keys and values are compressed together,
    improving compression for small keys and values.
    SequenceFile.Writer's constructor is now deprecated and replaced
    with a factory method. (Arun C Murthy via cutting)
20. HADOOP-281. Prohibit DFS files that are also directories.
    (Wendy Chien via cutting)
21. HADOOP-486. Add the job username to JobStatus instances returned
    by JobClient. (Mahadev Konar via cutting)
22. HADOOP-437. contrib/streaming: Add support for gzipped inputs.
    (Michel Tourn via cutting)
23. HADOOP-463. Add variable expansion to config files.
    Configuration property values may now contain variable
    expressions. A variable is referenced with the syntax
    '${variable}'. Variable values are found first in the
    configuration, and then in Java system properties. The default
    configuration is modified so that temporary directories are now
    under ${hadoop.tmp.dir}, which is, by default,
    /tmp/hadoop-${user.name}. (Michel Tourn via cutting)
24. HADOOP-419. Fix a NullPointerException finding the ClassLoader
    when using libhdfs. (omalley via cutting)
25. HADOOP-460. Fix contrib/smallJobsBenchmark to use Text instead of
    UTF8. (Sanjay Dahiya via cutting)
26. HADOOP-196. Fix Configuration(Configuration) constructor to work
    correctly. (Sami Siren via cutting)
27. HADOOP-501. Fix Configuration.toString() to handle URL resources.
    (Thomas Friol via cutting)
28. HADOOP-499. Reduce the use of Strings in contrib/streaming,
    replacing them with Text for better performance.
    (Hairong Kuang via cutting)
29. HADOOP-64. Manage multiple volumes with a single DataNode.
    Previously DataNode would create a separate daemon per configured
    volume, each with its own connection to the NameNode. Now all
    volumes are handled by a single DataNode daemon, reducing the load
    on the NameNode. (Milind Bhandarkar via cutting)
30. HADOOP-424. Fix MapReduce so that jobs which generate zero splits
    do not fail. (Frédéric Bertin via cutting)
31. HADOOP-408. Adjust some timeouts and remove some others so that
    unit tests run faster. (cutting)
32. HADOOP-507. Fix an IllegalAccessException in DFS.
    (omalley via cutting)
33. HADOOP-320. Fix so that checksum files are correctly copied when
    the destination of a file copy is a directory.
    (Hairong Kuang via cutting)
34. HADOOP-286. In DFSClient, avoid pinging the NameNode with
    renewLease() calls when no files are being written.
    (Konstantin Shvachko via cutting)
35. HADOOP-312. Close idle IPC connections. All IPC connections were
    cached forever. Now, after a connection has been idle for more
    than a configurable amount of time (one second by default), the
    connection is closed, conserving resources on both client and
    server. (Devaraj Das via cutting)
36. HADOOP-497. Permit the specification of the network interface and
    nameserver to be used when determining the local hostname
    advertised by datanodes and tasktrackers.
    (Lorenzo Thione via cutting)
37. HADOOP-441. Add a compression codec API and extend SequenceFile
    to use it. This will permit the use of alternate compression
    codecs in SequenceFile. (Arun C Murthy via cutting)
38. HADOOP-483. Improvements to libhdfs build and documentation.
    (Arun C Murthy via cutting)
39. HADOOP-458. Fix a memory corruption bug in libhdfs.
    (Arun C Murthy via cutting)
40. HADOOP-517. Fix a contrib/streaming bug in end-of-line detection.
    (Hairong Kuang via cutting)
41. HADOOP-474. Add CompressionCodecFactory, and use it in
    TextInputFormat and TextOutputFormat. Compressed input files are
    automatically decompressed when they have the correct extension.
    Output files will, when output compression is specified, be
    generated with an appropriate extension. Also add a gzip codec and
    fix problems with UTF8 text inputs. (omalley via cutting)

Release 0.5.0 - 2006-08-04

 1. HADOOP-352. Fix shell scripts to use /bin/sh instead of
    /bin/bash, for better portability.
    (Jean-Baptiste Quenot via cutting)
 2. HADOOP-313. Permit task state to be saved so that single tasks
    may be manually re-executed when debugging. (omalley via cutting)
 3. HADOOP-339. Add method to JobClient API listing jobs that are
    not yet complete, i.e., that are queued or running.
    (Mahadev Konar via cutting)
 4. HADOOP-355. Updates to the streaming contrib module, including
    API fixes, making reduce optional, and adding an input type for
    StreamSequenceRecordReader. (Michel Tourn via cutting)
 5. HADOOP-358. Fix a NPE bug in Path.equals().
    (Frédéric Bertin via cutting)
 6. HADOOP-327. Fix ToolBase to not call System.exit() when
    exceptions are thrown. (Hairong Kuang via cutting)
 7. HADOOP-359. Permit map output to be compressed.
    (omalley via cutting)
 8. HADOOP-341. Permit input URI to CopyFiles to use the HTTP
    protocol. This lets one, e.g., more easily copy log files into
    DFS. (Arun C Murthy via cutting)
 9. HADOOP-361. Remove unix dependencies from streaming contrib
    module tests, making them pure java. (Michel Tourn via cutting)
10. HADOOP-354. Make public methods to stop DFS daemons.
    (Barry Kaplan via cutting)
11. HADOOP-252. Add versioning to RPC protocols.
    (Milind Bhandarkar via cutting)
12. HADOOP-356. Add contrib to "compile" and "test" build targets, so
    that this code is better maintained. (Michel Tourn via cutting)
13. HADOOP-307. Add smallJobsBenchmark contrib module. This runs
    lots of small jobs, in order to determine per-task overheads.
    (Sanjay Dahiya via cutting)
14. HADOOP-342. Add a tool for log analysis: Logalyzer.
    (Arun C Murthy via cutting)
15. HADOOP-347. Add web-based browsing of DFS content. The namenode
    redirects browsing requests to datanodes. Content requests are
    redirected to datanodes where the data is local when possible.
    (Devaraj Das via cutting)
16. HADOOP-351. Make Hadoop IPC kernel independent of Jetty.
    (Devaraj Das via cutting)
17. HADOOP-237. Add metric reporting to DFS and MapReduce. With only
    minor configuration changes, one can now monitor many Hadoop
    system statistics using Ganglia or other monitoring systems.
    (Milind Bhandarkar via cutting)
18. HADOOP-376. Fix datanode's HTTP server to scan for a free port.
    (omalley via cutting)
19. HADOOP-260. Add --config option to shell scripts, specifying an
    alternate configuration directory. (Milind Bhandarkar via cutting)
20. HADOOP-381. Permit developers to save the temporary files for
    tasks whose names match a regular expression, to facilitate
    debugging. (omalley via cutting)
21. HADOOP-344. Fix some Windows-related problems with DF.
    (Konstantin Shvachko via cutting)
22. HADOOP-380. Fix reduce tasks to poll less frequently for map
    outputs. (Mahadev Konar via cutting)
23. HADOOP-321. Refactor DatanodeInfo, in preparation for
    HADOOP-306. (Konstantin Shvachko & omalley via cutting)
24. HADOOP-385. Fix some bugs in record io code generation.
    (Milind Bhandarkar via cutting)
25. HADOOP-302. Add new Text class to replace UTF8, removing
    limitations of that class. Also refactor utility methods for
    writing zero-compressed integers (VInts and VLongs).
    (Hairong Kuang via cutting)
26. HADOOP-335. Refactor DFS namespace/transaction logging in
    namenode. (Konstantin Shvachko via cutting)
27. HADOOP-375. Fix handling of the datanode HTTP daemon's port so
    that multiple datanodes can be run on a single host.
    (Devaraj Das via cutting)
28. HADOOP-386. When removing excess DFS block replicas, remove those
    on nodes with the least free space first.
    (Johan Oskarson via cutting)
29. HADOOP-389. Fix intermittent failures of mapreduce unit tests.
    Also fix some build dependencies.
    (Mahadev & Konstantin via cutting)
30. HADOOP-362. Fix a problem where jobs hang when status messages
    are received out-of-order. (omalley via cutting)
31. HADOOP-394. Change order of DFS shutdown in unit tests to
    minimize errors logged. (Konstantin Shvachko via cutting)
32. HADOOP-396. Make DatanodeID implement Writable.
    (Konstantin Shvachko via cutting)
33. HADOOP-377. Permit one to add URL resources to a Configuration.
    (Jean-Baptiste Quenot via cutting)
34. HADOOP-345. Permit iteration over Configuration key/value pairs.
    (Michel Tourn via cutting)
35. HADOOP-409. Streaming contrib module: make configuration
    properties available to commands as environment variables.
    (Michel Tourn via cutting)
36. HADOOP-369. Add -getmerge option to dfs command that appends all
    files in a directory into a single local file.
    (Johan Oskarson via cutting)
37. HADOOP-410. Replace some TreeMaps with HashMaps in DFS, for
    a 17% performance improvement. (Milind Bhandarkar via cutting)
38. HADOOP-411. Add unit tests for command line parser.
    (Hairong Kuang via cutting)
39. HADOOP-412. Add MapReduce input formats that support filtering
    of SequenceFile data, including sampling and regex matching.
    Also, move JobConf.newInstance() to a new utility class.
    (Hairong Kuang via cutting)
40. HADOOP-226. Fix fsck command to properly consider replication
    counts, now that these can vary per file. (Bryan Pendleton via cutting)
41. HADOOP-425. Add a Python MapReduce example, using Jython.
    (omalley via cutting)

Release 0.4.0 - 2006-06-28

 1. HADOOP-298. Improved progress reports for CopyFiles utility, the
    distributed file copier. (omalley via cutting)
 2. HADOOP-299. Fix the task tracker, permitting multiple jobs to
    more easily execute at the same time. (omalley via cutting)
 3. HADOOP-250. Add an HTTP user interface to the namenode, running
    on port 50070. (Devaraj Das via cutting)
 4. HADOOP-123. Add MapReduce unit tests that run a jobtracker and
    tasktracker, greatly increasing code coverage.
    (Milind Bhandarkar via cutting)
 5. HADOOP-271. Add links from jobtracker's web ui to tasktracker's
    web ui. Also attempt to log a thread dump of child processes
    before they're killed. (omalley via cutting)
 6. HADOOP-210. Change RPC server to use a selector instead of a
    thread per connection. This should make it easier to scale to
    larger clusters. Note that this incompatibly changes the RPC
    protocol: clients and servers must both be upgraded to the new
    version to ensure correct operation. (Devaraj Das via cutting)
 7. HADOOP-311. Change DFS client to retry failed reads, so that a
    single read failure will not alone cause failure of a task.
    (omalley via cutting)
 8. HADOOP-314. Remove the "append" phase when reducing. Map output
    files are now directly passed to the sorter, without first
    appending them into a single file. Now, the first third of reduce
    progress is "copy" (transferring map output to reduce nodes), the
    middle third is "sort" (sorting map output) and the last third is
    "reduce" (generating output). Long-term, the "sort" phase will
    also be removed. (omalley via cutting)
 9. HADOOP-316. Fix a potential deadlock in the jobtracker.
    (omalley via cutting)
10. HADOOP-319. Fix FileSystem.close() to remove the FileSystem
    instance from the cache. (Hairong Kuang via cutting)
11. HADOOP-135. Fix potential deadlock in JobTracker by acquiring
    locks in a consistent order. (omalley via cutting)
12. HADOOP-278. Check for existence of input directories before
    starting MapReduce jobs, making it easier to debug this common
    error. (omalley via cutting)
13. HADOOP-304. Improve error message for
    UnregisterdDatanodeException to include expected node name.
    (Konstantin Shvachko via cutting)
14. HADOOP-305. Fix TaskTracker to ask for new tasks as soon as a
    task is finished, rather than waiting for the next heartbeat.
    This improves performance when tasks are short.
    (Mahadev Konar via cutting)
15. HADOOP-59. Add support for generic command line options. One may
    now specify the filesystem (-fs), the MapReduce jobtracker (-jt),
    a config file (-conf) or any configuration property (-D). The
    "dfs", "fsck", "job", and "distcp" commands currently support
    this, with more to be added. (Hairong Kuang via cutting)
16. HADOOP-296. Permit specification of the amount of reserved space
    on a DFS datanode. One may specify both the percentage free and
    the number of bytes. (Johan Oskarson via cutting)
17. HADOOP-325. Fix a problem initializing RPC parameter classes, and
    remove the workaround used to initialize classes.
    (omalley via cutting)
18. HADOOP-328. Add an option to the "distcp" command to ignore read
    errors while copying. (omalley via cutting)
19. HADOOP-27. Don't allocate tasks to trackers whose local free
    space is too low. (Johan Oskarson via cutting)
20. HADOOP-318. Keep slow DFS output from causing task timeouts.
    This incompatibly changes some public interfaces, adding a
    parameter to OutputFormat.getRecordWriter() and the new method
    Reporter.progress(), but it makes lots of tasks succeed that were
    previously failing. (Milind Bhandarkar via cutting)

Release 0.3.2 - 2006-06-09

 1. HADOOP-275. Update the streaming contrib module to use log4j for
    its logging. (Michel Tourn via cutting)
 2. HADOOP-279. Provide defaults for log4j logging parameters, so
    that things still work reasonably when Hadoop-specific system
    properties are not provided. (omalley via cutting)
 3. HADOOP-280. Fix a typo in AllTestDriver which caused the wrong
    test to be run when "DistributedFSCheck" was specified.
    (Konstantin Shvachko via cutting)
 4. HADOOP-240. DFS's mkdirs() implementation no longer logs a warning
    when the directory already exists. (Hairong Kuang via cutting)
 5. HADOOP-285. Fix DFS datanodes to be able to re-join the cluster
    after the connection to the namenode is lost. (omalley via cutting)
 6. HADOOP-277. Fix a race condition when creating directories.
    (Sameer Paranjpye via cutting)
 7. HADOOP-289. Improved exception handling in DFS datanode.
    (Konstantin Shvachko via cutting)
 8. HADOOP-292. Fix client-side logging to go to standard error
    rather than standard output, so that it can be distinguished from
    application output. (omalley via cutting)
 9. HADOOP-294. Fixed bug where conditions for retrying after errors
    in the DFS client were reversed. (omalley via cutting)

Release 0.3.1 - 2006-06-05

 1. HADOOP-272. Fix a bug in bin/hadoop setting log
    parameters. (omalley & cutting)
 2. HADOOP-274. Change applications to log to standard output rather
    than to a rolling log file like daemons. (omalley via cutting)
 3. HADOOP-262. Fix reduce tasks to report progress while they're
    waiting for map outputs, so that they do not time out.
    (Mahadev Konar via cutting)
 4. HADOOP-245 and HADOOP-246. Improvements to record io package.
    (Mahadev Konar via cutting)
 5. HADOOP-276. Add logging config files to jar file so that they're
    always found. (omalley via cutting)

Release 0.3.0 - 2006-06-02

 1. HADOOP-208. Enhance MapReduce web interface, adding new pages
    for failed tasks, and tasktrackers. (omalley via cutting)
 2. HADOOP-204. Tweaks to metrics package. (David Bowen via cutting)
 3. HADOOP-209. Add a MapReduce-based file copier. This will
    copy files within or between file systems in parallel.
    (Milind Bhandarkar via cutting)
 4. HADOOP-146. Fix DFS to check when randomly generating a new block
    id that no existing blocks already have that id.
    (Milind Bhandarkar via cutting)
 5. HADOOP-180. Make a daemon thread that does the actual task
    cleanups, so that the main offerService thread in the taskTracker
    doesn't get stuck and miss its heartbeat window. This was killing
    many task trackers as big jobs finished (300+ tasks/node).
    (omalley via cutting)
 6. HADOOP-200. Avoid transmitting entire list of map task names to
    reduce tasks. Instead just transmit the number of map tasks and
    henceforth refer to them by number when collecting map output.
    (omalley via cutting)
 7. HADOOP-219. Fix a NullPointerException when handling a checksum
    exception under SequenceFile.Sorter.sort(). (cutting & stack)
 8. HADOOP-212. Permit alteration of the file block size in DFS. The
    default block size for new files may now be specified in the
    configuration with the dfs.block.size property. The block size
    may also be specified when files are opened.
    (omalley via cutting)
 9. HADOOP-218. Avoid accessing configuration while looping through
    tasks in JobTracker. (Mahadev Konar via cutting)
10. HADOOP-161. Add hashCode() method to DFS's Block.
    (Milind Bhandarkar via cutting)
11. HADOOP-115. Map output types may now be specified. These are also
    used as reduce input types, thus permitting reduce input types to
    differ from reduce output types. (Runping Qi via cutting)
12. HADOOP-216. Add task progress to task status page.
    (Bryan Pendelton via cutting)
13. HADOOP-233. Add web server to task tracker that shows running
    tasks and logs. Also add log access to job tracker web interface.
    (omalley via cutting)
14. HADOOP-205. Incorporate pending tasks into tasktracker load
    calculations. (Mahadev Konar via cutting)
15. HADOOP-247. Fix sort progress to better handle exceptions.
    (Mahadev Konar via cutting)
16. HADOOP-195. Improve performance of the transfer of map outputs to
    reduce nodes by performing multiple transfers in parallel, each on
    a separate socket. (Sameer Paranjpye via cutting)
17. HADOOP-251. Fix task processes to be tolerant of failed progress
    reports to their parent process. (omalley via cutting)
18. HADOOP-325. Improve the FileNotFound exceptions thrown by
    LocalFileSystem to include the name of the file.
    (Benjamin Reed via cutting)
19. HADOOP-254. Use HTTP to transfer map output data to reduce
    nodes. This, together with HADOOP-195, greatly improves the
    performance of these transfers. (omalley via cutting)
20. HADOOP-163. Cause datanodes that are unable to either read or
    write data to exit, so that the namenode will no longer target
    them for new blocks and will replicate their data on other nodes.
    (Hairong Kuang via cutting)
21. HADOOP-222. Add a -setrep option to the dfs commands that alters
    file replication levels. (Johan Oskarson via cutting)
22. HADOOP-75. In DFS, only check for a complete file when the file
    is closed, rather than as each block is written.
    (Milind Bhandarkar via cutting)
23. HADOOP-124. Change DFS so that datanodes are identified by a
    persistent ID rather than by host and port. This solves a number
    of filesystem integrity problems, when, e.g., datanodes are
    restarted. (Konstantin Shvachko via cutting)
24. HADOOP-256. Add a C API for DFS. (Arun C Murthy via cutting)
25. HADOOP-211. Switch to use the Jakarta Commons logging internally,
    configured to use log4j by default. (Arun C Murthy and cutting)
26. HADOOP-265. Tasktracker now fails to start if it does not have a
    writable local directory for temporary files. In this case, it
    logs a message to the JobTracker and exits. (Hairong Kuang via cutting)
27. HADOOP-270. Fix potential deadlock in datanode shutdown.
    (Hairong Kuang via cutting)

Release 0.2.1 - 2006-05-12

 1. HADOOP-199. Fix reduce progress (broken by HADOOP-182).
    (omalley via cutting)
 2. HADOOP-201. Fix 'bin/hadoop dfs -report'. (cutting)
 3. HADOOP-207. Fix JDK 1.4 incompatibility introduced by HADOOP-96.
    System.getenv() does not work in JDK 1.4. (Hairong Kuang via cutting)

Release 0.2.0 - 2006-05-05

 1. Fix HADOOP-126. 'bin/hadoop dfs -cp' now correctly copies .crc
    files. (Konstantin Shvachko via cutting)
 2. Fix HADOOP-51. Change DFS to support per-file replication counts.
    (Konstantin Shvachko via cutting)
 3. Fix HADOOP-131. Add scripts to start/stop dfs and mapred daemons.
    Use these in start/stop-all scripts. (Chris Mattmann via cutting)
 4. Stop using ssh options by default that are not yet in widely used
    versions of ssh. Folks can still enable their use by uncommenting
    a line in conf/hadoop-env.sh. (cutting)
 5. Fix HADOOP-92. Show information about all attempts to run each
    task in the web ui. (Mahadev Konar via cutting)
 6. Fix HADOOP-128. Improved DFS error handling. (Owen O'Malley via cutting)
 7. Fix HADOOP-129. Replace uses of java.io.File with new class named
    Path. This fixes bugs where java.io.File methods were called
    directly when FileSystem methods were desired, and reduces the
    likelihood of such bugs in the future. It also makes the handling
    of pathnames more consistent between local and dfs FileSystems and
    between Windows and Unix. java.io.File-based methods are still
    available for back-compatibility, but are deprecated and will be
    removed once 0.2 is released. (cutting)
 8. Change dfs.data.dir and mapred.local.dir to be comma-separated
    lists of directories, no longer space-separated. This fixes
    several bugs on Windows. (cutting)
 9. Fix HADOOP-144. Use mapred task id for dfs client id, to
    facilitate debugging. (omalley via cutting)
10. Fix HADOOP-143. Do not line-wrap stack-traces in web ui.
    (omalley via cutting)
11. Fix HADOOP-118. In DFS, improve clean up of abandoned file
    creations. (omalley via cutting)
12. Fix HADOOP-138. Stop multiple tasks in a single heartbeat, rather
    than one per heartbeat. (Stefan via cutting)
13. Fix HADOOP-139. Remove a potential deadlock in
    LocalFileSystem.lock(). (Igor Bolotin via cutting)
14. Fix HADOOP-134. Don't hang jobs when the tasktracker is
    misconfigured to use an un-writable local directory. (omalley via cutting)
15. Fix HADOOP-115. Correct an error message. (Stack via cutting)
16. Fix HADOOP-133. Retry pings from child to parent, in case of
    (local) communication problems. Also log exit status, so that one
    can distinguish patricide from other deaths. (omalley via cutting)
17. Fix HADOOP-142. Avoid re-running a task on a host where it has
    previously failed. (omalley via cutting)
18. Fix HADOOP-148. Maintain a task failure count for each
    tasktracker and display it in the web ui. (omalley via cutting)
19. Fix HADOOP-151. Close a potential socket leak, where new IPC
    connection pools were created per configuration instance that RPCs
    use. Now a global RPC connection pool is used again, as
    originally intended. (cutting)
20. Fix HADOOP-69. Don't throw a NullPointerException when getting
    hints for non-existing file split. (Bryan Pendelton via cutting)
21. Fix HADOOP-157. When a task that writes dfs files (e.g., a reduce
    task) failed and was retried, it would fail again and again,
    eventually failing the job. The problem was that dfs did not yet
    know that the failed task had abandoned the files, and would not
    yet let another task create files with the same names. Dfs now
    retries when creating a file long enough for locks on abandoned
    files to expire. (omalley via cutting)
22. Fix HADOOP-150. Improved task names that include job
    names. (omalley via cutting)
23. Fix HADOOP-162. Fix ConcurrentModificationException when
    releasing file locks. (omalley via cutting)
24. Fix HADOOP-132. Initial check-in of new Metrics API, including
    implementations for writing metric data to a file and for sending
    it to Ganglia. (David Bowen via cutting)
25. Fix HADOOP-160. Remove some unneeded synchronization around
    time-consuming operations in the TaskTracker. (omalley via cutting)
26. Fix HADOOP-166. RPCs failed when passed subclasses of a declared
    parameter type. This is fixed by changing ObjectWritable to store
    both the declared type and the instance type for Writables. Note
    that this incompatibly changes the format of ObjectWritable and
    will render unreadable any ObjectWritables stored in files.
    Nutch only uses ObjectWritable in intermediate files, so this
    should not be a problem for Nutch. (Stefan & cutting)
27. Fix HADOOP-168. MapReduce RPC protocol methods should all declare
    IOException, so that timeouts are handled appropriately.
    (omalley via cutting)
28. Fix HADOOP-169. Don't fail a reduce task if a call to the
    jobtracker to locate map outputs fails. (omalley via cutting)
29. Fix HADOOP-170. Permit FileSystem clients to examine and modify
    the replication count of individual files. Also fix a few
    replication-related bugs. (Konstantin Shvachko via cutting)
30. Permit specification of a higher replication level for job
    submission files (job.xml and job.jar). This helps with large
    clusters, since these files are read by every node. (cutting)
31. HADOOP-173. Optimize allocation of tasks with local data. (cutting)
32. HADOOP-167. Reduce number of Configurations and JobConf's
    created. (omalley via cutting)
33. NUTCH-256. Change FileSystem#createNewFile() to create a .crc
    file. The lack of a .crc file was causing warnings. (cutting)
34. HADOOP-174. Change JobClient to not abort job until it has failed
    to contact the job tracker for five attempts, not just one as
    before. (omalley via cutting)
35. HADOOP-177. Change MapReduce web interface to page through tasks.
    Previously, when jobs had more than a few thousand tasks they
    could crash web browsers. (Mahadev Konar via cutting)
36. HADOOP-178. In DFS, piggyback blockwork requests from datanodes
    on heartbeat responses from namenode. This reduces the volume of
    RPC traffic. Also move startup delay in blockwork from datanode
    to namenode. This fixes a problem where restarting the namenode
    triggered a lot of unneeded replication. (Hairong Kuang via cutting)
37. HADOOP-183. If the DFS namenode is restarted with different
    minimum and/or maximum replication counts, existing files'
    replication counts are now automatically adjusted to be within the
    newly configured bounds. (Hairong Kuang via cutting)
38. HADOOP-186. Better error handling in TaskTracker's top-level
    loop. Also improve calculation of time to send next heartbeat.
    (omalley via cutting)
39. HADOOP-187. Add two MapReduce examples/benchmarks. One creates
    files containing random data. The second sorts the output of the
    first. (omalley via cutting)
40. HADOOP-185. Fix so that, when a task tracker times out making the
    RPC asking for a new task to run, the job tracker does not think
    that it is actually running the task returned. (omalley via cutting)
41. HADOOP-190. If a child process hangs after it has reported
    completion, its output should not be lost. (Stack via cutting)
42. HADOOP-184. Re-structure some test code to better support testing
    on a cluster. (Mahadev Konar via cutting)
43. HADOOP-191. Add streaming package, Hadoop's first contrib module.
    This permits folks to easily submit MapReduce jobs whose map and
    reduce functions are implemented by shell commands. Use
    'bin/hadoop jar build/hadoop-streaming.jar' to get details.
    (Michel Tourn via cutting)
44. HADOOP-189. Fix MapReduce in standalone configuration to
    correctly handle job jar files that contain a lib directory with
    nested jar files. (cutting)
45. HADOOP-65. Initial version of record I/O framework that enables
    the specification of record types and generates marshalling code
    in both Java and C++. Generated Java code implements
    WritableComparable, but is not yet otherwise used by
    Hadoop. (Milind Bhandarkar via cutting)
46. HADOOP-193. Add a MapReduce-based FileSystem benchmark.
    (Konstantin Shvachko via cutting)
47. HADOOP-194. Add a MapReduce-based FileSystem checker. This reads
    every block in every file in the filesystem.
    (Konstantin Shvachko via cutting)
48. HADOOP-182. Fix so that lost task trackers do not change the
    status of reduce tasks or completed jobs. Also fixes the progress
    meter so that failed tasks are subtracted. (omalley via cutting)
49. HADOOP-96. Logging improvements. Log files are now separate from
    standard output and standard error files. Logs are now rolled.
    Logging of all DFS state changes can be enabled, to facilitate
    debugging. (Hairong Kuang via cutting)

Release 0.1.1 - 2006-04-08

 1. Added CHANGES.txt, logging all significant changes to Hadoop. (cutting)
 2. Fix MapReduceBase.close() to throw IOException, as declared in the
    Closeable interface. This permits subclasses which override this
    method to throw that exception. (cutting)
 3. Fix HADOOP-117. Pathnames were mistakenly transposed in
    JobConf.getLocalFile() causing many mapred temporary files to not
    be removed. (Raghavendra Prabhu via cutting)
 4. Fix HADOOP-116. Clean up job submission files when jobs complete.
    (cutting)
 5. Fix HADOOP-125. Fix handling of absolute paths on Windows. (cutting)

Release 0.1.0 - 2006-04-01

 1. The first release of Hadoop.