<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head>
<title>Hadoop 0.18.0 Release Notes</title></head>
<body>
<font face="sans-serif">
<h1>Hadoop 0.18.0 Release Notes</h1>
These release notes include new developer-facing and user-facing incompatibilities, features, and major improvements.
The table below is sorted by Component.
<a name="changes"></a>
<h2>Changes Since Hadoop 0.17.2</h2>
<table border="1" width="100%" cellpadding="4">
<tbody><tr>
<td><b>Issue</b></td>
<td><b>Component</b></td>
<td><b>Notes</b></td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3355">HADOOP-3355</a></td>
<td>conf</td>
<td>Added support for hexadecimal values in
Configuration.</td>
</tr>
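<tr>
<td colspan="3"><i>Example (illustrative):</i> a minimal sketch of using a
hexadecimal value with Configuration. The property name is made up, and this
assumes the numeric getters accept the &quot;0x&quot; prefix.
<pre>
Configuration conf = new Configuration();
conf.set("dfs.example.mask", "0xFF");          // hypothetical property name
int mask = conf.getInt("dfs.example.mask", 0); // 255, assuming hex parsing
</pre>
</td>
</tr>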
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-1702">HADOOP-1702</a></td>
<td>dfs</td>
<td>Reduced buffer copies as data is written to HDFS.
The order of sending data bytes and control information has changed, but this
will not be observed by client applications.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2065">HADOOP-2065</a></td>
<td>dfs</td>
<td>Added a &quot;corrupt&quot; flag to LocatedBlock to
indicate that all replicas of the block are thought to be corrupt.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2585">HADOOP-2585</a></td>
<td>dfs</td>
<td>Improved management of replicas of the name space
image. If all replicas on the Name Node are lost, the latest checkpoint can
be loaded from the secondary Name Node. Use the parameter
&quot;-importCheckpoint&quot; and specify the location with &quot;fs.checkpoint.dir&quot;.
The directory structure on the secondary Name Node has changed to match the
primary Name Node.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2656">HADOOP-2656</a></td>
<td>dfs</td>
<td>Associated a generation stamp with each block. On
data nodes, the generation stamp is stored as part of the file name of the
block's meta-data file.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2703">HADOOP-2703</a></td>
<td>dfs</td>
<td>Changed fsck to ignore files opened for writing.
Introduced new option &quot;-openforwrite&quot; to explicitly show open
files.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2797">HADOOP-2797</a></td>
<td>dfs</td>
<td>Withdrew the upgrade-to-CRC facility. HDFS will no
longer support upgrades from versions without CRCs for block data. Users
upgrading from version 0.13 or earlier must first upgrade to an intermediate
version (0.14, 0.15, 0.16, 0.17) before upgrading to version 0.18 or
later.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2865">HADOOP-2865</a></td>
<td>dfs</td>
<td>Changed the output of the &quot;fs -ls&quot; command
to more closely match the familiar Linux format. Additional changes were made by
HADOOP-3459. Applications that parse the command output should be reviewed.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3035">HADOOP-3035</a></td>
<td>dfs</td>
<td>Changed the protocol for transferring blocks between
data nodes to report corrupt blocks to the data node for re-replication from a
good replica.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3113">HADOOP-3113</a></td>
<td>dfs</td>
<td>Added a sync() method to FSDataOutputStream to force
data to be persisted in HDFS. Added InterDatanodeProtocol to implement this
feature.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3164">HADOOP-3164</a></td>
<td>dfs</td>
<td>Changed the data node to use FileChannel.transferTo() to
transfer block data.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3177">HADOOP-3177</a></td>
<td>dfs</td>
<td>Added a new public interface Syncable which declares
the sync() operation. FSDataOutputStream implements Syncable. If the
wrappedStream in FSDataOutputStream is Syncable, calling
FSDataOutputStream.sync() is equivalent to calling wrappedStream.sync(). Otherwise,
FSDataOutputStream.sync() is a no-op. Both DistributedFileSystem and
LocalFileSystem support the sync() operation.</td>
</tr>
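<tr>
<td colspan="3"><i>Example (illustrative):</i> a minimal sketch of forcing
written data to be persisted with sync(). The path and payload are made up,
and error handling is omitted.
<pre>
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);           // a DistributedFileSystem on HDFS
FSDataOutputStream out = fs.create(new Path("/logs/events"));  // path is made up
out.write("one record\n".getBytes());
out.sync();   // delegates to the wrapped stream if it is Syncable; otherwise a no-op
out.close();
</pre>
</td>
</tr>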
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3187">HADOOP-3187</a></td>
<td>dfs</td>
<td>Introduced directory quotas as hard limits on the
number of names in the tree rooted at that directory. An administrator may
set quotas on individual directories explicitly. Newly created directories
have no associated quota. File and directory creations fail if the quota would
be exceeded. An attempt to set a quota fails if the directory would already be in
violation of the new quota.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3193">HADOOP-3193</a></td>
<td>dfs</td>
<td>Added a reporter to FSNamesystem stateChangeLog, and a
new metric to track the number of corrupted replicas.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3232">HADOOP-3232</a></td>
<td>dfs</td>
<td>Changed the 'du' command to run in a separate thread so
that it does not block the user.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3310">HADOOP-3310</a></td>
<td>dfs</td>
<td>Implemented lease recovery to sync the last block of
a file. Added ClientDatanodeProtocol for clients to trigger block recovery.
Changed DatanodeProtocol to support block synchronization. Changed
InterDatanodeProtocol to support block update.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3317">HADOOP-3317</a></td>
<td>dfs</td>
<td>Changed the default port for &quot;hdfs:&quot; URIs
to be 8020, so that one may simply use URIs of the form
&quot;hdfs://example.com/dir/file&quot;.</td>
</tr>
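<tr>
<td colspan="3"><i>Example (illustrative):</i> with the new default port, the
two paths below refer to the same file. The host name is made up.
<pre>
Path p1 = new Path("hdfs://example.com/dir/file");       // implies port 8020
Path p2 = new Path("hdfs://example.com:8020/dir/file");  // explicit port
FileSystem fs = FileSystem.get(URI.create("hdfs://example.com/"), conf);
</pre>
</td>
</tr>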
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3329">HADOOP-3329</a></td>
<td>dfs</td>
<td>Changed the format of the file system image to not store
the locations of the last block.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3336">HADOOP-3336</a></td>
<td>dfs</td>
<td>Added a log4j appender that emits events from
FSNamesystem for audit logging.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3339">HADOOP-3339</a></td>
<td>dfs</td>
<td>Improved failure handling of the last data node in the write
pipeline.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3390">HADOOP-3390</a></td>
<td>dfs</td>
<td>Removed deprecated
ClientProtocol.abandonFileInProgress().</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3452">HADOOP-3452</a></td>
<td>dfs</td>
<td>Changed the exit status of fsck to report whether the
file system is healthy or corrupt.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3459">HADOOP-3459</a></td>
<td>dfs</td>
<td>Changed the output of the &quot;fs -ls&quot; command
to more closely match the familiar Linux format. Applications that parse the
command output should be reviewed.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3486">HADOOP-3486</a></td>
<td>dfs</td>
<td>Changed the default value of
dfs.blockreport.initialDelay to be 0 seconds.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3677">HADOOP-3677</a></td>
<td>dfs</td>
<td>Simplified the generation stamp upgrade by making it a
local upgrade on data nodes. Deleted the distributed upgrade.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2188">HADOOP-2188</a></td>
<td>dfs <br>
ipc</td>
<td>Replaced timeouts with pings to check that the client
connection is alive. Removed the property ipc.client.timeout from the default
Hadoop configuration. Removed the metric RpcOpsDiscardedOPsNum.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3283">HADOOP-3283</a></td>
<td>dfs <br>
ipc</td>
<td>Added an IPC server in DataNode and a new IPC
protocol InterDatanodeProtocol. Added conf properties
dfs.datanode.ipc.address and dfs.datanode.handler.count with defaults
&quot;0.0.0.0:50020&quot; and 3, respectively. <br>
Changed the serialization in DatanodeRegistration
and DatanodeInfo, and therefore updated the versionID in ClientProtocol,
DatanodeProtocol, and NamenodeProtocol.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3058">HADOOP-3058</a></td>
<td>dfs <br>
metrics</td>
<td>Added FSNamesystem status metrics.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3683">HADOOP-3683</a></td>
<td>dfs <br>
metrics</td>
<td>Changed FileListed to getNumGetListingOps and added
CreateFileOps, DeleteFileOps, and AddBlockOps metrics.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3265">HADOOP-3265</a></td>
<td>fs</td>
<td>Removed the deprecated API getFileCacheHints.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3307">HADOOP-3307</a></td>
<td>fs</td>
<td>Introduced an archive feature to Hadoop. A Map/Reduce
job can be run to create an archive with indexes. A FileSystem abstraction is
provided over the archive.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-930">HADOOP-930</a></td>
<td>fs</td>
<td>Added support for reading and writing native S3
files. Native S3 files are referenced using s3n URIs. See
http://wiki.apache.org/hadoop/AmazonS3 for more details.</td>
</tr>
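<tr>
<td colspan="3"><i>Example (illustrative):</i> a sketch of reading a native S3
file. The bucket name and key are made up, and the credential property names
used here are assumptions based on the s3n documentation.
<pre>
conf.set("fs.s3n.awsAccessKeyId", "YOUR_KEY");        // fill in real credentials
conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET");
FileSystem s3 = FileSystem.get(URI.create("s3n://my-bucket/"), conf);
FSDataInputStream in = s3.open(new Path("s3n://my-bucket/data/part-00000"));
</pre>
</td>
</tr>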
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3095">HADOOP-3095</a></td>
<td>fs <br>
fs/s3</td>
<td>Added overloaded method
getFileBlockLocations(FileStatus, long, long). This is an incompatible change
for FileSystem implementations which override getFileBlockLocations(Path,
long, long). They should have the signature of this method changed to
getFileBlockLocations(FileStatus, long, long) to work correctly.</td>
</tr>
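<tr>
<td colspan="3"><i>Example (illustrative):</i> calling the new overload; note
the FileStatus argument in place of the old Path argument. The path is made up.
<pre>
FileSystem fs = FileSystem.get(new Configuration());
FileStatus stat = fs.getFileStatus(new Path("/dir/file"));
BlockLocation[] locs = fs.getFileBlockLocations(stat, 0, stat.getLen());
</pre>
</td>
</tr>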
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-4">HADOOP-4</a></td>
<td>fuse-dfs</td>
<td>Introduced a FUSE module for HDFS. The module allows HDFS
to be mounted as a Unix filesystem, and optionally the export of that mount point
to other machines. Writes are disabled. rmdir, mv, mkdir, and rm are supported,
but not cp, touch, and the like. Usage information is attached to the Jira
record.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3184">HADOOP-3184</a></td>
<td>hod</td>
<td>Modified HOD to handle master (NameNode or
JobTracker) failures on bad nodes by trying to bring them up on another node
in the ring. Introduced a new property ringmaster.max-master-failures to
specify the maximum number of times a master is allowed to fail.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3266">HADOOP-3266</a></td>
<td>hod</td>
<td>Moved HOD change items from CHANGES.txt to a new
file src/contrib/hod/CHANGES.txt.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3376">HADOOP-3376</a></td>
<td>hod</td>
<td>Modified the HOD client to look for specific messages
related to resource limit overruns and take appropriate actions, such as
failing to allocate the cluster or issuing a warning to the user. A
tool is provided, specific to Maui and Torque, that will set these specific
messages.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3464">HADOOP-3464</a></td>
<td>hod</td>
<td>Implemented a mechanism to transfer HOD errors that
occur on compute nodes to the submit node running the HOD client, so users
have good feedback on why an allocation failed.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3483">HADOOP-3483</a></td>
<td>hod</td>
<td>Modified HOD to create a cluster directory if one
does not exist and to auto-deallocate a cluster while reallocating it, if it
is already dead.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3564">HADOOP-3564</a></td>
<td>hod</td>
<td>Modified HOD to generate the dfs.datanode.ipc.address
parameter in the hadoop-site.xml of datanodes that it launches.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3610">HADOOP-3610</a></td>
<td>hod</td>
<td>Modified HOD to automatically create a cluster
directory if the one specified with the script command does not exist.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3703">HADOOP-3703</a></td>
<td>hod</td>
<td>Modified logcondense.py to use the new format of
hadoop dfs -lsr output. This version of logcondense will not work with
previous versions of Hadoop and hence is incompatible.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3061">HADOOP-3061</a></td>
<td>io</td>
<td>Introduced ByteWritable and DoubleWritable,
WritableComparable implementations for byte and double values.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3299">HADOOP-3299</a></td>
<td>io <br>
mapred</td>
<td>Changed the TextInputFormat and KeyValueTextInputFormat
classes to initialize the compressionCodecs member variable before
dereferencing it.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2909">HADOOP-2909</a></td>
<td>ipc</td>
<td>Removed the property ipc.client.maxidletime from the
default configuration. The allowed idle time is twice
ipc.client.connection.maxidletime.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3569">HADOOP-3569</a></td>
<td>KFS</td>
<td>Fixed KFS so that read() reads and returns one byte
instead of four.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-1915">HADOOP-1915</a></td>
<td>mapred</td>
<td>Provided a new method to update counters:
&quot;incrCounter(String group, String counter, long amount)&quot;.</td>
</tr>
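<tr>
<td colspan="3"><i>Example (illustrative):</i> updating a counter from inside
a map() method via the Reporter. The key/value types, group name, and counter
name are made up; the enclosing class is assumed to implement Mapper.
<pre>
public void map(LongWritable key, Text value,
                OutputCollector&lt;Text, IntWritable&gt; output, Reporter reporter)
    throws IOException {
  reporter.incrCounter("MyApp", "RecordsSeen", 1);  // group and counter are made up
  // ... emit output as usual ...
}
</pre>
</td>
</tr>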
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2019">HADOOP-2019</a></td>
<td>mapred</td>
<td>Added support for .tar, .tgz and .tar.gz files in
DistributedCache. File sizes are limited to 2GB.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2095">HADOOP-2095</a></td>
<td>mapred</td>
<td>Reduced in-memory copies of keys and values as they
flow through the Map-Reduce framework. Changed the storage of intermediate
map outputs to use the new IFile format instead of SequenceFile for better
compression.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2132">HADOOP-2132</a></td>
<td>mapred</td>
<td>Changed &quot;job -kill&quot; to only allow a job
that is in the RUNNING or PREP state to be killed.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2181">HADOOP-2181</a></td>
<td>mapred</td>
<td>Added logging of input splits to the job tracker log
and job history log. Added a web UI for viewing input splits in the job UI and
history UI.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-236">HADOOP-236</a></td>
<td>mapred</td>
<td>Changed the connection protocol between the job tracker and task
tracker so that a task tracker will not connect to a job tracker with a
different build version.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2427">HADOOP-2427</a></td>
<td>mapred</td>
<td>The current working directory of a task, i.e.
${mapred.local.dir}/taskTracker/jobcache/&lt;jobid&gt;/&lt;task_dir&gt;/work,
is cleaned up as soon as the task is finished.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-2867">HADOOP-2867</a></td>
<td>mapred</td>
<td>Added the task's cwd to its LD_LIBRARY_PATH.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3135">HADOOP-3135</a></td>
<td>mapred</td>
<td>Changed the job submission protocol to not allow
submission if the client's value of mapred.system.dir does not match the job
tracker's. Deprecated JobConf.getSystemDir(); use JobClient.getSystemDir().</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3221">HADOOP-3221</a></td>
<td>mapred</td>
<td>Added org.apache.hadoop.mapred.lib.NLineInputFormat,
which splits N lines of input as one split. N can be specified by
configuration property &quot;mapred.line.input.format.linespermap&quot;,
which defaults to 1.</td>
</tr>
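<tr>
<td colspan="3"><i>Example (illustrative):</i> a sketch of configuring a job
so that every 10 input lines become one split, and hence one map task.
<pre>
JobConf job = new JobConf();
job.setInputFormat(NLineInputFormat.class);
job.setInt("mapred.line.input.format.linespermap", 10);  // N = 10
</pre>
</td>
</tr>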
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3226">HADOOP-3226</a></td>
<td>mapred</td>
<td>Changed the policy for running the combiner. The combiner
may be run multiple times as the map's output is sorted and merged.
Additionally, it may be run on the reduce side as data is merged. The old
semantics are available in Hadoop 0.18 if the user calls
job.setCombineOnlyOnce(true).</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3326">HADOOP-3326</a></td>
<td>mapred</td>
<td>Changed fetchOutputs() so that the LocalFSMerger and
InMemFSMergeThread threads are spawned only once. The threads get notified
when something is ready for merge. The merge happens when thresholds are met.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3366">HADOOP-3366</a></td>
<td>mapred</td>
<td>Improved the shuffle so that all fetched map outputs are
kept in memory before being merged: the shuffle stalls while the in-memory
merge executes and frees up memory for further fetches.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3405">HADOOP-3405</a></td>
<td>mapred</td>
<td>Refactored the previously public classes MapTaskStatus,
ReduceTaskStatus, JobSubmissionProtocol, and CompletedJobStatusStore to be
package-private.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3417">HADOOP-3417</a></td>
<td>mapred</td>
<td>Removed the public class
org.apache.hadoop.mapred.JobShell. <br>
The command line options -libjars, -files and -archives were moved to
GenericCommands. Thus applications have to implement
org.apache.hadoop.util.Tool to use these options.</td>
</tr>
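<tr>
<td colspan="3"><i>Example (illustrative):</i> a minimal Tool implementation;
running it through ToolRunner is what makes -libjars, -files and -archives
available. The class name and job setup are made up.
<pre>
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyTool extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    JobConf job = new JobConf(getConf(), MyTool.class);
    // ... set input/output formats and paths, then submit ...
    JobClient.runJob(job);
    return 0;
  }
  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MyTool(), args));  // parses the generic options
  }
}
</pre>
</td>
</tr>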
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3427">HADOOP-3427</a></td>
<td>mapred</td>
<td>Changed shuffle scheduler policy to wait for
notifications from shuffle threads before scheduling more.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3460">HADOOP-3460</a></td>
<td>mapred</td>
<td>Created SequenceFileAsBinaryOutputFormat to write
raw bytes as keys and values to a SequenceFile.</td>
</tr>
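<tr>
<td colspan="3"><i>Example (illustrative):</i> a sketch of configuring a job
to write raw bytes; this assumes the job's reduce output is BytesWritable
keys and values, and additional format-specific setters may apply.
<pre>
JobConf job = new JobConf();
job.setOutputFormat(SequenceFileAsBinaryOutputFormat.class);
job.setOutputKeyClass(BytesWritable.class);    // raw bytes as keys
job.setOutputValueClass(BytesWritable.class);  // raw bytes as values
</pre>
</td>
</tr>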
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3512">HADOOP-3512</a></td>
<td>mapred</td>
<td>Separated Distcp, Logalyzer and Archiver into a
tools jar.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3565">HADOOP-3565</a></td>
<td>mapred</td>
<td>Changed the Java serialization framework, which is
not enabled by default, to correctly make the objects independent of the
previous objects.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3598">HADOOP-3598</a></td>
<td>mapred</td>
<td>Changed the Map-Reduce framework to no longer create the
temporary task output directory ${mapred.out.dir}/_temporary/_${taskid} when
staging outputs is not necessary.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-544">HADOOP-544</a></td>
<td>mapred</td>
<td>Introduced new classes JobID, TaskID and
TaskAttemptID, which should be used instead of their string counterparts.
Deprecated functions in JobClient, TaskReport, RunningJob, jobcontrol.Job and
TaskCompletionEvent that use string arguments. Applications can use
xxxID.toString() and xxxID.forName() methods to convert/restore objects
to/from strings.</td>
</tr>
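<tr>
<td colspan="3"><i>Example (illustrative):</i> round-tripping an id through
its string form; the id values are made up.
<pre>
JobID id = JobID.forName("job_200809121752_0001");  // parse from a string
String s = id.toString();                           // back to the same string
TaskAttemptID tid =
    TaskAttemptID.forName("attempt_200809121752_0001_m_000000_0");
</pre>
</td>
</tr>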
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3230">HADOOP-3230</a></td>
<td>scripts</td>
<td>Added a command line tool &quot;job -counter
&lt;job-id&gt; &lt;group-name&gt; &lt;counter-name&gt;&quot; to access
counters.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-1328">HADOOP-1328</a></td>
<td>streaming</td>
<td>Introduced a way for a streaming process to update
global counters and status by emitting information on the stderr stream. Use
&quot;reporter:counter:&lt;group&gt;,&lt;counter&gt;,&lt;amount&gt;&quot; to
update a counter. Use &quot;reporter:status:&lt;message&gt;&quot; to update
status.</td>
</tr>
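<tr>
<td colspan="3"><i>Example (illustrative):</i> lines a streaming script might
write to its stderr stream; the group, counter, and message are made up.
<pre>
reporter:counter:MyApp,RecordsSeen,1
reporter:status:processing split 3 of 10
</pre>
</td>
</tr>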
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3429">HADOOP-3429</a></td>
<td>streaming</td>
<td>Increased the size of the buffer used in the
communication between the Java task and the Streaming process to 128KB.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3379">HADOOP-3379</a></td>
<td>streaming <br>
documentation</td>
<td>Set default value for configuration property
&quot;stream.non.zero.exit.status.is.failure&quot; to be &quot;true&quot;.</td>
</tr>
<tr>
<td><a
href="https://issues.apache.org/jira/browse/HADOOP-3246">HADOOP-3246</a></td>
<td>util</td>
<td>Introduced an FTPFileSystem backed by Apache Commons
FTPClient to directly store data into HDFS.</td>
</tr>
</tbody></table>
</font>
</body>
</html>