<html>
<body>
<table border="1">
<tr>
<th>name</th><th>value</th><th>description</th>
</tr>
<tr>
<td><a name="hadoop.tmp.dir">hadoop.tmp.dir</a></td><td>/tmp/hadoop-${user.name}</td><td>A base for other temporary directories.</td>
</tr>
<tr>
<td><a name="hadoop.native.lib">hadoop.native.lib</a></td><td>true</td><td>Should native hadoop libraries, if present, be used?</td>
</tr>
<tr>
<td><a name="hadoop.logfile.size">hadoop.logfile.size</a></td><td>10000000</td><td>The max size of each log file</td>
</tr>
<tr>
<td><a name="hadoop.logfile.count">hadoop.logfile.count</a></td><td>10</td><td>The max number of log files</td>
</tr>
<tr>
<td><a name="hadoop.job.history.location">hadoop.job.history.location</a></td><td></td><td> If the job tracker is static, the history files are stored
in this single well-known place. If no value is set here, by default
they are stored in the local file system at ${hadoop.log.dir}/history.
</td>
</tr>
<tr>
<td><a name="hadoop.job.history.user.location">hadoop.job.history.user.location</a></td><td></td><td> The user can specify a location to store the history files of
a particular job. If nothing is specified, the logs are stored in the
job's output directory, under "_logs/history/".
The user can stop logging by giving the value "none".
</td>
</tr>
<tr>
<td><a name="dfs.namenode.logging.level">dfs.namenode.logging.level</a></td><td>info</td><td>The logging level for dfs namenode. Other values are "dir" (trace
namespace mutations), "block" (trace block under/over replications and block
creations/deletions), or "all".</td>
</tr>
<tr>
<td><a name="io.sort.factor">io.sort.factor</a></td><td>10</td><td>The number of streams to merge at once while sorting
files. This determines the number of open file handles.</td>
</tr>
<tr>
<td><a name="io.sort.mb">io.sort.mb</a></td><td>100</td><td>The total amount of buffer memory to use while sorting
files, in megabytes. By default, gives each merge stream 1MB, which
should minimize seeks.</td>
</tr>
<tr>
<td><a name="io.sort.record.percent">io.sort.record.percent</a></td><td>0.05</td><td>The percentage of io.sort.mb dedicated to tracking record
boundaries. Let this value be r, io.sort.mb be x. The maximum number
of records collected before the collection thread must block is equal
to (r * x) / 4</td>
</tr>
<tr>
<td><a name="io.sort.spill.percent">io.sort.spill.percent</a></td><td>0.80</td><td>The soft limit in either the buffer or record collection
buffers. Once reached, a thread will begin to spill the contents to disk
in the background. Note that this does not imply any chunking of data to
the spill. A value less than 0.5 is not recommended.</td>
</tr>
<tr>
<td><a name="io.file.buffer.size">io.file.buffer.size</a></td><td>4096</td><td>The size of buffer for use in sequence files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.</td>
</tr>
<tr>
<td><a name="io.bytes.per.checksum">io.bytes.per.checksum</a></td><td>512</td><td>The number of bytes per checksum. Must not be larger than
io.file.buffer.size.</td>
</tr>
<tr>
<td><a name="io.skip.checksum.errors">io.skip.checksum.errors</a></td><td>false</td><td>If true, when a checksum error is encountered while
reading a sequence file, entries are skipped, instead of throwing an
exception.</td>
</tr>
<tr>
<td><a name="io.map.index.skip">io.map.index.skip</a></td><td>0</td><td>Number of index entries to skip between each entry.
Zero by default. Setting this to values larger than zero can
facilitate opening large map files using less memory.</td>
</tr>
<tr>
<td><a name="io.compression.codecs">io.compression.codecs</a></td><td>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.LzopCodec</td><td>A list of the compression codec classes that can be used
for compression/decompression.</td>
</tr>
<tr>
<td><a name="io.serializations">io.serializations</a></td><td>org.apache.hadoop.io.serializer.WritableSerialization</td><td>A list of serialization classes that can be used for
obtaining serializers and deserializers.</td>
</tr>
<tr>
<td><a name="fs.default.name">fs.default.name</a></td><td>file:///</td><td>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</td>
</tr>
<tr>
<td><a name="fs.trash.interval">fs.trash.interval</a></td><td>0</td><td>Number of minutes between trash checkpoints.
If zero, the trash feature is disabled.
</td>
</tr>
<tr>
<td><a name="fs.file.impl">fs.file.impl</a></td><td>org.apache.hadoop.fs.LocalFileSystem</td><td>The FileSystem for file: uris.</td>
</tr>
<tr>
<td><a name="fs.hdfs.impl">fs.hdfs.impl</a></td><td>org.apache.hadoop.hdfs.DistributedFileSystem</td><td>The FileSystem for hdfs: uris.</td>
</tr>
<tr>
<td><a name="fs.s3.impl">fs.s3.impl</a></td><td>org.apache.hadoop.fs.s3.S3FileSystem</td><td>The FileSystem for s3: uris.</td>
</tr>
<tr>
<td><a name="fs.s3n.impl">fs.s3n.impl</a></td><td>org.apache.hadoop.fs.s3native.NativeS3FileSystem</td><td>The FileSystem for s3n: (Native S3) uris.</td>
</tr>
<tr>
<td><a name="fs.kfs.impl">fs.kfs.impl</a></td><td>org.apache.hadoop.fs.kfs.KosmosFileSystem</td><td>The FileSystem for kfs: uris.</td>
</tr>
<tr>
<td><a name="fs.hftp.impl">fs.hftp.impl</a></td><td>org.apache.hadoop.hdfs.HftpFileSystem</td><td></td>
</tr>
<tr>
<td><a name="fs.hsftp.impl">fs.hsftp.impl</a></td><td>org.apache.hadoop.hdfs.HsftpFileSystem</td><td></td>
</tr>
<tr>
<td><a name="fs.ftp.impl">fs.ftp.impl</a></td><td>org.apache.hadoop.fs.ftp.FTPFileSystem</td><td>The FileSystem for ftp: uris.</td>
</tr>
<tr>
<td><a name="fs.ramfs.impl">fs.ramfs.impl</a></td><td>org.apache.hadoop.fs.InMemoryFileSystem</td><td>The FileSystem for ramfs: uris.</td>
</tr>
<tr>
<td><a name="fs.har.impl">fs.har.impl</a></td><td>org.apache.hadoop.fs.HarFileSystem</td><td>The filesystem for Hadoop archives. </td>
</tr>
<tr>
<td><a name="fs.inmemory.size.mb">fs.inmemory.size.mb</a></td><td>75</td><td>The size of the in-memory filesystem instance in MB</td>
</tr>
<tr>
<td><a name="fs.checkpoint.dir">fs.checkpoint.dir</a></td><td>${hadoop.tmp.dir}/dfs/namesecondary</td><td>Determines where on the local filesystem the DFS secondary
name node should store the temporary images and edits to merge.
If this is a comma-delimited list of directories then the image is
replicated in all of the directories for redundancy.
</td>
</tr>
<tr>
<td><a name="fs.checkpoint.period">fs.checkpoint.period</a></td><td>3600</td><td>The number of seconds between two periodic checkpoints.
</td>
</tr>
<tr>
<td><a name="fs.checkpoint.size">fs.checkpoint.size</a></td><td>67108864</td><td>The size of the current edit log (in bytes) that triggers
a periodic checkpoint even if the fs.checkpoint.period hasn't expired.
</td>
</tr>
<tr>
<td><a name="dfs.secondary.http.address">dfs.secondary.http.address</a></td><td>0.0.0.0:50090</td><td>
The secondary namenode http server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.address">dfs.datanode.address</a></td><td>0.0.0.0:50010</td><td>
The address that the datanode server will listen on.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.http.address">dfs.datanode.http.address</a></td><td>0.0.0.0:50075</td><td>
The datanode http server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.ipc.address">dfs.datanode.ipc.address</a></td><td>0.0.0.0:50020</td><td>
The datanode ipc server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.handler.count">dfs.datanode.handler.count</a></td><td>3</td><td>The number of server threads for the datanode.</td>
</tr>
<tr>
<td><a name="dfs.http.address">dfs.http.address</a></td><td>0.0.0.0:50070</td><td>
The address and the base port on which the dfs namenode web ui will listen.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.https.address">dfs.datanode.https.address</a></td><td>0.0.0.0:50475</td><td></td>
</tr>
<tr>
<td><a name="dfs.https.address">dfs.https.address</a></td><td>0.0.0.0:50470</td><td></td>
</tr>
<tr>
<td><a name="https.keystore.info.rsrc">https.keystore.info.rsrc</a></td><td>sslinfo.xml</td><td>The name of the resource from which ssl keystore information
will be extracted
</td>
</tr>
<tr>
<td><a name="dfs.datanode.dns.interface">dfs.datanode.dns.interface</a></td><td>default</td><td>The name of the Network Interface from which a data node should
report its IP address.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.dns.nameserver">dfs.datanode.dns.nameserver</a></td><td>default</td><td>The host name or IP address of the name server (DNS)
which a DataNode should use to determine the host name used by the
NameNode for communication and display purposes.
</td>
</tr>
<tr>
<td><a name="dfs.replication.considerLoad">dfs.replication.considerLoad</a></td><td>true</td><td>Decides whether chooseTarget considers the target's load.
</td>
</tr>
<tr>
<td><a name="dfs.default.chunk.view.size">dfs.default.chunk.view.size</a></td><td>32768</td><td>The number of bytes to view for a file on the browser.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.du.reserved">dfs.datanode.du.reserved</a></td><td>0</td><td>Reserved space in bytes per volume. Always leave this much space free for non dfs use.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.du.pct">dfs.datanode.du.pct</a></td><td>0.98f</td><td>When calculating remaining space, only use this percentage of the real available space
</td>
</tr>
<tr>
<td><a name="dfs.name.dir">dfs.name.dir</a></td><td>${hadoop.tmp.dir}/dfs/name</td><td>Determines where on the local filesystem the DFS name node
should store the name table. If this is a comma-delimited list
of directories then the name table is replicated in all of the
directories, for redundancy. </td>
</tr>
<tr>
<td><a name="dfs.web.ugi">dfs.web.ugi</a></td><td>webuser,webgroup</td><td>The user account used by the web interface.
Syntax: USERNAME,GROUP1,GROUP2, ...
</td>
</tr>
<tr>
<td><a name="dfs.permissions">dfs.permissions</a></td><td>true</td><td>
If "true", enable permission checking in HDFS.
If "false", permission checking is turned off,
but all other behavior is unchanged.
Switching from one parameter value to the other does not change the mode,
owner or group of files or directories.
</td>
</tr>
<tr>
<td><a name="dfs.permissions.supergroup">dfs.permissions.supergroup</a></td><td>supergroup</td><td>The name of the group of super-users.</td>
</tr>
<tr>
<td><a name="dfs.client.buffer.dir">dfs.client.buffer.dir</a></td><td>${hadoop.tmp.dir}/dfs/tmp</td><td>Determines where on the local filesystem a DFS client
should store its blocks before it sends them to the datanode.
</td>
</tr>
<tr>
<td><a name="dfs.data.dir">dfs.data.dir</a></td><td>${hadoop.tmp.dir}/dfs/data</td><td>Determines where on the local filesystem a DFS data node
should store its blocks. If this is a comma-delimited
list of directories, then data will be stored in all named
directories, typically on different devices.
Directories that do not exist are ignored.
</td>
</tr>
<tr>
<td><a name="dfs.replication">dfs.replication</a></td><td>3</td><td>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified at create time.
</td>
</tr>
<tr>
<td><a name="dfs.replication.max">dfs.replication.max</a></td><td>512</td><td>Maximal block replication.
</td>
</tr>
<tr>
<td><a name="dfs.replication.min">dfs.replication.min</a></td><td>1</td><td>Minimal block replication.
</td>
</tr>
<tr>
<td><a name="dfs.block.size">dfs.block.size</a></td><td>67108864</td><td>The default block size for new files.</td>
</tr>
<tr>
<td><a name="dfs.df.interval">dfs.df.interval</a></td><td>60000</td><td>Disk usage statistics refresh interval in msec.</td>
</tr>
<tr>
<td><a name="dfs.client.block.write.retries">dfs.client.block.write.retries</a></td><td>3</td><td>The number of retries for writing blocks to the data nodes,
before we signal failure to the application.
</td>
</tr>
<tr>
<td><a name="dfs.blockreport.intervalMsec">dfs.blockreport.intervalMsec</a></td><td>3600000</td><td>Determines block reporting interval in milliseconds.</td>
</tr>
<tr>
<td><a name="dfs.blockreport.initialDelay">dfs.blockreport.initialDelay</a></td><td>0</td><td>Delay for first block report in seconds.</td>
</tr>
<tr>
<td><a name="dfs.heartbeat.interval">dfs.heartbeat.interval</a></td><td>3</td><td>Determines datanode heartbeat interval in seconds.</td>
</tr>
<tr>
<td><a name="dfs.namenode.handler.count">dfs.namenode.handler.count</a></td><td>10</td><td>The number of server threads for the namenode.</td>
</tr>
<tr>
<td><a name="dfs.safemode.threshold.pct">dfs.safemode.threshold.pct</a></td><td>0.999f</td><td>
Specifies the percentage of blocks that should satisfy
the minimal replication requirement defined by dfs.replication.min.
Values less than or equal to 0 mean not to start in safe mode.
Values greater than 1 will make safe mode permanent.
</td>
</tr>
<tr>
<td><a name="dfs.safemode.extension">dfs.safemode.extension</a></td><td>30000</td><td>
Determines extension of safe mode in milliseconds
after the threshold level is reached.
</td>
</tr>
<tr>
<td><a name="dfs.balance.bandwidthPerSec">dfs.balance.bandwidthPerSec</a></td><td>1048576</td><td>
Specifies the maximum amount of bandwidth that each datanode
can utilize for balancing purposes, in terms of
the number of bytes per second.
</td>
</tr>
<tr>
<td><a name="dfs.hosts">dfs.hosts</a></td><td></td><td>Names a file that contains a list of hosts that are
permitted to connect to the namenode. The full pathname of the file
must be specified. If the value is empty, all hosts are
permitted.</td>
</tr>
<tr>
<td><a name="dfs.hosts.exclude">dfs.hosts.exclude</a></td><td></td><td>Names a file that contains a list of hosts that are
not permitted to connect to the namenode. The full pathname of the
file must be specified. If the value is empty, no hosts are
excluded.</td>
</tr>
<tr>
<td><a name="dfs.max.objects">dfs.max.objects</a></td><td>0</td><td>The maximum number of files, directories and blocks
dfs supports. A value of zero indicates no limit to the number
of objects that dfs supports.
</td>
</tr>
<tr>
<td><a name="dfs.namenode.decommission.interval">dfs.namenode.decommission.interval</a></td><td>300</td><td>Namenode periodicity in seconds to check if decommission is complete.</td>
</tr>
<tr>
<td><a name="dfs.replication.interval">dfs.replication.interval</a></td><td>3</td><td>The periodicity in seconds with which the namenode computes replication work for datanodes. </td>
</tr>
<tr>
<td><a name="fs.s3.block.size">fs.s3.block.size</a></td><td>67108864</td><td>Block size to use when writing files to S3.</td>
</tr>
<tr>
<td><a name="fs.s3.buffer.dir">fs.s3.buffer.dir</a></td><td>${hadoop.tmp.dir}/s3</td><td>Determines where on the local filesystem the S3 filesystem
should store files before sending them to S3
(or after retrieving them from S3).
</td>
</tr>
<tr>
<td><a name="fs.s3.maxRetries">fs.s3.maxRetries</a></td><td>4</td><td>The maximum number of retries for reading or writing files to S3,
before we signal failure to the application.
</td>
</tr>
<tr>
<td><a name="fs.s3.sleepTimeSeconds">fs.s3.sleepTimeSeconds</a></td><td>10</td><td>The number of seconds to sleep between each S3 retry.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker">mapred.job.tracker</a></td><td>local</td><td>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.http.address">mapred.job.tracker.http.address</a></td><td>0.0.0.0:50030</td><td>
The job tracker http server address and port the server will listen on.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.handler.count">mapred.job.tracker.handler.count</a></td><td>10</td><td>
The number of server threads for the JobTracker. This should be roughly
4% of the number of tasktracker nodes.
</td>
</tr>
<tr>
<td><a name="mapred.task.tracker.report.address">mapred.task.tracker.report.address</a></td><td>127.0.0.1:0</td><td>The interface and port that task tracker server listens on.
Since it is only connected to by the tasks, it uses the local interface.
EXPERT ONLY. Should only be changed if your host does not have the loopback
interface.</td>
</tr>
<tr>
<td><a name="mapred.local.dir">mapred.local.dir</a></td><td>${hadoop.tmp.dir}/mapred/local</td><td>The local directory where MapReduce stores intermediate
data files. May be a comma-separated list of
directories on different devices in order to spread disk i/o.
Directories that do not exist are ignored.
</td>
</tr>
<tr>
<td><a name="local.cache.size">local.cache.size</a></td><td>10737418240</td><td>The limit on the size of cache you want to keep, set by default
to 10GB. This will act as a soft limit on the cache directory for out of band data.
</td>
</tr>
<tr>
<td><a name="mapred.system.dir">mapred.system.dir</a></td><td>${hadoop.tmp.dir}/mapred/system</td><td>The shared directory where MapReduce stores control files.
</td>
</tr>
<tr>
<td><a name="mapred.temp.dir">mapred.temp.dir</a></td><td>${hadoop.tmp.dir}/mapred/temp</td><td>A shared directory for temporary files.
</td>
</tr>
<tr>
<td><a name="mapred.local.dir.minspacestart">mapred.local.dir.minspacestart</a></td><td>0</td><td>If the space in mapred.local.dir drops under this,
do not ask for more tasks.
Value in bytes.
</td>
</tr>
<tr>
<td><a name="mapred.local.dir.minspacekill">mapred.local.dir.minspacekill</a></td><td>0</td><td>If the space in mapred.local.dir drops under this,
do not ask for more tasks until all the current ones have finished and
cleaned up. Also, to save the rest of the tasks we have running,
kill one of them, to clean up some space. Start with the reduce tasks,
then go with the ones that have finished the least.
Value in bytes.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.expiry.interval">mapred.tasktracker.expiry.interval</a></td><td>600000</td><td>Expert: The time-interval, in milliseconds, after which
a tasktracker is declared 'lost' if it doesn't send heartbeats.
</td>
</tr>
<tr>
<td><a name="mapred.map.tasks">mapred.map.tasks</a></td><td>2</td><td>The default number of map tasks per job. Typically set
to a prime several times greater than the number of available hosts.
Ignored when mapred.job.tracker is "local".
</td>
</tr>
<tr>
<td><a name="mapred.reduce.tasks">mapred.reduce.tasks</a></td><td>1</td><td>The default number of reduce tasks per job. Typically set
to a prime close to the number of available hosts. Ignored when
mapred.job.tracker is "local".
</td>
</tr>
<tr>
<td><a name="mapred.map.max.attempts">mapred.map.max.attempts</a></td><td>4</td><td>Expert: The maximum number of attempts per map task.
In other words, the framework will try to execute a map task this many
times before giving up on it.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.max.attempts">mapred.reduce.max.attempts</a></td><td>4</td><td>Expert: The maximum number of attempts per reduce task.
In other words, the framework will try to execute a reduce task this many
times before giving up on it.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.parallel.copies">mapred.reduce.parallel.copies</a></td><td>5</td><td>The default number of parallel transfers run by reduce
during the copy(shuffle) phase.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.copy.backoff">mapred.reduce.copy.backoff</a></td><td>300</td><td>The maximum amount of time (in seconds) a reducer spends on
fetching one map output before declaring it as failed.
</td>
</tr>
<tr>
<td><a name="mapred.task.timeout">mapred.task.timeout</a></td><td>600000</td><td>The number of milliseconds before a task will be
terminated if it neither reads an input, writes an output, nor
updates its status string.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.map.tasks.maximum">mapred.tasktracker.map.tasks.maximum</a></td><td>2</td><td>The maximum number of map tasks that will be run
simultaneously by a task tracker.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.reduce.tasks.maximum">mapred.tasktracker.reduce.tasks.maximum</a></td><td>2</td><td>The maximum number of reduce tasks that will be run
simultaneously by a task tracker.
</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.completeuserjobs.maximum">mapred.jobtracker.completeuserjobs.maximum</a></td><td>100</td><td>The maximum number of complete jobs per user to keep around before delegating them to the job history.
</td>
</tr>
<tr>
<td><a name="mapred.child.java.opts">mapred.child.java.opts</a></td><td>-Xmx200m</td><td>Java opts for the task tracker child processes.
The following symbol, if present, will be interpolated: @taskid@ is replaced
by the current TaskID. Any other occurrences of '@' will go unchanged.
For example, to enable verbose gc logging to a file named for the taskid in
/tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
-Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
The configuration variable mapred.child.ulimit can be used to control the
maximum virtual memory of the child processes.
</td>
</tr>
<tr>
<td><a name="mapred.child.ulimit">mapred.child.ulimit</a></td><td></td><td>The maximum virtual memory, in KB, of a process launched by the
Map-Reduce framework. This can be used to control both the Mapper/Reducer
tasks and applications using Hadoop Pipes, Hadoop Streaming etc.
By default it is left unspecified to let cluster admins control it via
limits.conf and other such relevant mechanisms.
Note: mapred.child.ulimit must be greater than or equal to the -Xmx passed to
JavaVM, else the VM might not start.
</td>
</tr>
<tr>
<td><a name="mapred.child.tmp">mapred.child.tmp</a></td><td>./tmp</td><td> Sets the value of the tmp directory for map and reduce tasks.
If the value is an absolute path, it is directly assigned. Otherwise, it is
prepended with the task's working directory. The java tasks are executed with
the option -Djava.io.tmpdir='the absolute path of the tmp dir'. Pipes and
streaming are set with the environment variable
TMPDIR='the absolute path of the tmp dir'
</td>
</tr>
<tr>
<td><a name="mapred.inmem.merge.threshold">mapred.inmem.merge.threshold</a></td><td>1000</td><td>The threshold, in terms of the number of files,
for the in-memory merge process. When we accumulate this number of files,
we initiate the in-memory merge and spill to disk. A value of 0 or less
indicates that there is no threshold, and the merge is instead triggered
solely by the ramfs's memory consumption.
</td>
</tr>
<tr>
<td><a name="mapred.map.tasks.speculative.execution">mapred.map.tasks.speculative.execution</a></td><td>true</td><td>If true, then multiple instances of some map tasks
may be executed in parallel.</td>
</tr>
<tr>
<td><a name="mapred.reduce.tasks.speculative.execution">mapred.reduce.tasks.speculative.execution</a></td><td>true</td><td>If true, then multiple instances of some reduce tasks
may be executed in parallel.</td>
</tr>
<tr>
<td><a name="mapred.min.split.size">mapred.min.split.size</a></td><td>0</td><td>The minimum size chunk that map input should be split
into. Note that some file formats may have minimum split sizes that
take priority over this setting.</td>
</tr>
<tr>
<td><a name="mapred.submit.replication">mapred.submit.replication</a></td><td>10</td><td>The replication level for submitted job files. This
should be around the square root of the number of nodes.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.dns.interface">mapred.tasktracker.dns.interface</a></td><td>default</td><td>The name of the Network Interface from which a task
tracker should report its IP address.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.dns.nameserver">mapred.tasktracker.dns.nameserver</a></td><td>default</td><td>The host name or IP address of the name server (DNS)
which a TaskTracker should use to determine the host name used by
the JobTracker for communication and display purposes.
</td>
</tr>
<tr>
<td><a name="tasktracker.http.threads">tasktracker.http.threads</a></td><td>40</td><td>The number of worker threads for the http server. This is
used for map output fetching
</td>
</tr>
<tr>
<td><a name="mapred.task.tracker.http.address">mapred.task.tracker.http.address</a></td><td>0.0.0.0:50060</td><td>
The task tracker http server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="keep.failed.task.files">keep.failed.task.files</a></td><td>false</td><td>Should the files for failed tasks be kept? This should only be
used on jobs that are failing, because the storage is never
reclaimed. It also prevents the map outputs from being erased
from the reduce directory as they are consumed.</td>
</tr>
<tr>
<td><a name="mapred.output.compress">mapred.output.compress</a></td><td>false</td><td>Should the job outputs be compressed?
</td>
</tr>
<tr>
<td><a name="mapred.output.compression.type">mapred.output.compression.type</a></td><td>RECORD</td><td>If the job outputs are to be compressed as SequenceFiles, how should
they be compressed? Should be one of NONE, RECORD or BLOCK.
</td>
</tr>
<tr>
<td><a name="mapred.output.compression.codec">mapred.output.compression.codec</a></td><td>org.apache.hadoop.io.compress.DefaultCodec</td><td>If the job outputs are compressed, how should they be compressed?
</td>
</tr>
<tr>
<td><a name="mapred.compress.map.output">mapred.compress.map.output</a></td><td>false</td><td>Should the outputs of the maps be compressed before being
sent across the network? Uses SequenceFile compression.
</td>
</tr>
<tr>
<td><a name="mapred.map.output.compression.codec">mapred.map.output.compression.codec</a></td><td>org.apache.hadoop.io.compress.DefaultCodec</td><td>If the map outputs are compressed, how should they be
compressed?
</td>
</tr>
<tr>
<td><a name="io.seqfile.compress.blocksize">io.seqfile.compress.blocksize</a></td><td>1000000</td><td>The minimum block size for compression in block compressed
SequenceFiles.
</td>
</tr>
<tr>
<td><a name="io.seqfile.lazydecompress">io.seqfile.lazydecompress</a></td><td>true</td><td>Should values of block-compressed SequenceFiles be decompressed
only when necessary?
</td>
</tr>
<tr>
<td><a name="io.seqfile.sorter.recordlimit">io.seqfile.sorter.recordlimit</a></td><td>1000000</td><td>The limit on the number of records to be kept in memory in a spill
in SequenceFiles.Sorter
</td>
</tr>
<tr>
<td><a name="map.sort.class">map.sort.class</a></td><td>org.apache.hadoop.util.QuickSort</td><td>The default sort class for sorting keys.
</td>
</tr>
<tr>
<td><a name="mapred.userlog.limit.kb">mapred.userlog.limit.kb</a></td><td>0</td><td>The maximum size of user-logs of each task in KB. 0 disables the cap.
</td>
</tr>
<tr>
<td><a name="mapred.userlog.retain.hours">mapred.userlog.retain.hours</a></td><td>24</td><td>The maximum time, in hours, for which the user-logs are to be
retained.
</td>
</tr>
<tr>
<td><a name="mapred.hosts">mapred.hosts</a></td><td></td><td>Names a file that contains the list of nodes that may
connect to the jobtracker. If the value is empty, all hosts are
permitted.</td>
</tr>
<tr>
<td><a name="mapred.hosts.exclude">mapred.hosts.exclude</a></td><td></td><td>Names a file that contains the list of hosts that
should be excluded by the jobtracker. If the value is empty, no
hosts are excluded.</td>
</tr>
<tr>
<td><a name="mapred.max.tracker.failures">mapred.max.tracker.failures</a></td><td>4</td><td>The number of task-failures on a tasktracker of a given job
after which new tasks of that job aren't assigned to it.
</td>
</tr>
<tr>
<td><a name="jobclient.output.filter">jobclient.output.filter</a></td><td>FAILED</td><td>The filter for controlling the output of the task's userlogs sent
to the console of the JobClient.
The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and
ALL.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.persist.jobstatus.active">mapred.job.tracker.persist.jobstatus.active</a></td><td>false</td><td>Indicates whether persistence of job status information is
active or not.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.persist.jobstatus.hours">mapred.job.tracker.persist.jobstatus.hours</a></td><td>0</td><td>The number of hours job status information is persisted in DFS.
The job status information will be available after it drops out of the memory
queue and between jobtracker restarts. With a zero value the job status
information is not persisted at all in DFS.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.persist.jobstatus.dir">mapred.job.tracker.persist.jobstatus.dir</a></td><td>/jobtracker/jobsInfo</td><td>The directory where the job status information is persisted
in a file system to be available after it drops out of the memory queue and
between jobtracker restarts.
</td>
</tr>
<tr>
<td><a name="mapred.task.profile">mapred.task.profile</a></td><td>false</td><td>Sets whether the system should collect profiler
information for some of the tasks in this job. The information is stored
in the user log directory. The value is "true" if task profiling
is enabled.</td>
</tr>
<tr>
<td><a name="mapred.task.profile.maps">mapred.task.profile.maps</a></td><td>0-2</td><td> Sets the ranges of map tasks to profile.
mapred.task.profile has to be set to true for this value to take effect.
</td>
</tr>
<tr>
<td><a name="mapred.task.profile.reduces">mapred.task.profile.reduces</a></td><td>0-2</td><td> Sets the ranges of reduce tasks to profile.
mapred.task.profile has to be set to true for this value to take effect.
</td>
</tr>
<tr>
<td><a name="mapred.line.input.format.linespermap">mapred.line.input.format.linespermap</a></td><td>1</td><td> Number of lines per split in NLineInputFormat.
</td>
</tr>
<tr>
<td><a name="ipc.client.idlethreshold">ipc.client.idlethreshold</a></td><td>4000</td><td>Defines the threshold number of connections after which
connections will be inspected for idleness.
</td>
</tr>
<tr>
<td><a name="ipc.client.kill.max">ipc.client.kill.max</a></td><td>10</td><td>Defines the maximum number of clients to disconnect in one go.
</td>
</tr>
<tr>
<td><a name="ipc.client.connection.maxidletime">ipc.client.connection.maxidletime</a></td><td>10000</td><td>The maximum time in msec after which a client will bring down the
connection to the server.
</td>
</tr>
<tr>
<td><a name="ipc.client.connect.max.retries">ipc.client.connect.max.retries</a></td><td>10</td><td>Indicates the number of retries a client will make to establish
a server connection.
</td>
</tr>
<tr>
<td><a name="ipc.server.listen.queue.size">ipc.server.listen.queue.size</a></td><td>128</td><td>Indicates the length of the listen queue for servers accepting
client connections.
</td>
</tr>
<tr>
<td><a name="ipc.server.tcpnodelay">ipc.server.tcpnodelay</a></td><td>false</td><td>Turn on/off Nagle's algorithm for the TCP socket connection on
the server. Setting to true disables the algorithm and may decrease latency
at the cost of more/smaller packets.
</td>
</tr>
<tr>
<td><a name="ipc.client.tcpnodelay">ipc.client.tcpnodelay</a></td><td>false</td><td>Turn on/off Nagle's algorithm for the TCP socket connection on
the client. Setting to true disables the algorithm and may decrease latency
at the cost of more/smaller packets.
</td>
</tr>
<tr>
<td><a name="job.end.retry.attempts">job.end.retry.attempts</a></td><td>0</td><td>Indicates how many times hadoop should attempt to contact the
notification URL </td>
</tr>
<tr>
<td><a name="job.end.retry.interval">job.end.retry.interval</a></td><td>30000</td><td>Indicates time in milliseconds between notification URL retry
calls</td>
</tr>
<tr>
<td><a name="webinterface.private.actions">webinterface.private.actions</a></td><td>false</td><td> If set to true, the web interfaces of JT and NN may contain
actions, such as kill job, delete file, etc., that should
not be exposed to the public. Enable this option if the interfaces
are only reachable by those who have the right authorization.
</td>
</tr>
<tr>
<td><a name="hadoop.rpc.socket.factory.class.default">hadoop.rpc.socket.factory.class.default</a></td><td>org.apache.hadoop.net.StandardSocketFactory</td><td> Default SocketFactory to use. This parameter is expected to be
formatted as "package.FactoryClassName".
</td>
</tr>
<tr>
<td><a name="hadoop.rpc.socket.factory.class.ClientProtocol">hadoop.rpc.socket.factory.class.ClientProtocol</a></td><td></td><td> SocketFactory to use to connect to a DFS. If null or empty, use
hadoop.rpc.socket.factory.class.default. This socket factory is also used by
DFSClient to create sockets to DataNodes.
</td>
</tr>
<tr>
<td><a name="hadoop.rpc.socket.factory.class.JobSubmissionProtocol">hadoop.rpc.socket.factory.class.JobSubmissionProtocol</a></td><td></td><td> SocketFactory to use to connect to a Map/Reduce master
(JobTracker). If null or empty, then use hadoop.rpc.socket.factory.class.default.
</td>
</tr>
<tr>
<td><a name="hadoop.socks.server">hadoop.socks.server</a></td><td></td><td> Address (host:port) of the SOCKS server to be used by the
SocksSocketFactory.
</td>
</tr>
<tr>
<td><a name="topology.node.switch.mapping.impl">topology.node.switch.mapping.impl</a></td><td>org.apache.hadoop.net.ScriptBasedMapping</td><td> The default implementation of the DNSToSwitchMapping. It
invokes a script specified in topology.script.file.name to resolve
node names. If the value for topology.script.file.name is not set, the
default value of DEFAULT_RACK is returned for all node names.
</td>
</tr>
<tr>
<td><a name="topology.script.file.name">topology.script.file.name</a></td><td></td><td> The script name that should be invoked to resolve DNS names to
NetworkTopology names. Example: the script would take host.foo.bar as an
argument, and return /rack1 as the output.
</td>
</tr>
<tr>
<td><a name="topology.script.number.args">topology.script.number.args</a></td><td>20</td><td> The max number of args that the script configured with
topology.script.file.name should be run with. Each arg is an
IP address.
</td>
</tr>
<tr>
<td><a name="mapred.task.cache.levels">mapred.task.cache.levels</a></td><td>2</td><td> This is the max level of the task cache. For example, if
the level is 2, the tasks cached are at the host level and at the rack
level.
</td>
</tr>
</table>
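<p>
The values above are defaults. A site overrides individual properties in
conf/hadoop-site.xml, which is read after hadoop-default.xml and takes
precedence over it. Below is a minimal illustrative sketch of such an
override file; the host namenode.example.com:9000 and the values chosen
are hypothetical placeholders, not recommendations.
</p>
<pre>
&lt;?xml version="1.0"?&gt;
&lt;configuration&gt;

  &lt;!-- Point clients at a running namenode instead of the default
       local filesystem (file:///); host and port are hypothetical. --&gt;
  &lt;property&gt;
    &lt;name&gt;fs.default.name&lt;/name&gt;
    &lt;value&gt;hdfs://namenode.example.com:9000/&lt;/value&gt;
  &lt;/property&gt;

  &lt;!-- Example override of the default block replication (3). --&gt;
  &lt;property&gt;
    &lt;name&gt;dfs.replication&lt;/name&gt;
    &lt;value&gt;2&lt;/value&gt;
  &lt;/property&gt;

  &lt;!-- Example override of the per-task child JVM heap. --&gt;
  &lt;property&gt;
    &lt;name&gt;mapred.child.java.opts&lt;/name&gt;
    &lt;value&gt;-Xmx512m&lt;/value&gt;
  &lt;/property&gt;

&lt;/configuration&gt;
</pre>
<p>
Properties not mentioned in the override file keep the defaults listed in
the table; most of them can still be overridden again per job at job
submission time.
</p>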
</body>
</html>