<html>
<body>
<table border="1">
<tr>
<td>name</td><td>value</td><td>description</td>
</tr>
<tr>
<td><a name="dfs.namenode.logging.level">dfs.namenode.logging.level</a></td><td>info</td><td>The logging level for the dfs namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over-replication and block creations/deletions), or "all".</td>
</tr>
<tr>
<td><a name="dfs.secondary.http.address">dfs.secondary.http.address</a></td><td>0.0.0.0:50090</td><td>
The secondary namenode http server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.address">dfs.datanode.address</a></td><td>0.0.0.0:50010</td><td>
The address on which the datanode server will listen.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.http.address">dfs.datanode.http.address</a></td><td>0.0.0.0:50075</td><td>
The datanode http server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.ipc.address">dfs.datanode.ipc.address</a></td><td>0.0.0.0:50020</td><td>
The datanode ipc server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.handler.count">dfs.datanode.handler.count</a></td><td>3</td><td>The number of server threads for the datanode.</td>
</tr>
<tr>
<td><a name="dfs.http.address">dfs.http.address</a></td><td>0.0.0.0:50070</td><td>
The address and base port on which the dfs namenode web UI will listen.
If the port is 0 then the server will start on a free port.
</td>
</tr>
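<tr>
<td colspan="3">Note: the values above are the shipped defaults and this file is not meant to be edited in place; site-specific values belong in hdfs-site.xml (hadoop-site.xml on older releases). As a sketch only, with a purely illustrative hostname, an override of the namenode web UI address looks like:
<pre>
&lt;!-- hdfs-site.xml: illustrative host, not a recommendation --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.http.address&lt;/name&gt;
  &lt;value&gt;namenode.example.com:50070&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>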
<tr>
<td><a name="dfs.https.enable">dfs.https.enable</a></td><td>false</td><td>Decides whether HTTPS (SSL) is supported on HDFS.
</td>
</tr>
<tr>
<td><a name="dfs.https.need.client.auth">dfs.https.need.client.auth</a></td><td>false</td><td>Whether SSL client certificate authentication is required.
</td>
</tr>
<tr>
<td><a name="dfs.https.server.keystore.resource">dfs.https.server.keystore.resource</a></td><td>ssl-server.xml</td><td>Resource file from which ssl server keystore
information will be extracted.
</td>
</tr>
<tr>
<td><a name="dfs.https.client.keystore.resource">dfs.https.client.keystore.resource</a></td><td>ssl-client.xml</td><td>Resource file from which ssl client keystore
information will be extracted.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.https.address">dfs.datanode.https.address</a></td><td>0.0.0.0:50475</td><td>The datanode https server address and port.</td>
</tr>
<tr>
<td><a name="dfs.https.address">dfs.https.address</a></td><td>0.0.0.0:50470</td><td>The namenode https server address and port.</td>
</tr>
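<tr>
<td colspan="3">A minimal sketch of turning HTTPS on, assuming keystores are already described by the ssl-server.xml resource named above (these go in hdfs-site.xml; requiring client certificates is a site policy decision):
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.https.enable&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.https.need.client.auth&lt;/name&gt;
  &lt;value&gt;false&lt;/value&gt;  &lt;!-- set true only if client certs are issued --&gt;
&lt;/property&gt;
</pre>
</td>
</tr>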
<tr>
<td><a name="dfs.datanode.dns.interface">dfs.datanode.dns.interface</a></td><td>default</td><td>The name of the network interface from which a data node should
report its IP address.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.dns.nameserver">dfs.datanode.dns.nameserver</a></td><td>default</td><td>The host name or IP address of the name server (DNS)
which a DataNode should use to determine the host name used by the
NameNode for communication and display purposes.
</td>
</tr>
<tr>
<td><a name="dfs.replication.considerLoad">dfs.replication.considerLoad</a></td><td>true</td><td>Decides whether chooseTarget considers the target's load.
</td>
</tr>
<tr>
<td><a name="dfs.default.chunk.view.size">dfs.default.chunk.view.size</a></td><td>32768</td><td>The number of bytes of a file to display in the browser.
</td>
</tr>
<tr>
<td><a name="dfs.datanode.du.reserved">dfs.datanode.du.reserved</a></td><td>0</td><td>Reserved space in bytes per volume. Always leave this much space free for non-dfs use.
</td>
</tr>
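<tr>
<td colspan="3">For instance, to keep roughly 10 GB free per volume for the OS and logs (the amount is illustrative; choose it per site):
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.datanode.du.reserved&lt;/name&gt;
  &lt;value&gt;10737418240&lt;/value&gt;  &lt;!-- 10 * 1024^3 bytes --&gt;
&lt;/property&gt;
</pre>
</td>
</tr>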
<tr>
<td><a name="dfs.name.dir">dfs.name.dir</a></td><td>${hadoop.tmp.dir}/dfs/name</td><td>Determines where on the local filesystem the DFS name node
should store the name table (fsimage). If this is a comma-delimited list
of directories then the name table is replicated in all of the
directories, for redundancy.</td>
</tr>
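<tr>
<td colspan="3">A sketch of the comma-delimited form, assuming two local paths and an NFS mount (all three paths are hypothetical). Because the name table is replicated into every listed directory, the loss of any one copy is survivable:
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.name.dir&lt;/name&gt;
  &lt;value&gt;/data/1/dfs/name,/data/2/dfs/name,/mnt/nfs/dfs/name&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>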
<tr>
<td><a name="dfs.name.edits.dir">dfs.name.edits.dir</a></td><td>${dfs.name.dir}</td><td>Determines where on the local filesystem the DFS name node
should store the transaction (edits) file. If this is a comma-delimited list
of directories then the transaction file is replicated in all of the
directories, for redundancy. The default value is the same as dfs.name.dir.
</td>
</tr>
<tr>
<td><a name="dfs.web.ugi">dfs.web.ugi</a></td><td>webuser,webgroup</td><td>The user account used by the web interface.
Syntax: USERNAME,GROUP1,GROUP2, ...
</td>
</tr>
<tr>
<td><a name="dfs.permissions">dfs.permissions</a></td><td>true</td><td>
If "true", enable permission checking in HDFS.
If "false", permission checking is turned off,
but all other behavior is unchanged.
Switching from one parameter value to the other does not change the mode,
owner or group of files or directories.
</td>
</tr>
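<tr>
<td colspan="3">As an illustration only (turning checking off is rarely appropriate outside test clusters), permission checking can be disabled without altering any file's mode, owner, or group:
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.permissions&lt;/name&gt;
  &lt;value&gt;false&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>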
<tr>
<td><a name="dfs.permissions.supergroup">dfs.permissions.supergroup</a></td><td>supergroup</td><td>The name of the group of super-users.</td>
</tr>
<tr>
<td><a name="dfs.data.dir">dfs.data.dir</a></td><td>${hadoop.tmp.dir}/dfs/data</td><td>Determines where on the local filesystem a DFS data node
should store its blocks. If this is a comma-delimited
list of directories, then data will be stored in all named
directories, typically on different devices.
Directories that do not exist are ignored.
</td>
</tr>
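<tr>
<td colspan="3">A sketch with one directory per physical disk (the mount points are hypothetical). Unlike dfs.name.dir, blocks are spread across the listed directories rather than replicated into each:
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.data.dir&lt;/name&gt;
  &lt;value&gt;/disk1/dfs/data,/disk2/dfs/data,/disk3/dfs/data&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>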
<tr>
<td><a name="dfs.replication">dfs.replication</a></td><td>3</td><td>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified at create time.
</td>
</tr>
<tr>
<td><a name="dfs.replication.max">dfs.replication.max</a></td><td>512</td><td>Maximum block replication.
</td>
</tr>
<tr>
<td><a name="dfs.replication.min">dfs.replication.min</a></td><td>1</td><td>Minimum block replication.
</td>
</tr>
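<tr>
<td colspan="3">For example, a small test cluster might lower the default replication factor (the value 2 is illustrative; a factor requested at create time still takes precedence):
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.replication&lt;/name&gt;
  &lt;value&gt;2&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>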
<tr>
<td><a name="dfs.block.size">dfs.block.size</a></td><td>67108864</td><td>The default block size for new files, in bytes (67108864 = 64 MB).</td>
</tr>
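<tr>
<td colspan="3">Block size is given in bytes, so a 128 MB block is 128 * 1024 * 1024 = 134217728. Shown only as arithmetic; whether to raise the block size depends on the workload:
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.block.size&lt;/name&gt;
  &lt;value&gt;134217728&lt;/value&gt;  &lt;!-- 128 MB --&gt;
&lt;/property&gt;
</pre>
</td>
</tr>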
<tr>
<td><a name="dfs.df.interval">dfs.df.interval</a></td><td>60000</td><td>Disk usage statistics refresh interval, in milliseconds.</td>
</tr>
<tr>
<td><a name="dfs.client.block.write.retries">dfs.client.block.write.retries</a></td><td>3</td><td>The number of retries for writing blocks to the data nodes,
before we signal failure to the application.
</td>
</tr>
<tr>
<td><a name="dfs.blockreport.intervalMsec">dfs.blockreport.intervalMsec</a></td><td>3600000</td><td>Determines block reporting interval in milliseconds.</td>
</tr>
<tr>
<td><a name="dfs.blockreport.initialDelay">dfs.blockreport.initialDelay</a></td><td>0</td><td>Delay for first block report in seconds.</td>
</tr>
<tr>
<td><a name="dfs.heartbeat.interval">dfs.heartbeat.interval</a></td><td>3</td><td>Determines datanode heartbeat interval in seconds.</td>
</tr>
<tr>
<td><a name="dfs.namenode.handler.count">dfs.namenode.handler.count</a></td><td>10</td><td>The number of server threads for the namenode.</td>
</tr>
<tr>
<td><a name="dfs.safemode.threshold.pct">dfs.safemode.threshold.pct</a></td><td>0.999f</td><td>
Specifies the percentage of blocks that should satisfy
the minimal replication requirement defined by dfs.replication.min.
Values less than or equal to 0 mean not to start in safe mode.
Values greater than 1 will make safe mode permanent.
</td>
</tr>
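<tr>
<td colspan="3">Worked reading of the default: with 0.999 and dfs.replication.min of 1, the namenode waits until at least 99.9% of blocks have one reported replica before leaving safe mode (plus the dfs.safemode.extension below). To skip safe mode entirely, illustrative only and not advice:
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.safemode.threshold.pct&lt;/name&gt;
  &lt;value&gt;0&lt;/value&gt;  &lt;!-- &lt;= 0 means do not start in safe mode --&gt;
&lt;/property&gt;
</pre>
</td>
</tr>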
<tr>
<td><a name="dfs.safemode.extension">dfs.safemode.extension</a></td><td>30000</td><td>
Determines extension of safe mode in milliseconds
after the threshold level is reached.
</td>
</tr>
<tr>
<td><a name="dfs.balance.bandwidthPerSec">dfs.balance.bandwidthPerSec</a></td><td>1048576</td><td>
Specifies the maximum amount of bandwidth that each datanode
can utilize for balancing, in bytes per second.
</td>
</tr>
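<tr>
<td colspan="3">The default of 1048576 is 1 MB/s per datanode. Raising it, say to 10 MB/s while a rebalance runs (the figure is illustrative), trades application I/O for faster balancing:
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.balance.bandwidthPerSec&lt;/name&gt;
  &lt;value&gt;10485760&lt;/value&gt;  &lt;!-- 10 * 1024 * 1024 bytes per second --&gt;
&lt;/property&gt;
</pre>
</td>
</tr>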
<tr>
<td><a name="dfs.hosts">dfs.hosts</a></td><td></td><td>Names a file that contains a list of hosts that are
permitted to connect to the namenode. The full pathname of the file
must be specified. If the value is empty, all hosts are
permitted.</td>
</tr>
<tr>
<td><a name="dfs.hosts.exclude">dfs.hosts.exclude</a></td><td></td><td>Names a file that contains a list of hosts that are
not permitted to connect to the namenode. The full pathname of the
file must be specified. If the value is empty, no hosts are
excluded.</td>
</tr>
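<tr>
<td colspan="3">A sketch of the include/exclude pair (the file paths are hypothetical; each file lists one host per line, and adding a host to the exclude file is the usual way to start decommissioning it):
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.hosts&lt;/name&gt;
  &lt;value&gt;/etc/hadoop/dfs.include&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.hosts.exclude&lt;/name&gt;
  &lt;value&gt;/etc/hadoop/dfs.exclude&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>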
<tr>
<td><a name="dfs.max.objects">dfs.max.objects</a></td><td>0</td><td>The maximum number of files, directories and blocks
dfs supports. A value of zero indicates no limit to the number
of objects that dfs supports.
</td>
</tr>
<tr>
<td><a name="dfs.namenode.decommission.interval">dfs.namenode.decommission.interval</a></td><td>30</td><td>The interval in seconds at which the namenode checks whether
decommission is complete.</td>
</tr>
<tr>
<td><a name="dfs.namenode.decommission.nodes.per.interval">dfs.namenode.decommission.nodes.per.interval</a></td><td>5</td><td>The number of nodes the namenode checks for completed decommission
in each dfs.namenode.decommission.interval.</td>
</tr>
<tr>
<td><a name="dfs.replication.interval">dfs.replication.interval</a></td><td>3</td><td>The periodicity in seconds with which the namenode computes
replication work for datanodes.</td>
</tr>
<tr>
<td><a name="dfs.access.time.precision">dfs.access.time.precision</a></td><td>3600000</td><td>The access time for an HDFS file is precise up to this value.
The default value is 1 hour. Setting a value of 0 disables
access times for HDFS.
</td>
</tr>
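<tr>
<td colspan="3">For example, to turn access-time tracking off entirely (whether that trade-off is acceptable depends on whether anything at the site reads atimes):
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.access.time.precision&lt;/name&gt;
  &lt;value&gt;0&lt;/value&gt;  &lt;!-- 0 disables access times --&gt;
&lt;/property&gt;
</pre>
</td>
</tr>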
</table>
</body>
</html>