~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~   http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.

  ---
  HDFS Users Guide
  ---
  ---
  ${maven.build.timestamp}

HDFS Users Guide

%{toc|section=1|fromDepth=0}

* Purpose

  This document is a starting point for users working with Hadoop
  Distributed File System (HDFS), either as a part of a Hadoop cluster
  or as a stand-alone general purpose distributed file system. While
  HDFS is designed to "just work" in many environments, a working
  knowledge of HDFS helps greatly with configuration improvements and
  diagnostics on a specific cluster.

* Overview

  HDFS is the primary distributed storage used by Hadoop applications.
  An HDFS cluster primarily consists of a NameNode that manages the
  file system metadata and DataNodes that store the actual data. The
  HDFS Architecture Guide describes HDFS in detail. This user guide
  primarily deals with the interaction of users and administrators
  with HDFS clusters. The HDFS architecture diagram depicts basic
  interactions among the NameNode, the DataNodes, and the clients.
  Clients contact the NameNode for file metadata or file modifications
  and perform actual file I/O directly with the DataNodes.

  The following are some of the salient features that could be of
  interest to many users.

   * Hadoop, including HDFS, is well suited for distributed storage
     and distributed processing using commodity hardware. It is fault
     tolerant, scalable, and extremely simple to expand. MapReduce,
     well known for its simplicity and applicability for a large set
     of distributed applications, is an integral part of Hadoop.

   * HDFS is highly configurable, with a default configuration well
     suited for many installations. Most of the time, configuration
     needs to be tuned only for very large clusters.

   * Hadoop is written in Java and is supported on all major platforms.

   * Hadoop supports shell-like commands to interact with HDFS directly.

   * The NameNode and DataNodes have built-in web servers that make it
     easy to check the current status of the cluster.

   * New features and improvements are regularly implemented in HDFS.
     The following is a subset of useful features in HDFS:

     * File permissions and authentication.

     * Rack awareness: to take a node's physical location into
       account while scheduling tasks and allocating storage.

     * Safemode: an administrative mode for maintenance.

     * <<<fsck>>>: a utility to diagnose health of the file system and
       to find missing files or blocks.

     * <<<fetchdt>>>: a utility to fetch a DelegationToken and store it
       in a file on the local system.

     * Rebalancer: a tool to balance the cluster when the data is
       unevenly distributed among DataNodes.

     * Upgrade and rollback: after a software upgrade, it is possible
       to roll back to HDFS' state before the upgrade in case of
       unexpected problems.

     * Secondary NameNode: performs periodic checkpoints of the
       namespace and helps keep the size of the file containing the
       log of HDFS modifications within certain limits at the
       NameNode.

     * Checkpoint node: performs periodic checkpoints of the
       namespace and helps minimize the size of the log stored at the
       NameNode containing changes to the HDFS. Replaces the role
       previously filled by the Secondary NameNode, though it is not
       yet battle hardened. The NameNode allows multiple Checkpoint
       nodes simultaneously, as long as there are no Backup nodes
       registered with the system.

     * Backup node: an extension to the Checkpoint node. In addition
       to checkpointing, it also receives a stream of edits from the
       NameNode and maintains its own in-memory copy of the namespace,
       which is always in sync with the active NameNode namespace
       state. Only one Backup node may be registered with the NameNode
       at once.

* Prerequisites

  The following documents describe how to install and set up a Hadoop
  cluster:

   * {{Single Node Setup}} for first-time users.

   * {{Cluster Setup}} for large, distributed clusters.

  The rest of this document assumes the user is able to set up and run
  an HDFS with at least one DataNode. For the purpose of this document,
  both the NameNode and DataNode could be running on the same physical
  machine.

* Web Interface

  NameNode and DataNode each run an internal web server in order to
  display basic information about the current status of the cluster.
  With the default configuration, the NameNode front page is at
  <<<http://namenode-name:50070/>>>. It lists the DataNodes in the
  cluster and basic statistics of the cluster. The web interface can
  also be used to browse the file system (using the "Browse the file
  system" link on the NameNode front page).

* Shell Commands

  Hadoop includes various shell-like commands that directly interact
  with HDFS and other file systems that Hadoop supports. The command
  <<<bin/hdfs dfs -help>>> lists the commands supported by the Hadoop
  shell. Furthermore, the command <<<bin/hdfs dfs -help command-name>>>
  displays more detailed help for a command. These commands support
  most of the normal file system operations like copying files,
  changing file permissions, etc. They also support a few HDFS
  specific operations like changing the replication of files. For more
  information see the {{{File System Shell Guide}}}.
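
  For instance, a short session might look like the following; the
  paths and replication factor shown are illustrative only:

----
   # list the commands supported by the Hadoop shell
   bin/hdfs dfs -help

   # create a directory and copy a local file into it
   bin/hdfs dfs -mkdir -p /user/alice/data
   bin/hdfs dfs -put localfile.txt /user/alice/data/

   # an HDFS specific operation: change the replication of a file to 2
   bin/hdfs dfs -setrep 2 /user/alice/data/localfile.txt
----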

** DFSAdmin Command

  The <<<bin/hadoop dfsadmin>>> command supports a few HDFS
  administration related operations. The <<<bin/hadoop dfsadmin -help>>>
  command lists all the commands currently supported. For example (a
  short sketch of invocations follows this list):

   * <<<-report>>>: reports basic statistics of HDFS. Some of this
     information is also available on the NameNode front page.

   * <<<-safemode>>>: though usually not required, an administrator can
     manually enter or leave Safemode.

   * <<<-finalizeUpgrade>>>: removes the previous backup of the cluster
     made during the last upgrade.

   * <<<-refreshNodes>>>: updates the namenode with the set of datanodes
     allowed to connect to the namenode. Namenodes re-read datanode
     hostnames in the files defined by <<<dfs.hosts>>> and
     <<<dfs.hosts.exclude>>>. Hosts defined in <<<dfs.hosts>>> are the
     datanodes that are part of the cluster. If there are entries in
     <<<dfs.hosts>>>, only the hosts in it are allowed to register with
     the namenode. Entries in <<<dfs.hosts.exclude>>> are datanodes
     that need to be decommissioned. Datanodes complete decommissioning
     when all the replicas from them are replicated to other datanodes.
     Decommissioned nodes are not automatically shut down and are not
     chosen for writing new replicas.

   * <<<-printTopology>>>: prints the topology of the cluster. Displays
     a tree of racks and the datanodes attached to those racks, as
     viewed by the NameNode.
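
  For illustration, the following invocations exercise a few of these
  options; they assume the user running them is the HDFS superuser:

----
   # print basic file system statistics
   bin/hadoop dfsadmin -report

   # check whether the NameNode is in Safemode
   bin/hadoop dfsadmin -safemode get

   # print the rack/datanode topology as seen by the NameNode
   bin/hadoop dfsadmin -printTopology
----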

  For command usage, see {{{dfsadmin}}}.

* Secondary NameNode

  The NameNode stores modifications to the file system as a log
  appended to a native file system file, edits. When a NameNode starts
  up, it reads HDFS state from an image file, fsimage, and then
  applies edits from the edits log file. It then writes new HDFS state
  to the fsimage and starts normal operation with an empty edits file.
  Since the NameNode merges fsimage and edits files only during start
  up, the edits log file could get very large over time on a busy
  cluster. Another side effect of a larger edits file is that the next
  restart of the NameNode takes longer.

  The secondary NameNode merges the fsimage and the edits log files
  periodically and keeps the edits log size within a limit. It is
  usually run on a different machine than the primary NameNode since
  its memory requirements are on the same order as the primary
  NameNode.

  The start of the checkpoint process on the secondary NameNode is
  controlled by two configuration parameters (a configuration sketch
  follows this list):

   * <<<dfs.namenode.checkpoint.period>>>, set to 1 hour by default,
     specifies the maximum delay between two consecutive checkpoints,
     and

   * <<<dfs.namenode.checkpoint.txns>>>, set to 40000 by default,
     defines the number of uncheckpointed transactions on the NameNode
     which will force an urgent checkpoint, even if the checkpoint
     period has not been reached.
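
  As an illustration, both parameters can be set in
  <<<hdfs-site.xml>>>; the values below are examples, not
  recommendations:

----
   <property>
     <name>dfs.namenode.checkpoint.period</name>
     <value>3600</value> <!-- seconds between two checkpoints -->
   </property>
   <property>
     <name>dfs.namenode.checkpoint.txns</name>
     <value>40000</value> <!-- uncheckpointed txns forcing a checkpoint -->
   </property>
----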

  The secondary NameNode stores the latest checkpoint in a directory
  which is structured the same way as the primary NameNode's
  directory, so that the checkpointed image is always ready to be read
  by the primary NameNode if necessary.

  For command usage, see {{{secondarynamenode}}}.

* Checkpoint Node

  The NameNode persists its namespace using two files: fsimage, which
  is the latest checkpoint of the namespace, and edits, a journal
  (log) of changes to the namespace since the checkpoint. When a
  NameNode starts up, it merges the fsimage and edits journal to
  provide an up-to-date view of the file system metadata. The NameNode
  then overwrites fsimage with the new HDFS state and begins a new
  edits journal.

  The Checkpoint node periodically creates checkpoints of the
  namespace. It downloads fsimage and edits from the active NameNode,
  merges them locally, and uploads the new image back to the active
  NameNode. The Checkpoint node usually runs on a different machine
  than the NameNode since its memory requirements are on the same
  order as the NameNode. The Checkpoint node is started by
  <<<bin/hdfs namenode -checkpoint>>> on the node specified in the
  configuration file.

  The location of the Checkpoint (or Backup) node and its accompanying
  web interface are configured via the <<<dfs.namenode.backup.address>>>
  and <<<dfs.namenode.backup.http-address>>> configuration variables.
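
  For example, these variables might be set as follows in
  <<<hdfs-site.xml>>>; the host name and port numbers are placeholders
  chosen for illustration:

----
   <property>
     <name>dfs.namenode.backup.address</name>
     <value>checkpoint-host:50100</value>
   </property>
   <property>
     <name>dfs.namenode.backup.http-address</name>
     <value>checkpoint-host:50105</value>
   </property>
----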

  The start of the checkpoint process on the Checkpoint node is
  controlled by two configuration parameters.

   * <<<dfs.namenode.checkpoint.period>>>, set to 1 hour by default,
     specifies the maximum delay between two consecutive checkpoints.

   * <<<dfs.namenode.checkpoint.txns>>>, set to 40000 by default,
     defines the number of uncheckpointed transactions on the NameNode
     which will force an urgent checkpoint, even if the checkpoint
     period has not been reached.

  The Checkpoint node stores the latest checkpoint in a directory that
  is structured the same as the NameNode's directory. This allows the
  checkpointed image to be always available for reading by the
  NameNode if necessary. See Import checkpoint.

  Multiple Checkpoint nodes may be specified in the cluster
  configuration file.

  For command usage, see {{{namenode}}}.

* Backup Node

  The Backup node provides the same checkpointing functionality as the
  Checkpoint node, as well as maintaining an in-memory, up-to-date
  copy of the file system namespace that is always synchronized with
  the active NameNode state. Along with accepting a journal stream of
  file system edits from the NameNode and persisting this to disk, the
  Backup node also applies those edits into its own copy of the
  namespace in memory, thus creating a backup of the namespace.

  The Backup node does not need to download fsimage and edits files
  from the active NameNode in order to create a checkpoint, as would
  be required with a Checkpoint node or Secondary NameNode, since it
  already has an up-to-date state of the namespace in memory. The
  Backup node checkpoint process is more efficient as it only needs to
  save the namespace into the local fsimage file and reset edits.

  As the Backup node maintains a copy of the namespace in memory, its
  RAM requirements are the same as the NameNode's.

  The NameNode supports one Backup node at a time. No Checkpoint nodes
  may be registered if a Backup node is in use. Using multiple Backup
  nodes concurrently will be supported in the future.

  The Backup node is configured in the same manner as the Checkpoint
  node. It is started with <<<bin/hdfs namenode -backup>>>.

  The location of the Backup (or Checkpoint) node and its accompanying
  web interface are configured via the <<<dfs.namenode.backup.address>>>
  and <<<dfs.namenode.backup.http-address>>> configuration variables.

  Use of a Backup node provides the option of running the NameNode
  with no persistent storage, delegating all responsibility for
  persisting the state of the namespace to the Backup node. To do
  this, start the NameNode with the <<<-importCheckpoint>>> option,
  along with specifying no persistent storage directories of type
  edits (<<<dfs.namenode.edits.dir>>>) in the NameNode configuration.

  For a complete discussion of the motivation behind the creation of
  the Backup node and Checkpoint node, see
  {{{https://issues.apache.org/jira/browse/HADOOP-4539}HADOOP-4539}}.

  For command usage, see {{{namenode}}}.

* Import Checkpoint

  The latest checkpoint can be imported to the NameNode if all other
  copies of the image and the edits files are lost. In order to do
  that, one should:

   * create an empty directory specified in the
     <<<dfs.namenode.name.dir>>> configuration variable;

   * specify the location of the checkpoint directory in the
     configuration variable <<<dfs.namenode.checkpoint.dir>>>;

   * and start the NameNode with the <<<-importCheckpoint>>> option.

  The NameNode will upload the checkpoint from the
  <<<dfs.namenode.checkpoint.dir>>> directory and then save it to the
  NameNode directories set in <<<dfs.namenode.name.dir>>>. The NameNode
  will fail if a legal image is contained in
  <<<dfs.namenode.name.dir>>>. The NameNode verifies that the image in
  <<<dfs.namenode.checkpoint.dir>>> is consistent, but does not modify
  it in any way.
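
  As a sketch, assuming the metadata directories are configured as
  <<</data/dfs/name>>> and <<</data/dfs/namesecondary>>> (illustrative
  paths only), the procedure might look like:

----
   # dfs.namenode.name.dir must point at an empty directory
   mkdir -p /data/dfs/name

   # dfs.namenode.checkpoint.dir (/data/dfs/namesecondary here) must
   # already contain the latest checkpoint, e.g. one taken by a
   # secondary/Checkpoint node

   # start the NameNode, importing the checkpoint
   bin/hdfs namenode -importCheckpoint
----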

  For command usage, see {{{namenode}}}.

* Rebalancer

  HDFS data might not always be placed uniformly across the DataNodes.
  One common reason is the addition of new DataNodes to an existing
  cluster. While placing new blocks (data for a file is stored as a
  series of blocks), the NameNode considers various parameters before
  choosing the DataNodes to receive these blocks. Some of the
  considerations are:

   * the policy to keep one of the replicas of a block on the same
     node as the node that is writing the block;

   * the need to spread different replicas of a block across the racks
     so that the cluster can survive the loss of a whole rack;

   * one of the replicas is usually placed on the same rack as the
     node writing to the file so that cross-rack network I/O is
     reduced;

   * spreading HDFS data uniformly across the DataNodes in the
     cluster.

  Due to multiple competing considerations, data might not be
  uniformly placed across the DataNodes. HDFS provides a tool for
  administrators that analyzes block placement and rebalances data
  across the DataNodes. A brief administrator's guide for the
  rebalancer is attached as a PDF to
  {{{https://issues.apache.org/jira/browse/HADOOP-1652}HADOOP-1652}}.
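
  As an illustration, the balancer can be run until the cluster is
  deemed balanced; the threshold value below is an example, not a
  recommendation:

----
   # move blocks until each DataNode's utilization is within
   # 5 percentage points of the cluster-wide average
   bin/hdfs balancer -threshold 5
----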

  For command usage, see {{{balancer}}}.

* Rack Awareness

  Typically, large Hadoop clusters are arranged in racks, and network
  traffic between different nodes within the same rack is much more
  desirable than network traffic across racks. In addition, the
  NameNode tries to place replicas of a block on multiple racks for
  improved fault tolerance. Hadoop lets the cluster administrators
  decide which rack a node belongs to through the configuration
  variable <<<net.topology.script.file.name>>>. When this script is
  configured, each node runs the script to determine its rack id. A
  default installation assumes all the nodes belong to the same rack.
  This feature and configuration is further described in the PDF
  attached to
  {{{https://issues.apache.org/jira/browse/HADOOP-692}HADOOP-692}}.
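
  A minimal sketch of such a topology script is shown below; the
  subnets and rack names are hypothetical, and a real script would
  need to cover every host in the cluster:

----
   #!/bin/bash
   # Invoked by Hadoop with one or more IPs/hostnames as arguments;
   # must print one rack id per argument, in order.
   for host in "$@"; do
     case "$host" in
       10.1.1.*) echo /rack1 ;;
       10.1.2.*) echo /rack2 ;;
       *)        echo /default-rack ;;
     esac
   done
----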

* Safemode

  During start up, the NameNode loads the file system state from the
  fsimage and the edits log file. It then waits for DataNodes to
  report their blocks so that it does not prematurely start
  replicating the blocks even though enough replicas already exist in
  the cluster. During this time the NameNode stays in Safemode.
  Safemode for the NameNode is essentially a read-only mode for the
  HDFS cluster, where it does not allow any modifications to the file
  system or blocks. Normally the NameNode leaves Safemode
  automatically after the DataNodes have reported that most file
  system blocks are available. If required, HDFS can be placed in
  Safemode explicitly using the <<<bin/hadoop dfsadmin -safemode>>>
  command. The NameNode front page shows whether Safemode is on or
  off. A more detailed description and configuration is maintained as
  JavaDoc for <<<setSafeMode()>>>.
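
  For example, an administrator might query and toggle Safemode as
  follows:

----
   # report whether Safemode is currently on
   bin/hadoop dfsadmin -safemode get

   # enter Safemode for maintenance, then leave it again
   bin/hadoop dfsadmin -safemode enter
   bin/hadoop dfsadmin -safemode leave
----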

* fsck

  HDFS supports the <<<fsck>>> command to check for various
  inconsistencies. It is designed for reporting problems with various
  files, for example, missing blocks for a file or under-replicated
  blocks. Unlike a traditional fsck utility for native file systems,
  this command does not correct the errors it detects. Normally the
  NameNode automatically corrects most of the recoverable failures. By
  default fsck ignores open files but provides an option to select all
  files during reporting. The HDFS fsck command is not a Hadoop shell
  command. It can be run as <<<bin/hadoop fsck>>>. For command usage,
  see {{{fsck}}}. fsck can be run on the whole file system or on a
  subset of files.
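
  For instance, the invocations below check the whole namespace and a
  single subtree; the path is illustrative:

----
   # check the entire file system, listing files, blocks and locations
   bin/hadoop fsck / -files -blocks -locations

   # check only a subtree, including files currently open for writing
   bin/hadoop fsck /user/alice -openforwrite
----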

* fetchdt

  HDFS supports the <<<fetchdt>>> command to fetch a Delegation Token
  and store it in a file on the local system. This token can later be
  used to access a secure server (the NameNode, for example) from a
  non-secure client. The utility uses either RPC or HTTPS (over
  Kerberos) to get the token, and thus requires Kerberos tickets to be
  present before the run (run kinit to get the tickets). The HDFS
  fetchdt command is not a Hadoop shell command. It can be run as
  <<<bin/hadoop fetchdt DTfile>>>. After you have obtained the token,
  you can run an HDFS command without having Kerberos tickets, by
  pointing the <<<HADOOP_TOKEN_FILE_LOCATION>>> environment variable to
  the delegation token file. For command usage, see the {{{fetchdt}}}
  command.
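
  A sketch of the round trip, assuming a working Kerberos setup, might
  look like the following; the token file name is arbitrary:

----
   # obtain Kerberos tickets, then fetch a delegation token
   kinit
   bin/hadoop fetchdt /tmp/alice.dt

   # later, without Kerberos tickets, use the stored token
   export HADOOP_TOKEN_FILE_LOCATION=/tmp/alice.dt
   bin/hdfs dfs -ls /
----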

* Recovery Mode

  Typically, you will configure multiple metadata storage locations.
  Then, if one storage location is corrupt, you can read the metadata
  from one of the other storage locations.

  However, what can you do if the only storage locations available are
  corrupt? In this case, there is a special NameNode startup mode
  called Recovery mode that may allow you to recover most of your
  data.

  You can start the NameNode in recovery mode like so:
  <<<namenode -recover>>>

  When in recovery mode, the NameNode will interactively prompt you at
  the command line about possible courses of action you can take to
  recover your data.

  If you don't want to be prompted, you can give the <<<-force>>>
  option. This option will force recovery mode to always select the
  first choice. Normally, this will be the most reasonable choice.

  Because Recovery mode can cause you to lose data, you should always
  back up your edit log and fsimage before using it.
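
  A cautious sequence might therefore look like the following; the
  metadata path is a placeholder for whatever
  <<<dfs.namenode.name.dir>>> points to on your cluster:

----
   # back up the existing fsimage and edit log first
   cp -r /data/dfs/name /data/dfs/name.backup

   # then attempt recovery, answering the interactive prompts
   bin/hdfs namenode -recover

   # or run non-interactively, always taking the first choice
   bin/hdfs namenode -recover -force
----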

* Upgrade and Rollback

  When Hadoop is upgraded on an existing cluster, as with any software
  upgrade, it is possible there are new bugs or incompatible changes
  that affect existing applications and were not discovered earlier.
  In any non-trivial HDFS installation, it is not an option to lose
  any data, let alone to restart HDFS from scratch. HDFS allows
  administrators to go back to the earlier version of Hadoop and roll
  back the cluster to the state it was in before the upgrade. HDFS
  upgrade is described in more detail in the {{{Hadoop Upgrade}}} Wiki
  page. HDFS can have one such backup at a time. Before upgrading,
  administrators need to remove the existing backup using the
  <<<bin/hadoop dfsadmin -finalizeUpgrade>>> command. The following
  briefly describes the typical upgrade procedure (a command sketch
  follows the list):

   * Before upgrading Hadoop software, finalize if there is an
     existing backup. <<<dfsadmin -upgradeProgress status>>> can tell
     if the cluster needs to be finalized.

   * Stop the cluster and distribute the new version of Hadoop.

   * Run the new version with the <<<-upgrade>>> option
     (<<<bin/start-dfs.sh -upgrade>>>).

   * Most of the time, the cluster works just fine. Once the new HDFS
     is considered to be working well (maybe after a few days of
     operation), finalize the upgrade. Note that until the cluster is
     finalized, deleting the files that existed before the upgrade
     does not free up real disk space on the DataNodes.

   * If there is a need to move back to the old version,

     * stop the cluster and distribute the earlier version of Hadoop.

     * start the cluster with the rollback option
       (<<<bin/start-dfs.sh -rollback>>>).
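
  Put together as shell commands, and assuming the new and old Hadoop
  versions have already been distributed to the nodes, the procedure
  sketched above is roughly:

----
   # before upgrading: finalize any previous upgrade
   bin/hadoop dfsadmin -upgradeProgress status
   bin/hadoop dfsadmin -finalizeUpgrade

   # stop the old cluster, then start the new version with -upgrade
   bin/stop-dfs.sh
   bin/start-dfs.sh -upgrade

   # ...after validating the new version for a while:
   bin/hadoop dfsadmin -finalizeUpgrade

   # or, to return to the old version instead:
   bin/stop-dfs.sh
   bin/start-dfs.sh -rollback
----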

* File Permissions and Security

  The file permissions are designed to be similar to file permissions
  on other familiar platforms like Linux. Currently, security is
  limited to simple file permissions. The user that starts the
  NameNode is treated as the superuser for HDFS. Future versions of
  HDFS will support network authentication protocols like Kerberos for
  user authentication and encryption of data transfers. The details
  are discussed in the Permissions Guide.

* Scalability

  Hadoop currently runs on clusters with thousands of nodes. The
  {{{PoweredBy}}} Wiki page lists some of the organizations that
  deploy Hadoop on large clusters. HDFS has one NameNode for each
  cluster. Currently the total memory available on the NameNode is the
  primary scalability limitation. On very large clusters, increasing
  the average size of files stored in HDFS helps with increasing
  cluster size without increasing memory requirements on the NameNode.
  The default configuration may not suit very large clusters. The
  {{{FAQ}}} Wiki page lists suggested configuration improvements for
  large Hadoop clusters.

* Related Documentation

  This user guide is a good starting point for working with HDFS.
  While the user guide continues to improve, there is a large wealth
  of documentation about Hadoop and HDFS. The following list is a
  starting point for further exploration:

   * {{{Hadoop Site}}}: the home page for the Apache Hadoop site.

   * {{{Hadoop Wiki}}}: the home page (FrontPage) for the Hadoop Wiki.
     Unlike the released documentation, which is part of the Hadoop
     source tree, the Hadoop Wiki is regularly edited by the Hadoop
     community.

   * {{{FAQ}}}: the FAQ Wiki page.

   * {{{Hadoop JavaDoc API}}}.

   * {{{Hadoop User Mailing List}}}: core-user[at]hadoop.apache.org.

   * Explore {{{src/hdfs/hdfs-default.xml}}}. It includes a brief
     description of most of the configuration variables available.

   * {{{Hadoop Commands Guide}}}: Hadoop commands usage.