
~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~ http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.
  ---
  Hadoop Distributed File System-${project.version} - Centralized Cache Management in HDFS
  ---
  ---
  ${maven.build.timestamp}

Centralized Cache Management in HDFS

\[ {{{./index.html}Go Back}} \]

%{toc|section=1|fromDepth=2|toDepth=4}

* {Overview}

  <Centralized cache management> in HDFS is an explicit caching mechanism that
  allows users to specify <paths> to be cached by HDFS. The NameNode will
  communicate with DataNodes that have the desired blocks on disk, and instruct
  them to cache the blocks in off-heap caches.

  Centralized cache management in HDFS has many significant advantages.

  [[1]] Explicit pinning prevents frequently used data from being evicted from
  memory. This is particularly important when the size of the working set
  exceeds the size of main memory, which is common for many HDFS workloads.

  [[1]] Because DataNode caches are managed by the NameNode, applications can
  query the set of cached block locations when making task placement decisions.
  Co-locating a task with a cached block replica improves read performance.

  [[1]] When a block has been cached by a DataNode, clients can use a new,
  more efficient, zero-copy read API. Since checksum verification of cached
  data is done once by the DataNode, clients can incur essentially zero
  overhead when using this new API.

  [[1]] Centralized caching can improve overall cluster memory utilization.
  When relying on the OS buffer cache at each DataNode, repeated reads of
  a block will result in all <n> replicas of the block being pulled into
  buffer cache. With centralized cache management, a user can explicitly pin
  only <m> of the <n> replicas, saving <n-m> memory.
* {Use Cases}

  Centralized cache management is useful for files that are accessed repeatedly.
  For example, a small <fact table> in Hive which is often used for joins is a
  good candidate for caching. On the other hand, caching the input of a
  <one year reporting query> is probably less useful, since the
  historical data might only be read once.

  Centralized cache management is also useful for mixed workloads with
  performance SLAs. Caching the working set of a high-priority workload
  ensures that it does not contend for disk I/O with a low-priority workload.
* {Architecture}

[images/caching.png] Caching Architecture

  In this architecture, the NameNode is responsible for coordinating all the
  DataNode off-heap caches in the cluster. The NameNode periodically receives
  a <cache report> from each DataNode which describes all the blocks cached
  on a given DN. The NameNode manages DataNode caches by piggybacking cache and
  uncache commands on the DataNode heartbeat.

  The NameNode queries its set of <cache directives> to determine
  which paths should be cached. Cache directives are persistently stored in the
  fsimage and edit log, and can be added, removed, and modified via Java and
  command-line APIs. The NameNode also stores a set of <cache pools>,
  which are administrative entities used to group cache directives together for
  resource management and enforcing permissions.

  The NameNode periodically rescans the namespace and active cache directives
  to determine which blocks need to be cached or uncached, and to assign caching
  work to DataNodes. Rescans can also be triggered by user actions like adding
  or removing a cache directive or removing a cache pool.

  We do not currently cache blocks which are under construction, corrupt, or
  otherwise incomplete. If a cache directive covers a symlink, the symlink
  target is not cached.

  Caching is currently done at the file or directory level. Block and sub-block
  caching are items of future work.
* {Concepts}

** {Cache directive}

  A <cache directive> defines a path that should be cached. Paths can be either
  directories or files. Directories are cached non-recursively, meaning only
  files in the first-level listing of the directory are cached.

  Directives also specify additional parameters, such as the cache replication
  factor and expiration time. The replication factor specifies the number of
  block replicas to cache. If multiple cache directives refer to the same file,
  the maximum cache replication factor is applied.

  The expiration time is specified on the command line as a <time-to-live
  (TTL)>, a relative expiration time in the future. After a cache directive
  expires, it is no longer considered by the NameNode when making caching
  decisions.

** {Cache pool}

  A <cache pool> is an administrative entity used to manage groups of cache
  directives. Cache pools have UNIX-like <permissions>, which restrict which
  users and groups have access to the pool. Write permissions allow users to
  add and remove cache directives to the pool. Read permissions allow users to
  list the cache directives in a pool, as well as additional metadata. Execute
  permissions are unused.

  Cache pools are also used for resource management. Pools can enforce a
  maximum <limit>, which restricts the number of bytes that can be cached in
  aggregate by directives in the pool. Normally, the sum of the pool limits
  will approximately equal the amount of aggregate memory reserved for
  HDFS caching on the cluster. Cache pools also track a number of statistics
  to help cluster users determine what is and should be cached.

  Pools can also enforce a maximum time-to-live. This restricts the maximum
  expiration time of directives being added to the pool.
* {<<<cacheadmin>>> command-line interface}

  On the command-line, administrators and users can interact with cache pools
  and directives via the <<<hdfs cacheadmin>>> subcommand.

  Cache directives are identified by a unique, non-repeating 64-bit integer ID.
  IDs will not be reused even if a cache directive is later removed.

  Cache pools are identified by a unique string name.
** {Cache directive commands}

*** {addDirective}

  Usage: <<<hdfs cacheadmin -addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]>>>

  Add a new cache directive.

*--+--+
\<path\> | A path to cache. The path can be a directory or a file.
*--+--+
\<pool-name\> | The pool to which the directive will be added. You must have write permission on the cache pool in order to add new directives.
*--+--+
-force | Skips checking of cache pool resource limits.
*--+--+
\<replication\> | The cache replication factor to use. Defaults to 1.
*--+--+
\<time-to-live\> | How long the directive is valid. Can be specified in minutes, hours, and days, e.g. 30m, 4h, 2d. Valid units are [smhd]. "never" indicates a directive that never expires. If unspecified, the directive never expires.
*--+--+
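
  For example, the options above can be combined to pin a Hive fact table
  (the path and pool name below are hypothetical, and the command requires a
  running cluster with an existing cache pool):

```shell
# Cache 2 replicas of a (hypothetical) Hive fact table for 4 hours.
# /user/hive/warehouse/fact.db and the "sales" pool are example names.
hdfs cacheadmin -addDirective -path /user/hive/warehouse/fact.db \
    -pool sales -replication 2 -ttl 4h
```

  On success, <<<cacheadmin>>> prints the ID assigned to the new directive,
  which can later be passed to <<<-removeDirective>>>.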
*** {removeDirective}

  Usage: <<<hdfs cacheadmin -removeDirective <id> >>>

  Remove a cache directive.

*--+--+
\<id\> | The id of the cache directive to remove. You must have write permission on the pool of the directive in order to remove it. To see a list of cache directive IDs, use the -listDirectives command.
*--+--+

*** {removeDirectives}

  Usage: <<<hdfs cacheadmin -removeDirectives <path> >>>

  Remove every cache directive with the specified path.

*--+--+
\<path\> | The path of the cache directives to remove. You must have write permission on the pool of each directive in order to remove it. To see a list of cache directives, use the -listDirectives command.
*--+--+
*** {listDirectives}

  Usage: <<<hdfs cacheadmin -listDirectives [-stats] [-path <path>] [-pool <pool>]>>>

  List cache directives.

*--+--+
\<path\> | List only cache directives with this path. Note that if there is a cache directive for <path> in a cache pool that we don't have read access for, it will not be listed.
*--+--+
\<pool\> | List only cache directives in that pool.
*--+--+
-stats | List path-based cache directive statistics.
*--+--+
** {Cache pool commands}

*** {addPool}

  Usage: <<<hdfs cacheadmin -addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]>>>

  Add a new cache pool.

*--+--+
\<name\> | Name of the new pool.
*--+--+
\<owner\> | Username of the owner of the pool. Defaults to the current user.
*--+--+
\<group\> | Group of the pool. Defaults to the primary group name of the current user.
*--+--+
\<mode\> | UNIX-style permissions for the pool. Permissions are specified in octal, e.g. 0755. By default, this is set to 0755.
*--+--+
\<limit\> | The maximum number of bytes that can be cached by directives in this pool, in aggregate. By default, no limit is set.
*--+--+
\<maxTtl\> | The maximum allowed time-to-live for directives being added to the pool. This can be specified in seconds, minutes, hours, and days, e.g. 120s, 30m, 4h, 2d. Valid units are [smhd]. By default, no maximum is set. A value of \"never\" specifies that there is no limit.
*--+--+
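
  As a sketch, a pool enforcing both a byte limit and a maximum TTL could be
  created as follows (the pool name, owner, and sizes are arbitrary examples,
  and the command requires a running cluster):

```shell
# Create a (hypothetical) "sales" pool: at most 10 GB cached in aggregate,
# directives may live at most 1 day, pool readable by group "analysts".
hdfs cacheadmin -addPool sales -owner hive -group analysts \
    -mode 0750 -limit 10000000000 -maxTtl 1d
```

  Note that the limit is expressed in bytes, matching the units of
  <<<dfs.datanode.max.locked.memory>>>.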
*** {modifyPool}

  Usage: <<<hdfs cacheadmin -modifyPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]>>>

  Modifies the metadata of an existing cache pool.

*--+--+
\<name\> | Name of the pool to modify.
*--+--+
\<owner\> | Username of the owner of the pool.
*--+--+
\<group\> | Groupname of the group of the pool.
*--+--+
\<mode\> | Unix-style permissions of the pool in octal.
*--+--+
\<limit\> | Maximum number of bytes that can be cached by this pool.
*--+--+
\<maxTtl\> | The maximum allowed time-to-live for directives being added to the pool.
*--+--+

*** {removePool}

  Usage: <<<hdfs cacheadmin -removePool <name> >>>

  Remove a cache pool. This also uncaches paths associated with the pool.

*--+--+
\<name\> | Name of the cache pool to remove.
*--+--+

*** {listPools}

  Usage: <<<hdfs cacheadmin -listPools [-stats] [<name>]>>>

  Display information about one or more cache pools, e.g. name, owner, group,
  permissions, etc.

*--+--+
-stats | Display additional cache pool statistics.
*--+--+
\<name\> | If specified, list only the named cache pool.
*--+--+

*** {help}

  Usage: <<<hdfs cacheadmin -help <command-name> >>>

  Get detailed help about a command.

*--+--+
\<command-name\> | The command for which to get detailed help. If no command is specified, print detailed help for all commands.
*--+--+
* {Configuration}

** {Native Libraries}

  In order to lock block files into memory, the DataNode relies on native JNI
  code found in <<<libhadoop.so>>>. Be sure to
  {{{../hadoop-common/NativeLibraries.html}enable JNI}} if you are using HDFS
  centralized cache management.

** {Configuration Properties}

*** Required

  Be sure to configure the following:

  * dfs.datanode.max.locked.memory

    This determines the maximum amount of memory a DataNode will use for caching.
    The "locked-in-memory size" ulimit (<<<ulimit -l>>>) of the DataNode user
    also needs to be increased to match this parameter (see below section on
    {{OS Limits}}). When setting this value, please remember that you will need
    space in memory for other things as well, such as the DataNode and
    application JVM heaps and the operating system page cache.
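
  For example, a DataNode could be allowed to lock up to 2 GiB for caching
  with an entry like the following in <<<hdfs-site.xml>>> (the 2 GiB figure is
  an arbitrary illustration; size it for your hardware):

```xml
<!-- hdfs-site.xml: allow the DataNode to lock up to 2 GiB for caching.
     The value is in bytes (2 * 1024^3 = 2147483648) and is an example only;
     leave headroom for the JVM heaps and the OS page cache. -->
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <value>2147483648</value>
</property>
```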
*** Optional

  The following properties are not required, but may be specified for tuning:

  * dfs.namenode.path.based.cache.refresh.interval.ms

    The NameNode will use this as the number of milliseconds between subsequent
    path cache rescans. Each rescan calculates the blocks to cache and, for
    each block, the DataNodes that should cache a replica of it.

    By default, this parameter is set to 300000, which is five minutes.

  * dfs.datanode.fsdatasetcache.max.threads.per.volume

    The DataNode will use this as the maximum number of threads per volume to
    use for caching new data.

    By default, this parameter is set to 4.

  * dfs.cachereport.intervalMsec

    The DataNode will use this as the number of milliseconds between sending a
    full report of its cache state to the NameNode.

    By default, this parameter is set to 10000, which is 10 seconds.

  * dfs.namenode.path.based.cache.block.map.allocation.percent

    The percentage of the Java heap which we will allocate to the cached blocks
    map. The cached blocks map is a hash map which uses chained hashing.
    Smaller maps may be accessed more slowly if the number of cached blocks is
    large; larger maps will consume more memory. The default is 0.25 percent.
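
  As an illustration, the tuning knobs above could be set explicitly in
  <<<hdfs-site.xml>>>; the values shown are the documented defaults, not
  recommendations:

```xml
<!-- hdfs-site.xml: optional cache tuning properties, shown at their defaults. -->
<property>
  <name>dfs.namenode.path.based.cache.refresh.interval.ms</name>
  <value>300000</value> <!-- rescan every 5 minutes -->
</property>
<property>
  <name>dfs.datanode.fsdatasetcache.max.threads.per.volume</name>
  <value>4</value> <!-- caching threads per volume -->
</property>
<property>
  <name>dfs.cachereport.intervalMsec</name>
  <value>10000</value> <!-- full cache report every 10 seconds -->
</property>
```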
** {OS Limits}

  If you get the error "Cannot start datanode because the configured max
  locked memory size... is more than the datanode's available RLIMIT_MEMLOCK
  ulimit," that means that the operating system is imposing a lower limit
  on the amount of memory that you can lock than what you have configured. To
  fix this, you must adjust the <<<ulimit -l>>> value that the DataNode runs with.
  Usually, this value is configured in <<</etc/security/limits.conf>>>.
  However, it will vary depending on what operating system and distribution
  you are using.

  You will know that you have correctly configured this value when you can run
  <<<ulimit -l>>> from the shell and get back either a higher value than what
  you have configured with <<<dfs.datanode.max.locked.memory>>>, or the string
  "unlimited," indicating that there is no limit. Note that it's typical for
  <<<ulimit -l>>> to output the memory lock limit in KB, but
  <<<dfs.datanode.max.locked.memory>>> must be specified in bytes.
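
  Because the two values use different units, it is easy to compare them
  incorrectly. A small sketch of the conversion (the 65536 KB figure is an
  arbitrary example memlock ulimit, not a recommendation):

```shell
# `ulimit -l` reports the memlock limit in KB;
# dfs.datanode.max.locked.memory is configured in bytes.
memlock_kb=65536                        # example output of `ulimit -l` (64 MiB)
memlock_bytes=$((memlock_kb * 1024))    # convert KB to bytes
echo "$memlock_bytes"
```

  dfs.datanode.max.locked.memory must be at most <<<memlock_bytes>>>, or the
  DataNode will fail to start with the RLIMIT_MEMLOCK error above.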