Build instructions for Hadoop

----------------------------------------------------------------------------------
Requirements:

* Unix System
* JDK 1.8
* Maven 3.3 or later
* Boost 1.72 (if compiling native code)
* Protocol Buffers 3.7.1 (if compiling native code)
* CMake 3.1 or newer (if compiling native code)
* Zlib devel (if compiling native code)
* Cyrus SASL devel (if compiling native code)
* One of the compilers that support thread_local storage: GCC 4.8.1 or later, Visual Studio,
  Clang (community version), Clang (version for iOS 9 and later) (if compiling native code)
* openssl devel (if compiling native hadoop-pipes and to get the best HDFS encryption performance)
* Linux FUSE (Filesystem in Userspace) version 2.6 or above (if compiling fuse_dfs)
* Doxygen (if compiling libhdfspp and generating the documents)
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
* python (for releasedocs)
* bats (for shell code testing)
* Node.js / bower / Ember-cli (for YARN UI v2 building)
----------------------------------------------------------------------------------
The easiest way to get an environment with all the appropriate tools is by means
of the provided Docker config.
This requires a recent version of docker (1.4.1 and higher are known to work).

On Linux / Mac:
  Install Docker and run this command:

  $ ./start-build-env.sh

The prompt that is then presented runs inside a container in which a version of
the source tree is mounted and all required tools for testing and building have
been installed and configured.

Note that from within this docker environment you ONLY have access to the Hadoop
source tree from where you started. So if you need to run

  dev-support/bin/test-patch /path/to/my.patch

then the patch must be placed inside the hadoop source tree.

Known issues:
- On Mac with Boot2Docker the performance on the mounted directory is currently
  extremely slow. This is a known problem related to boot2docker on the Mac.
  See:
    https://github.com/boot2docker/boot2docker/issues/593
  This issue has been resolved as a duplicate of
    https://github.com/boot2docker/boot2docker/issues/64
  which proposes a new feature utilizing NFS mounts as the solution.
  An alternative solution to this problem is to install Linux natively inside a
  virtual machine and run your IDE, Docker, etc. inside that VM.
----------------------------------------------------------------------------------
Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:

* Oracle JDK 1.8 (preferred)
  $ sudo apt-get purge openjdk*
  $ sudo apt-get install software-properties-common
  $ sudo add-apt-repository ppa:webupd8team/java
  $ sudo apt-get update
  $ sudo apt-get install oracle-java8-installer
* Maven
  $ sudo apt-get -y install maven
* Native libraries
  $ sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev libsasl2-dev
* Protocol Buffers 3.7.1 (required to build native code)
  $ mkdir -p /opt/protobuf-3.7-src \
      && curl -L -s -S \
        https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
        -o /opt/protobuf-3.7.1.tar.gz \
      && tar xzf /opt/protobuf-3.7.1.tar.gz --strip-components 1 -C /opt/protobuf-3.7-src \
      && cd /opt/protobuf-3.7-src \
      && ./configure \
      && make install \
      && rm -rf /opt/protobuf-3.7-src
* Boost
  $ curl -L https://sourceforge.net/projects/boost/files/boost/1.72.0/boost_1_72_0.tar.bz2/download > boost_1_72_0.tar.bz2 \
      && tar --bzip2 -xf boost_1_72_0.tar.bz2 \
      && cd boost_1_72_0 \
      && ./bootstrap.sh --prefix=/usr/ \
      && ./b2 --without-python install
Optional packages:

* Snappy compression (only used for hadoop-mapreduce-client-nativetask)
  $ sudo apt-get install snappy libsnappy-dev
* Intel ISA-L library for erasure coding
  Please refer to https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
  (OR https://github.com/01org/isa-l)
* Bzip2
  $ sudo apt-get install bzip2 libbz2-dev
* Linux FUSE
  $ sudo apt-get install fuse libfuse-dev
* ZStandard compression
  $ sudo apt-get install libzstd1-dev
* PMDK library for storage class memory (SCM) as HDFS cache backend
  Please refer to http://pmem.io/ and https://github.com/pmem/pmdk
----------------------------------------------------------------------------------
Maven main modules:

  hadoop                            (Main Hadoop project)
  - hadoop-project                  (Parent POM for all Hadoop Maven modules.
                                     All plugins & dependencies versions are defined here.)
  - hadoop-project-dist             (Parent POM for modules that generate distributions.)
  - hadoop-annotations              (Generates the Hadoop doclet used to generate the Javadocs.)
  - hadoop-assemblies               (Maven assemblies used by the different modules.)
  - hadoop-maven-plugins            (Maven plugins used in the project.)
  - hadoop-build-tools              (Build tools like checkstyle, etc.)
  - hadoop-common-project           (Hadoop Common)
  - hadoop-hdfs-project             (Hadoop HDFS)
  - hadoop-yarn-project             (Hadoop YARN)
  - hadoop-mapreduce-project        (Hadoop MapReduce)
  - hadoop-tools                    (Hadoop tools like Streaming, Distcp, etc.)
  - hadoop-dist                     (Hadoop distribution assembler)
  - hadoop-client-modules           (Hadoop client modules)
  - hadoop-minicluster              (Hadoop minicluster artifacts)
  - hadoop-cloud-storage-project    (Generates artifacts to access cloud storage like aws, azure, etc.)
----------------------------------------------------------------------------------
Where to run Maven from?

It can be run from any module. The only catch is that if not run from trunk,
all modules that are not part of the build run must be installed in the local
Maven cache or available in a Maven repository.
----------------------------------------------------------------------------------
Maven build goals:

* Clean                      : mvn clean [-Preleasedocs]
* Compile                    : mvn compile [-Pnative]
* Run tests                  : mvn test [-Pnative] [-Pshelltest]
* Create JAR                 : mvn package
* Run findbugs               : mvn compile findbugs:findbugs
* Run checkstyle             : mvn compile checkstyle:checkstyle
* Install JAR in M2 cache    : mvn install
* Deploy JAR to Maven repo   : mvn deploy
* Run clover                 : mvn test -Pclover [-DcloverLicenseLocation=${user.name}/.clover.license]
* Run Rat                    : mvn apache-rat:check
* Build javadocs             : mvn javadoc:javadoc
* Build distribution         : mvn package [-Pdist][-Pdocs][-Psrc][-Pnative][-Dtar][-Preleasedocs][-Pyarn-ui]
* Change Hadoop version      : mvn versions:set -DnewVersion=NEWVERSION

Build options:

* Use -Pnative to compile/bundle native code
* Use -Pdocs to generate & bundle the documentation in the distribution (using -Pdist)
* Use -Psrc to create a project source TAR.GZ
* Use -Dtar to create a TAR with the distribution (using -Pdist)
* Use -Preleasedocs to include the changelog and release docs (requires Internet connectivity)
* Use -Pyarn-ui to build YARN UI v2. (Requires Internet connectivity)
* Use -DskipShade to disable client jar shading to speed up build times (in
  development environments only, not to build release artifacts)
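
For example, a quick build for local development that combines the options
above to skip both tests and client jar shading might look like:

  $ mvn install -DskipTests -DskipShade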
YARN Application Timeline Service V2 build options:

YARN Timeline Service v.2 chooses Apache HBase as the primary backing storage. The supported
versions of Apache HBase are 1.2.6 (default) and 2.0.0-beta1.

* HBase 1.2.6 is used by default to build Hadoop. The official releases are ready to use if you
  plan on running Timeline Service v2 with HBase 1.2.6.
* Use -Dhbase.profile=2.0 to build Hadoop with HBase 2.0.0-beta1. Provide this option if you plan
  on running Timeline Service v2 with HBase 2.0.
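
For example, to create a binary distribution intended to run Timeline Service
v.2 against HBase 2.0:

  $ mvn package -Pdist -DskipTests -Dtar -Dhbase.profile=2.0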
Snappy build options:

Snappy is a compression library that can be utilized by the native code.
It is currently an optional component, meaning that Hadoop can be built with
or without this dependency. As an optional dependency, the Snappy library is
only used by hadoop-mapreduce-client-nativetask.

* Use -Drequire.snappy to fail the build if libsnappy.so is not found.
  If this option is not specified and the snappy library is missing,
  we silently build a version of libhadoop.so that cannot make use of snappy.
  This option is recommended if you plan on making use of snappy and want
  to get more repeatable builds.
* Use -Dsnappy.prefix to specify a nonstandard location for the libsnappy
  header files and library files. You do not need this option if you have
  installed snappy using a package manager.
* Use -Dsnappy.lib to specify a nonstandard location for the libsnappy library
  files. Similarly to snappy.prefix, you do not need this option if you have
  installed snappy using a package manager.
* Use -Dbundle.snappy to copy the contents of the snappy.lib directory into
  the final tar file. This option requires that -Dsnappy.lib is also given,
  and it ignores the -Dsnappy.prefix option. If -Dsnappy.lib isn't given, the
  bundling and building will fail.
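
For example, to fail the build unless Snappy is present and to bundle a copy
installed under a nonstandard prefix (the path is illustrative):

  $ mvn package -Pdist,native -DskipTests -Dtar \
      -Drequire.snappy -Dsnappy.lib=/opt/snappy/lib -Dbundle.snappy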
ZStandard build options:

ZStandard is a compression library that can be utilized by the native code.
It is currently an optional component, meaning that Hadoop can be built with
or without this dependency.

* Use -Drequire.zstd to fail the build if libzstd.so is not found.
  If this option is not specified and the zstd library is missing,
  we silently build a version of libhadoop.so that cannot make use of zstd.
* Use -Dzstd.prefix to specify a nonstandard location for the libzstd
  header files and library files. You do not need this option if you have
  installed zstandard using a package manager.
* Use -Dzstd.lib to specify a nonstandard location for the libzstd library
  files. Similarly to zstd.prefix, you do not need this option if you have
  installed zstandard using a package manager.
* Use -Dbundle.zstd to copy the contents of the zstd.lib directory into
  the final tar file. This option requires that -Dzstd.lib is also given,
  and it ignores the -Dzstd.prefix option. If -Dzstd.lib isn't given, the
  bundling and building will fail.
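
For example, analogously to the Snappy options above, to fail the build unless
zstd is present and to bundle a copy from an illustrative location:

  $ mvn package -Pdist,native -DskipTests -Dtar \
      -Drequire.zstd -Dzstd.lib=/opt/zstd/lib -Dbundle.zstd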
OpenSSL build options:

OpenSSL includes a crypto library that can be utilized by the native code.
It is currently an optional component, meaning that Hadoop can be built with
or without this dependency.

* Use -Drequire.openssl to fail the build if libcrypto.so is not found.
  If this option is not specified and the openssl library is missing,
  we silently build a version of libhadoop.so that cannot make use of
  openssl. This option is recommended if you plan on making use of openssl
  and want to get more repeatable builds.
* Use -Dopenssl.prefix to specify a nonstandard location for the libcrypto
  header files and library files. You do not need this option if you have
  installed openssl using a package manager.
* Use -Dopenssl.lib to specify a nonstandard location for the libcrypto library
  files. Similarly to openssl.prefix, you do not need this option if you have
  installed openssl using a package manager.
* Use -Dbundle.openssl to copy the contents of the openssl.lib directory into
  the final tar file. This option requires that -Dopenssl.lib is also given,
  and it ignores the -Dopenssl.prefix option. If -Dopenssl.lib isn't given, the
  bundling and building will fail.
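
For example, to fail the build unless OpenSSL is present, pointing at an
illustrative nonstandard prefix:

  $ mvn package -Pdist,native -DskipTests -Dtar \
      -Drequire.openssl -Dopenssl.prefix=/usr/local/ssl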
Test options:

* Use -DskipTests to skip tests when running the following Maven goals:
  'package', 'install', 'deploy' or 'verify'
* -Dtest=<TESTCLASSNAME>,<TESTCLASSNAME#METHODNAME>,....
* -Dtest.exclude=<TESTCLASSNAME>
* -Dtest.exclude.pattern=**/<TESTCLASSNAME1>.java,**/<TESTCLASSNAME2>.java
* To run all native unit tests, use: mvn test -Pnative -Dtest=allNative
* To run a specific native unit test, use: mvn test -Pnative -Dtest=<test>
  For example, to run test_bulk_crc32, you would use:
  mvn test -Pnative -Dtest=test_bulk_crc32
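
For example, to run specific Java test classes, or to exclude one (the class
names here are purely illustrative):

  $ mvn test -Dtest=TestClassOne,TestClassTwo#testMethod
  $ mvn test -Dtest.exclude.pattern=**/TestClassThree.java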
Intel ISA-L build options:

Intel ISA-L is an erasure coding library that can be utilized by the native code.
It is currently an optional component, meaning that Hadoop can be built with
or without this dependency. Note that the library is used via a dynamic module.
Please refer to the official site for the library details:
https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
(OR https://github.com/01org/isa-l)

* Use -Drequire.isal to fail the build if libisal.so is not found.
  If this option is not specified and the isal library is missing,
  we silently build a version of libhadoop.so that cannot make use of ISA-L and
  the native raw erasure coders.
  This option is recommended if you plan on making use of native raw erasure
  coders and want to get more repeatable builds.
* Use -Disal.prefix to specify a nonstandard location for the libisal
  library files. You do not need this option if you have installed ISA-L to the
  system library path.
* Use -Disal.lib to specify a nonstandard location for the libisal library
  files.
* Use -Dbundle.isal to copy the contents of the isal.lib directory into
  the final tar file. This option requires that -Disal.lib is also given,
  and it ignores the -Disal.prefix option. If -Disal.lib isn't given, the
  bundling and building will fail.
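
For example, to fail the build unless ISA-L is present and to bundle the
library from an illustrative location:

  $ mvn package -Pdist,native -DskipTests -Dtar \
      -Drequire.isal -Disal.lib=/usr/local/lib -Dbundle.isal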
Special plugins: OWASP's dependency-check:

OWASP's dependency-check plugin will scan the third party dependencies
of this project for known CVEs (security vulnerabilities against them).
It will produce a report in target/dependency-check-report.html. To
invoke, run 'mvn dependency-check:aggregate'. Note that this plugin
requires maven 3.1.1 or greater.
PMDK library build options:

The Persistent Memory Development Kit (PMDK), formerly known as NVML, is a growing
collection of libraries which have been developed for various use cases, tuned,
validated to production quality, and thoroughly documented. These libraries are built
on the Direct Access (DAX) feature available in both Linux and Windows, which allows
applications direct load/store access to persistent memory by memory-mapping files
on a persistent memory aware file system.

It is currently an optional component, meaning that Hadoop can be built without
this dependency. Please note that the library is used via a dynamic module. For
more details please refer to the official sites:
http://pmem.io/ and https://github.com/pmem/pmdk.

* -Drequire.pmdk is used to force a build with the PMDK libraries. With this
  option provided, the build will fail if the libpmem library is not found. If this
  option is not given, the build will still generate a version of Hadoop with
  libhadoop.so, and a storage class memory (SCM) backed HDFS cache is still supported
  without PMDK involved. Because PMDK can bring better cache write/read performance,
  building with this option is recommended if you plan to use an SCM backed HDFS cache.
* -Dpmdk.lib is used to specify a nonstandard location for the PMDK libraries if they
  are not under /usr/lib or /usr/lib64.
* -Dbundle.pmdk is used to copy the specified libpmem libraries into the distribution
  tar package. This option requires that -Dpmdk.lib is specified; with -Dbundle.pmdk
  provided, the build will fail if -Dpmdk.lib is not specified.
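
For example, to force a PMDK-enabled build and bundle the libraries into the
distribution tar (the library path is illustrative):

  $ mvn package -Pdist,native -DskipTests -Dtar \
      -Drequire.pmdk -Dpmdk.lib=/usr/lib64 -Dbundle.pmdk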
----------------------------------------------------------------------------------
Building components separately

If you are building a submodule directory, all the hadoop dependencies this
submodule has will be resolved like all other 3rd party dependencies: that is,
from the Maven cache or from a Maven repository (if not available in the cache
or if the SNAPSHOT has 'timed out').
An alternative is to run 'mvn install -DskipTests' from the Hadoop source top
level once, and then work from the submodule. Keep in mind that SNAPSHOTs
time out after a while; using the Maven '-nsu' option will stop Maven from
trying to update SNAPSHOTs from external repos.
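
For example (the submodule path is illustrative):

  $ mvn install -DskipTests        # once, from the source top level
  $ cd hadoop-common-project/hadoop-common
  $ mvn test -nsu                  # don't try to update SNAPSHOTs from external repos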
----------------------------------------------------------------------------------
Importing projects into Eclipse

When you import the project into Eclipse, first install hadoop-maven-plugins.

  $ cd hadoop-maven-plugins
  $ mvn install

Then, generate the Eclipse project files.

  $ mvn eclipse:eclipse -DskipTests

Finally, import into Eclipse by specifying the root directory of the project via
[File] > [Import] > [Existing Projects into Workspace].
----------------------------------------------------------------------------------
Building distributions:

Create binary distribution without native code and without documentation:

  $ mvn package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

Create binary distribution with native code and with documentation:

  $ mvn package -Pdist,native,docs -DskipTests -Dtar

Create source distribution:

  $ mvn package -Psrc -DskipTests

Create source and binary distributions with native code and documentation:

  $ mvn package -Pdist,native,docs,src -DskipTests -Dtar

Create a local staging version of the website (in /tmp/hadoop-site):

  $ mvn clean site -Preleasedocs; mvn site:stage -DstagingDirectory=/tmp/hadoop-site

Note that the site needs to be built in a second pass after other artifacts.
----------------------------------------------------------------------------------
Installing Hadoop

Look for these HTML files after you have built the documentation with the
commands above.

* Single Node Setup:
  hadoop-project-dist/hadoop-common/SingleCluster.html
* Cluster Setup:
  hadoop-project-dist/hadoop-common/ClusterSetup.html
----------------------------------------------------------------------------------
Handling out of memory errors in builds
----------------------------------------------------------------------------------

If the build process fails with an out of memory error, you should be able to fix
it by increasing the memory used by Maven, which can be done via the environment
variable MAVEN_OPTS.

Here is an example setting that allocates between 256 MB and 1.5 GB of heap space
to Maven:

  export MAVEN_OPTS="-Xms256m -Xmx1536m"
----------------------------------------------------------------------------------
Building on macOS (without Docker)
----------------------------------------------------------------------------------

Installing required dependencies for clean install of macOS 10.14:

* Install Xcode Command Line Tools
  $ xcode-select --install
* Install Homebrew
  $ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
* Install OpenJDK 8
  $ brew tap AdoptOpenJDK/openjdk
  $ brew cask install adoptopenjdk8
* Install maven and tools
  $ brew install maven autoconf automake cmake wget
* Install native libraries. Only OpenSSL is required to compile native code;
  you may optionally install zlib, lz4, etc.
  $ brew install openssl
* Protocol Buffers 3.7.1 (required to compile native code)
  $ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz
  $ mkdir -p protobuf-3.7 && tar zxvf protobuf-java-3.7.1.tar.gz --strip-components 1 -C protobuf-3.7
  $ cd protobuf-3.7
  $ ./configure
  $ make
  $ make check
  $ make install
  $ protoc --version

Note that building Hadoop 3.1.1/3.1.2/3.2.0 native code from source is broken
on macOS. For 3.1.1/3.1.2, you need to manually backport YARN-8622. For 3.2.0,
you need to backport both YARN-8622 and YARN-9487 in order to build native code.
----------------------------------------------------------------------------------
Building command example:

* Create binary distribution with native code but without documentation:
  $ mvn package -Pdist,native -DskipTests -Dmaven.javadoc.skip \
      -Dopenssl.prefix=/usr/local/opt/openssl

Note that the command above manually specifies the openssl library and include
path. This is necessary at least for Homebrewed OpenSSL.
----------------------------------------------------------------------------------
Building on CentOS 8
----------------------------------------------------------------------------------

* Install development tools such as GCC, autotools, OpenJDK and Maven.
  $ sudo dnf group install --with-optional 'Development Tools'
  $ sudo dnf install java-1.8.0-openjdk-devel maven

* Install python2 for building documentation.
  $ sudo dnf install python2

* Install Protocol Buffers v3.7.1.
  $ git clone https://github.com/protocolbuffers/protobuf
  $ cd protobuf
  $ git checkout v3.7.1
  $ autoreconf -i
  $ ./configure --prefix=/usr/local
  $ make
  $ sudo make install
  $ cd ..

* Install libraries provided by CentOS 8.
  $ sudo dnf install libtirpc-devel zlib-devel lz4-devel bzip2-devel openssl-devel cyrus-sasl-devel libpmem-devel

* Install boost.
  $ curl -L -o boost_1_72_0.tar.bz2 https://sourceforge.net/projects/boost/files/boost/1.72.0/boost_1_72_0.tar.bz2/download
  $ tar xjf boost_1_72_0.tar.bz2
  $ cd boost_1_72_0
  $ ./bootstrap.sh --prefix=/usr/local
  $ ./b2
  $ sudo ./b2 install

* Install optional dependencies (snappy-devel).
  $ sudo dnf --enablerepo=PowerTools install snappy-devel

* Install optional dependencies (libzstd-devel).
  $ sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
  $ sudo dnf --enablerepo=epel install libzstd-devel

* Install optional dependencies (isa-l).
  $ sudo dnf --enablerepo=PowerTools install nasm
  $ git clone https://github.com/intel/isa-l
  $ cd isa-l/
  $ ./autogen.sh
  $ ./configure
  $ make
  $ sudo make install
----------------------------------------------------------------------------------
Building on Windows
----------------------------------------------------------------------------------
Requirements:

* Windows System
* JDK 1.8
* Maven 3.0 or later
* Boost 1.72
* Protocol Buffers 3.7.1
* CMake 3.1 or newer
* Visual Studio 2010 Professional or Higher
* Windows SDK 8.1 (if building CPU rate control for the container executor)
* zlib headers (if building native code bindings for zlib)
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
* Unix command-line tools from GnuWin32: sh, mkdir, rm, cp, tar, gzip. These
  tools must be present on your PATH.
* Python (for generation of docs using 'mvn site')

Unix command-line tools are also included with the Windows Git package, which
can be downloaded from http://git-scm.com/downloads

If using Visual Studio, it must be Professional level or higher.
Do not use Visual Studio Express. It does not support compiling for 64-bit,
which is problematic if running a 64-bit system.

The Windows SDK 8.1 is available to download at:
http://msdn.microsoft.com/en-us/windows/bg162891.aspx

Cygwin is not required.
----------------------------------------------------------------------------------
Building:

Keep the source code tree in a short path to avoid running into problems related
to the Windows maximum path length limitation (for example, C:\hdc).

There is one support command file located in dev-support called win-paths-eg.cmd.
It should be copied somewhere convenient and modified to fit your needs:
win-paths-eg.cmd sets up the environment for use. It puts all of the required
components on the command path, configures the bitness of the build, and sets
several optional components.

Several tests require that the user have the Create Symbolic Links privilege.

All Maven goals are the same as described above, with the exception that
native code is built by enabling the 'native-win' Maven profile. -Pnative-win
is enabled by default when building on Windows since the native components
are required (not optional) on Windows.

If native code bindings for zlib are required, then the zlib headers must be
deployed on the build machine. Set the ZLIB_HOME environment variable to the
directory containing the headers.

  set ZLIB_HOME=C:\zlib-1.2.7

At runtime, zlib1.dll must be accessible on the PATH. Hadoop has been tested
with zlib 1.2.7, built using Visual Studio 2010 out of contrib\vstudio\vc10 in
the zlib 1.2.7 source tree.

http://www.zlib.net/
----------------------------------------------------------------------------------
Building distributions:

* Build distribution with native code: mvn package [-Pdist][-Pdocs][-Psrc][-Dtar][-Dmaven.javadoc.skip=true]
----------------------------------------------------------------------------------
Running compatibility checks with checkcompatibility.py

Invoke `./dev-support/bin/checkcompatibility.py` to run Java API Compliance Checker
to compare the public Java APIs of two git objects. This can be used by release
managers to compare the compatibility of a previous and current release.

As an example, this invocation will check the compatibility of interfaces annotated
as Public or LimitedPrivate:

  ./dev-support/bin/checkcompatibility.py --annotation org.apache.hadoop.classification.InterfaceAudience.Public --annotation org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate --include "hadoop.*" branch-2.7.2 trunk
----------------------------------------------------------------------------------
Changing the Hadoop version returned by VersionInfo

If for compatibility reasons the version of Hadoop has to be declared as a 2.x release
in the information returned by org.apache.hadoop.util.VersionInfo, set the property
declared.hadoop.version to the desired version. For example:

  mvn package -Pdist -Ddeclared.hadoop.version=2.11

If unset, the project version declared in the POM file is used.