
HADOOP-2867. Adds the task's CWD to its LD_LIBRARY_PATH. Contributed by Amareshwari Sriramadasu.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/core/trunk@658998 13f79535-47bb-0310-9956-ffa450edef68
Devaraj Das 17 years ago
parent
commit
4d6fc9493c

+ 3 - 0
CHANGES.txt

@@ -152,6 +152,9 @@ Trunk (unreleased changes)
     HADOOP-3381. Clear references when directories are deleted so that 
     the effects of memory leaks are not multiplied. (rangadi)
 
+    HADOOP-2867. Adds the task's CWD to its LD_LIBRARY_PATH. 
+    (Amareshwari Sriramadasu via ddas)
+
   OPTIMIZATIONS
 
     HADOOP-3274. The default constructor of BytesWritable creates empty 

+ 44 - 3
docs/changes.html

@@ -101,7 +101,7 @@ appends.<br />(dhruba)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._new_features_')">  NEW FEATURES
-</a>&nbsp;&nbsp;&nbsp;(9)
+</a>&nbsp;&nbsp;&nbsp;(11)
     <ol id="trunk_(unreleased_changes)_._new_features_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3074">HADOOP-3074</a>. Provides a UrlStreamHandler for DFS and other FS,
 relying on FileSystem<br />(taton)</li>
@@ -123,10 +123,16 @@ of enumerations.<br />(tomwhite via omalley)</li>
      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2065">HADOOP-2065</a>. Delay invalidating corrupt replicas of a block until it
 is removed from the under-replicated state. If all replicas are found to
 be corrupt, retain all copies and mark the block as corrupt.<br />(Lohit Vjayarenu via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3221">HADOOP-3221</a>. Adds org.apache.hadoop.mapred.lib.NLineInputFormat, which
+splits files into splits of N lines each. N can be specified by the
+configuration property "mapred.line.input.format.linespermap", which
+defaults to 1.<br />(Amareshwari Sriramadasu via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3336">HADOOP-3336</a>. Direct a subset of annotated FSNamesystem calls for audit
+logging.<br />(cdouglas)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._improvements_')">  IMPROVEMENTS
-</a>&nbsp;&nbsp;&nbsp;(13)
+</a>&nbsp;&nbsp;&nbsp;(19)
     <ol id="trunk_(unreleased_changes)_._improvements_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2928">HADOOP-2928</a>. Remove deprecated FileSystem.getContentLength().<br />(Lohit Vjayarenu via rangadi)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3130">HADOOP-3130</a>. Make the connect timeout smaller for getFile.<br />(Amar Ramesh Kamat via ddas)</li>
@@ -157,6 +163,16 @@ too far into the following split.<br />(Zheng Shao via cdouglas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3332">HADOOP-3332</a>. Reduces the amount of logging in Reducer's shuffle phase.<br />(Devaraj Das)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3355">HADOOP-3355</a>. Enhances Configuration class to accept hex numbers for getInt
 and getLong.<br />(Amareshwari Sriramadasu via ddas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3350">HADOOP-3350</a>. Add an argument to distcp to permit the user to limit the
+number of maps.<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3013">HADOOP-3013</a>. Add corrupt block reporting to fsck.<br />(lohit vijayarenu via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3377">HADOOP-3377</a>. Remove TaskRunner::replaceAll and replace with equivalent
+String::replace.<br />(Brice Arnould via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3398">HADOOP-3398</a>. Minor improvement to a utility function that participates
+in backoff calculation.<br />(cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3381">HADOOP-3381</a>. Clear references when directories are deleted so that
+the effects of memory leaks are not multiplied.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-2867">HADOOP-2867</a>. Adds the task's CWD to its LD_LIBRARY_PATH.<br />(Amareshwari Sriramadasu via ddas)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._optimizations_')">  OPTIMIZATIONS
@@ -181,7 +197,7 @@ DataNodes take 30% less CPU while writing data.<br />(rangadi)</li>
     </ol>
   </li>
   <li><a href="javascript:toggleList('trunk_(unreleased_changes)_._bug_fixes_')">  BUG FIXES
-</a>&nbsp;&nbsp;&nbsp;(31)
+</a>&nbsp;&nbsp;&nbsp;(42)
     <ol id="trunk_(unreleased_changes)_._bug_fixes_">
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-2905">HADOOP-2905</a>. 'fsck -move' triggers NPE in NameNode.<br />(Lohit Vjayarenu via rangadi)</li>
       <li>Increment ClientProtocol.versionID missed by <a href="http://issues.apache.org/jira/browse/HADOOP-2585">HADOOP-2585</a>.<br />(shv)</li>
@@ -246,6 +262,31 @@ generation stamps in them.<br />(dhruba)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3203">HADOOP-3203</a>. Fixes TaskTracker::localizeJob to pass correct file sizes
 for the jarfile and the jobfile.<br />(Amareshwari Sriramadasu via ddas)</li>
       <li><a href="http://issues.apache.org/jira/browse/HADOOP-3391">HADOOP-3391</a>. Fix a findbugs warning introduced by <a href="http://issues.apache.org/jira/browse/HADOOP-3248">HADOOP-3248</a><br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3393">HADOOP-3393</a>. Fix datanode shutdown to call DataBlockScanner::shutdown and
+close its log, even if the scanner thread is not running.<br />(lohit vijayarenu
+via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3399">HADOOP-3399</a>. A debug message was logged at info level.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3396">HADOOP-3396</a>. TestDatanodeBlockScanner occasionally fails.<br />(Lohit Vijayarenu via rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3339">HADOOP-3339</a>. Some of the failures on the 3rd datanode in the DFS write pipeline
+are not detected properly. This could lead to hard failure of the client's
+write operation.<br />(rangadi)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3409">HADOOP-3409</a>. Namenode should save the root inode into fsimage.<br />(hairong)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3296">HADOOP-3296</a>. Fix task cache to work for more than two levels in the cache
+hierarchy. This also adds a new counter to track cache hits at levels
+greater than two.<br />(Amar Kamat via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3370">HADOOP-3370</a>. Ensure that the TaskTracker.runningJobs data-structure is
+correctly cleaned-up on task completion.<br />(Zheng Shao via acmurthy)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3375">HADOOP-3375</a>. Lease paths were sometimes not removed from
+LeaseManager.sortedLeasesByPath.<br />(Tsz Wo (Nicholas), SZE via dhruba)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3424">HADOOP-3424</a>. Values returned by getPartition should be checked to
+make sure they are in the range 0 to #reduces - 1.<br />(cdouglas via
+omalley)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3408">HADOOP-3408</a>. Change FSNamesystem to send its metrics as integers to
+accommodate collectors that don't support long values.<br />(lohit vijayarenu
+via cdouglas)</li>
+      <li><a href="http://issues.apache.org/jira/browse/HADOOP-3403">HADOOP-3403</a>. Fixes a problem in the JobTracker with the handling of lost
+tasktrackers.<br />(Arun Murthy via ddas)</li>
     </ol>
   </li>
 </ul>
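
For context on the HADOOP-3424 entry above: the Partitioner contract requires getPartition to return a value between 0 and numReduceTasks - 1. Below is a minimal sketch of a conforming partitioner against the old org.apache.hadoop.mapred API, using the common sign-masked hash pattern; the class name is hypothetical:

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class SignSafePartitioner<K, V> implements Partitioner<K, V> {
  public void configure(JobConf job) {
    // No configuration needed for this sketch.
  }

  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask the sign bit so the modulo always lands in [0, numReduceTasks),
    // the range that HADOOP-3424 now validates.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}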

+ 4 - 0
docs/hadoop-default.html

@@ -656,6 +656,10 @@ creations/deletions), or "all".</td>
     </td>
 </tr>
 <tr>
+<td><a name="mapred.line.input.format.linespermap">mapred.line.input.format.linespermap</a></td><td>1</td><td> Number of lines per split in NLineInputFormat.
+    </td>
+</tr>
+<tr>
 <td><a name="ipc.client.idlethreshold">ipc.client.idlethreshold</a></td><td>4000</td><td>Defines the threshold number of connections after which
                connections will be inspected for idleness.
   </td>
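
The property above backs NLineInputFormat from HADOOP-3221. A hedged sketch of wiring it into a job through the old JobConf API follows; the input path and line count are hypothetical:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class NLineJobSetup {
  public static void configure(JobConf conf) {
    // Hand each map task N whole lines instead of a byte range.
    conf.setInputFormat(NLineInputFormat.class);
    // Override the default of 1 line per split.
    conf.setInt("mapred.line.input.format.linespermap", 100);
    FileInputFormat.setInputPaths(conf, new Path("/user/example/input"));
  }
}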

+ 28 - 27
docs/mapred_tutorial.html

@@ -295,7 +295,7 @@ document.write("Last Published: " + document.lastModified);
 <a href="#Example%3A+WordCount+v2.0">Example: WordCount v2.0</a>
 <ul class="minitoc">
 <li>
-<a href="#Source+Code-N10C84">Source Code</a>
+<a href="#Source+Code-N10C87">Source Code</a>
 </li>
 <li>
 <a href="#Sample+Runs">Sample Runs</a>
@@ -1586,11 +1586,12 @@ document.write("Last Published: " + document.lastModified);
         the working directory of the task can be used to distribute native 
         libraries and load them. The underlying detail is that child-jvm always 
         has its <em>current working directory</em> added to the
-        <span class="codefrag">java.library.path</span> and hence the cached libraries can be 
+        <span class="codefrag">java.library.path</span> and <span class="codefrag">LD_LIBRARY_PATH</span>. 
+        Hence the cached libraries can be 
         loaded via <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#loadLibrary(java.lang.String)">
         System.loadLibrary</a> or <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#load(java.lang.String)">
         System.load</a>.</p>
-<a name="N108F8"></a><a name="Job+Submission+and+Monitoring"></a>
+<a name="N108FB"></a><a name="Job+Submission+and+Monitoring"></a>
 <h3 class="h4">Job Submission and Monitoring</h3>
 <p>
 <a href="api/org/apache/hadoop/mapred/JobClient.html">
@@ -1651,7 +1652,7 @@ document.write("Last Published: " + document.lastModified);
 <p>Normally the user creates the application, describes various facets 
         of the job via <span class="codefrag">JobConf</span>, and then uses the 
         <span class="codefrag">JobClient</span> to submit the job and monitor its progress.</p>
-<a name="N10958"></a><a name="Job+Control"></a>
+<a name="N1095B"></a><a name="Job+Control"></a>
 <h4>Job Control</h4>
 <p>Users may need to chain map-reduce jobs to accomplish complex
           tasks which cannot be done via a single map-reduce job. This is fairly
@@ -1687,7 +1688,7 @@ document.write("Last Published: " + document.lastModified);
             </li>
           
 </ul>
-<a name="N10982"></a><a name="Job+Input"></a>
+<a name="N10985"></a><a name="Job+Input"></a>
 <h3 class="h4">Job Input</h3>
 <p>
 <a href="api/org/apache/hadoop/mapred/InputFormat.html">
@@ -1735,7 +1736,7 @@ document.write("Last Published: " + document.lastModified);
         appropriate <span class="codefrag">CompressionCodec</span>. However, it must be noted that
         compressed files with the above extensions cannot be <em>split</em> and 
         each compressed file is processed in its entirety by a single mapper.</p>
-<a name="N109EC"></a><a name="InputSplit"></a>
+<a name="N109EF"></a><a name="InputSplit"></a>
 <h4>InputSplit</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/InputSplit.html">
@@ -1749,7 +1750,7 @@ document.write("Last Published: " + document.lastModified);
           FileSplit</a> is the default <span class="codefrag">InputSplit</span>. It sets 
           <span class="codefrag">map.input.file</span> to the path of the input file for the
           logical split.</p>
-<a name="N10A11"></a><a name="RecordReader"></a>
+<a name="N10A14"></a><a name="RecordReader"></a>
 <h4>RecordReader</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/RecordReader.html">
@@ -1761,7 +1762,7 @@ document.write("Last Published: " + document.lastModified);
           for processing. <span class="codefrag">RecordReader</span> thus assumes the 
           responsibility of processing record boundaries and presents the tasks 
           with keys and values.</p>
-<a name="N10A34"></a><a name="Job+Output"></a>
+<a name="N10A37"></a><a name="Job+Output"></a>
 <h3 class="h4">Job Output</h3>
 <p>
 <a href="api/org/apache/hadoop/mapred/OutputFormat.html">
@@ -1786,7 +1787,7 @@ document.write("Last Published: " + document.lastModified);
 <p>
 <span class="codefrag">TextOutputFormat</span> is the default 
         <span class="codefrag">OutputFormat</span>.</p>
-<a name="N10A5D"></a><a name="Task+Side-Effect+Files"></a>
+<a name="N10A60"></a><a name="Task+Side-Effect+Files"></a>
 <h4>Task Side-Effect Files</h4>
 <p>In some applications, component tasks need to create and/or write to
           side-files, which differ from the actual job-output files.</p>
@@ -1825,7 +1826,7 @@ document.write("Last Published: " + document.lastModified);
 <p>The entire discussion holds true for maps of jobs with 
            reducer=NONE (i.e. 0 reduces) since output of the map, in that case, 
            goes directly to HDFS.</p>
-<a name="N10AA5"></a><a name="RecordWriter"></a>
+<a name="N10AA8"></a><a name="RecordWriter"></a>
 <h4>RecordWriter</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/RecordWriter.html">
@@ -1833,9 +1834,9 @@ document.write("Last Published: " + document.lastModified);
           pairs to an output file.</p>
 <p>RecordWriter implementations write the job outputs to the 
           <span class="codefrag">FileSystem</span>.</p>
-<a name="N10ABC"></a><a name="Other+Useful+Features"></a>
+<a name="N10ABF"></a><a name="Other+Useful+Features"></a>
 <h3 class="h4">Other Useful Features</h3>
-<a name="N10AC2"></a><a name="Counters"></a>
+<a name="N10AC5"></a><a name="Counters"></a>
 <h4>Counters</h4>
 <p>
 <span class="codefrag">Counters</span> represent global counters, defined either by 
@@ -1849,7 +1850,7 @@ document.write("Last Published: " + document.lastModified);
           Reporter.incrCounter(Enum, long)</a> in the <span class="codefrag">map</span> and/or 
           <span class="codefrag">reduce</span> methods. These counters are then globally 
           aggregated by the framework.</p>
-<a name="N10AED"></a><a name="DistributedCache"></a>
+<a name="N10AF0"></a><a name="DistributedCache"></a>
 <h4>DistributedCache</h4>
 <p>
 <a href="api/org/apache/hadoop/filecache/DistributedCache.html">
@@ -1883,7 +1884,7 @@ document.write("Last Published: " + document.lastModified);
           <a href="api/org/apache/hadoop/filecache/DistributedCache.html#createSymlink(org.apache.hadoop.conf.Configuration)">
           DistributedCache.createSymlink(Configuration)</a> api. Files 
           have <em>execution permissions</em> set.</p>
-<a name="N10B2B"></a><a name="Tool"></a>
+<a name="N10B2E"></a><a name="Tool"></a>
 <h4>Tool</h4>
 <p>The <a href="api/org/apache/hadoop/util/Tool.html">Tool</a> 
           interface supports the handling of generic Hadoop command-line options.
@@ -1923,7 +1924,7 @@ document.write("Last Published: " + document.lastModified);
             </span>
           
 </p>
-<a name="N10B5D"></a><a name="IsolationRunner"></a>
+<a name="N10B60"></a><a name="IsolationRunner"></a>
 <h4>IsolationRunner</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/IsolationRunner.html">
@@ -1947,7 +1948,7 @@ document.write("Last Published: " + document.lastModified);
 <p>
 <span class="codefrag">IsolationRunner</span> will run the failed task in a single 
           jvm, which can be in the debugger, over precisely the same input.</p>
-<a name="N10B90"></a><a name="Debugging"></a>
+<a name="N10B93"></a><a name="Debugging"></a>
 <h4>Debugging</h4>
 <p>The Map/Reduce framework provides a facility to run user-provided 
           scripts for debugging. When a map/reduce task fails, the user can run 
@@ -1958,7 +1959,7 @@ document.write("Last Published: " + document.lastModified);
 <p> In the following sections we discuss how to submit a debug script
           along with the job. To submit a debug script, it first has to be
           distributed. Then the script has to be supplied in the Configuration. </p>
-<a name="N10B9C"></a><a name="How+to+distribute+script+file%3A"></a>
+<a name="N10B9F"></a><a name="How+to+distribute+script+file%3A"></a>
 <h5> How to distribute script file: </h5>
 <p>
          To distribute the debug script file, first copy the file to the dfs.
@@ -1981,7 +1982,7 @@ document.write("Last Published: " + document.lastModified);
           <a href="api/org/apache/hadoop/filecache/DistributedCache.html#createSymlink(org.apache.hadoop.conf.Configuration)">
          DistributedCache.createSymlink(Configuration) </a> api.
           </p>
-<a name="N10BB5"></a><a name="How+to+submit+script%3A"></a>
+<a name="N10BB8"></a><a name="How+to+submit+script%3A"></a>
 <h5> How to submit script: </h5>
 <p> A quick way to submit a debug script is to set values for the 
           properties "mapred.map.task.debug.script" and 
@@ -2005,17 +2006,17 @@ document.write("Last Published: " + document.lastModified);
 <span class="codefrag">$script $stdout $stderr $syslog $jobconf $program </span>  
           
 </p>
-<a name="N10BD7"></a><a name="Default+Behavior%3A"></a>
+<a name="N10BDA"></a><a name="Default+Behavior%3A"></a>
 <h5> Default Behavior: </h5>
 <p> For pipes, a default script is run to process core dumps under
           gdb, print the stack trace, and give information about running threads. </p>
-<a name="N10BE2"></a><a name="JobControl"></a>
+<a name="N10BE5"></a><a name="JobControl"></a>
 <h4>JobControl</h4>
 <p>
 <a href="api/org/apache/hadoop/mapred/jobcontrol/package-summary.html">
           JobControl</a> is a utility which encapsulates a set of Map-Reduce jobs
           and their dependencies.</p>
-<a name="N10BEF"></a><a name="Data+Compression"></a>
+<a name="N10BF2"></a><a name="Data+Compression"></a>
 <h4>Data Compression</h4>
 <p>Hadoop Map-Reduce provides facilities for the application-writer to
           specify compression for both intermediate map-outputs and the
@@ -2029,7 +2030,7 @@ document.write("Last Published: " + document.lastModified);
           codecs for reasons of both performance (zlib) and non-availability of
           Java libraries (lzo). More details on their usage and availability are
           available <a href="native_libraries.html">here</a>.</p>
-<a name="N10C0F"></a><a name="Intermediate+Outputs"></a>
+<a name="N10C12"></a><a name="Intermediate+Outputs"></a>
 <h5>Intermediate Outputs</h5>
 <p>Applications can control compression of intermediate map-outputs
             via the 
@@ -2050,7 +2051,7 @@ document.write("Last Published: " + document.lastModified);
             <a href="api/org/apache/hadoop/mapred/JobConf.html#setMapOutputCompressionType(org.apache.hadoop.io.SequenceFile.CompressionType)">
             JobConf.setMapOutputCompressionType(SequenceFile.CompressionType)</a> 
             api.</p>
-<a name="N10C3B"></a><a name="Job+Outputs"></a>
+<a name="N10C3E"></a><a name="Job+Outputs"></a>
 <h5>Job Outputs</h5>
 <p>Applications can control compression of job-outputs via the
             <a href="api/org/apache/hadoop/mapred/OutputFormatBase.html#setCompressOutput(org.apache.hadoop.mapred.JobConf,%20boolean)">
@@ -2070,7 +2071,7 @@ document.write("Last Published: " + document.lastModified);
 </div>
 
     
-<a name="N10C6A"></a><a name="Example%3A+WordCount+v2.0"></a>
+<a name="N10C6D"></a><a name="Example%3A+WordCount+v2.0"></a>
 <h2 class="h3">Example: WordCount v2.0</h2>
 <div class="section">
 <p>Here is a more complete <span class="codefrag">WordCount</span> which uses many of the
@@ -2080,7 +2081,7 @@ document.write("Last Published: " + document.lastModified);
       <a href="quickstart.html#SingleNodeSetup">pseudo-distributed</a> or
       <a href="quickstart.html#Fully-Distributed+Operation">fully-distributed</a> 
       Hadoop installation.</p>
-<a name="N10C84"></a><a name="Source+Code-N10C84"></a>
+<a name="N10C87"></a><a name="Source+Code-N10C87"></a>
 <h3 class="h4">Source Code</h3>
 <table class="ForrestTable" cellspacing="1" cellpadding="4">
           
@@ -3290,7 +3291,7 @@ document.write("Last Published: " + document.lastModified);
 </tr>
         
 </table>
-<a name="N113E6"></a><a name="Sample+Runs"></a>
+<a name="N113E9"></a><a name="Sample+Runs"></a>
 <h3 class="h4">Sample Runs</h3>
 <p>Sample text-files as input:</p>
 <p>
@@ -3458,7 +3459,7 @@ document.write("Last Published: " + document.lastModified);
 <br>
         
 </p>
-<a name="N114BA"></a><a name="Highlights"></a>
+<a name="N114BD"></a><a name="Highlights"></a>
 <h3 class="h4">Highlights</h3>
 <p>The second version of <span class="codefrag">WordCount</span> improves upon the 
         previous one by using some features offered by the Map-Reduce framework:

File diff suppressed because it is too large
+ 1 - 1
docs/mapred_tutorial.pdf


+ 2 - 1
src/docs/src/documentation/content/xdocs/mapred_tutorial.xml

@@ -1109,7 +1109,8 @@
         the working directory of the task can be used to distribute native 
         libraries and load them. The underlying detail is that child-jvm always 
         has its <em>current working directory</em> added to the
-        <code>java.library.path</code> and hence the cached libraries can be 
+        <code>java.library.path</code> and <code>LD_LIBRARY_PATH</code>. 
+        Hence the cached libraries can be 
         loaded via <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#loadLibrary(java.lang.String)">
         System.loadLibrary</a> or <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#load(java.lang.String)">
         System.load</a>.</p>
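
To make the passage above concrete: with the task's working directory on both java.library.path and LD_LIBRARY_PATH, a native library symlinked there via the DistributedCache can be loaded by name. A minimal sketch; the library name is hypothetical:

public class NativeLibLoader {
  static {
    // Resolves libmyhash.so (on Linux) against java.library.path,
    // which includes the task's current working directory.
    System.loadLibrary("myhash");
  }

  // Alternative: load by an absolute path rooted at the working directory.
  public static void loadFromCwd() {
    System.load(new java.io.File("libmyhash.so").getAbsolutePath());
  }
}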

+ 15 - 3
src/java/org/apache/hadoop/mapred/TaskRunner.java

@@ -28,7 +28,9 @@ import org.apache.hadoop.util.*;
 import java.io.*;
 import java.net.InetSocketAddress;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.Vector;
 import java.net.URI;
 
@@ -387,9 +389,18 @@ abstract class TaskRunner extends Thread {
       stdout.getParentFile().mkdirs();
       List<String> wrappedCommand = 
         TaskLog.captureOutAndError(setup, vargs, stdout, stderr, logSize);
-      
+      Map<String, String> env = new HashMap<String, String>();
+      StringBuffer ldLibraryPath = new StringBuffer();
+      ldLibraryPath.append(workDir.toString());
+      String oldLdLibraryPath = null;
+      oldLdLibraryPath = System.getenv("LD_LIBRARY_PATH");
+      if (oldLdLibraryPath != null) {
+        ldLibraryPath.append(sep);
+        ldLibraryPath.append(oldLdLibraryPath);
+      }
+      env.put("LD_LIBRARY_PATH", ldLibraryPath.toString());
       // Run the task as child of the parent TaskTracker process
-      runChild(wrappedCommand, workDir, taskid);
+      runChild(wrappedCommand, workDir, env, taskid);
 
     } catch (FSError e) {
       LOG.fatal("FSError", e);
@@ -432,10 +443,11 @@ abstract class TaskRunner extends Thread {
    * Run the child process
    */
   private void runChild(List<String> args, File dir,
+                        Map<String, String> env,
                         TaskAttemptID taskid) throws IOException {
 
     try {
-      shexec = new ShellCommandExecutor(args.toArray(new String[0]), dir);
+      shexec = new ShellCommandExecutor(args.toArray(new String[0]), dir, env);
       shexec.execute();
     } catch (IOException ioe) {
       // do nothing
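
The TaskRunner change above prepends the task's working directory to any inherited LD_LIBRARY_PATH ("sep" is defined elsewhere in the class, presumably the platform path separator) and hands the resulting environment to ShellCommandExecutor. Here is a standalone sketch of the same technique using plain java.lang.ProcessBuilder rather than the Hadoop API; the class and parameter names are hypothetical:

import java.io.File;
import java.io.IOException;
import java.util.Map;

public class ChildEnvLauncher {
  public static Process launch(String[] cmd, File workDir) throws IOException {
    ProcessBuilder pb = new ProcessBuilder(cmd);
    pb.directory(workDir);
    Map<String, String> env = pb.environment();
    // Prepend the child's CWD so native libraries localized there
    // are found first by the dynamic linker.
    String sep = System.getProperty("path.separator"); // ":" on Unix
    String old = env.get("LD_LIBRARY_PATH");
    env.put("LD_LIBRARY_PATH",
            old == null ? workDir.toString() : workDir + sep + old);
    return pb.start();
  }
}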

Some files were not shown because too many files changed in this diff