
MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document. Contributed by Kai Sasaki.

(cherry picked from commit 7995a6ea4dc524e5b17606359d09df72d771224a)
(cherry picked from commit 8607cb6074b40733d8990618a44c490f9f303ae3)
Akira Ajisaka, 9 years ago
commit e94a8aea57

+ 3 - 0
hadoop-mapreduce-project/CHANGES.txt

@@ -28,6 +28,9 @@ Release 2.7.3 - UNRELEASED
     MAPREDUCE-6549. multibyte delimiters with LineRecordReader cause
     duplicate records (wilfreds via rkanter)
 
+    MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document.
+    (Kai Sasaki via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

+ 3 - 3
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md

@@ -309,7 +309,7 @@ public void reduce(Text key, Iterable<IntWritable> values,
 }
 ```
 
-The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurence counts for each key (i.e. words in this example).
+The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurrence counts for each key (i.e. words in this example).
 
 Thus the output of the job is:
 
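For context, the corrected sentence refers to the WordCount reducer shown just above it in the tutorial; a minimal sketch of the behavior it describes (simplified from the tutorial's example, with the usual `org.apache.hadoop.io` and `mapreduce` imports assumed):

```java
// Sums the occurrence counts emitted for each key (i.e. each word).
public void reduce(Text key, Iterable<IntWritable> values, Context context)
    throws IOException, InterruptedException {
  int sum = 0;
  for (IntWritable val : values) {
    sum += val.get();                       // one occurrence count per value
  }
  context.write(key, new IntWritable(sum)); // emit <word, total count>
}
```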
@@ -346,7 +346,7 @@ Maps are the individual tasks that transform input records into intermediate rec
 
 The Hadoop MapReduce framework spawns one map task for each `InputSplit` generated by the `InputFormat` for the job.
 
-Overall, `Mapper` implementations are passed the `Job` for the job via the [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
+Overall, mapper implementations are passed to the job via [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
 
 Output pairs do not need to be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to context.write(WritableComparable, Writable).
 
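The corrected sentence explains how a `Mapper` is handed to a job; a short sketch of that wiring, reusing the WordCount class names from the tutorial for illustration:

```java
// Wiring a Mapper into a Job, as the corrected sentence describes.
// TokenizerMapper and IntSumReducer are the tutorial's WordCount classes.
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);  // the mapper is passed to the job here
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
```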
@@ -846,7 +846,7 @@ In the following sections we discuss how to submit a debug script with a job. Th
 
 ##### How to distribute the script file:
 
-The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* thescript file.
+The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* to the script file.
 
 ##### How to submit the script:
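The corrected sentence refers to shipping the debug script through the distributed cache; one way to do that with the `Job` API (the HDFS path and symlink name here are illustrative):

```java
// Ships debug.sh to every task's working directory and symlinks it there
// as "debugscript"; the URI fragment after '#' names the symlink.
job.addCacheFile(new URI("hdfs:///apps/mr/debug.sh#debugscript"));
```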