|
@@ -73,7 +73,7 @@ Applications that run on HDFS have large data sets. A typical file in HDFS is gi
|
|
|
|
|
|
### Simple Coherency Model
|
|
|
|
|
|
-HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.
|
|
|
+HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed, except for appends and truncates. Appending content to the end of a file is supported, but a file cannot be updated at an arbitrary offset. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web crawler application fits perfectly with this model.
|
|
|
|
|
|
### "Moving Computation is Cheaper than Moving Data"
|
|
|
|