
Question

from Mining of Massive Datasets by Jure Leskovec (Stanford Univ.), Anand Rajaraman (Milliway Labs), and Jeffrey D. Ullman (Stanford Univ.)

The book is available at http://infolab.stanford.edu/~ullman/mmds/book.pdf

Exercise 2.2.1 : Suppose we execute the word-count MapReduce program described in this section on a large repository such as a copy of the Web. We shall use 100 Map tasks and some number of Reduce tasks. (a) Suppose we do not use a combiner at the Map tasks. Do you expect there to be significant skew in the times taken by the various reducers to process their value list? Why or why not? (b) If we combine the reducers into a small number of Reduce tasks, say 10 tasks, at random, do you expect the skew to be significant? What if we instead combine the reducers into 10,000 Reduce tasks? ! (c) Suppose we do use a combiner at the 100 Map tasks. Do you expect skew to be significant? Why or why not?
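For reference, here is a minimal sketch of the standard word-count Map and Reduce functions the exercise refers to. The driver is only a toy in-memory stand-in for a real MapReduce execution, and the function names are illustrative, not from the book.

```python
from collections import defaultdict

def map_task(document):
    """Map function: emit a (word, 1) pair for every word occurrence."""
    for word in document.split():
        yield (word, 1)

def reduce_task(word, counts):
    """Reduce function: sum all the 1's associated with a word."""
    return (word, sum(counts))

def word_count(documents):
    """Toy in-memory driver: group Map output by key (the shuffle), then Reduce."""
    grouped = defaultdict(list)
    for doc in documents:
        for word, one in map_task(doc):
            grouped[word].append(one)   # group values by key
    return [reduce_task(w, vals) for w, vals in grouped.items()]

print(word_count(["the cat sat", "the dog sat on the mat"]))
```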

Explanation / Answer

a) Yes, there will be significant skew. Some keys (words) occur far more often than others, so the value lists the reducers receive differ greatly in length, and different reducers therefore take very different amounts of time. Word frequencies in natural-language text roughly follow a power law: a few words are extremely common while most are rare. So if each reducer handles one key, skew in the running times is bound to happen.
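As a rough illustration of that skew, the sketch below draws words from a Zipf-like distribution (the vocabulary size, exponent, and sample size are invented parameters, not data from the exercise) and measures how long each key's value list would be at the reducers:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical Zipf-like vocabulary: word i is drawn with weight 1/i.
vocab = [f"word{i}" for i in range(1, 10_001)]
weights = [1.0 / i for i in range(1, 10_001)]

# Simulate the Map output for a large text: each draw is one (word, 1) pair.
words = random.choices(vocab, weights=weights, k=1_000_000)
list_lengths = Counter(words)  # length of each key's value list at the reducers

lengths = sorted(list_lengths.values(), reverse=True)
print("longest value list :", lengths[0])
print("median value list  :", lengths[len(lengths) // 2])
print("shortest value list:", lengths[-1])
# The most common word gets a value list orders of magnitude longer than a
# typical one, so the reducer handling it takes far longer than the others.
```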

b) With 10 Reduce tasks the skew will still be present, but it will be much less severe than in (a). Combining many reducers into a single Reduce task averages the execution times of many value lists, so the total work per task evens out. If we instead use 10,000 Reduce tasks, the finishing-time skew may be even smaller: with far more tasks than compute nodes, a long Reduce task can occupy a node by itself while several shorter Reduce tasks run sequentially on another node, so the elapsed time across nodes balances out.
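A quick way to see the averaging effect is to hash made-up Zipf-like per-key counts (standing in for the value-list lengths from part (a)) into 10 versus 10,000 Reduce tasks and compare the work per task. This measures total work per task only; it does not model the scheduling effect mentioned above for the 10,000-task case.

```python
from collections import defaultdict

# Made-up Zipf-like value-list lengths: word i appears roughly 1,000,000 / i times.
key_work = {f"word{i}": max(1, 1_000_000 // i) for i in range(1, 10_001)}

def work_per_task(num_tasks):
    """Hash each key to a Reduce task (mimicking the system's partitioner)
    and total up the value-list work assigned to each task."""
    totals = defaultdict(int)
    for word, count in key_work.items():
        totals[hash(word) % num_tasks] += count
    return sorted(totals.values(), reverse=True)

for n in (10, 10_000):
    totals = work_per_task(n)
    print(f"{n:>6} tasks: max {totals[0]:>9}, median {totals[len(totals) // 2]:>9}")

# With 10 tasks each task aggregates about 1,000 keys, so totals are fairly even;
# with 10,000 tasks the few tasks that receive the most common words do far more
# work than a typical task, but a scheduler with many tasks per node can still
# balance the overall elapsed time.
```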

c) Skew will be much less. With a combiner, counts are already aggregated within each Map task, so for any given word a reducer receives at most one value from each of the 100 Map tasks, i.e. a value list of length at most 100. The value lists seen in the Reduce phase are therefore short and of similar length, so no reducer is stuck with a disproportionately long list.
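To make the combiner effect concrete, here is a toy sketch (the corpus contents and the split into 100 Map tasks are invented for illustration) in which each Map task pre-aggregates its own counts, so no value list ever exceeds 100 entries:

```python
from collections import Counter, defaultdict

def map_with_combiner(documents):
    """One Map task: count words locally (the combiner) and emit one
    (word, local_count) pair per distinct word instead of one pair per occurrence."""
    local = Counter()
    for doc in documents:
        local.update(doc.split())
    return list(local.items())

# Toy corpus split across 100 Map tasks (contents are invented for illustration).
map_inputs = [[f"the quick fox number {i}", "the lazy dog"] for i in range(100)]

# Shuffle: group the combined (word, partial_count) pairs by key.
grouped = defaultdict(list)
for task_input in map_inputs:
    for word, count in map_with_combiner(task_input):
        grouped[word].append(count)

# Reduce: sum the partial counts.  Every value list has at most 100 entries
# (one per Map task), so list lengths, and hence reducer times, are nearly uniform.
totals = {word: sum(counts) for word, counts in grouped.items()}
print("longest value list:", max(len(v) for v in grouped.values()))
print("count for 'the':", totals["the"])
```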