
hadoop - Why does submitting a job to MapReduce take so much time in general?

On a 20-node cluster, submitting a job to process 3 GB of data (200 splits) usually takes about 30 seconds, and the actual execution about 1 minute. I want to understand what the bottleneck is in the job submission process, and to understand the following quote:

Per-MapReduce overhead is significant: Starting/ending MapReduce job costs time

Some of the steps I'm aware of: 1. data splitting 2. JAR file sharing


1 Answer


A few things about HDFS and M/R help explain this latency:

  1. HDFS stores your files as data chunks (blocks) distributed across multiple machines called datanodes.
  2. M/R runs multiple programs called mappers, one on each of the data chunks or blocks. The (key, value) output of these mappers is combined into a result by reducers. (Think of summing partial results from multiple mappers.)
  3. Each mapper and reducer is a full-fledged program that is spawned on these distributed machines. It takes time to spawn full-fledged programs, even if, say, they do nothing (no-op map-reduce programs; a minimal sketch of one follows this list).
  4. When the size of the data to be processed becomes very big, these spawn times become insignificant, and that is when Hadoop shines.
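
To make point 3 concrete, here is a minimal sketch of such a no-op driver, assuming the Hadoop 2.x "new" MapReduce API; the class name NoOpJob and the command-line input/output paths are illustrative, not from the original post. The base Mapper and Reducer classes are identity pass-throughs, so nearly all of the measured time is framework overhead:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NoOpJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "no-op benchmark");
        job.setJarByClass(NoOpJob.class);

        // Identity mapper and reducer: every (offset, line) record produced by the
        // default TextInputFormat is passed through untouched, so the job measures
        // almost pure framework overhead (split calculation, spawning, shuffle).
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. a tiny HDFS file
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Timing this job against a plain local read of the same small file makes the fixed submission and spawn cost visible directly.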

If you were to process a file with 1000 lines of content, you would be better off using a normal read-and-process program. The Hadoop infrastructure needed to spawn processes on a distributed system will not yield any benefit; it will only add the overhead of locating the datanodes containing the relevant data chunks, starting the processing programs on them, and tracking and collecting the results.

Now expand that to hundreds of petabytes of data, and these overheads look completely insignificant compared to the time it would take to process it. The parallelization of the processors (mappers and reducers) shows its advantage here.

So before analyzing the performance of your M/R job, you should first benchmark your cluster so that you understand the overheads better.

How much time does it take to run a no-operation map-reduce job on a cluster?

Use MRBench for this purpose:

  1. MRBench loops a small job a number of times.
  2. It checks whether small job runs are responsive and running efficiently on your cluster.
  3. Its impact on the HDFS layer is very limited.

To run this program, try the following (check the correct approach for the latest versions; the jar name and location vary by release):

hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench -numRuns 50

Surprisingly on one of our dev clusters it was 22 seconds.
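
On newer Hadoop 2.x/3.x installations the hadoop-test.jar above no longer exists under that name; the benchmark typically ships in the MapReduce job client test jar instead. The path below is an assumption about a default install layout, so adjust it to your distribution:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar mrbench -numRuns 50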

Another issue is file size.

If the file sizes are smaller than the HDFS block size, then Map/Reduce programs have significant overhead. Hadoop typically tries to spawn one mapper per block, which here means one per file. So if you have 30 files of 5 KB each, Hadoop may end up spawning 30 mappers even though each file is tiny. This is real wastage, because the per-program overhead is significant compared to the time spent processing such a small file. A common mitigation is to pack many small files into each split, as sketched below.
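
One way to do that packing, as a hedged sketch assuming the Hadoop 2.x new API, is CombineTextInputFormat with a maximum split size; the class name SmallFilesJob and the 128 MB figure are illustrative choices, not part of the original answer:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallFilesJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "small-files");
        job.setJarByClass(SmallFilesJob.class);

        // Pack many small files into each split instead of one split per file,
        // so 30 x 5 KB files can run as one mapper rather than 30.
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024); // ~128 MB per split
        CombineTextInputFormat.addInputPath(job, new Path(args[0]));          // directory of small files

        // Identity mapper/reducer keep the example focused on input splitting.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With an input format like this, the 30 small files above would typically be grouped into a single split and therefore a single mapper.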


