Mapreduce Interview Questions and Answers for Experienced Part – 3

Below are a few more Hadoop MapReduce interview questions and answers for experienced and fresher Hadoop developers.

Hadoop Mapreduce Interview Questions and Answers for Experienced:
1.  After a restart of the NameNode, MapReduce jobs that worked fine before the restart started failing. What could be wrong?

The cluster could be in safe mode after the restart of the NameNode. The administrator needs to wait for the NameNode to exit safe mode before submitting the jobs again. This is a very common mistake by Hadoop administrators.
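As a sketch, safe mode can be checked and waited out with the standard HDFS admin commands:

```shell
# Check whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# Block until the NameNode leaves safe mode, then resubmit the failed jobs
hdfs dfsadmin -safemode wait

# An administrator can also force an exit (use with care):
# hdfs dfsadmin -safemode leave
```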

2.  What do you always have to specify for a MapReduce job?
  1. The classes for the mapper and reducer.
  2. The classes for the mapper, reducer, and combiner.
  3. The classes for the mapper, reducer, partitioner, and combiner.
  4. None; all classes have default implementations.

Answer: 1.
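As a sketch of where these classes are specified (assuming the standard org.apache.hadoop.mapreduce API; MyMapper and MyReducer are hypothetical subclasses of Mapper and Reducer), a minimal driver looks like this:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "my-job");
        job.setJarByClass(MyDriver.class);

        job.setMapperClass(MyMapper.class);    // hypothetical Mapper subclass
        job.setReducerClass(MyReducer.class);  // hypothetical Reducer subclass

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that, as discussed in the comments below the post, the reducer can in fact be omitted; the framework then runs an identity reducer by default.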
3.  How many times will a combiner be executed?
  1. At least once.
  2. Zero or one times.
  3. Zero, one, or many times.
  4. It’s configurable.

Answer: 3.

4.  You have a mapper that produces an integer value for each key, and the following set of reduce operations:

Reducer A: outputs the sum of the set of integer values.
Reducer B: outputs the maximum of the set of values.
Reducer C: outputs the mean of the set of values.
Reducer D: outputs the difference between the largest and smallest values in the set.

Which of these reduce operations could safely be used as a combiner?

  1. All of them.
  2. A and B. 
  3. A, B, and D.
  4. C and D.
  5. None of them.

Answer: 2 (A and B).

Explanation: Reducer C cannot be used because the final reducer would receive from the combiners a series of means with no knowledge of how many items were used to generate each one, so the overall mean cannot be calculated.

Reducer D is subtler, since the individual tasks of selecting a maximum or minimum are safe combiner operations. But the goal is the overall difference between the maximum and minimum value for each key, and that does not compose: a combiner that saw values clustered near the overall maximum would output a small difference, and similarly for the one that saw values near the minimum. These sub-ranges have little value in isolation, and the final reducer cannot reconstruct the desired result from them.
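The failure of Reducer C can be demonstrated with a plain-Java simulation (no Hadoop needed), treating two lists as the values one key's data was split into across two map tasks:

```java
import java.util.Arrays;
import java.util.List;

// Simulation of why a mean reducer is unsafe as a combiner, while sum and max are safe.
public class CombinerSafety {

    static double mean(List<Integer> xs) {
        return xs.stream().mapToInt(Integer::intValue).average().orElse(0);
    }

    public static void main(String[] args) {
        // Values for one key, split across two map tasks.
        List<Integer> split1 = Arrays.asList(1, 2, 3); // seen by combiner 1
        List<Integer> split2 = Arrays.asList(10);      // seen by combiner 2

        // True mean over all four values: 16 / 4 = 4.0
        double trueMean = mean(Arrays.asList(1, 2, 3, 10));

        // If the combiners also averaged, the reducer would only see the two
        // partial means and compute (2.0 + 10.0) / 2 = 6.0 -- the wrong answer.
        double meanOfMeans = (mean(split1) + mean(split2)) / 2;

        System.out.println(trueMean);    // 4.0
        System.out.println(meanOfMeans); // 6.0

        // Sum and max are associative and commutative, so pre-aggregating in a
        // combiner gives the same result as reducing over the raw values.
        int sumOfPartialSums = (1 + 2 + 3) + 10;                           // 16 = 1+2+3+10
        int maxOfPartialMaxes = Math.max(Math.max(1, Math.max(2, 3)), 10); // 10 = max of all
        System.out.println(sumOfPartialSums + " " + maxOfPartialMaxes);
    }
}
```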

5.  What is an Uber task in YARN?

If a job is small, the application master may choose to run its tasks in the same JVM as itself, since it judges that the overhead of allocating new containers and running the tasks in them outweighs the gain to be had from running them in parallel, compared to running them sequentially on one node. (This is different from MapReduce 1, where small jobs are never run on a single TaskTracker.)

Such a job is said to be Uberized, or run as an Uber task.

6.  How do you configure Uber tasks?

By default, a job is considered small if it has fewer than 10 mappers, only one reducer, and an input size smaller than one HDFS block. These thresholds may be changed per job by setting mapreduce.job.ubertask.maxmaps, mapreduce.job.ubertask.maxreduces, and mapreduce.job.ubertask.maxbytes.

It is also possible to disable Uber tasks entirely by setting mapreduce.job.ubertask.enable to false.
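A sketch of the corresponding settings in mapred-site.xml (they can also be set per job via the Configuration object); the threshold comments reflect the documented defaults:

```xml
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value> <!-- set to false to disable Uber tasks entirely -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value> <!-- jobs with more mappers than this are not uberized -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value> <!-- at most one reducer -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value></value> <!-- defaults to the HDFS block size when left unset -->
</property>
```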

7.  What are the ways to debug a failed MapReduce job?

Commonly there are two ways:

    1. Using the MapReduce job counters.
    2. Using the YARN web UI to look into the syslogs for the actual error messages or status.
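For example, the same information can be pulled from the command line with the standard CLI tools (the job and application IDs below are placeholders):

```shell
# Print the status and counters for a completed or failed job
mapred job -status job_1488888888888_0001

# Fetch the aggregated container logs (including syslog) for the application
yarn logs -applicationId application_1488888888888_0001
```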
8.  What is the importance of heartbeats in the HDFS/MapReduce framework?

A heartbeat in a master/slave architecture is a signal from a slave indicating that it is alive. DataNodes send heartbeats to the NameNode, and NodeManagers send heartbeats to the ResourceManager, to tell the master nodes that they are still alive.

If the NameNode or ResourceManager does not receive a heartbeat from a slave node, it concludes that the DataNode or NodeManager has a problem and is unable to perform its assigned tasks, and the master (NameNode or ResourceManager) reassigns those tasks to other live nodes.

9.  Can we rename the output file?

Yes, we can rename the output file, for example by using the MultipleOutputs class (or by implementing a custom output format).
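As a sketch of one common approach (assuming the org.apache.hadoop.mapreduce.lib.output.MultipleOutputs API; the key/value types and the "customName" base name are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class RenamingReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private MultipleOutputs<Text, IntWritable> out;

    @Override
    protected void setup(Context context) {
        out = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) sum += v.get();
        // Output files are named customName-r-00000 etc. instead of part-r-00000
        out.write(key, new IntWritable(sum), "customName");
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        out.close();
    }
}
```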

10.  What are the default input and output file formats in MapReduce jobs?

If the input or output file formats are not specified, the defaults are text-based: TextInputFormat for input and TextOutputFormat for output.

A few more Hadoop MapReduce interview questions and answers for experienced developers will be published in upcoming posts in this category.

About Siva

Senior Hadoop developer with 4 years of experience in designing and architecting solutions for the Big Data domain, who has been involved in several complex engagements. Technical strengths include Hadoop, YARN, Mapreduce, Hive, Sqoop, Flume, Pig, HBase, Phoenix, Oozie, Falcon, Kafka, Storm, Spark, MySQL and Java.


2 thoughts on “Mapreduce Interview Questions and Answers for Experienced Part – 3”

  • Pranav

    Hi Siva,

Just wondering – in the MapReduce 1 interview questions you mentioned that the reducer class is OPTIONAL, but in this blog, in Que-2, the reducer class is MANDATORY. Please confirm. Thanks.

    • Siva Post author

      Hi Pranav,

Reducer is not mandatory; it is optional. If we do not specify the reducer class in the driver, by default the job will run one reduce task with the identity reducer class.

We can also suppress this by calling job.setNumReduceTasks(0); then no reducer will run at all.
