Top 11 key tuning checklist items for Apache Hadoop

Hadoop   |   Published May 9, 2016

Apache Hadoop is a well-known, de facto standard framework for processing large data sets through distributed and parallel computing. YARN (Yet Another Resource Negotiator) allowed Hadoop to evolve from a simple MapReduce engine into a big data ecosystem that can run heterogeneous (MapReduce and non-MapReduce) applications concurrently. This has resulted in larger clusters with more workloads and users than ever before. Traditional recommendations encourage provisioning, isolation, and tuning to increase performance and avoid resource contention, but they can result in highly underutilized clusters.

Tuning is an essential part of maintaining a Hadoop cluster. Cluster administrators must interpret system metrics and optimize for specific workloads (e.g., high CPU utilization versus high I/O). To know what to tune, Hadoop operators often rely on monitoring software for insight into cluster activity. Tools like Ganglia, Cloudera Manager, or Apache Ambari will give us near real-time statistics at the node level, and many provide after-the-fact reports for particular jobs.

Here we will quickly look at 11 tuning checklist items for administrators and developers.

11 tuning checklist items for Apache Hadoop

1. Number of mappers: If mappers run for only a few seconds each, try to use fewer mappers that run longer (a minute or two). To do this, increase mapred.min.split.size so that each mapper processes a larger input split and fewer map tasks are launched.
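
A minimal sketch of setting this per job, assuming the job reads its input through FileInputFormat; the 256 MB value is purely illustrative and should be chosen from the HDFS block size and the observed map task run times:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    Configuration conf = new Configuration();
    // Larger minimum split size => larger splits => fewer, longer-running map tasks.
    conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);                          // MR1-era name
    conf.setLong("mapreduce.input.fileinputformat.split.minsize", 256L * 1024 * 1024);  // newer name
    Job job = Job.getInstance(conf, "fewer-longer-mappers");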

2. Mapper output: Mappers should output as little data as possible. Filter out records on the map side and use only the minimal data needed to form the map output key and map output value.
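
A sketch of a mapper that filters on the map side, assuming a plain-text input of log lines; the "ERROR" filter and the field that becomes the output key are illustrative assumptions, not something prescribed here:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ErrorCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            // Drop irrelevant records here instead of shipping them to the reducers.
            if (!line.contains("ERROR")) {
                return;
            }
            // Emit only the small piece of the record the reducer actually needs.
            outKey.set(line.split("\\s+")[0]);  // e.g., just the first (host/date) field
            context.write(outKey, ONE);
        }
    }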

3. Number of reducers: Reduce tasks should run for five minutes or so and produce at least an HDFS block's worth of data.
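
For example, assuming job is the org.apache.hadoop.mapreduce.Job object in the driver, the reduce phase can be sized explicitly; the value 10 is illustrative and should be picked so each reducer runs a few minutes and writes at least one block (typically 128 MB):

    // Size the reduce phase explicitly instead of accepting the default of a single reducer.
    job.setNumReduceTasks(10);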

4. Combiners: We can specify a combiner to cut the amount of data shuffled between the mappers and the reducers.
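
A sketch, assuming the reduce function is commutative and associative (e.g., a sum or a count) so it can double as the combiner; IntSumReducer is the stock summing reducer shipped with Hadoop and assumes IntWritable values:

    import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

    // Run partial aggregation on the map side before the shuffle.
    job.setCombinerClass(IntSumReducer.class);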

5. Compression: We can enable map output compression to improve job execution time.
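
A sketch of enabling it on the job's Configuration (conf, as in the earlier sketches); Snappy is shown only as an example codec and requires the native Snappy libraries on the cluster, and mapreduce.map.output.compress / mapreduce.map.output.compress.codec are the newer names of the properties below:

    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;

    // Compress intermediate map output to cut shuffle traffic and disk I/O.
    conf.setBoolean("mapred.compress.map.output", true);
    conf.setClass("mapred.map.output.compression.codec", SnappyCodec.class, CompressionCodec.class);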

6. Custom serialization: If we use our own custom Writable objects or custom comparators, we should implement a RawComparator so that keys can be compared in their serialized (byte) form, without the cost of deserializing them during the sort.
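
A sketch of a raw comparator built on WritableComparator; it compares IntWritable keys directly in their serialized byte form. This is purely illustrative (Hadoop already registers a raw comparator for IntWritable), but the same pattern applies to custom Writable keys:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparator;

    public class IntRawComparator extends WritableComparator {
        public IntRawComparator() {
            super(IntWritable.class);
        }

        @Override
        public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
            // Compare the serialized bytes directly; no key objects are deserialized.
            int a = readInt(b1, s1);
            int b = readInt(b2, s2);
            return Integer.compare(a, b);
        }
    }

It can then be registered on the job with job.setSortComparatorClass(IntRawComparator.class).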

7. Disks per node: We can adjust the number of disks per node, listing each disk's directory in the relevant properties (mapred.local.dir for intermediate task data, dfs.name.dir and dfs.data.dir for HDFS metadata and block storage), and test how scaling the disk count affects execution time.

8. JVM reuse: We should consider enabling JVM reuse (mapred.job.reuse.jvm.num.tasks) for workloads with lots of short-running tasks.
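
A sketch of enabling it on the job's Configuration; -1 means a JVM may be reused by an unlimited number of tasks from the same job, while a positive value caps the count:

    // MR1-era setting: let short-running tasks share JVMs instead of paying startup cost each time.
    conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);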

9. Maximize memory for the shuffle: We can generally maximize memory for the shuffle, as long as the map and reduce functions still get enough memory to operate. Hence set the task heap given by the mapred.child.java.opts property as large as the memory on the task nodes allows.
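
A sketch with illustrative values only; the right heap and sort buffer sizes depend on how much memory the task nodes actually have and how much the map and reduce functions themselves need:

    // Give task JVMs a larger heap so more of the map-side sort buffer fits in memory.
    conf.set("mapred.child.java.opts", "-Xmx1024m");
    // Map-side sort buffer in MB (must fit comfortably inside the heap set above).
    conf.setInt("io.sort.mb", 400);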

10. Minimize disk spilling: A single spill of map output to disk is optimal; any additional spills mean records are read back and rewritten, adding disk I/O. The MapReduce counter SPILLED_RECORDS is a useful metric, as it counts the total number of records that were spilled to disk during a job.
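
A sketch of checking the counter from the driver after the job finishes, assuming job is the completed org.apache.hadoop.mapreduce.Job; if spilled records greatly exceeds map output records, records are being spilled more than once and the sort buffer (io.sort.mb) is worth increasing:

    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.TaskCounter;

    Counters counters = job.getCounters();
    long spilled = counters.findCounter(TaskCounter.SPILLED_RECORDS).getValue();
    long mapOutput = counters.findCounter(TaskCounter.MAP_OUTPUT_RECORDS).getValue();
    System.out.printf("spilled records = %d, map output records = %d%n", spilled, mapOutput);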

11. Adjust memory allocation: Make sure the memory assigned to everything running on a worker node does not exceed its physical memory: Total Memory = Map Slots + Reduce Slots + TaskTracker (TT) + DataNode (DN) + Other Services + OS.
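
As an illustrative budget only (every number here is an assumption, not a recommendation), a 48 GB worker node might be divided as:

    8 map slots x 2 GB     = 16 GB
    6 reduce slots x 2 GB  = 12 GB
    TaskTracker (TT)       =  1 GB
    DataNode (DN)          =  1 GB
    Other services         =  2 GB
    Operating system       =  4 GB
    Total                  = 36 GB, leaving ~12 GB of headroom rather than overcommitting the node.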
