hadoop + one key to every reducer

Tag: hadoop Author: zhangcong5678 Date: 2010-09-02

Is there a way in Hadoop to ensure that every reducer gets only one key that is output by the mapper?

Best Answer

This question is a bit unclear to me, but I think I have a pretty good idea of what you want.

First of all, if you do nothing special, every time reduce is called it receives a single key together with all of that key's values (via an iterator).

My guess is that you want to ensure that every reducer gets exactly one key-value pair. There are essentially two ways of doing that:

  1. Ensure in the mapper that every key that is output is unique, so each key has exactly one value.
  2. Force this on the reduce side by supplying a group comparator that simply classifies all keys as different.

So if I understand your question correctly, you should implement a GroupComparator that simply states that all keys are different, so that each key-value pair is sent to a separate reduce call.
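To make the idea concrete outside of Hadoop, here is a minimal plain-Java sketch (the class name `DistinctGroupComparator` is hypothetical): a comparator that never reports two keys as equal, so when the reduce phase scans the sorted keys, every key-value pair forms its own group. In a real job you would wrap this logic in a `WritableComparator` and register it via `conf.setOutputValueGroupingComparator`.

```java
import java.util.Comparator;
import java.util.List;

// Sketch: a grouping comparator that treats even identical keys as
// different, so each key-value pair becomes its own reduce call.
public class DistinctGroupComparator implements Comparator<String> {
    @Override
    public int compare(String a, String b) {
        int c = a.compareTo(b);
        // Never return 0: equal keys are still put in separate groups.
        // (This deliberately breaks the usual comparator contract; for
        // grouping, Hadoop only checks whether the result is zero.)
        return (c != 0) ? c : -1;
    }

    // Simulate the grouping pass the reduce phase performs over the
    // already-sorted keys: consecutive keys that compare equal merge
    // into one group (one reduce call).
    public static int countGroups(List<String> sortedKeys,
                                  Comparator<String> grouper) {
        int groups = 0;
        for (int i = 0; i < sortedKeys.size(); i++) {
            if (i == 0 || grouper.compare(sortedKeys.get(i - 1),
                                          sortedKeys.get(i)) != 0) {
                groups++;
            }
        }
        return groups;
    }
}
```

With the default (natural) comparison, the sorted keys `a, a, b` form two groups; with the never-equal comparator they form three, i.e. one reduce call per key-value pair.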


Because of other answers in this question I'm adding a bit more detail:

There are 3 methods used for comparing keys (I pulled these code samples from a project I did using the 0.18.3 API):

Partitioner

    conf.setPartitionerClass(KeyPartitioner.class);

The partitioner only ensures that "things that must be the same end up in the same partition". If you have one computer there is only one reduce task and hence only one partition, so this won't help much.
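For illustration, this is a plain-Java sketch of what Hadoop's default hash partitioning does (the class name `PartitionSketch` is mine, not Hadoop's): the partition number is derived from the key's hash, modulo the number of reduce tasks. It makes the single-node point obvious: with one reduce task, every key lands in partition 0 no matter what.

```java
// Sketch of default hash partitioning: partition = hash(key) mod #reducers.
// The bit-mask strips the sign so negative hash codes still yield a
// non-negative partition index.
public class PartitionSketch {
    public static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```

With `numReduceTasks = 1` the result is always 0, which is why a partitioner alone cannot separate keys on a one-partition setup.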

Key Comparator

    conf.setOutputKeyComparatorClass(KeyComparator.class);

The key comparator is used to SORT the key-value pairs within a partition by looking at the key ... which must be different somehow.
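A hypothetical sketch of that sort step, using composite "primary#secondary" string keys as an example (the `KeySortSketch` name and the `#` key layout are assumptions for illustration): the comparator orders keys by their primary part first, then by the secondary part, which is the usual setup before a group comparator gets involved.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: a key comparator that sorts composite "primary#secondary"
// keys, primary part first, then secondary part.
public class KeySortSketch {
    public static String[] sorted(String[] keys) {
        String[] copy = keys.clone();
        Arrays.sort(copy, Comparator
                .comparing((String k) -> k.split("#")[0])
                .thenComparing(k -> k.split("#")[1]));
        return copy;
    }
}
```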

Group Comparator

    conf.setOutputValueGroupingComparator(GroupComparator.class);

The group comparator is used to group keys that are different, yet must be sent to the same reduce call.
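That is the classic secondary-sort use of the group comparator, the mirror image of the all-keys-different trick above. A hedged plain-Java sketch (names are mine): the keys "a#1" and "a#2" are different, but comparing only the part before the `#` makes them land in the same reduce call.

```java
import java.util.Comparator;
import java.util.List;

// Sketch: a grouping comparator that compares only the primary part of
// a composite "primary#secondary" key, so different keys sharing a
// primary part are grouped into one reduce call.
public class PrefixGroupComparator implements Comparator<String> {
    @Override
    public int compare(String a, String b) {
        return a.split("#")[0].compareTo(b.split("#")[0]);
    }

    // Same grouping simulation as before: consecutive sorted keys that
    // compare equal are merged into one group.
    public static int countGroups(List<String> sortedKeys,
                                  Comparator<String> grouper) {
        int groups = 0;
        for (int i = 0; i < sortedKeys.size(); i++) {
            if (i == 0 || grouper.compare(sortedKeys.get(i - 1),
                                          sortedKeys.get(i)) != 0) {
                groups++;
            }
        }
        return groups;
    }
}
```

The sorted keys `a#1, a#2, b#1` are three distinct keys, but under this grouping they produce only two reduce calls.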

HTH

Other Answer1

You can get some control over which keys get sent to which reducers by implementing the Partitioner interface.

From the Hadoop API docs:

Partitioner controls the partitioning of the keys of the intermediate map-outputs. The key (or a subset of the key) is used to derive the partition, typically by a hash function. The total number of partitions is the same as the number of reduce tasks for the job. Hence this controls which of the m reduce tasks the intermediate key (and hence the record) is sent for reduction.

The following book does a great job of describing partitioning, key sorting strategies and tradeoffs along with other issues in map reduce algorithm design: http://www.umiacs.umd.edu/~jimmylin/book.html

comments:

If you have a single node "cluster" then all keys will be sent to partition '0'. Using the partitioner won't do the trick. See my own answer for details why.

Other Answer2

Are you sure you want to do this? Can you elaborate on your problem, so that I can understand why you want to do this?

You have to do two things, as mentioned in earlier answers

  1. Write a partitioner such that each key gets associated with a unique reducer.
  2. Ensure that the number of reducer slots in your cluster is greater than or equal to the number of unique keys you will have.
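The two steps above can be sketched together in plain Java (the class name and the assumption that the key set is known up front are both mine, not from the answer): when you know the distinct keys in advance, the partitioner can map each one to its own reduce task, and it fails fast if there are more distinct keys than reduce tasks.

```java
import java.util.List;

// Sketch: a partitioner that assigns each known key its own partition.
// Requires numReduceTasks >= number of distinct keys (step 2 above).
public class OneKeyPerReducerPartitioner {
    private final List<String> knownKeys;

    public OneKeyPerReducerPartitioner(List<String> knownKeys) {
        this.knownKeys = knownKeys;
    }

    public int getPartition(String key, int numReduceTasks) {
        int p = knownKeys.indexOf(key);
        if (p < 0 || p >= numReduceTasks) {
            throw new IllegalStateException(
                    "unknown key or more distinct keys than reduce tasks");
        }
        return p;
    }
}
```

In a real job this logic would live in an implementation of Hadoop's `Partitioner` interface, registered via `conf.setPartitionerClass` as shown in the accepted answer.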

Pranab

Other Answer3

My guess is the same as above: if possible, sort the keys and assign them to reducers based on your partitioning criteria. Refer to the YouTube MapReduce UCB 61A lecture 34; they talk about this stuff there.