
I am getting the following error while running a MapReduce program.

The program sorts the output using TotalOrderPartitioner.

I have a 2-node cluster.
When I run the program with -D mapred.reduce.tasks=2 it works fine,
but it fails with the error below when I run it with -D mapred.reduce.tasks=3.
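
For context, the driver is essentially the standard old-API total-order sort setup; a simplified sketch is below (the class name, paths and sampler settings are placeholders, not my exact values):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.lib.InputSampler;
import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

public class TotalOrderSort {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TotalOrderSort.class);
        conf.setJobName("total-order-sort");

        // Input is a SequenceFile of (Text, Text) pairs; identity mapper/reducer by default,
        // so the job just re-partitions and sorts the records by key.
        conf.setInputFormat(SequenceFileInputFormat.class);
        conf.setOutputFormat(SequenceFileOutputFormat.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));   // placeholder input path
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // placeholder output path

        // Overridden on the command line with -D mapred.reduce.tasks=N (via ToolRunner /
        // GenericOptionsParser in the real job).
        conf.setNumReduceTasks(2);

        // TotalOrderPartitioner reads its split points from this file at map time.
        Path partitionFile = new Path(args[2]);                   // placeholder partition file path
        TotalOrderPartitioner.setPartitionFile(conf, partitionFile);
        conf.setPartitionerClass(TotalOrderPartitioner.class);

        // Sample the input to produce (numReduceTasks - 1) split keys.
        InputSampler.Sampler<Text, Text> sampler =
                new InputSampler.RandomSampler<Text, Text>(0.1, 1000, 10);
        InputSampler.writePartitionFile(conf, sampler);

        JobClient.runJob(conf);
    }
}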


java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.MapTask$OldOutputCollector.<init>(MapTask.java:448)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
        ... 6 more
Caused by: java.lang.IllegalArgumentException: Can't read partitions file
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:91)
        ... 11 more
Caused by: java.io.IOException: Split points are out of order
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:78)
        ... 11 more

Please let me know what's wrong here.

Thanks
R

3 Answers

2

The maximum number of reducers you can specify is equal to the number of nodes in your cluster. Since there are 2 nodes here, you cannot set the number of reducers to be greater than 2.


1 Comment

Somehow, if I try to run with 0 reducers it works. What could be the reason the number of reducers depends on the number of nodes here?
1

Sounds like you don't have enough keys in your partition file. The docs say that TotalOrderPartitioner requires at least N - 1 keys, where N is the number of reducers, in your partition SequenceFile.
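
Worked through for the runs in the question: with -D mapred.reduce.tasks=2 the partition file only needs 2 - 1 = 1 key, which any sample satisfies; with -D mapred.reduce.tasks=3 it needs 3 - 1 = 2 keys, and TotalOrderPartitioner.configure() additionally requires them to be distinct and in sorted order, which is where "Split points are out of order" comes from. Note also that InputSampler.writePartitionFile uses getNumReduceTasks() at the moment it runs, so the reduce count has to be set on the JobConf before the partition file is written.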

1 Comment

If you are going to downvote, at least give the reason. This is a perfectly valid answer to the original question as asked.
0

I also ran into this problem. Checking the source code, I found it is caused by sampling: increasing the number of reducers can make the split points contain the same element, which throws this error. It depends on the data. Run hadoop fs -text on the generated _partition file to look at the split points; if your tasks fail, it will contain duplicate elements.
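
To make that check concrete, here is a small sketch that reads the partition file the same way TotalOrderPartitioner does and flags duplicate or out-of-order split points (it assumes Text map-output keys and NullWritable values, which is what InputSampler writes; adjust the key class to whatever your job emits):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Reads a TotalOrderPartitioner partition file and reports duplicate or out-of-order keys.
public class CheckPartitionFile {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path partitionFile = new Path(args[0]);    // path to the partition file your job wrote
        FileSystem fs = partitionFile.getFileSystem(conf);

        SequenceFile.Reader reader = new SequenceFile.Reader(fs, partitionFile, conf);
        try {
            Text key = new Text();
            Text previous = new Text();
            NullWritable value = NullWritable.get();   // InputSampler writes NullWritable values
            int count = 0;
            while (reader.next(key, value)) {
                if (count > 0 && previous.compareTo(key) >= 0) {
                    System.out.println("Duplicate or out-of-order split point: " + key);
                }
                previous.set(key);
                count++;
            }
            System.out.println(count + " split points read"
                    + " (must equal numReduceTasks - 1, all distinct and sorted)");
        } finally {
            reader.close();
        }
    }
}

If it reports duplicates, either sample more records (so the chosen quantiles fall on distinct keys) or use fewer reducers for that data.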

