
I am getting the exception:

Task serialization failed: java.lang.reflect.InvocationTargetException

My code is:

    JDBCRDD jdbcRDD = new JDBCRDD(
            sc.sc(),
            Connection,
            getSchema(url),
            Table_Name,
            fields,
            new Filter[]{},
            partitionList.toArray(new JDBCPartition[0]));

    System.out.println("count before to Java RDD = " + jdbcRDD.cache().count());

    JavaRDD<Row> jrdd = jdbcRDD.toJavaRDD();

    System.out.println("count = " + jrdd.count());

    jrdd.foreachPartition((Iterator<Row> it) -> {
        new DataPull().updateDB(it);
    });

The class DataPull is serializable.

Yet I am getting

    15/05/26 10:20:07 ERROR yarn.ApplicationMaster: User class threw exception: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    java.lang.reflect.Method.invoke(Method.java:497)
    org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
    org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
    org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
    org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
    org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
    org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
    org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
    org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
    org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:837)
    org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
    org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

I need help solving this issue.
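For reference, here is how I checked that DataPull itself round-trips through Java serialization, the same mechanism Spark's JavaSerializer uses when shipping tasks. This is a minimal sketch: `DataPullSketch` and `roundTrip` are placeholder names standing in for my real class, and the note about a live `Connection` field is an assumption about what commonly breaks task serialization, not something the stack trace confirms.

```java
import java.io.*;

// Hypothetical stand-in for DataPull. It implements Serializable, but note
// that a non-serializable field (e.g. a live JDBC Connection) would still
// make writeObject fail at runtime.
class DataPullSketch implements Serializable {
    private final String jdbcUrl;   // plain serializable state: fine
    // private Connection conn;     // a live Connection here would break serialization

    DataPullSketch(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }

    String getJdbcUrl() { return jdbcUrl; }
}

class SerializationCheck {
    // Round-trip an object through Java serialization. Throws
    // NotSerializableException if the object graph holds any
    // non-serializable field.
    @SuppressWarnings("unchecked")
    static <T> T roundTrip(T obj) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        DataPullSketch copy = roundTrip(new DataPullSketch("jdbc:example://host/db"));
        System.out.println("round-trip ok: " + copy.getJdbcUrl());
    }
}
```

The check passes for my class, which is why I believe the problem is not in DataPull itself.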

  • Can you please reformat this? Lines starting with at least 4 spaces will be formatted as code. Also, please include a stack trace or whatever else you get with that error message. Commented May 26, 2015 at 18:53

1 Answer


In my case, the snappy codec caused the issue. Try another codec to see whether that solves your problem.

You can edit conf/spark-defaults.conf and add for example

    spark.io.compression.codec      lzf
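If editing conf/spark-defaults.conf is not convenient, the same property can also be passed per job. A sketch, assuming a YARN submission like the one in the question; the class name and jar path are placeholders:

```shell
# Per-job equivalent of the spark-defaults.conf entry above
spark-submit \
  --conf spark.io.compression.codec=lzf \
  --class com.example.YourApp \
  your-app.jar
```

Setting it programmatically via `SparkConf.set("spark.io.compression.codec", "lzf")` before creating the context should work as well.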