
I am getting the below exception when I try to run unit tests for my Spark Streaming code with ScalaTest, using sbt on Windows.

sbt testOnly <<ClassName>>

2018-06-18 02:39:00 ERROR Executor:91 - Exception in task 1.0 in stage 3.0 (TID 11)
java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream.<init>(Ljava/io/InputStream;Z)V
    at org.apache.spark.io.LZ4CompressionCodec.compressedInputStream(CompressionCodec.scala:122)
    at org.apache.spark.serializer.SerializerManager.wrapForCompression(SerializerManager.scala:163)
    at org.apache.spark.serializer.SerializerManager.wrapStream(SerializerManager.scala:124)
    at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50)
    at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:417)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:61)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.sort_addToSorter$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at org.apache.spark.sql.execution.GroupedIterator$.apply(GroupedIterator.scala:29)
    at org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec$StateStoreUpdater.updateStateForKeysWithData(FlatMapGroupsWithStateExec.scala:176)

I have tried a couple of things to exclude the net.jpountz.lz4 jar (following suggestions from other posts), but I keep getting the same error.

I am currently using Spark 2.3, ScalaTest 3.0.5 and Scala 2.11. I only see this issue after upgrading to Spark 2.3 and ScalaTest 3.0.5.

Any suggestions?

3 Comments
  • First suggestion: please edit the title and the formatting of your question to make it more readable. Afterwards, you should probably share some lines of the code you've used Commented Jun 18, 2018 at 11:41
  • Can you post your build file? Commented Jun 27, 2018 at 22:47
  • I was getting the same error while running a job that writes Parquet output. After adding the following property it worked fine: --conf spark.io.compression.codec=snappy (see the sketch below). Commented Jan 6, 2021 at 11:11
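For reference, a minimal sketch of that workaround applied inside a ScalaTest suite. The suite name and test body are hypothetical; the only piece taken from the comment above is the spark.io.compression.codec setting, which sidesteps the lz4 classpath conflict by switching Spark's internal compression to snappy.

import org.apache.spark.sql.SparkSession
import org.scalatest.FunSuite

// Hypothetical test suite illustrating the workaround from the comment above.
class StreamingJobSpec extends FunSuite {

  test("aggregation runs without the lz4 NoSuchMethodError") {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("lz4-workaround-test")
      .config("spark.io.compression.codec", "snappy") // workaround from the comment
      .getOrCreate()

    import spark.implicits._

    // Trivial shuffle-producing query, just to exercise the compression codec.
    val counts = Seq("a", "b", "a").toDS().groupBy("value").count().collect()
    assert(counts.map(_.getLong(1)).sum == 3)

    spark.stop()
  }
}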

2 Answers


Kafka has a conflicting dependency with Spark and that's what caused this issue for me.

This is how you can exclude the dependency in your sbt build file:

lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

lazy val kafkaClients = "org.apache.kafka" % "kafka-clients" % userKafkaVersionHere excludeAll(excludeJpountz) // add more exclusions here

When you use this kafkaClients dependency, it will now exclude the problematic lz4 library.
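A fuller build.sbt sketch of how this exclusion can be wired in. The version numbers are only examples, and (as noted in the comments below) the same rule may also need to go on the Spark Kafka artifacts:

scalaVersion := "2.11.11"

val sparkVersion = "2.3.0"   // example version
val kafkaVersion = "1.1.0"   // example version

lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

libraryDependencies ++= Seq(
  // exclude the old lz4 coordinate from every artifact known to pull it in
  "org.apache.kafka"  % "kafka-clients"               % kafkaVersion excludeAll(excludeJpountz),
  "org.apache.spark" %% "spark-streaming-kafka-0-10"  % sparkVersion excludeAll(excludeJpountz),
  "org.apache.spark" %% "spark-sql-kafka-0-10"        % sparkVersion excludeAll(excludeJpountz),
  "org.scalatest"    %% "scalatest"                   % "3.0.5" % Test
)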


Update: This appears to be an issue with Kafka 0.11.x.x and earlier versions. As of 1.x.x, Kafka seems to have moved away from using the problematic net.jpountz.lz4 library. Therefore, using the latest Kafka (1.x) with the latest Spark (2.3.x) should not have this issue.


7 Comments

This solved my issue after upgrading to spark 2.3. I had to exclude it on org.apache.kafka:kafka and org.apache.spark:spark-streaming-kafka-0-10.
Thanks for the update soote, that may very well also have the same dependency.
Still not solved for me. Here are the changes I made to my build.sbt: name := "ABC123", scalaVersion := "2.11.11", val sparkVersion = "2.3.0", lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4"), libraryDependencies ++= Seq( . . "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion excludeAll(excludeJpountz), "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion excludeAll(excludeJpountz) )
The right way to solve this for complex builds is to install https://github.com/jrudolph/sbt-dependency-graph and then use whatDependsOn net.jpountz.lz4 lz4 1.3.0; it will give you a graph of all the libraries you have that depend on this. You will need to add the same exclude rule to all of those (see the sketch after these comments).
just to clarify something that initially confused us: net.jpountz.lz4:lz4 has been superseded by the artefact org.lz4:lz4-java but they kept the same package names. So you can end up with both artefacts in your packaging and if you're unlucky the ClassLoader finds the old one without the new method signature.
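A minimal sketch of the sbt-dependency-graph approach mentioned above. The plugin coordinates and version are taken as an example; check the project page for the current release:

// project/plugins.sbt
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.2")

// then, in the sbt shell:
// > whatDependsOn net.jpountz.lz4 lz4 1.3.0
// This prints the tree of dependencies that pull in the old lz4 artifact;
// each of them needs the excludeAll(excludeJpountz) rule from the answer above.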

This artifact "net.jpountz.lz4:lz4" was moved to: "org.lz4 » lz4-java"

By adding libraryDependencies += "org.lz4" % "lz4-java" % "1.7.1", the issue was resolved.
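A sketch of how this can be combined with the exclusion from the accepted answer (the Kafka artifact and versions are only examples). Since the old and new artifacts keep the same package names, it may help to both add the new coordinate and keep the superseded one off the classpath:

lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

libraryDependencies ++= Seq(
  // pull in the maintained artifact explicitly
  "org.lz4" % "lz4-java" % "1.7.1",
  // and exclude the superseded coordinate from whatever transitively depends on it
  "org.apache.kafka" % "kafka-clients" % "1.1.0" excludeAll(excludeJpountz)
)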

Comments
