
I am using Spark 1.6.0 in local mode. I have created an IPython PySpark profile so that a PySpark kernel starts in Jupyter Notebook. All of this works correctly.

I want to use the spark-csv package inside the Jupyter notebook. I tried editing the file ~/.ipython/profile_pyspark/startup/00-pyspark-setup.py and putting --packages com.databricks:spark-csv_2.11:1.4.0 after the pyspark-shell command, without success. I still get this error message:

Py4JJavaError: An error occurred while calling o22.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
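For reference, the error is raised by a plain spark-csv read in a notebook cell, along these lines (the CSV path is just an example):

    # Notebook cell that triggers the Py4JJavaError above.
    # sqlContext is the context created by the startup script;
    # the file path is only an example.
    df = (sqlContext.read
          .format('com.databricks.spark.csv')
          .option('header', 'true')
          .option('inferSchema', 'true')
          .load('data/example.csv'))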
I have also tried the solution from a related question and many others; none of them worked.

Do you have any suggestions?
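For completeness, my startup file is based on the common 00-pyspark-setup.py template and looks roughly like the sketch below; SPARK_HOME and the py4j zip name are specific to my machine, so treat the exact paths as placeholders:

    # ~/.ipython/profile_pyspark/startup/00-pyspark-setup.py (sketch)
    import os
    import sys

    # Adjust to your installation; this py4j version ships with Spark 1.6.0.
    spark_home = os.environ.get('SPARK_HOME', '/usr/local/spark')
    sys.path.insert(0, os.path.join(spark_home, 'python'))
    sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.9-src.zip'))

    # The line I edited: --packages appended after the pyspark-shell command.
    os.environ['PYSPARK_SUBMIT_ARGS'] = (
        'pyspark-shell --packages com.databricks:spark-csv_2.11:1.4.0')

    # Starts the PySpark shell, which creates sc and sqlContext.
    execfile(os.path.join(spark_home, 'python/pyspark/shell.py'))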

  • An answer there does not solve my problem; that is the reason why I opened this one. Commented Mar 18, 2016 at 7:09
  • export SPARK_OPTS='--packages com.databricks:spark-csv_2.10:1.4.0' Commented Jun 30, 2016 at 20:37

