
I would like to use Spyder with pyspark (spark-2.1.1), but I cannot fix a rather frustrating Java error. I launch Spyder from the command line on Windows 10 after activating a conda environment (the Python version is 3.5.3). This is my code:

import pyspark
sc = pyspark.SparkContext("local")
file = sc.textFile("C:/test.log")
words = file.flatMap(lambda line: line.split(" "))
words.count()

When I try to define sc, I get the following error:

  File "D:\spark-2.1.1-bin-hadoop2.7\python\pyspark\java_gateway.py", line 95, in launch_gateway
    raise Exception("Java gateway process exited before sending the driver its port number")

Exception: Java gateway process exited before sending the driver its port number

For the sake of completeness:

  1. If I run pyspark from the command line after activating the conda environment, it works and correctly performs the word count task.

  2. If I launch Spyder App Desktop from the Start Menu in Windows 10, everything works (but I think I cannot load the right Python modules from my conda environment in this case).

  3. The related environment variables seem to be OK:

    echo %SPARK_HOME%

    D:\spark-2.1.1-bin-hadoop2.7

    echo %JAVA_HOME%

    C:\Java\jdk1.8.0_121

    echo %PYTHONPATH%

    D:\spark-2.1.1-bin-hadoop2.7\python;D:\spark-2.1.1-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip; D:\spark-2.1.1-bin-hadoop2.7\python\lib; C:\Users\user\Anaconda3

I have already tried the solutions proposed here, but nothing worked for me. Any suggestion is greatly appreciated!

  • Hi @ChiaraM, can you please explain your first point in more detail? Sorry, I am new to pyspark and am trying to do the same as you, but I ran into the same error. Commented Oct 3, 2017 at 13:51

1 Answer


Since (1) works, it is probably best to use the same conda environment in Spyder.

In Preferences, go to the "Python Interpreter" section and switch from "Default (i.e. the same as Spyder's)" to "Use the following Python interpreter".

If your environment is called spark_env and Anaconda is installed under C:\Program Files\Continuum\Anaconda, the Python interpreter corresponding to this environment is C:\Program Files\Continuum\Anaconda\envs\spark_env\python.exe.

A Python console started in Spyder after this change will run in your conda environment (note that this does not apply to IPython consoles).
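
To double-check that a new console really uses that interpreter, you can print sys.executable (a quick sketch; the path in the comment assumes the spark_env example above):

    import sys
    print(sys.executable)  # should end in \envs\spark_env\python.exe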

To check the environment variables, you can run a few lines of Python to make sure your script sees the same values:

    from os import environ

    print(environ['SPARK_HOME'])
    print(environ['JAVA_HOME'])
    try:
        print(environ['PYSPARK_SUBMIT_ARGS'])
    except KeyError:
        print("no problem with PYSPARK_SUBMIT_ARGS")  # https://github.com/ContinuumIO/anaconda-issues/issues/1276#issuecomment-277355043

Hope that helps.


Comments

Thank you very much, your suggestion solved the issue.
Hi @daten-kieker, I got the same error as the question owner and tried what you described, but I am still getting the same error. Can you please help me resolve it? :(
Hi @JKC, are your setup and the results of the steps ChiaraM described identical? What is the result of the little test script?
Yes @daten-kieker, I got "no problem with PYSPARK_SUBMIT_ARGS" when I ran the test script
@JKC what is the output of the first lines (environment variables SPARK_HOME and JAVA_HOME)?
