
I have a class named some_class in a Python file here:

/some-folder/app/bin/file.py

I am importing it to my code here:

/some-folder2/app/code/file2.py

with:

import sys
sys.path.append('/some-folder/app/bin')
from file import some_class

clss = some_class()

I want to use this class's method some_function inside Spark's map:

sc.parallelize(some_data_iterator).map(lambda x: clss.some_function(x))

This gives me an error:

No module named file

Calling clss.some_function works fine outside of PySpark's map function, i.e. when called normally, but it fails inside the RDD operation. I think this has something to do with PySpark, but I have no idea where I am going wrong.

I tried broadcasting this class and it still didn't work.

1 Answer

All Python dependencies have to either be present on the search path of the worker nodes or be distributed manually using the SparkContext.addPyFile method, so something like this should do the trick:

sc.addPyFile("/some-folder/app/bin/file.py")

It will copy the file to all the workers and place it in their working directory.
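A runnable sketch of the mechanism, using local stand-ins for the question's paths (the module is renamed from file to mymod, and the Spark call is shown as a comment so the snippet runs without a cluster). addPyFile puts the file where each worker's import machinery can find it, which is what the driver-side sys.path.append only did locally:

```python
import os
import sys
import tempfile

# Local stand-in for /some-folder/app/bin/file.py from the question.
bin_dir = tempfile.mkdtemp()
with open(os.path.join(bin_dir, "mymod.py"), "w") as f:
    f.write(
        "class some_class:\n"
        "    def some_function(self, x):\n"
        "        return x * 2\n"
    )

# On the driver this append is enough; the workers never ran it and do
# not have the file at all, hence "No module named file". addPyFile
# fixes that by shipping the file to every worker:
sys.path.append(bin_dir)
# sc.addPyFile(os.path.join(bin_dir, "mymod.py"))

from mymod import some_class

print(some_class().some_function(21))  # 42
```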

On a side note, please don't use file as a module name, even if it is only an example. Shadowing built-in names in Python is not a good idea.
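To illustrate the shadowing problem (using the builtin list here, since file was a builtin only in Python 2):

```python
# Rebinding a builtin name makes later calls to it fail confusingly:
list = [1, 2, 3]
try:
    list("abc")  # the builtin list is no longer reachable by name
except TypeError as e:
    print("error:", e)

del list  # remove the shadowing binding; the builtin is visible again
print(list("ab"))  # ['a', 'b']
```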


2 Comments

Is there a way to add a folder to the path instead of one single file?
addPyFile can take a zip file. You can zip your entire source tree, then add it with addPyFile.
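A runnable sketch of that suggestion, with hypothetical module names (helpers, double) and the Spark call commented out. Python can import straight from a zip archive on sys.path, which is why addPyFile accepts zip files:

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny "source tree" with one module in it.
src = tempfile.mkdtemp()
with open(os.path.join(src, "helpers.py"), "w") as f:
    f.write("def double(x):\n    return x * 2\n")

# Zip the whole tree, keeping module paths relative to the zip root.
zip_path = os.path.join(tempfile.mkdtemp(), "deps.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    for name in os.listdir(src):
        zf.write(os.path.join(src, name), arcname=name)

# sc.addPyFile(zip_path)  # ships the zip to every worker's path

# Locally, the same import works by putting the zip on sys.path:
sys.path.insert(0, zip_path)
from helpers import double

print(double(21))  # 42
```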
