I am a beginner in Spark and I am trying to create a DataFrame from the contents of a JSON file using PySpark, following this guide: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html#overview
However, whenever I execute this command (with either a relative or an absolute path):
df = sqlContext.read.json("examples/src/main/resources/people.json")
it always fails with the error:
java.io.IOException: No input paths specified in job
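In case it helps with diagnosis, here is a minimal check I ran on the driver (my own debugging sketch, assuming the file is expected on the local filesystem) to see where the relative path actually resolves, since it is interpreted against the current working directory rather than the Spark installation directory:

```python
import os

path = "examples/src/main/resources/people.json"

# The relative path is resolved against the current working directory,
# not against SPARK_HOME, so print both to compare.
print(os.getcwd())
print(os.path.abspath(path))

# If this prints False, Spark has nothing to read, which would
# explain "No input paths specified in job".
print(os.path.exists(path))
```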
What is the cause of this issue, or is there some Spark configuration I have missed? I am using Spark 1.6.1 and Python 2.7.6.