I use MS Windows 7.
Initially, I tried a program in Scala on Spark 1.6 and it worked fine (there I got the SparkContext object as sc automatically).
When I tried Spark 2.2, I did not get sc automatically, so I created one with the following steps:
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val sc = new SparkConf().setAppName("myname").setMaster("mast")
new SparkContext(sc)
Now, when I try to execute the parallelize method below, it gives me an error:
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
Error:
Value parallelize is not a member of org.apache.spark.SparkConf
I followed these steps from the official documentation only. Can anybody explain where I went wrong? Thanks in advance. :)
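For reference, a minimal sketch of what a hand-built context would typically look like, keeping the placeholder app name and master URL from above ("mast" stands in for a real master URL):

import org.apache.spark.{SparkConf, SparkContext}

// The SparkConf only holds settings; parallelize() lives on SparkContext,
// so the context itself is what should be bound to sc.
val conf = new SparkConf().setAppName("myname").setMaster("mast")
val sc = new SparkContext(conf)

val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)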

You use spark-shell, don't you? Have you defined HADOOP_HOME and/or saved winutils.exe in $HADOOP_HOME/bin?
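If it helps, a quick way to check from the Scala REPL whether those are visible to the running shell (plain Scala, nothing Spark-specific; the layout assumed is %HADOOP_HOME%\bin\winutils.exe):

// Is HADOOP_HOME set in the environment the shell sees?
println(sys.env.get("HADOOP_HOME"))   // e.g. Some(C:\hadoop) or None

// Is winutils.exe actually present under %HADOOP_HOME%\bin?
val winutils = sys.env.get("HADOOP_HOME").map(h => new java.io.File(h, "bin\\winutils.exe"))
println(winutils.map(_.exists()))     // Some(true) means the file was found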