I am trying to normalize the values of multiple columns in a Spark DataFrame by subtracting the mean and dividing by the standard deviation of each column. Here's the code I have so far:
from pyspark.sql import Row
from pyspark.sql.functions import stddev_pop, avg
df = spark.createDataFrame([Row(A=1, B=6), Row(A=2, B=7), Row(A=3, B=8),
                            Row(A=4, B=9), Row(A=5, B=10)])
exprs = [x - (avg(x)) / stddev_pop(x) for x in df.columns]
df.select(exprs).show()
Which gives me the result:
+------------------------------+------------------------------+
|(A - (avg(A) / stddev_pop(A)))|(B - (avg(B) / stddev_pop(B)))|
+------------------------------+------------------------------+
| null| null|
+------------------------------+------------------------------+
Whereas what I'm hoping for is:
+------------------------------+------------------------------+
|(A - (avg(A) / stddev_pop(A)))|(B - (avg(B) / stddev_pop(B)))|
+------------------------------+------------------------------+
| -1.414213562| -1.414213562|
| -0.707106781| -0.707106781|
| 0| 0|
| 0.707106781| 0.707106781|
| 1.414213562| 1.414213562|
+------------------------------+------------------------------+
I believe I can do this with the StandardScaler class from MLlib, but I'd prefer to do it using only the DataFrame API if possible, if only as a learning exercise.
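To be concrete, something in the spirit of the following sketch is what I'm after. This is only a guess at the shape of a DataFrame-only solution, using window functions to broadcast the per-column aggregates to every row; I haven't confirmed that an empty Window.partitionBy() is the idiomatic way to get a whole-frame window:

from pyspark.sql import Window
from pyspark.sql.functions import avg, col, stddev_pop

# An empty partitionBy() puts every row into one global window,
# so avg/stddev_pop are computed over the entire column.
w = Window.partitionBy()
exprs = [((col(c) - avg(c).over(w)) / stddev_pop(c).over(w)).alias(c)
         for c in df.columns]
df.select(exprs).show()

With only five rows, pulling everything into a single window partition is harmless, but I suspect it wouldn't scale; if there's a better pattern (e.g. computing the aggregates once and joining them back), I'd like to see that too.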