I have a dataframe in Spark. I want to get all the column names into one column (as key) and all the values into another column (as value), grouped by Id.
val df = spark.createDataFrame(Seq(
  ("101", "FIXED", "2000-01-01", "null", "null", ".0125484200"),
  ("102", "VARRY", "2018-09-14", "4.3980", "0.0", ".3518450000"),
  ("103", "FIXED", "2001-02-01", "null", "null", ".0000023564"),
  ("103", "FIXED", "2011-02-23", "4.83", "2414.6887", ".0020154800"),
  ("104", "FIXED", "2000-01-01", "null", "null", ".0215487400")
)).toDF("Id", "type", "datecol", "value1", "value2", "finalvalue")
df.show
+---+------+-----------+-------+----------+------------+
| Id| type| datecol| value1| value2| finalvalue|
+---+------+-----------+-------+----------+------------+
|101| FIXED| 2000-01-01| null| null| .0125484200|
|102| VARRY| 2018-09-14| 4.3980| 0.0| .3518450000|
|103| FIXED| 2001-02-01| null| null| .0000023564|
|103| FIXED| 2011-02-23| 4.83| 2414.6887| .0020154800|
|104| FIXED| 2000-01-01| null| null| .0215487400|
+---+------+-----------+-------+----------+------------+
I need to convert the dataframe to the format below:
+---+-----------+------------+
| Id| key | value |
+---+-----------+------------+
|101| type | FIXED|
|101| datecol | 2000-01-01|
|101| value1 | null|
|101| value2 | null|
|101| finalvalue| .0125484200|
|102| type | VARRY|
|102| datecol | 2018-09-14|
|102| value1 | 4.3980|
|102| value2 | 0.0|
|102| finalvalue| .3518450000|
|103| type | FIXED|
|103| datecol | 2001-02-01|
|103| value1 | null|
|103| value2 | null|
|103| finalvalue| .0000023564|
|103| type | FIXED|
|103| datecol | 2011-02-23|
|103| value1 | 4.83|
|103| value2 | 2414.6887|
|103| finalvalue| .0020154800|
|104| type | FIXED|
|104| datecol | 2000-01-01|
|104| value1 | null|
|104| value2 | null|
|104| finalvalue| .0215487400|
+---+-----------+------------+
Any suggestions would be helpful. Thanks!
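
For reference, here is a minimal sketch of one way to do this (assuming Spark 2.x and the column names from the example above): build a (key, value) struct for every non-Id column, collect the structs into an array, and explode the array into one row per column. Every value is cast to string so the array elements share a single type.

import org.apache.spark.sql.functions._

// All columns except the grouping column become key/value pairs.
val kvCols = df.columns.filter(_ != "Id")

// One struct per column, gathered into an array and exploded
// so that each (key, value) struct becomes its own row.
val unpivoted = df.select(
  col("Id"),
  explode(array(kvCols.map(c =>
    struct(lit(c).alias("key"), col(c).cast("string").alias("value"))
  ): _*)).alias("kv")
).select(col("Id"), col("kv.key").alias("key"), col("kv.value").alias("value"))

unpivoted.show(false)

The SQL stack function should express the same unpivot more compactly, at the cost of listing every column by hand:

df.selectExpr("Id",
  "stack(5, 'type', type, 'datecol', datecol, 'value1', value1, " +
  "'value2', value2, 'finalvalue', finalvalue) as (key, value)")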