I want to select only the columns that contain values other than NULL. Suppose I have a table abc:
SnapshotDate CreationDate Country Region CloseDate Probability BookingAmount RevenueAmount SnapshotDate1 CreationDate1 CloseDate1
null         null         null    null   null      25          882000        0             null          null          null
null         null         null    null   null      25          882000        0             null          null          null
null         null         null    null   null      0           882000        0             null          null          null
...          (the remaining 17 rows are identical to the previous one)
Then I would want to select only the Probability, BookingAmount and RevenueAmount columns, since they contain some values other than null, and ignore the rest of the columns, which contain only null values.
I tried doing something like select * from abc where SnapshotDate is not null and CreationDate is not null and ..., but that does not help: the where clause filters rows, not columns (and my original != null comparison never matches anything anyway, since NULL has to be tested with IS NOT NULL).
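For reference, this is roughly what my attempt looked like (column list abbreviated; the query runs, but it drops rows instead of columns):

```scala
// my failed attempt: IS NOT NULL in a WHERE clause filters ROWS,
// so for the sample data above this returns zero rows
// instead of dropping the all-null COLUMNS
val attempt = sqlContext.sql(
  "select * from abc " +
  "where SnapshotDate is not null and CreationDate is not null") // ... and so on
```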
NOTE: Is there something like sqlContext.sql("select case when col1 is null then <don't select it> else <select it> end from abc")?
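In other words, for the sample data above I want to end up with what this hand-written query returns, except with the column list worked out automatically:

```scala
// the desired result for the sample table, column list hand-written;
// I want Spark to derive this list (non-null columns only) for me
val wanted = sqlContext.sql("select Probability, BookingAmount, RevenueAmount from abc")
wanted.show()
```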
I am using Spark 1.6.1.
Is there a way to do that?
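The closest I have come is counting the non-null values per column and then selecting only the columns with a positive count; a rough Scala sketch, assuming the table is available as a DataFrame (df is just my name here):

```scala
import org.apache.spark.sql.functions.{col, count}

val df = sqlContext.table("abc")

// count(column) skips nulls, so a per-column count tells us
// which columns hold at least one non-null value
val counts = df.columns.map(c => count(col(c)).alias(c))
val firstRow = df.agg(counts.head, counts.tail: _*).first()

// keep only the columns whose non-null count is positive
val keep = df.columns.zipWithIndex
  .collect { case (c, i) if firstRow.getLong(i) > 0 => c }

df.select(keep.map(col): _*).show()
```

This works, but it scans the whole table once just to discover the columns, so I am wondering whether there is something more direct.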
Thanks in advance