My DataFrame table contains rows such as
['row1', 'col_1', 'col_2', 'col_3', ..., 'col_N', 'alpha']
N (the number of data columns, i.e. all columns except the first and the last) is relatively large.
Now I need to create another DataFrame from this one by multiplying each of the columns named col_i by the alpha column. Is there a smarter way than writing the multiplication manually for each column, as in:
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)
# assumes the DataFrame has been registered as a temporary table named `table`
sqlc.sql('SELECT col_1 * alpha, col_2 * alpha, ..., col_N * alpha FROM table')
So I'd like to know whether it's possible to apply the same operation to every column without writing it out explicitly for each one.
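One way to avoid writing each term by hand is to build the SELECT expression programmatically from the list of column names. Below is a minimal sketch; the helper `scaled_select`, the column prefix `col_`, and the table name `table` are assumptions for illustration, not part of any Spark API:

```python
def scaled_select(columns, scale_col='alpha', table='table'):
    """Build a SQL query multiplying every column in `columns` by `scale_col`.

    `scaled_select` is a hypothetical helper; it only assembles a string,
    so it works with any SQL engine, Spark included.
    """
    terms = ', '.join(f'{c} * {scale_col} AS {c}' for c in columns)
    return f'SELECT {terms} FROM {table}'

# With a Spark DataFrame `df`, the column list could come from df.columns:
#   cols = [c for c in df.columns if c.startswith('col_')]
#   sqlc.sql(scaled_select(cols))
query = scaled_select(['col_1', 'col_2'])
# query == 'SELECT col_1 * alpha AS col_1, col_2 * alpha AS col_2 FROM table'
```

The same idea works without SQL strings via the DataFrame API, e.g. `df.select([(df[c] * df['alpha']).alias(c) for c in cols])`, which builds the list of product columns in a comprehension instead of concatenating text.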