I have a CSV file in the following format:
key_string,query
abc,"select * from abc"
pqr,"select * from pqr"
xyz,"select * from xyz"
These tables are in Hive. I want to create a dataframe for each row, e.g. abc_df, pqr_df, and so on. I may add more queries to the CSV in the future. How can I create multiple dataframes in PySpark using a for loop or some other technique? I tried the following code, but it isn't working (df here is the dataframe I got from reading the CSV above):
x = ""
y = []
for i in df.rdd.collect():
    x = i[0] + "_df"
    x = spark.sql(i[1])
    y.append(x)
print(y)
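From what I can tell, the loop assigns the name string to x and then immediately overwrites x with the query result, so the names are lost. A dict keyed by key_string seems like the right shape. Here is a minimal runnable sketch of that idea, with rows and run_query as stand-ins for df.collect() and spark.sql (those stand-ins are mine, not from real Spark code), is this the right direction?

```python
# Stand-ins so the pattern runs without a Spark cluster:
# `rows` plays the role of df.collect(), `run_query` of spark.sql.
rows = [("abc", "select * from abc"),
        ("pqr", "select * from pqr"),
        ("xyz", "select * from xyz")]

def run_query(query):
    # Placeholder for spark.sql(query); returns a dummy result.
    return f"<dataframe for {query!r}>"

# Keep name -> dataframe pairs in a dict instead of reusing one
# variable, so rows added to the CSV later are picked up automatically.
dfs = {key + "_df": run_query(query) for key, query in rows}

print(sorted(dfs))  # ['abc_df', 'pqr_df', 'xyz_df']
```

In real PySpark this would presumably become dfs[row["key_string"] + "_df"] = spark.sql(row["query"]) inside a loop over df.collect(), and each dataframe would be looked up as dfs["abc_df"] rather than through a separate variable.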
Please suggest next steps.