I have two dataframes as below:

df1

column1 column2 column3
abc     021     abc456
def     456     xyz098

df2

ref column1 column2 column3
A   1       0       1
B   0       0       1

I want to set each df1 column to null wherever the corresponding flag in the df2 row with ref value A is zero:

out_df_refA

column1 column2 column3
abc     Null    abc456
def     Null    xyz098

Similarly for the df2 row with ref value B:

out_df_refB

column1 column2 column3
Null    Null    abc456
Null    Null    xyz098
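
For reproducibility, here is a minimal sketch of how the sample dataframes could be created (assuming an existing SparkSession named spark; df1.column2 is kept as a string so the leading zero in 021 is preserved):

# Sample data matching the tables above (assumes a SparkSession named spark)
df1 = spark.createDataFrame(
    [('abc', '021', 'abc456'), ('def', '456', 'xyz098')],
    ['column1', 'column2', 'column3']
)
df2 = spark.createDataFrame(
    [('A', 1, 0, 1), ('B', 0, 0, 1)],
    ['ref', 'column1', 'column2', 'column3']
)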

1 Answer


You can cross join df1 to the filtered df2 and use when to keep a value from df1 only where the corresponding flag is not equal to 0. Since the filtered df2 contains a single row, the cross join does not change the number of rows in df1.

import pyspark.sql.functions as F

# Keep the df2 row for ref 'A', cross join its flags onto every df1 row,
# then null out each column whose flag is 0.
out_df_refA = (df1.alias('df1')
    .crossJoin(df2.filter("ref = 'A'").drop('ref').alias('df2'))
    .select(*[F.when(F.col('df2.' + c) != 0, F.col('df1.' + c)).alias(c) for c in df1.columns])
)

out_df_refA.show()
+-------+-------+-------+
|column1|column2|column3|
+-------+-------+-------+
|    abc|   null| abc456|
|    def|   null| xyz098|
+-------+-------+-------+

The same approach works for the row with ref value B:

out_df_refB = (df1.alias('df1')
    .crossJoin(df2.filter("ref = 'B'").drop('ref').alias('df2'))
    .select(*[F.when(F.col('df2.' + c) != 0, F.col('df1.' + c)).alias(c) for c in df1.columns])
)
out_df_refB.show()
+-------+-------+-------+
|column1|column2|column3|
+-------+-------+-------+
|   null|   null| abc456|
|   null|   null| xyz098|
+-------+-------+-------+
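
If df2 has many ref rows, a small loop avoids repeating the snippet per ref. A minimal sketch, assuming df2 is small enough to collect to the driver and has exactly one row per ref value:

import pyspark.sql.functions as F

# Collect the flag rows (df2 is tiny) and build one masked dataframe per ref value.
masked = {}
for row in df2.collect():
    masked[row['ref']] = df1.select(
        *[
            # keep the df1 value when the flag is non-zero, otherwise a typed null
            (F.col(c) if row[c] != 0 else F.lit(None).cast(df1.schema[c].dataType)).alias(c)
            for c in df1.columns
        ]
    )

masked['A'].show()  # same result as out_df_refA
masked['B'].show()  # same result as out_df_refB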