
I need help converting multiple rows into a single row by keys. Any advice on using group by would be appreciated. I am using PySpark version 2.

l = [(1, 1, '', 'add1'),
     (1, 1, 'name1', ''),
     (1, 2, '', 'add2'),
     (1, 2, 'name2', ''),
     (2, 1, '', 'add21'),
     (2, 1, 'name21', ''),
     (2, 2, '', 'add22'),
     (2, 2, 'name22', '')]

df = sqlContext.createDataFrame(l, ['Key1', 'Key2','Name', 'Address'])
df.show()
+----+----+------+-------+
|Key1|Key2|  Name|Address|
+----+----+------+-------+
|   1|   1|      |   add1|
|   1|   1| name1|       |
|   1|   2|      |   add2|
|   1|   2| name2|       |
|   2|   1|      |  add21|
|   2|   1|name21|       |
|   2|   2|      |  add22|
|   2|   2|name22|       |
+----+----+------+-------+

I am stuck; the output I am looking for is:

+----+----+------+-------+
|Key1|Key2|  Name|Address|
+----+----+------+-------+
|   1|   1| name1|   add1|
|   1|   2| name2|   add2|
|   2|   1|name21|  add21|
|   2|   2|name22|  add22|
+----+----+------+-------+

1 Answer

Group by Key1 and Key2, and take the maximum value of Name and Address in each group; since the missing values are empty strings, the non-empty value sorts higher and is what max returns:

import pyspark.sql.functions as F

df.groupBy(['Key1', 'Key2']).agg(
    F.max(df.Name).alias('Name'), 
    F.max(df.Address).alias('Address')
).show()
+----+----+------+-------+
|Key1|Key2|  Name|Address|
+----+----+------+-------+
|   1|   1| name1|   add1|
|   2|   2|name22|  add22|
|   1|   2| name2|   add2|
|   2|   1|name21|  add21|
+----+----+------+-------+
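The row order coming out of groupBy is not guaranteed; adding an orderBy on the keys gives the layout shown in the question. As a further sketch (assuming the blanks really are empty strings rather than NULLs), you could also replace empty strings with nulls and keep the first non-null value per group, which avoids relying on how strings compare:

import pyspark.sql.functions as F

# Sort the grouped result by the keys to match the desired layout.
df.groupBy('Key1', 'Key2').agg(
    F.max('Name').alias('Name'),
    F.max('Address').alias('Address')
).orderBy('Key1', 'Key2').show()

# Alternative sketch: treat '' as null and take the first non-null value
# per group instead of relying on max over strings.
cleaned = df.select(
    'Key1', 'Key2',
    F.when(F.col('Name') != '', F.col('Name')).alias('Name'),
    F.when(F.col('Address') != '', F.col('Address')).alias('Address'),
)
cleaned.groupBy('Key1', 'Key2').agg(
    F.first('Name', ignorenulls=True).alias('Name'),
    F.first('Address', ignorenulls=True).alias('Address'),
).orderBy('Key1', 'Key2').show()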

1 Comment

I appreciate your response. Thank you!
