More efficient way of managing PySpark DataFrames?

I managed to write a small script using PySpark that retrieves and organizes data from a large .xml file. Since I'm new to PySpark, I'm wondering whether there is a better way to write the following code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import monotonically_increasing_id

### xml file from https://wit3.fbk.eu/
spark = SparkSession.builder.getOrCreate()
df = (spark.read.format("com.databricks.spark.xml")
      .option("rowTag", "transcription")
      .load('ted_en-20160408.xml'))

# Pull the two fields of the seekvideo structs out as separate frames.
df_values = df.select("seekvideo._VALUE")
df_id = df.select("seekvideo._id")

# Give each half a synthetic row id so the halves can be joined back together.
df_values = df_values.withColumn("id", monotonically_increasing_id())
df_id = df_id.withColumn("id", monotonically_increasing_id())
result = df_values.join(df_id, "id", "outer").drop("id")

# Collect everything into a local pandas DataFrame.
answer = result.toPandas()

# Map each talk's position to (time offset, subtitle) pairs, skipping
# talks with no seekvideo data. (On Python 3, zip would need list(...)
# to materialize the pairs.)
transcription = dict()
for talk in range(len(answer)):
    if not answer._id.iloc[talk]:
        continue
    transcription[talk] = zip(answer._id.iloc[talk], answer._VALUE.iloc[talk])
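
For reviewers' convenience, the nesting is perhaps easiest to see via printSchema; the output below is reconstructed from the schema shown next (the nullability flags are my assumption):

df.printSchema()
# root
#  |-- _corrupt_record: string (nullable = true)
#  |-- seekvideo: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- _VALUE: string (nullable = true)
#  |    |    |-- _id: long (nullable = true)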

where df is of the form:

DataFrame[_corrupt_record: string, seekvideo: array<struct<_VALUE:string,_id:bigint>>]

and transcription is a dictionary mapping each TED Talk's position to its transcription. For example, transcription[0] is of the form:

[(800, u'When I moved to Harare in 1985,'),
 (4120,
  u"social justice was at the core of Zimbabwe's national health policy."),
 (8920, u'The new government emerged from a long war of independence'),
 (12640, u'and immediately proclaimed a socialist agenda:'),
 (15480, u'health care services, primary education'),
...
]
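
In case a concrete alternative helps the critique: since _id and _VALUE sit in the same seekvideo structs, I suspect both arrays could be selected in one go, avoiding the monotonically_increasing_id join entirely. A rough sketch of that idea (untested against the real file):

# Untested sketch: selecting both struct fields at once keeps the two
# arrays aligned within each row, so no synthetic join key is needed.
pairs = df.select("seekvideo._id", "seekvideo._VALUE").toPandas()

transcription = {
    talk: zip(pairs._id.iloc[talk], pairs._VALUE.iloc[talk])
    for talk in range(len(pairs))
    if pairs._id.iloc[talk]  # skip talks with no seekvideo data
}

If that is indeed equivalent, it would also sidestep the question of whether monotonically_increasing_id is guaranteed to assign matching ids to the two halves.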

I would greatly appreciate any suggestions. Thank you!
