41

I want to convert the string variable below to a DataFrame in Spark:

val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""

I know how to create a DataFrame from a JSON file:

sqlContext.read.json("file.json")

but I don't know how to create a DataFrame from a string variable.

How can I convert a JSON string variable to a DataFrame?

1 Comment

val df = spark.read.json(spark.createDataset(jsonStr :: Nil)) Commented Feb 23, 2023 at 20:16

8 Answers

76

For Spark 2.2+:

import spark.implicits._
val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
val df = spark.read.json(Seq(jsonStr).toDS)

For Spark 2.1.x:

val events = sc.parallelize("""{"action":"create","timestamp":"2016-01-07T00:01:17Z"}""" :: Nil)    
val df = sqlContext.read.json(events)

Hint: this uses the sqlContext.read.json(jsonRDD: RDD[String]) overload. There is also sqlContext.read.json(path: String), which reads a JSON file directly.

For older versions:

val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
val rdd = sc.parallelize(Seq(jsonStr))
val df = sqlContext.read.json(rdd)
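Whichever variant you use, it can help to confirm what Spark inferred; a minimal sketch (the schema below assumes Spark's default inference, which maps JSON integers to long):

df.printSchema()
// root
//  |-- metadata: struct (nullable = true)
//  |    |-- key: long (nullable = true)
//  |    |-- value: long (nullable = true)

// Nested fields are addressable with dot notation
// ($ requires spark.implicits._, or sqlContext.implicits._ on 1.x)
df.select($"metadata.key", $"metadata.value").show()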

7 Comments

Out of date; see the answer below for Spark 2.2.
Is there a solution for Python?
@Rohan yeah, just remove the "val" keywords, and it's basically Python.
@Jean how do you convert a JSON string (not a JSON file) to a DataFrame in Spark with Java?
@devinbost Seq is also not Python.
17

Since the function for reading JSON from an RDD was deprecated in Spark 2.2, this is another option:

val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
import spark.implicits._ // spark is your SparkSession object
val df = spark.read.json(Seq(jsonStr).toDS)
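If you don't already have a SparkSession in scope, a minimal local setup looks like this (the app name and master are placeholders):

import org.apache.spark.sql.SparkSession

// "local[*]" runs Spark on all local cores; the app name is arbitrary
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("jsonStringToDf")
  .getOrCreate()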


7

Here is an example of how to convert a JSON string to a DataFrame in Java (Spark 2.2+):

import java.util.ArrayList;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

String str1 = "{\"_id\":\"123\",\"ITEM\":\"Item 1\",\"CUSTOMER\":\"Billy\",\"AMOUNT\":285.2}";
String str2 = "{\"_id\":\"124\",\"ITEM\":\"Item 2\",\"CUSTOMER\":\"Sam\",\"AMOUNT\":245.85}";
List<String> jsonList = new ArrayList<>();
jsonList.add(str1);
jsonList.add(str2);
SparkContext sparkContext = new SparkContext(new SparkConf()
        .setAppName("myApp").setMaster("local"));
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkContext);
SQLContext sqlContext = new SQLContext(sparkContext);
// Each RDD element is one JSON document; Spark infers the schema
JavaRDD<String> javaRdd = javaSparkContext.parallelize(jsonList);
Dataset<Row> data = sqlContext.read().json(javaRdd);
data.show();

Here is the result:

+------+--------+------+---+
|AMOUNT|CUSTOMER|  ITEM|_id|
+------+--------+------+---+
| 285.2|   Billy|Item 1|123|
|245.85|     Sam|Item 2|124|
+------+--------+------+---+


4

In PySpark:

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'
rddjson = sc.parallelize([simple_json])
df = sqlContext.read.json(rddjson)

Reference: https://stackoverflow.com/a/49399359/2187751


3

To convert a list of JSON strings into a DataFrame in Spark 2.2:

val spark = SparkSession
          .builder()
          .master("local")
          .appName("Test")
          .getOrCreate()

val jsonString1 = """{"ID" : "111","NAME":"Arkay","LOC":"Pune"}"""
val jsonString2 = """{"ID" : "222","NAME":"DineshS","LOC":"PCMC"}"""
val strList = List(jsonString1, jsonString2)

val rddData = spark.sparkContext.parallelize(strList)
val resultDF = spark.read.json(rddData)
resultDF.show()

Result:

+---+----+-------+
| ID| LOC|   NAME|
+---+----+-------+
|111|Pune|  Arkay|
|222|PCMC|DineshS|
+---+----+-------+
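On Spark 2.2+ you can also skip the RDD and read the list as a Dataset[String]; a short sketch, assuming spark.implicits._ is imported:

import spark.implicits._

// List[String] -> Dataset[String] -> DataFrame
val resultDF2 = spark.read.json(strList.toDS)
resultDF2.show()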


3

You can now read JSON directly from a Dataset[String]: https://spark.apache.org/docs/latest/sql-data-sources-json.html

val otherPeopleDataset = spark.createDataset(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val otherPeople = spark.read.json(otherPeopleDataset)
otherPeople.show()
// +---------------+----+
// |        address|name|
// +---------------+----+
// |[Columbus,Ohio]| Yin|
// +---------------+----+
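The nested address struct can then be queried with dot notation, or expanded into one column per field with a star; a short sketch (assuming spark.implicits._ for the $ syntax):

import spark.implicits._

otherPeople.select($"name", $"address.city").show()
// "address.*" expands the struct into separate city and state columns
otherPeople.select($"address.*").show()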

1 Comment

How to do this in Python?

2

In some cases you may hit an error like Illegal pattern component: XXX; to fix it, pass a timestamp format via .option on spark.read. The updated code:

val spark = SparkSession
          .builder()
          .master("local")
          .appName("Test")
          .getOrCreate()
import spark.implicits._
val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
val df = spark.read.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ").json(Seq(jsonStr).toDS)
df.show()
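Alternatively, you can sidestep inference problems entirely by supplying an explicit schema; a sketch for the metadata example, reusing the spark.implicits._ import above (the field types are an assumption about the data):

import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("metadata", StructType(Seq(
    StructField("key", LongType),
    StructField("value", LongType)
  )))
))
// With an explicit schema, Spark skips inference (and its format guessing)
val dfTyped = spark.read.schema(schema).json(Seq(jsonStr).toDS)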


0
val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""

val df = spark.read.json(spark.createDataset(jsonStr :: Nil)) 

display(df.withColumn("key",$"metadata.key").withColumn("value",$"metadata.value"))
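Note that display is available in Databricks notebooks; in plain Spark use .show() instead. To pull out every nested field at once, you can also expand the struct with a star (a short sketch):

df.select($"metadata.*").show()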

1 Comment

While this code may solve the question, including an explanation of how and why it solves the problem would help improve the quality of your post and probably result in more upvotes. Remember that you are answering the question for future readers, not just the person asking now. Please edit your answer to add an explanation and indicate what limitations and assumptions apply.
