
I have the following RDD:

1:AAAAABAAAAABAAAAABAAAAAB
2:BBAAAAAAAAAABBAAAAAAAAAA

Every character is an event. My desired output is the number of occurrences per group of events. For this first example, the output should be:

{ "A" -> 6 , "B" -> 6 }

With my code, I get the desired output:

val rdd = sqlContext.sparkContext.makeRDD(Seq(
  "1:AAAAABAAAAABAAAAABAAAAAB", "2:BBAAAAAAAAAABBAAAAAAAAAA"))
val rddSplited = rdd.map(_.split(":")(1).toList)
val values = scala.collection.mutable.Map[String, Long]()
var iteracion = 0
for (ocurrences <- rddSplited) {
  var previousVal = "0"
  for (listValues <- ocurrences) {
    if (listValues.toString != previousVal) {
      values.get(listValues.toString) match {
        case Some(e) => values.update(listValues.toString, e + 1)
        case None => values.put(listValues.toString, 1)
      }
      previousVal = listValues.toString()
    }
  }
  //println(values)  // prints the values
}
println(values)  // prints an empty Map

The problem is that the

println(values)

doesn't print any data, but if I move it to where the commented-out println is, the Map does contain values.

How can I return the final values of the map after the main for loop?

Sorry if my implementation is not the best one, I'm new to this Scala/Spark world.

Thanks in advance.

I'm editing the question to explain better what I'm trying to achieve. The code provided in the answers (thanks for all your help) does not return the desired output. I'm not trying to count the number of events; what I need is to count the number of occurrences when an event changes to another, i.e.:

    AAAAABAAAAABAAAAABAAAAAB  =>  A-> 4 , B-> 4
    BBAAAAAAAAAABBAAAAAAAAAA  =>  A-> 2 , B-> 2

So the final output should be  A-> 6 , B-> 6

I'm really sorry for the misunderstanding.
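For reference, a minimal sketch of one way to get those transition counts (this is an assumption about the intended logic, not code from the question: it keeps only the first character of each run and then sums per character with reduceByKey):

val rdd = sqlContext.sparkContext.makeRDD(Seq(
  "1:AAAAABAAAAABAAAAABAAAAAB", "2:BBAAAAAAAAAABBAAAAAAAAAA"))

val transitionCounts = rdd
  .map(_.split(":")(1))                      // drop the "1:" / "2:" prefix
  .flatMap { events =>
    // keep only the characters that start a new run: index 0, or a character
    // that differs from the one before it
    events.zipWithIndex.collect {
      case (c, i) if i == 0 || c != events(i - 1) => (c.toString, 1L)
    }
  }
  .reduceByKey(_ + _)                        // sum the run starts per character
  .collectAsMap()                            // bring the result back to the driver

println(transitionCounts)                    // Map(A -> 6, B -> 6)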

1 Comment

Even in a single JVM, Scala code like that (such mutation/side effects) would not be recommended, so with Spark ...

2 Answers


Seems like you're trying to achieve your result in a very Java-like way. I've written a program in a functional Scala style that does exactly what you want, as follows:

val rdd = sqlContext.sparkContext.makeRDD(Seq("1:AAAAABAAAAABAAAAABAAAAAB","2:BBAAAAAAAAAABBAAAAAAAAAA"))

rdd.foreach { elem =>
  val splitted = elem.split(":")
  // drop the "1:" / "2:" prefix and count every occurrence of each character
  val out: Seq[Map[Char, Int]] = splitted
    .tail
    .toSeq
    .map(_.groupBy(c => c).map { case (key, values) => key -> values.length })
  println(out)
}
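For the sample input, each record should print a one-element sequence along the lines of Map(A -> 20, B -> 4) on the executor that processes it, i.e. the count of every occurrence of each character, which is not the same as the transition counts described in the question edit.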

1 Comment

I've added new details to the question, trying to explain myself better.

There are multiple problems with your code (mutable state, lazy transformations), try this:

val rdd = ss.sparkContext.makeRDD(Seq("1:AAAAABAAAAABAAAAABAAAAAB","2:BBAAAAAAAAAABBAAAAAAAAAA"))

rdd.foreach { record =>
  val Array(_, events) = record.split(":")                      // split off the "1:" / "2:" prefix
  val eventCount = events.groupBy(identity).mapValues(_.size)   // occurrences per character
  println(eventCount)
}

Note that you won't see the println output when you use map instead of foreach (map is lazy). Also, the println goes to the stdout of the worker nodes of your cluster; you will only see it if you run Spark in local mode.
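If you want the final counts back on the driver rather than printed on the executors, one possible variation (a sketch along the same lines, not part of the original answer) keeps everything as transformations and collects at the end:

val totals = rdd
  .map(_.split(":")(1))                             // drop the "1:" / "2:" prefix
  .flatMap(_.groupBy(identity).mapValues(_.size))   // per-record (Char, Int) pairs
  .reduceByKey(_ + _)                               // combine the counts of all records
  .collectAsMap()                                   // result lives on the driver

println(totals)  // Map(A -> 40, B -> 8) for the sample input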

1 Comment

I've added new details to the question, trying to explain myself better.
