A Scalable Apache Spark Solution to the Big Data Secondary Sort Problem – Part 2
In Part 1, we discussed the Spark solution to the secondary sort problem for larger data sets. Now let's take a deep dive into Choice #2.
Choice #2:
Choice #2 fits when the data set is smaller: read and buffer all of the values for a given key in an Array or List data structure, and then do an in-reducer sort on the values. This solution works only when the values for each reducer key fit in memory.
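The heart of this choice is the per-key buffer-and-sort. Here is a minimal sketch of that idea, assuming the values are held as scala.Tuple2<Integer, Integer> pairs of (time, value); the helper class and method names are hypothetical, not code from the post:

```java
import java.util.ArrayList;
import java.util.List;

import scala.Tuple2;

// Hypothetical helper: buffer one key's (time, value) pairs and sort them in memory.
// This only works when all of the values for a single key fit in the reducer's memory.
class InReducerSort {
    static List<Tuple2<Integer, Integer>> sortByTime(Iterable<Tuple2<Integer, Integer>> values) {
        List<Tuple2<Integer, Integer>> buffered = new ArrayList<>();
        values.forEach(buffered::add);                       // buffer all values for the key
        buffered.sort((a, b) -> a._1().compareTo(b._1()));   // in-memory sort by the time component
        return buffered;
    }
}
```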
To solve this problem, let's take time series data as input, as shown below.
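A small hypothetical sample in the detail,when,value form (keys and numbers invented purely for illustration):

```
x,2,9
y,2,5
x,1,3
y,1,7
x,3,6
z,1,4
z,2,8
z,3,7
```

For key x, for example, the final output should order the (when, value) pairs by time: (1,3), (2,9), (3,6).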
Let us solve this problem using a single Java class; Apache Spark's powerful, high-level API makes that possible. The Spark API is built upon the basic abstraction of the resilient distributed dataset (RDD). To fully leverage the Spark API, we first have to understand what an RDD is: an RDD object represents an immutable, partitioned collection of elements that can be operated on in parallel. The RDD<T> class provides the basic operations available on all RDDs, such as map(), filter(), reduce(), and persist(), while JavaPairRDD<K,V> adds key-value operations such as groupByKey() and join(); mapToPair() and flatMapToPair() turn a JavaRDD into a JavaPairRDD.
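As a rough sketch of how these types chain together for our problem (the class and method names here are illustrative assumptions, and records are assumed to be comma-separated):

```java
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;

import scala.Tuple2;

// Type flow used below:
// JavaRDD<String> --mapToPair()--> JavaPairRDD<K, V> --groupByKey()--> JavaPairRDD<K, Iterable<V>>
class RddTypeFlow {
    static JavaPairRDD<String, Iterable<Tuple2<Integer, Integer>>> groupByDetail(JavaRDD<String> lines) {
        JavaPairRDD<String, Tuple2<Integer, Integer>> pairs = lines.mapToPair(s -> {
            String[] tokens = s.split(",");                  // detail,when,value
            return new Tuple2<>(tokens[0],
                    new Tuple2<>(Integer.parseInt(tokens[1]), Integer.parseInt(tokens[2])));
        });
        return pairs.groupByKey();                           // buffer all (when, value) pairs per key
    }
}
```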
Let us solve it step by step; a complete single-class sketch follows the list.
1. Import the required Spark Java classes:
   - JavaRDDLike
   - JavaDoubleRDD
   - JavaPairRDD
   - JavaRDD
   - JavaSparkContext
   - StorageLevels
2. Pass the input data path as an argument and validate it.
3. Connect to the Spark master by creating a JavaSparkContext.
4. Read the input into a JavaRDD<String>; each element will be a record of time series data of the form <Detail><,><When><,><Value>.
5. Create key-value pairs from the JavaRDD<String>.
6. Collect all values from the JavaPairRDD<> and print them.
7. Group the JavaPairRDD<> elements by key (detail).
8. To validate step 7, collect all values from the JavaPairRDD<String, Iterable<Tuple2<Integer, Integer>>> and print them.
9. Sort the reducer's values to get the final output by applying mapValues() with a function that sorts the buffered values in memory; only the values are sorted, the key remains the same.
10. To validate the final result, collect all values from the sorted JavaPairRDD<> and print them.
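Putting the ten steps together, a minimal single-class sketch might look like the following; the class name, argument handling, and parsing details are illustrative assumptions rather than the post's original code:

```java
// A sketch only: class name, argument handling, and parsing are illustrative assumptions.
// Step 1: import the Spark Java classes this sketch actually needs.
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SecondarySortInMemory {

    public static void main(String[] args) {
        // Step 2: pass the input path as an argument and validate it.
        if (args.length < 1) {
            System.err.println("Usage: SecondarySortInMemory <input-path>");
            System.exit(1);
        }
        String inputPath = args[0];

        // Step 3: connect to the Spark master by creating a JavaSparkContext.
        SparkConf conf = new SparkConf().setAppName("SecondarySortInMemory");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Step 4: read the input; each element is a record of the form detail,when,value.
        JavaRDD<String> lines = sc.textFile(inputPath);

        // Step 5: create (detail, (when, value)) key-value pairs from the JavaRDD<String>.
        JavaPairRDD<String, Tuple2<Integer, Integer>> pairs = lines.mapToPair(s -> {
            String[] tokens = s.split(",");
            return new Tuple2<>(tokens[0],
                    new Tuple2<>(Integer.parseInt(tokens[1]), Integer.parseInt(tokens[2])));
        });

        // Step 6: collect and print the pairs to validate them.
        for (Tuple2<String, Tuple2<Integer, Integer>> t : pairs.collect()) {
            System.out.println(t._1() + " -> " + t._2());
        }

        // Step 7: group the elements by key (detail); all of a key's values are buffered.
        JavaPairRDD<String, Iterable<Tuple2<Integer, Integer>>> grouped = pairs.groupByKey();

        // Step 8: collect and print the grouped values to validate step 7.
        for (Tuple2<String, Iterable<Tuple2<Integer, Integer>>> t : grouped.collect()) {
            System.out.println(t._1() + " -> " + t._2());
        }

        // Step 9: sort each key's buffered values in memory by time; the key is unchanged.
        JavaPairRDD<String, List<Tuple2<Integer, Integer>>> sorted = grouped.mapValues(values -> {
            List<Tuple2<Integer, Integer>> list = new ArrayList<>();
            values.forEach(list::add);
            list.sort((a, b) -> a._1().compareTo(b._1()));
            return list;
        });

        // Step 10: collect and print the final, sorted output.
        for (Tuple2<String, List<Tuple2<Integer, Integer>>> t : sorted.collect()) {
            System.out.println(t._1() + " -> " + t._2());
        }

        sc.stop();
    }
}
```

Run it with spark-submit (which supplies the master URL and the input path argument); for the hypothetical sample above, each key's (when, value) pairs come back ordered by time while the keys themselves stay unchanged.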
Reference: Data Algorithms by Mahmoud Parsian, and the Big Data community.
Interesting? Please subscribe to our blogs at www.dataottam.com to stay up to date on Big Data, Analytics, and IoT.
And as always, please feel free to send suggestions or comments to coffee@dataottam.com.