Scalable Apache Spark Solution to the Big Data Secondary Sort Problem – Part 1
In the Big Data era, the secondary sort problem is about sorting the values associated with a key in the reduce phase; the technique is often referred to as value-to-key conversion. Secondary sorting lets us sort, in ascending or descending order, the values passed to each reducer. In the Big Data world, secondary sort can be solved with either the MapReduce or the Spark framework; in this post we will use Apache Spark.
With that quick look at keys and secondary sort, we have at least two choices for solving the secondary sort problem in Apache Spark:
Choice #1:
If we have a bigger dataset, we can use the Spark framework itself to sort the reducer values, which removes the need for an in-reducer sort of the values passed to each reducer. In this approach we create a composite key by adding part of, or the entire, value to the natural key to achieve our secondary sort. This choice is the scalable solution because we are not limited by the memory of a single commodity server; a sketch follows below.
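To make Choice #1 concrete, here is a minimal sketch in Java of the composite-key approach. It assumes a Spark release that ships repartitionAndSortWithinPartitions() (that operator arrived after 1.1.0), a comma-separated input file of name,time,value lines, and illustrative class, field, and file names such as SecondarySortByCompositeKey and timeseries.txt.

```java
import java.io.Serializable;
import java.util.Comparator;

import org.apache.spark.Partitioner;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SecondarySortByCompositeKey {

  // Partition only by the natural key (the detail name), so every record
  // for one name lands in the same partition.
  static class NaturalKeyPartitioner extends Partitioner {
    private final int partitions;
    NaturalKeyPartitioner(int partitions) { this.partitions = partitions; }
    @Override public int numPartitions() { return partitions; }
    @Override public int getPartition(Object key) {
      Tuple2<String, Integer> composite = (Tuple2<String, Integer>) key;
      return (composite._1.hashCode() & Integer.MAX_VALUE) % partitions;
    }
  }

  // Order composite keys by name first, then by time, so the shuffle itself
  // delivers each name's values sorted by time.
  static class CompositeKeyComparator
      implements Comparator<Tuple2<String, Integer>>, Serializable {
    @Override public int compare(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
      int byName = a._1.compareTo(b._1);
      return (byName != 0) ? byName : Integer.compare(a._2, b._2);
    }
  }

  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext("local[2]", "secondary-sort-composite-key");

    // Each input line is assumed to be: name,time,value (e.g. "x,2,9").
    JavaRDD<String> lines = sc.textFile("timeseries.txt");

    // Value-to-key conversion: promote time into the key -> ((name, time), value).
    JavaPairRDD<Tuple2<String, Integer>, Integer> byCompositeKey = lines.mapToPair(line -> {
      String[] t = line.split(",");
      Tuple2<String, Integer> compositeKey = new Tuple2<>(t[0], Integer.valueOf(t[1]));
      return new Tuple2<>(compositeKey, Integer.valueOf(t[2]));
    });

    // The framework sorts during the shuffle; no in-reducer sort is required.
    JavaPairRDD<Tuple2<String, Integer>, Integer> sorted =
        byCompositeKey.repartitionAndSortWithinPartitions(
            new NaturalKeyPartitioner(2), new CompositeKeyComparator());

    // Within each partition, a name's values now appear in time order.
    sorted.collect().forEach(kv -> System.out.println(kv._1._1 + " -> " + kv._2));

    sc.stop();
  }
}
```

The key design point is that the partitioner looks only at the natural key (the name) while the comparator orders the full composite key (name, time), so the values never have to be buffered and sorted in memory.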
Choice #2:
If we have a smaller dataset, this choice will fit: read and buffer all of the values for a given key in an Array or List data structure, and then do an in-reducer sort on the values. This solution works as long as the values for each reducer key fit in memory; a sketch follows below.
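For contrast, here is a minimal sketch in Java of Choice #2, using groupByKey() followed by an in-memory sort of each key's values. It assumes the same name,time,value input format and illustrative names, and it only works while every key's value list fits in a reducer's memory.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SecondarySortInMemory {

  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext("local[2]", "secondary-sort-in-memory");

    // Each input line is assumed to be: name,time,value (e.g. "x,2,9").
    JavaRDD<String> lines = sc.textFile("timeseries.txt");

    // Keep the natural key as-is: name -> (time, value).
    JavaPairRDD<String, Tuple2<Integer, Integer>> byName = lines.mapToPair(line -> {
      String[] t = line.split(",");
      Tuple2<Integer, Integer> timeAndValue =
          new Tuple2<>(Integer.valueOf(t[1]), Integer.valueOf(t[2]));
      return new Tuple2<>(t[0], timeAndValue);
    });

    // Buffer all (time, value) pairs of a key in memory, sort them by time,
    // and keep just the values. This is the in-reducer sort.
    JavaPairRDD<String, List<Integer>> sortedValues = byName.groupByKey().mapValues(pairs -> {
      List<Tuple2<Integer, Integer>> buffer = new ArrayList<>();
      pairs.forEach(buffer::add);
      buffer.sort((a, b) -> Integer.compare(a._1, b._1));   // sort by time
      List<Integer> values = new ArrayList<>();
      for (Tuple2<Integer, Integer> p : buffer) {
        values.add(p._2);
      }
      return values;
    });

    // Expected shape of the result: name => [values sorted by time].
    sortedValues.collect().forEach(kv -> System.out.println(kv._1 + " => " + kv._2));

    sc.stop();
  }
}
```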
To solve this problem, let's take time series data as input, where each record is a (detail, time, value) triple; for every detail name we want to emit its values sorted by time.
And the secondary sort output should be:
Detail => values sorted by time
X => [3, 9, 6]
Y => [7, 5, 1]
Z => [4, 8, 7, 0]
P => [9, 6, 7, 0, 3]
Today we can run our Spark application in three different modes: standalone mode (the default setup), YARN client mode, and YARN cluster mode.
It is challenging to get a secondary sort solution with Spark 1.1.0, because its shuffle is hash based, which is quite different from MapReduce's sort-based shuffle, so we would have to implement the sorting explicitly with an RDD operator. If we had a partitioner by the natural key (detail, the name field in the time series data) that preserved the order of the RDD, that would be a feasible solution. There is a partitioner, represented by the abstract class org.apache.spark.Partitioner, but it does not preserve the order of the original RDD elements. Therefore, Choice #1 cannot be implemented directly with the current version of Spark (1.1.0); hopefully it will be achievable by Spark 2.0.
Hence we might extend the JavaPairRDD class and add additional methods such as groupByKeyAndSortValues().
The second choice, however, can be achieved easily in Spark; it will be discussed in detail in the Part 2 blog.
Reference – Data Algorithms by Mahmoud Parsian, and the Big Data community.
Interesting? Please subscribe to our blogs at www.dataottam.com to stay current on Big Data, Analytics, and IoT.
And as always, please feel free to send suggestions or comments to coffee@dataottam.com.