Implicit Parallelism through Deep Language Embedding

Alexander Alexandrov; Asterios Katsifodimos; Georgi Krastev; Volker Markl

In: SIGMOD Record: Special Issue on 2015 ACM SIGMOD Research Highlights, Vol. 45, No. 1, Pages 51-58, ACM, March 2016.


Parallel collection processing based on second-order functions such as map and reduce has been widely adopted for scalable data analysis. Initially popularized by Google, over the past decade this programming paradigm has found its way into the core APIs of parallel dataflow engines such as Hadoop’s MapReduce, Spark’s RDDs, and Flink’s DataSets. We review programming patterns typical of these APIs and discuss how they relate to the underlying parallel execution model. We argue that fixing the abstraction leaks exposed by these patterns will reduce the cost of data analysis through improved programmer productivity. To achieve that, we first revisit the algebraic foundations of parallel collection processing. Based on that, we propose a simplified API that (i) provides proper support for nested collection processing and (ii) alleviates the need for certain second-order primitives through comprehensions – a declarative syntax akin to SQL. Finally, we present a metaprogramming pipeline that performs algebraic rewrites and physical optimizations, allowing us to target parallel dataflow engines like Spark and Flink with competitive performance.
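As a rough illustration of the contrast the abstract draws – chained second-order functions versus a declarative comprehension – the sketch below writes the same word count both ways. It is in Python for brevity (the paper's actual embedding is in Scala, and the dataflow names in the comments are only analogues of the Spark/Flink operators):

```python
from functools import reduce

# A nested collection: a list of documents, each a list of words.
docs = [["a", "b", "a"], ["b", "c"]]

# Style 1: explicit second-order functions, in the spirit of RDD/DataSet APIs.
flat = reduce(lambda acc, d: acc + d, docs, [])   # flatMap analogue
pairs = list(map(lambda w: (w, 1), flat))         # map analogue
counts1 = {}
for w, n in pairs:                                # reduceByKey analogue
    counts1[w] = counts1.get(w, 0) + n

# Style 2: a declarative comprehension over the nested collection,
# closer to the SQL-like syntax the paper advocates.
counts2 = {w: sum(1 for d in docs for x in d if x == w)
           for d in docs for w in d}
```

Both compute the same result; the comprehension states *what* is counted and leaves the choice of physical operators (and their parallelization) to the compiler, which is the abstraction-leak fix the paper argues for.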

Deutsches Forschungszentrum für Künstliche Intelligenz
German Research Center for Artificial Intelligence