How Data Volume Affects Spark Based Data Analytics on a Scale-up Server
2015 (English). In: Big Data Benchmarks, Performance Optimization, and Emerging Hardware: 6th Workshop, BPOE 2015, Kohala, HI, USA, August 31 - September 4, 2015, Revised Selected Papers. Springer, 2015, Vol. 9495, pp. 81-92. Conference paper (Refereed)
The sheer increase in the volume of data over the last decade has triggered research into cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark is gaining popularity for exhibiting superior scale-out performance on commodity machines, the impact of data volume on the performance of Spark based data analytics in a scale-up configuration is not well understood. We present a deep-dive analysis of Spark based applications on a large scale-up server machine. Our analysis reveals that Spark based data analytics are DRAM bound and do not benefit from using more than 12 cores for an executor. As input data size grows, application performance degrades significantly due to a substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). We match memory behaviour with the garbage collector to improve the performance of applications by 1.6x to 3x.
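The abstract's two tuning levers, capping executor cores and matching the garbage collector to the workload's memory behaviour, map onto standard Spark configuration properties. The sketch below uses real property names (`spark.executor.cores`, `spark.executor.memory`, `spark.executor.extraJavaOptions`), but the specific values and GC flags are illustrative assumptions, not the paper's measured settings.

```properties
# spark-defaults.conf (illustrative sketch, not the paper's exact configuration)

# Cap executor cores at 12: the analysis finds no benefit beyond that.
spark.executor.cores             12

# Give the executor a large heap, since the workloads are DRAM bound
# (the 24g figure is an assumption for illustration).
spark.executor.memory            24g

# Match the garbage collector to the workload's memory behaviour,
# e.g. choose the collector and generation sizing to fit object lifetimes.
spark.executor.extraJavaOptions  -XX:+UseParallelGC -XX:NewRatio=1
```

In practice, the right collector and heap sizing depend on the application's allocation pattern, which is the point of the paper's memory-behaviour analysis.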
Place, publisher, year, edition, pages
Springer, 2015. Vol. 9495, pp. 81-92
Series: Lecture Notes in Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-181325
DOI: 10.1007/978-3-319-29006-5_7
Scopus ID: 2-s2.0-84958073801
ISBN: 978-3-319-29005-8
OAI: oai:DiVA.org:kth-181325
DiVA: diva2:899225
6th International Workshop on Big Data Benchmarks, Performance Optimization and Emerging Hardware (BPOE), held in conjunction with the 41st International Conference on Very Large Data Bases (VLDB), Kohala, HI, USA, August 31 - September 4, 2015
QC 20160224. Available from: 2016-02-01. Created: 2016-02-01. Last updated: 2016-04-25. Bibliographically approved