Ghaffari, A., Chechina, N., Trinder, P. and Meredith, J. (2013) Scalable Persistent Storage for Erlang. In: Twelfth ACM SIGPLAN Workshop on Erlang, Boston, MA, USA, 25-27 Sep 2013, pp. 73-74. ISBN 9781450323857 (doi: 10.1145/2505305.2505315)
Abstract
The many-core revolution makes scalability a key property. The RELEASE project aims to improve the scalability of Erlang on emergent commodity architectures with 100,000 cores. Such architectures require scalable and available persistent storage on up to 100 hosts. We enumerate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DataBase Management Systems (DBMSs) against these requirements. This analysis shows that Mnesia and CouchDB are not suitable as persistent storage at our target scale, but Dynamo-like NoSQL DBMSs such as Cassandra and Riak potentially are. We investigate the current scalability limits of the Riak 1.1.1 NoSQL DBMS in practice on a 100-node cluster. We establish for the first time scientifically the scalability limit of Riak as 60 nodes on the Kalkyl cluster, thereby confirming developer folklore. We show that resources like memory, disk, and network do not limit the scalability of Riak. By instrumenting Erlang/OTP and Riak libraries we identify a specific Riak functionality that limits scalability. We outline how later releases of Riak are refactored to eliminate the scalability bottlenecks. We conclude that Dynamo-style NoSQL DBMSs provide scalable and available persistent storage for Erlang in general, and for our RELEASE target architecture in particular.
Item Type: | Conference Proceedings |
---|---|
Status: | Published |
Refereed: | Yes |
Glasgow Author(s) Enlighten ID: | Chechina, Dr Natalia and Ghaffari, Mr Amir and Trinder, Professor Phil |
Authors: | Ghaffari, A., Chechina, N., Trinder, P., and Meredith, J. |
College/School: | College of Science and Engineering > School of Computing Science |
ISBN: | 9781450323857 |
Copyright Holders: | Copyright © 2013 ACM |
Publisher Policy: | Reproduced in accordance with the copyright policy of the publisher |