Overview

Alluxio holds a unique place in the big data ecosystem, residing between storage systems (such as Amazon S3, Apache HDFS or OpenStack Swift) and computation frameworks and applications (such as Apache Spark or Hadoop MapReduce) to provide a central point of access with a memory-centric design. Alluxio works best when computation frameworks and distributed storage are decoupled and Alluxio is deployed alongside a cluster’s computation framework.

For user applications and computation frameworks, Alluxio is the storage layer underneath, usually collocated with the computation frameworks. This lets Alluxio provide fast storage and facilitate data sharing and locality between jobs, regardless of whether they run on the same computation engine. As a result, Alluxio serves data at memory speed when the data is local, and at the computation cluster's network speed when the data is elsewhere in Alluxio. Data is read from the under storage system only the first time it is accessed, so data access is accelerated even when access to the under storage is slow.

For under storage systems, Alluxio bridges the gap between big data applications and traditional storage systems, and expands the set of workloads available to utilize the data. Since Alluxio hides the integration of under storage systems from applications, any under storage can back all the applications and frameworks running on top of Alluxio. Also, when mounting multiple under storage systems simultaneously, Alluxio can serve as a unifying layer for any number of varied data sources.
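This unifying layer is set up by mounting each under storage system at a directory in the Alluxio namespace. A hedged sketch using the Alluxio command-line interface is shown below; the bucket, paths, and credentials are hypothetical and require a running Alluxio cluster.

```shell
# Attach an HDFS directory and an S3 bucket under one Alluxio namespace.
# Applications then see both through a single filesystem tree.
./bin/alluxio fs mount /mnt/hdfs hdfs://namenode:9000/data
./bin/alluxio fs mount /mnt/s3 s3://example-bucket/data \
    --option s3a.accessKeyId=<ACCESS_KEY> \
    --option s3a.secretKey=<SECRET_KEY>
```

After mounting, a job reading `/mnt/hdfs/...` and `/mnt/s3/...` uses the same Alluxio API for both, without knowing which storage system backs each path.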


architecture-overview.png

Components

Alluxio’s design uses a single leader master and multiple workers. At a high level, Alluxio can be divided into three components: the master, workers, and clients. The master and workers together make up the Alluxio servers, which are the components a system admin would maintain and manage. Clients are used by applications, such as Spark or MapReduce jobs, the Alluxio command-line interface, or the FUSE layer, to communicate with the Alluxio servers.

Master

The Alluxio master service can be deployed as one leader master and several standby masters for fault tolerance. When the leader master goes down, a standby master is elected to become the new leader.
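A fault-tolerant deployment is typically configured in `conf/alluxio-site.properties` on every master and worker. The sketch below assumes ZooKeeper is used for leader election; the hostnames and journal path are hypothetical.

```properties
# Enable leader election among masters via ZooKeeper (hypothetical addresses).
alluxio.zookeeper.enabled=true
alluxio.zookeeper.address=zk1:2181,zk2:2181,zk3:2181

# The journal must live on shared, persistent storage so that a newly
# elected leader can recover the previous leader's state.
alluxio.master.journal.folder=hdfs://namenode:9000/alluxio/journal
```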


architecture-master.png

Leader Master

There is only one leader master in an Alluxio cluster. The leader master is responsible for managing the global metadata of the system: the file system metadata (e.g. the namespace tree), the block metadata (e.g. the block worker locations), and the worker metadata. Alluxio clients interact with the leader master to read or modify this metadata. In addition, all workers periodically send heartbeat information to the leader master to maintain their participation in the cluster. The leader master does not initiate communication with other components; it only responds to requests via RPC services. Additionally, the leader master writes file system journals to distributed persistent storage to allow recovery of master state information.

Standby Master

The standby master replays journals written by the leader master, periodically compacts the journal entries, and writes checkpoints for faster recovery in the future. It does not process any requests from any Alluxio components.

Worker

Alluxio workers are responsible for managing user-configurable local resources allocated to Alluxio (e.g. memory, SSDs, HDDs). Alluxio workers store data as blocks and serve client requests that read or write data by reading or creating new blocks within their local resources. The worker is only responsible for the data within these blocks; the actual mapping from file to blocks is stored only by the master.

Also, Alluxio workers perform data operations on the under store (e.g. data transfer or under store metadata operations). This brings two important benefits: data read from the under store can be stored in the worker and made immediately available to other clients, and the client can remain lightweight and independent of the under storage connectors.

Because RAM usually offers limited capacity, blocks in a worker can be evicted when space is full. Workers employ eviction policies to decide which data to keep in the Alluxio space. For more on this topic, please check out the documentation for Tiered Storage.
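Worker storage tiers are declared in `conf/alluxio-site.properties`. The fragment below is a hedged sketch of a two-tier layout (memory over SSD); the paths are hypothetical and the exact property set may vary by Alluxio version.

```properties
# Two storage tiers: blocks are placed in MEM first and can be
# evicted to the SSD tier when the memory tier fills up.
alluxio.worker.tieredstore.levels=2
alluxio.worker.tieredstore.level0.alias=MEM
alluxio.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
alluxio.worker.tieredstore.level1.alias=SSD
alluxio.worker.tieredstore.level1.dirs.path=/mnt/ssd
```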


architecture-overview.png

Client

The Alluxio client provides users a gateway to interact with the Alluxio servers. It initiates communication with the leader master to carry out metadata operations and with workers to read and write data stored in Alluxio. Alluxio provides a native filesystem API in Java and supports additional client interfaces, including a REST API and Go and Python clients. In addition, Alluxio provides APIs compatible with the HDFS API and the Amazon S3 API.
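As a sketch of the native Java API, the following writes and reads a file through the client. It assumes a running Alluxio cluster and the Alluxio client library on the classpath; the file path is hypothetical.

```java
import alluxio.AlluxioURI;
import alluxio.client.file.FileInStream;
import alluxio.client.file.FileOutStream;
import alluxio.client.file.FileSystem;

public class AlluxioClientExample {
  public static void main(String[] args) throws Exception {
    // Entry point to the client: connects to the leader master for metadata.
    FileSystem fs = FileSystem.Factory.get();

    // Create a file: metadata goes through the master, bytes stream to a worker.
    AlluxioURI path = new AlluxioURI("/example/greeting.txt"); // hypothetical path
    try (FileOutStream out = fs.createFile(path)) {
      out.write("hello, alluxio".getBytes());
    }

    // Read it back: metadata from the master, data served by a worker.
    byte[] buf = new byte[32];
    try (FileInStream in = fs.openFile(path)) {
      int n = in.read(buf);
      System.out.println(new String(buf, 0, n));
    }
  }
}
```

Because the client only talks to the master for metadata and to workers for data, reads of locally cached blocks avoid the under storage entirely.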
