ExaStor

ExaStor is high-performance scale-out storage optimized for AI/ML workloads.
The Lustre-based parallel distributed file system is suited to high-density,
large-scale storage workloads that demand extreme I/O performance and scalability,
such as AI/ML and big data analytics.

Challenges for modern high-performance workloads

The storage bottleneck issue
Storage can easily become the main bottleneck when it is not prepared for fast-growing AI/ML workloads
Escalating storage costs
AI/ML workflows require tremendous amounts of storage space and network traffic, which drives up operational costs
Increasing operational overhead
To fully support the dynamic needs of AI applications, storage must provide transparent monitoring and insights throughout the pipeline

Exascale storage for
high-performance workloads

High-performance Parallel File System
Provides high-speed parallel I/O to all data in the storage cluster
NVIDIA® GPUDirect® Storage Support
Supports NVIDIA GPUDirect Storage for direct memory access between GPUs and storage
Storage Types for All Scenarios

Leverage all-NVMe flash or SSD/HDD hybrid configurations

Flexible Deployment and Scalability

Choose the deployment and scaling model that suits your environment

Up to 40GB/s per Node

Delivers 40GB/s sequential read throughput per node with an all-NVMe storage system

RDMA-powered File Service

Supports the NFS over RDMA protocol on InfiniBand EDR and HDR networks

System Architecture

High-performance distributed
file storage optimized for AI/ML workloads

High-performance Parallel File System

Allows parallel data access through the Lustre parallel file system, delivering high-speed file sharing and petabyte-level scalability through its parallel I/O architecture. Files are distributed and stored as objects across the cluster under a single global namespace.
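The distribution described above can be sketched as a toy model: a file is cut into fixed-size stripes that are placed round-robin across the cluster's storage targets, so a large read or write engages many targets in parallel. The function name, stripe size, and target count below are illustrative assumptions, not Lustre's actual API or on-disk format.

```python
# Toy round-robin striping, analogous to how a parallel file system such
# as Lustre spreads a file's data across object storage targets (OSTs).
# Illustrative sketch only, not ExaStor's or Lustre's implementation.

def stripe_file(data: bytes, stripe_size: int, num_targets: int):
    """Split `data` into fixed-size stripes assigned round-robin to targets."""
    targets = [[] for _ in range(num_targets)]
    for i in range(0, len(data), stripe_size):
        stripe = data[i:i + stripe_size]
        targets[(i // stripe_size) % num_targets].append(stripe)
    return targets

# A 10-byte "file" with a 3-byte stripe size over 2 targets:
layout = stripe_file(b"0123456789", stripe_size=3, num_targets=2)
print(layout)  # [[b'012', b'678'], [b'345', b'9']]
```

Because consecutive stripes land on different targets, a sequential read of the whole file can fetch from all targets at once, which is the source of the aggregate throughput scaling.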

NVIDIA® GPUDirect® Storage Support

Provides a storage interface that accelerates GPU applications while minimizing CPU and memory load. Maximizes GPU performance by providing a distributed file system I/O driver module for applications running on GPGPUs.

Flash-based Level 2 Cache

Can leverage SSD drives as a secondary read cache that extends the main memory cache. When main memory is full, the system serves read requests from the allocated cache drive instead of the slower hard drives.

All-NVMe Lineup

Integrated Dual-controller Architecture

Integrated Single-controller Architecture

Hybrid Lineup

Integrated Dual-controller Architecture

Separated Single-controller Architecture

Are you interested in our products?
Get in touch with Gluesys.