// Clustered Filesystem IN ALPHA

[ FabriCFS ]

One filesystem.
Any workload. Zero compromise.

A shared clustered filesystem built for the most demanding workloads on the planet. NVMe-oF backend. POSIX-compliant. File-level locking across every node. Whether you're training LLMs, running FinTech pipelines, or managing a KVM VM fleet — FabriCFS mounts the same volume everywhere simultaneously.

NVMe-oF storage backend
POSIX compliant
N-way concurrent mounts
HA auto failover

Three workloads. One filesystem.

FabriCFS was designed from the ground up to serve the most demanding storage workloads simultaneously — without compromise on any of them.

GPU Clusters · LLM Training

AI & Machine Learning

Feed GPU clusters at full NVMe-oF speed. All training nodes mount the same dataset volume simultaneously — no data synchronization lag, no bottlenecked NFS gateway, no per-node copies.

  • Sub-millisecond latency for checkpoint writes
  • Shared dataset volume across all GPU nodes
  • No per-node dataset copies — save terabytes
  • Compatible with PyTorch, JAX, and all major ML frameworks
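Checkpoint writes on a shared volume can follow the standard POSIX write-then-rename pattern, so readers on other nodes never see a half-written file. A minimal sketch; the directory below is a local stand-in for an assumed FabriCFS mount point (e.g. /mnt/fabricfs), and all paths and filenames are illustrative:

```shell
# Sketch: crash-consistent checkpoint writes on a shared dataset volume.
# /tmp/fabricfs_demo stands in for a FabriCFS mount; names are placeholders.
CKPT_DIR=/tmp/fabricfs_demo
mkdir -p "$CKPT_DIR"

# Write to a temp file first, then rename. rename() is atomic on a POSIX
# filesystem, so concurrent readers see either the old or the new
# checkpoint, never a partial one.
printf 'step=1000 loss=0.42\n' > "$CKPT_DIR/ckpt.tmp"
mv "$CKPT_DIR/ckpt.tmp" "$CKPT_DIR/ckpt.latest"
```

Because every GPU node mounts the same volume, the renamed checkpoint is immediately visible cluster-wide with no copy step.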

Low-Latency · High-Frequency

FinTech & Trading

Consistent, microsecond-scale filesystem access for trading systems and financial pipelines. File-level locking ensures concurrent readers and writers never corrupt shared state — even at HFT speeds.

  • POSIX file-level locking for concurrent write safety
  • Deterministic I/O latency via NVMe-oF fabric
  • Active-active nodes — no single point of contention
  • Audit-compliant access logging built in
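The concurrent-write safety above is ordinary POSIX advisory locking, which FabriCFS enforces cluster-wide through its distributed lock manager. A sketch of the pattern using the stock flock(1) utility, demonstrated here on a local file with placeholder names:

```shell
# Sketch: two writers appending to a shared ledger under an exclusive
# advisory lock. On FabriCFS the same flock(2) lock would coordinate
# writers on different nodes; here we show the pattern locally.
LEDGER=/tmp/fabricfs_ledger.txt
: > "$LEDGER"

append_order() {
  (
    flock -x 9                    # block until the exclusive lock is held
    echo "order $1" >> "$LEDGER"  # critical section: append one record
  ) 9>"$LEDGER.lock"
}

append_order 1 &
append_order 2 &
wait
```

Each writer serializes through the lock file, so both records land intact regardless of scheduling order.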

KVM · QEMU · Hypervisors

VM Fleets

Share VM disk volumes across hypervisor nodes with full POSIX semantics. Live migration, shared storage pools, and SAN backends — all without a dedicated SAN head or expensive proprietary appliances.

  • Shared disk pools for QEMU/KVM live migration
  • Consumes existing SAN and block storage as backends
  • No dedicated SAN head required
  • High availability with automatic node failover
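When both hypervisors mount the same FabriCFS volume, live migration reduces to a single command with no disk copy. An illustrative fragment, not runnable outside a configured cluster; the VM name and hostname are placeholders, not FabriCFS-specific syntax:

```shell
# Illustrative fragment: both hypervisors mount the same shared volume
# (e.g. at /var/lib/libvirt/images), so only VM memory state moves.
# "vm01" and "hv2.example.com" are placeholder names.
virsh migrate --live --persistent vm01 qemu+ssh://hv2.example.com/system
```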

NVMe-oF fabric. Clustered from the ground up.

FabriCFS is not a wrapper around NFS or CIFS. It's a purpose-built clustered filesystem with a distributed metadata layer and NVMe-oF data plane — designed for bare-metal performance at cluster scale.

NVMe-oF Data Plane

Data blocks are served directly over the NVMe-oF fabric (TCP or RDMA), avoiding protocol-translation gateways to deliver near-bare-metal I/O throughput with sub-millisecond access times.
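As a rough sketch of what attaching a client to an NVMe-oF data plane looks like with stock nvme-cli tooling; the address, port, and NQN below are assumptions for illustration, not documented FabriCFS defaults:

```shell
# Illustrative fragment: connect to an NVMe-oF target over TCP using
# standard nvme-cli. Address, port, and NQN are placeholders.
sudo modprobe nvme-tcp
sudo nvme connect --transport tcp --traddr 10.0.0.10 --trsvcid 4420 \
     --nqn nqn.2024-01.io.fabricfs:vol.datasets
# sudo nvme list   # the new namespace appears as a local /dev/nvmeXnY device
```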

Distributed Metadata

Metadata is distributed across all nodes using a consensus protocol. No single metadata server bottleneck — reads and writes are load-balanced automatically across the cluster.

File-Level Locking

POSIX advisory and mandatory locks are enforced cluster-wide. Concurrent readers and writers across any node coordinate through a distributed lock manager — transparent to applications.

SAN Backend

Consume existing SAN, iSCSI, or FC block storage as the FabriCFS data tier. No rip-and-replace — bring your existing storage investment and get clustered FS semantics on top.

Active-Active Nodes

All nodes serve reads and writes simultaneously — no primary/replica split. Client I/O is distributed across all available nodes for linear throughput scaling as the cluster grows.

Automatic Failover

Node failures are detected and rerouted in milliseconds. Clients reconnect automatically with no manual intervention — the filesystem stays online through hardware failure or planned maintenance.

Built on open standards

Filesystem

  • POSIX compliant
  • File-level locking
  • Extended attributes
  • Symbolic links

Storage backends

  • NVMe-oF (TCP)
  • NVMe-oF (RDMA)
  • iSCSI
  • Fibre Channel
  • Local NVMe

Client access

  • Linux kernel module
  • FUSE (userspace)
  • NFS re-export
  • SMB gateway

Workload compatibility

  • PyTorch / JAX
  • QEMU / KVM
  • Kubernetes CSI
  • Databases (read-only HA)
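Mounting through the Linux kernel client might look like the following fragment. The `fabricfs` filesystem type, server address, and paths are assumptions about an alpha product, not documented syntax:

```shell
# Illustrative fragment: mount a shared volume on each node via the
# kernel client. All names below are placeholders.
sudo mount -t fabricfs cluster.example.com:/vol/datasets /mnt/datasets
```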

Get on the alpha list

FabriCFS is in early alpha. We're working with select teams running GPU clusters, FinTech pipelines, and large VM fleets. Apply for access below.

Alpha access is invite-only. We'll reach out directly.