Wasabi unveils AI-ready storage & expands to Silicon Valley
Wasabi has launched a new storage region in Silicon Valley and introduced a high-performance storage class aimed at supporting artificial intelligence (AI) and machine learning (ML) workloads. The new Wasabi Fire storage class seeks to address the growing need for cost-efficient, scalable, and high-speed cloud storage, as organisations increasingly deploy AI-driven applications and data pipelines.
AI infrastructure push
Wasabi Fire is positioned for compute-intensive AI and ML training, real-time inference, high-frequency data logging, and media pipeline use cases. Built on NVMe SSD technology, it offers hyperscale performance at a monthly rate of USD $19.99 per terabyte, with no egress fees or hidden charges. The aim is to let organisations store and access the large volumes of training data required by modern AI models without the premium pricing typically attached to high-performance storage offerings.
The company maintains that many providers have traditionally supplied high-speed storage only at a significant premium. With Wasabi Fire, it says, organisations can maximise GPU utilisation, minimise latency, and scale their AI infrastructure efficiently while keeping overall storage expenditure predictable and contained.
New Silicon Valley presence
The new storage region is located in San Jose, California, and is Wasabi's sixteenth location globally. Developed in partnership with IBM Cloud, the facility is designed to serve organisations in one of the world's most significant AI technology ecosystems. The co-location with IBM infrastructure is intended to deliver ultra-high-speed access and help minimise performance bottlenecks associated with AI training and inference workloads.
Wasabi now manages more than three exabytes of data, highlighting the scale of its operations and its expanding footprint in the cloud infrastructure market.
Customer demand
The rapid growth in AI and ML deployments has shifted the focus for technology buyers from GPUs alone to the broader infrastructure stack, with storage emerging as a significant cost driver for large-scale data projects. Wasabi's proposition centres on providing cloud storage solutions that balance performance with cost predictability and transparency. By eliminating charges for data access and retrieval, the company seeks to differentiate itself from larger hyperscale cloud providers.
"Object storage is the backbone of AI, but customers shouldn't have to choose between speed and cost," said David Friend, co-founder and CEO, Wasabi Technologies.
The company's approach stands in contrast to traditional pricing models, where customers can face unexpected charges as data volumes and access patterns scale with AI training and operational workloads.
Partner perspectives
The launch in Silicon Valley is supported by IBM Cloud, with service integration tailored to organisations seeking secure, enterprise-grade infrastructure for AI applications.
"We're excited for Wasabi to expand into Silicon Valley with the IBM Cloud San Jose data centre," said Alan Peacock, General Manager, IBM Cloud. "Wasabi Fire on the IBM Cloud is designed to give clients the benefits of IBM's secured enterprise-grade infrastructure."
Industry observers note that these recent developments align with broader trends in AI infrastructure, as demand for flexible, affordable, and reliable cloud storage options increases.
"Wasabi's momentum reflects a clear demand for simple, predictable cloud storage," said Dave McCarthy, research vice president, IDC. "By adding a new storage class and expanding into Silicon Valley, Wasabi positions itself as a storage provider aligned to the full lifecycle of AI development while maintaining the simplicity that has defined its growth to date."