IT Brief US - Technology news for CIOs & IT decision-makers

StorONE launches ONEai for on-premises AI training & analysis


StorONE has introduced ONEai, an enterprise-focused artificial intelligence solution designed to enable large language model (LLM) training and inferencing directly within the storage layer.

Developed in partnership with Phison Electronics, ONEai integrates Phison's aiDAPTIV+ technology directly into StorONE's storage platform. This means companies can carry out AI-related operations, including domain-specific model training and inferencing, on locally stored data, without relying on external AI infrastructure or cloud services.

Technology integration

ONEai offers an enterprise storage system with embedded AI processing capabilities. By leveraging GPU and memory optimisation, intelligent data placement, and direct support for LLM fine-tuning, the new solution aims to streamline AI deployment and improve access to proprietary data analytics. The system is positioned as a turnkey, plug-and-play deployment that does not require significant internal AI expertise or complex infrastructure.

The solution's architecture is focused on reducing hardware costs, improving GPU performance, and offering on-premises LLM training and inferencing. This is intended to support organisations looking to gain deeper insights from their stored data while controlling costs and maintaining data sovereignty.

Addressing enterprise challenges

With more enterprises seeking to leverage AI on multi-terabyte and petabyte-scale data pools, the traditional requirement for separate, often complex AI infrastructure has been a significant barrier. Conventional methods often depend on external orchestration and cloud or hybrid AI workflows, which can increase both regulatory risks and total costs for data-driven organisations.

StorONE and Phison have developed ONEai to deliver fully automated, AI-native LLM training and inference directly within the storage layer itself. The product provides real-time visibility into file creation, modification, and deletion, and is optimised for fine-tuning, retrieval-augmented generation (RAG), and inferencing tasks. The system includes integrated GPU memory extensions and a user interface aimed at simplifying ongoing management.

End-to-end automation

"ONEai sets a new benchmark for an increasingly AI-integrated industry, where storage is the launchpad to take data from a static component to a dynamic application," said Gal Naor, CEO of StorONE. "Through this technology partnership with Phison, we are filling the gap between traditional storage and AI infrastructure by delivering a turnkey, automated solution that simplifies AI data insights for organizations with limited budgets or expertise. We're lowering the barrier to entry to enable enterprises of all sizes to tap into AI-driven intelligence without the requirement of building large-scale AI environments or sending data to the cloud."

ONEai is presented as a system that automatically recognises and responds to changes in stored data, supporting immediate, ongoing AI analysis. Its plug-and-play approach is designed to remove the requirement for separate AI platforms and deliver full on-premises processing to ensure maximum data control.
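StorONE has not published ONEai's internals, so the following is only an illustrative sketch of the general pattern described above: detect file creation, modification, and deletion, then hand those events to an AI pipeline. The function names and the polling-by-mtime approach are assumptions made for illustration, not StorONE's implementation, which operates inside the storage layer itself.

```python
import os

def scan_mtimes(root):
    """Snapshot {path: mtime} for every regular file under root."""
    snapshot = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            snapshot[path] = os.path.getmtime(path)
    return snapshot

def diff_snapshots(old, new):
    """Classify changes between two snapshots into the same three
    event types the article says ONEai tracks."""
    created = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return created, modified, deleted
```

In a real deployment, each created or modified path would be queued for re-indexing or incremental fine-tuning and deleted paths purged from the index; an event-driven hook inside the storage engine would replace this polling loop.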

Phison's technical contribution

"We're proud to partner with StorONE to enable a first-of-its-kind solution that addresses challenges in access to expanded GPU memory, high-performance inferencing and larger capacity LLM training without the need for external infrastructure," said Michael Wu, GM and President of Phison US. "Through the aiDAPTIV+ integration, ONEai connects the storage engine and the AI acceleration layer, ensuring optimal data flow, intelligent workload orchestration and highly efficient GPU utilization. The result is an alternative to the DIY approach for IT and infrastructure teams, who can now opt for a pre-integrated, seamless, secure and efficient AI deployment within the enterprise infrastructure."

According to the companies, ONEai's plug-and-play deployment model can eliminate the requirement for in-house AI expertise while streamlining overall operations. The integrated GPU modules inside the storage layer aim to lower AI inference latency and deliver up to 95% hardware utilisation, while also minimising power consumption and operational costs by reducing the number of GPUs required.

Use cases and availability

ONEai is designed for immediate interaction with proprietary data, automatically tracking data changes and feeding updates into ongoing AI training and inferencing processes. This is intended to align with real-world enterprise needs for rapid, domain-specific data analysis.

The solution will become generally available in the third quarter of 2025.
