Lovelace has emerged from stealth with Elemental, an AI platform for mission-critical enterprise use.
The Pittsburgh-based startup is led by Andrew Moore, former head of Google Cloud AI and former dean of Carnegie Mellon University's School of Computer Science. Elemental is designed to help AI systems link their outputs to structured, verifiable data rather than rely on unsupported responses.
Lovelace is entering a market where businesses are trying to use AI in areas such as financial analysis and defence, but reliability concerns remain a barrier. It argues the problem is less about raw model performance than whether systems have enough context to produce answers that can be checked and trusted.
According to Lovelace, Elemental combines data ingestion, entity resolution and graph construction in a single pipeline. The system turns fragmented internal and external information into knowledge graphs that AI agents can query with citations.
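Lovelace has not published Elemental's API, so the pipeline it describes can only be illustrated schematically. The sketch below shows, under assumed names and a toy matching rule, how ingestion, entity resolution and graph construction might fit together: raw rows are normalised to canonical entities and stored as facts that each carry a citation back to their source document.

```python
from dataclasses import dataclass

# Illustrative only: Fact, resolve_entity and build_graph are assumptions,
# not Elemental's actual interfaces.

@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str
    source: str  # citation identifier pointing back at the source document

def resolve_entity(name: str, canonical: dict) -> str:
    """Toy entity resolution: map known name variants to one canonical form."""
    return canonical.get(name.strip().lower(), name.strip())

def build_graph(records: list, canonical: dict) -> dict:
    """Ingest raw (subject, relation, object, source) rows into a
    citation-carrying adjacency map keyed by canonical subject."""
    graph = {}
    for subj, rel, obj, src in records:
        fact = Fact(resolve_entity(subj, canonical), rel,
                    resolve_entity(obj, canonical), src)
        graph.setdefault(fact.subject, []).append(fact)
    return graph

canonical = {"cmu": "Carnegie Mellon University",
             "carnegie mellon": "Carnegie Mellon University"}
rows = [("Andrew Moore", "dean_of", "CMU", "doc-17"),
        ("Andrew Moore", "led", "Google Cloud AI", "doc-04")]
graph = build_graph(rows, canonical)
```

Real entity resolution uses far richer signals than a lookup table, but the structural point stands: every edge in the graph keeps a pointer to the evidence it came from.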
The platform is also enriched with a proprietary dataset called YottaGraph, which Lovelace says can scale to trillions of interconnected facts. The company says this lets customers add real-time external intelligence to their internal data when building what it calls context engines.
That approach reflects a broader shift in enterprise AI spending towards systems that can support auditable decisions in sensitive settings. Companies and public bodies have been testing AI for tasks that require more than conversational responses, including investigations, analysis and operational decision-making, where errors can have serious consequences.
Lovelace says Elemental increases the investigative power of AI agents 1,000-fold on complex queries and achieves more than 99.5% entity accuracy. Those figures come from the company, which also says the system can deliver analysis at the speed and cost of a simple query.
Moore founded Lovelace in 2023. Alongside his roles at Google Cloud AI and Carnegie Mellon, he also served as the first AI advisor for US CENTCOM, according to the company.
The team includes engineers who built systems used by billions of people at Google and other global AI platforms. Lovelace also says it already works with some of the largest public and private organisations, though it did not name customers.
Context problem
Lovelace's central claim is that AI systems fail in high-stakes situations when they cannot connect scattered information across multiple sources and verify what they find. In this view, the challenge is not simply generating an answer, but investigating the available evidence, correctly resolving entities, and maintaining a traceable link back to the source data.
Knowledge graphs have long been used in enterprise software to model relationships between people, organisations, places and events. Lovelace is positioning Elemental as a way to quickly build and update those graphs so modern AI agents can use them in near real time.
The platform can create secure, enterprise-specific context engines that let agents navigate and query structured data within milliseconds, Lovelace says. It argues this gives users deep-research-style output without the slower workflows usually associated with manual investigation or conventional analytics systems.
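How such a query returns traceable output can be sketched with a hypothetical lookup over a citation-carrying graph. The field names and graph shape below are assumptions for illustration, not Elemental's actual data model: each answer comes paired with the citation that supports it.

```python
from dataclasses import dataclass

# Hypothetical sketch of citation-backed graph querying; not Elemental's API.

@dataclass(frozen=True)
class Fact:
    relation: str
    obj: str
    source: str  # citation identifier pointing back at the source data

def query(graph: dict, subject: str, relation: str) -> list:
    """Return (answer, citation) pairs so every output can be verified."""
    return [(f.obj, f.source) for f in graph.get(subject, [])
            if f.relation == relation]

graph = {"Acme Ltd": [Fact("subsidiary_of", "Globex", "filing-2023-8K"),
                      Fact("registered_in", "Delaware", "registry-DE-112")]}
results = query(graph, "Acme Ltd", "subsidiary_of")
```

The design choice the article describes is visible even in this toy: the agent never returns a bare answer, only an answer bound to the evidence behind it.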
Founder's view
Moore framed the launch around the need for AI systems to support human judgement in situations where mistakes are costly.
"Throughout my career, I've been driven by a simple question: how can we use advanced intelligence to help people make the right decision when the cost of being wrong is catastrophic? AI has extraordinary potential in investigative contexts - but only if it unambiguously helps humans make better decisions. With Elemental, we're giving teams the speed of AI with the confidence of verifiable evidence, so every conclusion can be traced, tested, and trusted in the moments that matter most," said Moore.
The company's pitch places it in a fast-growing segment of the AI market focused on governance, traceability and retrieval of verified information. As more organisations move from pilot projects to operational deployments, suppliers are trying to address concerns that large language models can produce plausible but false answers when they lack enough relevant context.
Lovelace says its goal is to make autonomous agents more reliable in environments where decisions must be evidence-based. Elemental is intended to help those systems connect siloed information and return outputs that can be traced to verifiable sources.