IT Brief US - Technology news for CIOs & IT decision-makers

Sonny's CTO warns against AI-generated code without review

Wed, 29th Apr 2026

Eashwer Srinivasan has warned software teams against relying on fully AI-generated code without human review, following changes to the development process within Sonny's engineering teams.

Srinivasan, a former NASA software engineering director and now chief technology officer at Sonny's Car Wash, shared the warning during an episode of Forte Group's CTO2CTO podcast, hosted by Forte Group chief technology officer Lucas Hendrich.

He said his team found problems within two development sprints of adopting AI code-generation tools.

Engineering leads identified a pattern as faults emerged in code produced entirely by AI systems. Developers found those faults harder to diagnose because no one had properly examined the underlying logic before the software moved forward.

That prompted the company to restore a mandatory human review stage before any AI-generated code could reach production. Srinivasan summed up the rule simply: AI should not write anything that a developer has not first understood.

The warning comes from an executive whose career has included roles at NASA, GE, Rockwell Automation and Wind River. His experience spans environments where software problems can affect safety, public confidence and business operations.

He drew a contrast between AI-generated application code and AI-generated testing. While wholly machine-produced code created maintainability issues, AI-generated functional, regression and edge-case tests delivered measurable gains during development.

That process produced roughly 30% efficiency gains in engineering effort at Sonny's, he said. Those tests still require human review before release, preserving the same oversight principle applied to production code.

"The code that is typically generated by many of these coding systems is pretty complicated," said Srinivasan. "After you spend several hours looking at it, you realise it's just filler."

He added that AI is also changing how software teams handle testing. Instead of waiting for a separate handoff to quality assurance, developers can now generate test cases as software is written, though he argued that this does not remove the need for human judgment.

"The whole concept of 'I'm done with my user story, can you write the test cases?' is becoming a thing of the past," said Srinivasan. "But those auto-generated test cases still go through a human eye."

Operational discipline

Srinivasan linked that approach to lessons from earlier in his career, including his work at NASA. He was part of the team responsible for a NASA platform that experienced a sudden surge in public traffic after the Space Shuttle Columbia disaster, when the site absorbed hundreds of millions of page views.

He presented that episode as an example of the operational mindset he now applies to software development: reliability first, clear processes under pressure, and caution about trading resilience for speed.

At Sonny's, that discipline extends beyond coding practice. The company requires engineers to attend Car Wash College, a training facility designed to familiarise software staff with the physical systems their applications support.

The rationale is that engineers need domain knowledge before building software for an operational environment. In Srinivasan's view, understanding the machinery and workflows behind a platform is essential to writing software that works in practice.

The comments were made in season two, episode five of CTO2CTO, titled Modesty in Tech: Lessons Learnt from Building Platforms at NASA.

CTO2CTO is produced by Forte Group and features peer-level conversations between technology executives on the challenges of leading technology at scale.

Broader debate

His comments reflect a broader debate within corporate technology teams over how large a role AI should play in the software development lifecycle. Many organisations are testing code assistants and generation tools in search of productivity gains, but concerns remain about code quality, maintainability, auditability and accountability when systems fail.

One of his central arguments is that AI should function as a copilot rather than an autopilot. He also said older systems should not be discarded simply because new tools are available, especially in operational settings where proven reliability matters more than novelty.

That view may resonate with technology leaders managing industrial, transport, aerospace and other systems where software supports physical assets and service continuity. In those settings, the cost of an opaque defect can extend well beyond debugging time.

Sonny's experience suggests a more selective use of AI inside engineering teams: use automation where outputs can be checked efficiently, such as test generation, but keep direct human accountability for production logic. For companies under pressure to move quickly on AI adoption, Srinivasan's account offers a case for restraint as much as experimentation.