Analyzing AI for the Enterprise: Current and Future States, Part 2 (Open Source CXO Ep. 16)

With: John Keddy, CEO at Lazarus AI

In part two of this two-part conversation, Lazarus AI CEO John Keddy moves beyond the foundational AI concepts covered in the first episode and into the territory that keeps enterprise leaders up at night: where is AI heading, who gets to regulate it, and how do you build a competitive moat with technology that everyone can access?

John brings a practitioner’s perspective to these questions. Lazarus AI operates in highly regulated industries where getting data wrong has real consequences — financial, legal, and reputational. That experience gives him a grounded view of both AI’s potential and its limits. This isn’t a conversation about hypothetical futures; it’s about the decisions enterprise leaders need to make right now to position their organizations for what’s coming.

Key Insight: AI Regulation and the Legislative Landscape

The regulatory conversation around AI is evolving faster than most organizations realize, but not necessarily in the direction people expect. John breaks down the current state of AI legislation, distinguishing between what’s being proposed, what’s likely to pass, and what will actually affect enterprise operations.

The core tension: regulation needs to protect consumers and prevent harm without stifling innovation. That balance is difficult to strike, especially when legislators are still developing their understanding of how AI systems actually work. John notes that the most effective regulations are likely to be industry-specific rather than broad-based — healthcare AI faces different risks than marketing AI, and the rules should reflect that.

For enterprise leaders, the practical takeaway is to build compliance into your AI strategy from the start rather than treating it as an afterthought. Organizations that design their AI systems with auditability, explainability, and data governance baked in will be far better positioned when regulations do arrive than those scrambling to retrofit compliance onto existing systems.
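
To make that concrete, here is a minimal sketch of what "auditability baked in" might look like at the code level. Nothing in it comes from the episode: the wrapper interface, log fields, and file format are assumptions chosen for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditedModelClient:
    """Illustrative wrapper that records every model call for later audit.

    `model_client` is any object exposing a `complete(prompt) -> str`
    method; the interface is a placeholder, not a specific vendor API.
    """

    def __init__(self, model_client, model_version: str, audit_log_path: str):
        self.model_client = model_client
        self.model_version = model_version
        self.audit_log_path = audit_log_path

    def complete(self, prompt: str) -> str:
        output = self.model_client.complete(prompt)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            # Hash rather than store raw inputs when they may contain
            # sensitive data; the hash still proves what was sent.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output": output,
        }
        # Append-only JSON-lines log: every call leaves a reviewable trail.
        with open(self.audit_log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
```

The design choice that matters here is that logging is not optional or bolted on: every call through the client leaves a reviewable record, which is exactly the property auditors tend to ask for after the fact.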

Key Insight: Ethical AI Deployment in Practice

“Ethical AI” has become a buzzword that means different things to different people. John cuts through the noise to focus on what ethical deployment actually looks like in an enterprise context: transparency about what the AI can and cannot do, honest communication with stakeholders about accuracy rates, and clear accountability when the system makes mistakes.

In regulated industries, the stakes are particularly high. An AI system that misclassifies an insurance claim or misreads a medical document doesn’t just create a customer service issue — it creates legal liability. John describes how Lazarus AI approaches this problem through layered validation, where automated outputs are verified against known benchmarks before being treated as authoritative.

The ethical dimension extends to workforce impact as well. John addresses the question directly: AI will change job descriptions, eliminate some roles, and create others. Leaders who pretend otherwise aren’t being ethical — they’re being evasive. The responsible approach is transparent communication about how AI will change work within the organization, coupled with genuine investment in reskilling.

Key Insight: Data Accuracy in Regulated Industries

Data accuracy isn’t a nice-to-have in industries like insurance and healthcare — it’s a regulatory requirement. John explains why this creates a specific set of challenges for AI adoption that don’t apply in lower-stakes contexts.

The fundamental issue: large language models are probabilistic. They generate the most likely output, not the guaranteed correct output. In a marketing context, a 95% accuracy rate might be perfectly acceptable. In a claims processing context, that 5% error rate represents real money, real compliance risk, and real customer harm.

Lazarus AI’s approach combines AI processing with deterministic validation layers: AI does the heavy lifting of document ingestion and classification, and rule-based checks catch the cases where the model’s confidence falls below a set threshold. This hybrid approach lets organizations capture the efficiency gains of AI without accepting uncontrolled accuracy risk. For companies building custom software that processes sensitive data, this pattern of pairing probabilistic AI with deterministic validation is fast becoming the standard architecture.
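
As a rough illustration of this hybrid pattern (a sketch of the general idea, not Lazarus AI's actual implementation; the field names, rules, and threshold are invented for the example), the routing logic can be as simple as:

```python
from dataclasses import dataclass

# Illustrative cutoff; in practice this is tuned per field and use case.
CONFIDENCE_THRESHOLD = 0.90


@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # model-reported probability for this value


def validate_claim_fields(
    extractions: list[Extraction],
) -> tuple[dict, list[Extraction]]:
    """Route each AI-extracted field through deterministic checks.

    Returns the fields accepted automatically and the ones flagged for
    rule-based or human review. The specific rules below are placeholders.
    """
    accepted: dict = {}
    flagged: list[Extraction] = []
    for ex in extractions:
        # Rule 1: never auto-accept anything below the confidence threshold.
        if ex.confidence < CONFIDENCE_THRESHOLD:
            flagged.append(ex)
            continue
        # Rule 2: deterministic format checks per field type, e.g. a claim
        # amount must parse as a number regardless of model confidence.
        if ex.field == "claim_amount" and not ex.value.replace(".", "", 1).isdigit():
            flagged.append(ex)
            continue
        accepted[ex.field] = ex.value
    return accepted, flagged
```

The deterministic layer is deliberately boring: simple, testable rules that fail closed, so anything the model is unsure about gets flagged for review instead of silently accepted.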

Key Insight: Prompt Engineering and Reliability

The conversation turns to a practical challenge that every organization deploying AI faces: prompt engineering reliability. The same prompt can produce meaningfully different outputs depending on context, model version, and even timing. For enterprise applications, this variability is a serious problem.

John describes the evolution from ad-hoc prompting to systematic prompt engineering — treating prompts as code that needs to be versioned, tested, and validated just like any other software component. This includes regression testing when models are updated, A/B testing of prompt variations, and monitoring output quality over time.
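
What "prompts as code" can mean in practice is easiest to see in a sketch. The template, golden cases, and names below are illustrative assumptions rather than details from the conversation, but the shape of the discipline (golden cases in version control, re-run before any model change) is the point:

```python
# Illustrative regression suite: golden input/expectation pairs stored in
# version control next to the prompt template they exercise.
GOLDEN_CASES = [
    {"document": "Policy effective 01/02/2024", "must_contain": "2024-01-02"},
    {"document": "No effective date stated", "must_contain": "UNKNOWN"},
]

# The template itself is versioned like any other source file.
PROMPT_TEMPLATE = (
    "Extract the policy effective date as YYYY-MM-DD, "
    "or reply UNKNOWN if none is stated.\n\nDocument: {document}"
)


def run_prompt_regression(model_complete):
    """Run every golden case through a model callable; collect failures.

    `model_complete` is any function mapping a prompt string to output
    text, so the same suite can be pointed at a new model version or a
    revised prompt before either is rolled out.
    """
    failures = []
    for case in GOLDEN_CASES:
        output = model_complete(PROMPT_TEMPLATE.format(document=case["document"]))
        if case["must_contain"] not in output:
            failures.append((case, output))
    return failures


# Example: a stub that always answers "2024-01-02" fails the UNKNOWN case,
# which is exactly the kind of drift the suite exists to catch.
# failures = run_prompt_regression(lambda prompt: "2024-01-02")
```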

The broader point is that integrating AI into existing cloud infrastructure requires the same engineering discipline as any other integration. Organizations that treat AI as a black box they drop into their workflow will be surprised by inconsistent results. Those that treat it as a component that needs monitoring, testing, and maintenance will get reliable value.

Key Insight: AI as Competitive Advantage

If every company has access to the same foundational models, where does competitive advantage come from? John’s answer: proprietary data, domain expertise, and integration quality.

The models themselves are increasingly commoditized. What differentiates one organization’s AI capabilities from another’s is the data they train and fine-tune on, the domain knowledge they encode into their prompts and validation layers, and how seamlessly AI is integrated into their existing workflows. A company with a decade of proprietary industry data and deep domain expertise will get dramatically better results from the same foundational model than a competitor starting from scratch.

This reframes the AI investment question for enterprise leaders. The strategic move isn’t to build your own model — it’s to organize your data, codify your domain knowledge, and build the integration infrastructure that lets you leverage whatever model is best at any given time. Investing in web applications and internal tools that capture and structure organizational knowledge is, in effect, investing in your future AI capabilities.

Takeaways

  • Build compliance into AI strategy from day one. Retrofitting regulatory compliance onto existing AI systems is far more expensive than designing for it upfront.
  • Combine probabilistic AI with deterministic validation. In regulated industries, layered accuracy checks are essential — don’t rely on model confidence alone.
  • Treat prompts as production code. Version, test, and monitor prompts with the same rigor you apply to any other software component.
  • Competitive advantage comes from proprietary data, not proprietary models. Organize your data and domain knowledge now to leverage future AI capabilities.
  • Be transparent about AI’s workforce impact. Ethical deployment means honest communication about how roles will change, not pretending they won’t.

