
Why context breaks before models do


    LLMs are getting smarter fast, but agentic AI still fails in production for one reason: context. In this post, K2view CEO Ronen Schwartz explains why fragmented enterprise data breaks operational AI at scale.

     

    In my previous post, I argued that most agentic AI systems don’t fail in production because LLMs aren’t smart enough. They fail because production environments are unforgiving.

    Since then, I’ve had many conversations with teams building operational AI systems. Different industries. Different use cases. But the same underlying challenge keeps surfacing.

    The hardest problem in agentic AI isn’t reasoning.
    It’s context.

    Reasoning is improving faster than context

    Large language models are getting better at planning, reasoning, and decision-making at a remarkable pace. Agent frameworks are maturing. Tooling is improving.

    But the way we provide context to AI agents hasn’t kept up.

    In operational environments, context isn’t a static prompt or a document retrieved from a knowledge base. It’s a dynamic, real-time view of what’s happening right now, across multiple enterprise systems, for a specific business situation.

    And that’s where things start to break.

    Context in production is fragmented by design

    Enterprise data was never designed for AI agents.

    Customer information lives in CRM systems.
    Billing data lives elsewhere.
    Operational state sits in transactional systems.
    Historical context is spread across warehouses and logs.
    Unstructured knowledge is buried in documents and policies.

    When an AI agent needs to answer a simple operational question, it often has to assemble context from all of these sources on the fly.
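    To make that concrete, here is a minimal sketch of call-by-call context assembly. The fetch_* functions are hypothetical stubs standing in for real CRM, billing, and order-system clients; the point is only that a single question fans out across systems of record.

```python
from typing import Any

# Hypothetical stubs standing in for real system clients. In production,
# each of these would be a network call to a different system of record.

def fetch_crm_profile(customer_id: str) -> dict[str, Any]:
    return {"id": customer_id, "tier": "gold"}          # CRM record

def fetch_open_invoices(customer_id: str) -> list[dict[str, Any]]:
    return [{"invoice": "INV-2041", "balance": 42.50}]  # billing system

def fetch_open_orders(customer_id: str) -> list[dict[str, Any]]:
    return [{"order": "ORD-77", "status": "shipped"}]   # transactional state

def assemble_context(customer_id: str) -> dict[str, Any]:
    """Assemble the agent's context at request time, one system per call.
    Every extra hop is added latency and a new failure mode."""
    return {
        "profile": fetch_crm_profile(customer_id),
        "invoices": fetch_open_invoices(customer_id),
        "orders": fetch_open_orders(customer_id),
    }

print(assemble_context("C-1001"))
```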

    In pilots and POCs, this complexity is hidden. Data is pre-selected. Assumptions are hard-coded. Governance is relaxed.

    In production, none of that holds.

    The cost of assembling context at runtime

    When context is assembled dynamically from fragmented systems, several things happen.

    First, latency increases. Each additional system call, join, or transformation adds delay. In operational workflows, even small delays are noticeable.

    Second, reliability suffers. If one system is slow, stale, or inconsistent, the entire context becomes unreliable. The same question can yield different answers depending on timing and data availability.

    Third, cost escalates. AI agents are often forced to ingest far more data than they actually need, simply because it’s hard to isolate the right subset. This drives up inference costs quickly at scale.

    And finally, risk increases. Broad data access makes it difficult to enforce fine-grained governance, masking, and compliance consistently.

    These are not model problems.
    They are context assembly problems.
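    A toy timing sketch (illustrative numbers, not a benchmark) shows how quickly per-system delays stack up when assembly happens call by call:

```python
import random
import time

# Made-up per-system latencies, in seconds, purely for illustration.
SYSTEM_LATENCY = {"crm": 0.08, "billing": 0.12, "warehouse": 0.35, "docs": 0.20}

def timed_assembly() -> float:
    """Simulate sequential context assembly and return total elapsed time."""
    start = time.perf_counter()
    for system, base in SYSTEM_LATENCY.items():
        time.sleep(base + random.uniform(0.0, 0.05))  # simulated system call
    return time.perf_counter() - start

print(f"Context assembly took {timed_assembly():.2f}s before any reasoning began.")
```

    Parallelizing the calls helps, but only until the slowest or least reliable system sets the floor.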

    More data doesn’t mean better context

    A common reaction is to give AI agents more data. More tables. More APIs. More documents.

    In practice, this often makes things worse.

    Excess data introduces noise. It increases ambiguity. It forces agents to reason over irrelevant or conflicting information. The result is variability, inconsistency, and loss of trust.

    In operational AI, what matters is not how much data an agent can access, but whether it receives the right data, scoped to the task at hand and the specific entity involved.

    Precision matters more than volume.
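    One way to picture "the right data, scoped to the task": declare, per task, the fields an agent may see, and project the entity's data down to that subset. The task names and fields below are illustrative, not a real schema.

```python
from typing import Any

# Illustrative task-to-fields map; in a real system this would be
# governed configuration, not a hard-coded dict.
TASK_FIELDS = {
    "refund_request": {"tier", "invoices", "refund_policy"},
    "delivery_status": {"orders", "region"},
}

def scope_context(entity_data: dict[str, Any], task: str) -> dict[str, Any]:
    """Keep only the fields the task actually requires."""
    wanted = TASK_FIELDS[task]
    return {k: v for k, v in entity_data.items() if k in wanted}

entity = {
    "tier": "gold",
    "invoices": [{"invoice": "INV-2041", "balance": 42.50}],
    "orders": [{"order": "ORD-77", "status": "shipped"}],
    "region": "EU",
    "refund_policy": "Refunds within 30 days of purchase.",
    "marketing_notes": "long free-text history the model never needs",
}

print(scope_context(entity, "refund_request"))  # three keys reach the model, not six
```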

    The missing discipline

    What’s missing in most agentic AI architectures today is a disciplined way to determine:

    • what data is actually required for a given task
    • how current that data must be
    • how access should be constrained and governed
    • and how context should be assembled before reasoning begins

    Without this discipline, context becomes an afterthought. Something bolted on at runtime rather than designed into the architecture.
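    What "designed into the architecture" could look like, in miniature: a declarative context contract that answers those four questions before any reasoning runs. The structure and field names below are a hypothetical sketch, not K2view's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ContextSpec:
    """A per-task context contract, fixed at design time rather than at runtime."""
    task: str
    required_fields: list[str]               # what data is actually required
    max_staleness_seconds: int               # how current the data must be
    masked_fields: list[str] = field(default_factory=list)   # governance constraints
    assembly_order: list[str] = field(default_factory=list)  # how context is built

REFUND_SPEC = ContextSpec(
    task="refund_request",
    required_fields=["tier", "invoices", "refund_policy"],
    max_staleness_seconds=60,                # billing data must be near real time
    masked_fields=["payment_card"],          # masked before the model sees anything
    assembly_order=["crm", "billing", "policy_store"],
)
```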

    That’s why context tends to break before models do.

    In the next post, I’ll dig into why “more data” is the wrong answer for operational AI, and why production systems need a principled way to scope data access around tasks and entities, rather than broad datasets.

    Because in agentic systems, context isn’t just input.

    It’s the foundation everything else depends on.
