Why Your AI Agent Keeps Getting It Wrong: The Three-Layer Architecture Every Data Leader Needs to Know

Your AI agent is not failing because the model is bad.

It is failing because the architecture feeding the model is incomplete. The agent does not know what your “revenue” number means. It cannot see the CRM data it needs. It does not know that this question should be answered by the finance persona, not the sales one. The model is doing its job. The infrastructure around it is not.

This is the defining challenge of enterprise AI in 2026. Everyone has deployed agents. Most of those agents produce responses that are confidently wrong, inconsistently right, or too generic to act on. The gap between a demo that impresses and an agent that actually drives business outcomes comes down to three layers working together: Model Context Protocol for live data access, analytical tables and semantic views for governed business logic, and AGENTS.md for persona-specific behavioral governance.

Most organizations have one of these. A few have two. Almost none have all three. The ones that do are pulling ahead.

The last mile problem in enterprise AI

Ask any senior data leader what frustrates them most about their AI deployments and you will hear the same answer: the agent gives plausible responses but wrong ones. It pulls the right table but applies the wrong definition. It answers a finance question with a sales lens. It confuses last quarter with last fiscal quarter. It surfaces data that technically exists but is not the governed, certified version.

These are not model failures. They are architecture failures. The model is only as good as the context it receives, the definitions it understands, and the behavioral constraints it operates under. Fix the architecture and the model performs dramatically better.

That architecture has three layers.

Layer one: Model Context Protocol

Model Context Protocol, or MCP, is an open standard that allows AI agents to connect to live enterprise systems at runtime. Think of it as a universal adapter. Before MCP, connecting an agent to a data source required a custom integration for every system: one integration for your CRM, another for your ERP, another for your data warehouse, another for your calendar. Each integration was hardcoded, brittle, and expensive to maintain.

MCP standardizes that connection. An enterprise can expose its CRM, its data warehouse, its HR system, its document repositories, and its APIs through MCP servers. Any agent that speaks MCP can then reach any of those systems through a single protocol, without custom integration work for each new connection.
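The adapter idea can be sketched in a few lines. This is a simplified illustration of the pattern, not the real protocol (which runs JSON-RPC over stdio or HTTP); the server and tool names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    """A named capability a server advertises to any agent."""
    name: str
    description: str
    handler: Callable[..., Any]

class MCPServer:
    """One adapter shape for every system: discover tools, then call them."""
    def __init__(self, name: str):
        self.name = name
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[str]:
        return sorted(self._tools)

    def call(self, tool_name: str, **kwargs: Any) -> Any:
        return self._tools[tool_name].handler(**kwargs)

# The agent never learns CRM-specific APIs; it only speaks "list" and "call".
crm_server = MCPServer("crm")
crm_server.register(Tool(
    name="get_pipeline",
    description="Current open pipeline for a quarter",
    handler=lambda quarter: {"quarter": quarter, "open_pipeline_musd": 42.0},
))

print(crm_server.list_tools())
print(crm_server.call("get_pipeline", quarter="Q2"))
```

The point of the sketch is the shape: adding a warehouse or HR system means registering another server with the same two-method surface, not writing another bespoke connector.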

For a senior data leader, MCP answers a fundamental question: how do agents access the live, current information they need to do real work? Not stale data from last night’s pipeline. Not static documents. Live context from the systems where work actually happens.

This is why MCP has become one of the fastest-adopted standards in enterprise AI infrastructure. When a CFO asks an agent to prepare a revenue review, the agent needs to pull current pipeline data, recent close results, and variance figures against plan. MCP is what makes that possible without requiring an engineer to build and maintain a bespoke connector.

But MCP alone is not enough. Access is not understanding.

Layer two: Analytical tables and semantic views

MCP gives an agent the ability to reach your data. It does not give the agent the ability to understand it.

Enterprise data is full of definitions that only make sense in context. Revenue might mean booked revenue in the sales system and recognized revenue in the finance system. Headcount might mean full-time employees in one table and all worker types in another. The fiscal year starts in February at some organizations and January at others. "Cloud cost" might mean external provider spend in one team's model and total compute allocation in another.

When an agent queries raw tables without a semantic layer, it encounters these ambiguities constantly. It resolves them by guessing, and it guesses wrong often enough to undermine trust in every output it produces.

Semantic views solve this. A semantic view defines entities, metrics, and relationships in business terms. It tells an agent that “revenue” means a specific, agreed-upon calculation. That “active employee” means a specific filter with specific exclusions. That “quarter” follows a specific fiscal calendar starting in February. The agent generates correct SQL not because it understands your schema, but because it understands your semantics.
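Reduced to its essence, a semantic view maps business terms to one governed definition each, and SQL is generated from those definitions rather than from the agent's guess about raw columns. The metric names, fiscal rules, and table path below are illustrative, not a real catalog.

```python
# A toy semantic layer: every business term resolves to exactly one
# agreed-upon expression over a certified table. (All names hypothetical.)
SEMANTIC_VIEW = {
    "metrics": {
        "revenue": "SUM(amount_usd) FILTER (WHERE recognition_status = 'recognized')",
        "active_employees": "COUNT(*) FILTER (WHERE status = 'active' AND worker_type = 'FTE')",
    },
    "dimensions": {
        # This example organization's fiscal year starts in February.
        "fiscal_quarter": (
            "CASE WHEN EXTRACT(MONTH FROM close_date) IN (2,3,4) THEN 'Q1' "
            "WHEN EXTRACT(MONTH FROM close_date) IN (5,6,7) THEN 'Q2' "
            "WHEN EXTRACT(MONTH FROM close_date) IN (8,9,10) THEN 'Q3' "
            "ELSE 'Q4' END"
        ),
    },
    "table": "finance.certified.revenue_facts",
}

def governed_query(metric: str, by: str) -> str:
    """Generate SQL from agreed definitions; an unknown term raises
    rather than letting the agent guess."""
    m = SEMANTIC_VIEW["metrics"][metric]
    d = SEMANTIC_VIEW["dimensions"][by]
    return (
        f"SELECT {d} AS {by}, {m} AS {metric}\n"
        f"FROM {SEMANTIC_VIEW['table']}\n"
        f"GROUP BY 1"
    )

print(governed_query("revenue", by="fiscal_quarter"))
```

Note the failure mode: a term outside the catalog raises a KeyError instead of silently producing plausible SQL, which is exactly the behavior you want from a governed retrieval layer.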

This is the difference between a data access layer and a data intelligence layer. Analytical tables provide the facts. Semantic views provide the meaning. An agent operating on semantic views produces answers that are not just technically correct but contextually correct. The CFO gets a revenue variance figure that matches what the finance team would calculate, using the definitions the finance team has approved.

Without semantic views, every agent query is a gamble. With them, it is a governed retrieval.

Layer three: AGENTS.md

MCP provides access. Semantic views provide understanding. AGENTS.md governs behavior.

An AGENTS.md file is a declarative configuration document that defines how a specific agent operates within a specific business context. It is human-readable, version-controlled, and consumed by the agent at runtime. It does not replace platform-level controls like role-based access, masking policies, and query restrictions. Those remain the enforcement floor. What AGENTS.md adds is the behavioral governance layer above that floor.

A well-designed AGENTS.md contains five elements:

- Scope and identity defines what business function this agent serves and what it is not responsible for.
- Data source routing specifies which semantic views, MCP connections, and certified tables the agent should use, in what priority order. A finance agent should pull from finance-certified semantic views before falling back to raw tables. A sales agent should prioritize the governed sales data model before making ad-hoc queries.
- Governance constraints specify what is off-limits regardless of underlying permissions.
- Skill routing maps types of questions to specialized analytical capabilities, each with its own validated data sources and SQL patterns.
- Behavioral standards define how the agent communicates: formatting conventions, precision standards, and escalation protocols when data is ambiguous.
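A minimal AGENTS.md reflecting these five elements might look like the following. Every view, skill, and rule name here is hypothetical, shown only to make the shape concrete.

```markdown
# AGENTS.md — Finance Analyst Agent (illustrative)

## Scope and identity
Serves FP&A variance and close analysis. Not responsible for sales
forecasting or HR reporting; redirect those requests.

## Data source routing
1. Finance-certified semantic views (e.g. FINANCE.REVENUE_SEMANTIC)
2. Certified analytical tables via MCP
3. Raw tables only with an explicit caveat in the response

## Governance constraints
- Never surface row-level compensation data, regardless of permissions.
- Use recognized revenue only; never booked revenue.

## Skill routing
- "variance", "miss", "vs plan" → variance-analysis skill
- "board", "talking points" → executive-summary skill

## Behavioral standards
- All cost comparisons week-over-week.
- Dollar values in $M, one decimal place.
- If a definition is ambiguous, ask; do not guess.
```

Because the file is plain markdown, it can live in version control next to the semantic definitions it references, with domain owners approving changes through the same review process as any other governed artifact.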

This last point matters more than it seems. When a finance team requires all cost comparisons to be expressed week-over-week, and all dollar values formatted to one decimal place in millions, that is not a cosmetic preference. It is a data quality standard enforced at the output layer. An agent that produces inconsistently formatted numbers, or that switches between comparison periods depending on query phrasing, is an agent that will never be trusted at the executive level.
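An output-layer standard like this is small enough to enforce in code. The sketch below, with hypothetical function names, pins down the two rules from the example: dollars in millions to one decimal place, comparisons always week-over-week.

```python
def fmt_musd(value_usd: float) -> str:
    """Format a dollar amount as millions with exactly one decimal place."""
    return f"${value_usd / 1_000_000:.1f}M"

def wow_comparison(this_week_usd: float, last_week_usd: float) -> str:
    """Express a cost figure with its week-over-week delta, never an
    ad-hoc comparison period chosen from query phrasing."""
    delta = this_week_usd - last_week_usd
    pct = (delta / last_week_usd) * 100 if last_week_usd else float("inf")
    sign = "+" if delta >= 0 else ""
    return f"{fmt_musd(this_week_usd)} ({sign}{pct:.1f}% WoW)"

print(wow_comparison(4_260_000, 4_000_000))  # $4.3M (+6.5% WoW)
```

The value of pinning this down in one place is consistency: every answer the agent emits passes through the same formatter, so an executive never sees the same metric rendered two different ways.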

It is also worth being direct about a limitation. AGENTS.md files are governance declarations, not enforcement guarantees. An aggressive prompt can attempt to override declarative constraints. This is precisely why all three layers must work together. AGENTS.md defines intended behavior. Platform-level controls enforce hard boundaries. Semantic views ensure that even when an agent operates freely within its defined scope, the definitions it uses are governed and consistent.

The three layers in action

A CFO asks: “What drove the Q2 revenue miss? Draft my talking points for the board.”

Without the three-layer architecture, the agent does one of several wrong things. It queries a table with an inconsistent revenue definition and gets a number that does not match the finance system. It pulls pipeline data from MCP but has no semantic context for what counts as closed versus committed. It produces a draft that uses sales terminology when the board expects finance terminology.

With the three-layer architecture, the interaction looks different. MCP pulls live close data from the CRM and current actuals from the finance system. The semantic view translates that data into governed, finance-approved revenue figures, applying the correct fiscal calendar and recognition rules. The AGENTS.md routes this request to the finance analytical skill, enforces week-over-week comparison formatting, and constrains the output to use board-ready language rather than internal sales terminology. The agent produces a variance analysis and draft talking points that the CFO can use directly, not as a starting point for manual correction.

The model did not get smarter. The architecture got better.

What to do now

Senior data leaders building or scaling AI platforms should make three decisions explicitly rather than letting them emerge by default.

First, define your MCP strategy. Which enterprise systems should agents be able to reach in real time, and what governance wraps those connections? MCP servers without governance policies create context chaos. Decide what is in scope and what is not before your agents decide for themselves.

Second, invest in semantic views as the foundation of your agent data layer. Every analytical table that agents will query through MCP or direct SQL should have a corresponding semantic definition. If agents are querying raw tables without semantic context, they are guessing at your business definitions. The cost of those guesses compounds at scale.

Third, treat AGENTS.md files as governed artifacts. These are not developer configuration files. They are the behavioral specifications for systems that operate on your enterprise data on behalf of your people. They should be reviewed, version-controlled, owned by domain leaders, and audited with the same rigor as access control policies.

The architecture that separates good agents from great ones

The organizations winning with AI agents in 2026 are not the ones with the best models. They are the ones that have built the architecture that makes models perform at their best: live context through MCP, governed understanding through semantic views, and behavioral governance through AGENTS.md.

The three layers are not optional enhancements. They are the foundation. An agent with live access but no semantic layer is confidently wrong. An agent with semantic context but no behavioral governance is inconsistently useful. An agent with all three is something different: a reliable participant in how your organization makes decisions.

The question is no longer whether your organization will deploy AI agents. It is whether the architecture underneath them is ready.

Sahil Kotwal is a data governance lead at Snowflake specializing in people analytics and AI governance. His work focuses on how organizations can embed governance controls into enterprise data platforms to enable responsible AI at scale.


This article was originally published in Towards AI on Medium.
