Bring structured context to Snowflake Intelligence with dbt
Snowflake Intelligence brings agentic and conversational experiences directly into Snowflake. It is powered by Cortex capabilities such as Cortex Analyst and Cortex Search, and its AI agents connect to governed assets, including semantic views and models, search services, and other tools.
That architecture reinforces a familiar point for data teams: these AI experiences are only as reliable as the structure and semantics they are grounded in.
In Parts 1 and 2 of this series, we introduced the idea of a "structured context layer" and why AI systems need it to produce reliable, governed, cost-efficient outcomes.
In Part 3, we will look at how dbt helps you build that structure in Snowflake, and how that structure directly improves the quality of Snowflake Intelligence experiences.
The problem: AI can't infer meaning from raw schemas
Large language models (LLMs) don't know how your models relate, what your metrics actually mean, which joins are valid, or what’s production-ready versus experimental. If all you give an AI system is a database schema, it’s left to infer meaning from names and patterns.
When semantics aren’t explicit, the model has to guess. That leads to incorrect SQL, inconsistent metrics, ungoverned access paths, and answers you can’t trace back to trusted definitions.
What Snowflake Intelligence needs to be accurate
To answer business questions reliably, Cortex Analyst needs explicit rules for what fields mean, which joins are valid, and how metrics should be computed.
Snowflake Semantic Views are designed to provide that structure. They are schema-level objects that package up the definitions that matter most—metrics, dimensions, relationships, and the surrounding context—so Cortex Analyst has something concrete to follow when translating a question into SQL against your physical tables.
One operational detail that matters: semantic views aren’t just Cortex configuration. They’re Snowflake objects with normal lifecycle and access controls. That means teams can validate them, promote them through environments, and manage who can use them, rather than configuring semantics once and hoping they don’t drift.
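For orientation, here is a simplified sketch of what that object looks like, based on Snowflake's `CREATE SEMANTIC VIEW` DDL (the table, column, and metric names are hypothetical):

```sql
CREATE OR REPLACE SEMANTIC VIEW analytics.sales_semantic_view
  TABLES (
    orders AS analytics.fct_orders PRIMARY KEY (order_id),
    customers AS analytics.dim_customers PRIMARY KEY (customer_id)
  )
  RELATIONSHIPS (
    -- the only join path Cortex Analyst should use
    orders_to_customers AS orders (customer_id) REFERENCES customers
  )
  FACTS (
    orders.order_amount AS order_amount
  )
  DIMENSIONS (
    orders.order_date AS order_date,
    customers.region AS region
  )
  METRICS (
    -- one governed definition of revenue
    orders.total_revenue AS SUM(orders.order_amount)
  )
  COMMENT = 'Governed semantics for Cortex Analyst';
```

Because this is a regular schema-level object, it can be created, reviewed, and dropped like any other Snowflake asset.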
dbt builds the governed foundation
Snowflake-level governance solves only part of the problem. Your warehouse changes constantly, and so does the AI stack around it: new agents, new interfaces, new retrieval patterns, new places where definitions get copied or reinterpreted. Semantics are only trustworthy if they stay aligned with the models underneath and remain consistent as new consumers show up.
Snowflake Semantic Views are great for making semantics enforceable at runtime inside Snowflake.
But most teams want more than Snowflake-native enforcement. They want semantics to live in a code-first system of record: centrally owned, reviewed like software, and reusable everywhere they need it—from BI and embedded analytics to AI—not trapped inside a single product experience.
That is where dbt fits.
dbt is where meaning gets defined and kept correct over time: in code, with version control, review, tests, lineage, and repeatable deployments, so those definitions stay interoperable across your stack, even as your underlying warehouse or AI stack evolves.
In dbt, the "structured context layer" is the governed, machine-readable context your team already builds as part of analytics engineering, made reliable and portable to AI.
Practically, dbt's structured context layer includes (sketched in YAML after this list):
- Semantic definitions, including metrics and the models they depend on
- Lineage, so systems can understand how models connect and where data comes from
- Contracts, tests, and CI signals, so downstream consumers know what is valid and what changed
- Documentation and ownership, so definitions have meaning and accountability
- Freshness and operational metadata, so you can reason about trust and timeliness
- Policies and business rules, so governance and logic are centrally maintained
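As a minimal sketch of how several of those signals show up in a dbt project, here is an illustrative schema file (model, source, column, and owner names are all assumptions):

```yaml
# models/marts/fct_orders.yml -- names are illustrative
models:
  - name: fct_orders
    description: "One row per completed order."
    config:
      contract:
        enforced: true        # downstream consumers can rely on this shape
      meta:
        owner: "analytics-engineering"
    columns:
      - name: order_id
        data_type: number
        data_tests:
          - unique
          - not_null
      - name: order_amount
        data_type: number
        description: "Order total in USD."

sources:
  - name: raw
    database: raw
    loaded_at_field: _loaded_at
    freshness:
      warn_after: {count: 24, period: hour}  # operational trust signal
    tables:
      - name: orders
```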
dbt then makes that context available in the right shape for the job. For interactive or agentic workflows, the dbt MCP server can give AI tools governed access to dbt-managed assets (including the dbt Semantic Layer via the MCP query tool), so agents can discover metrics and models—and pull lineage, documentation, and trust signals—before generating queries.
And on Snowflake, when you want Cortex Analyst to use the same governed definitions natively, you can publish that context into Snowflake Semantic Views directly from your dbt project (via supported tooling/packages), giving Cortex a Snowflake-native semantic object to follow.
Where the dbt Semantic Layer fits
The dbt Semantic Layer is part of that structured context layer. It is where teams define governed metrics in code, once, and reuse them across many consuming surfaces without re-implementing business logic each time.
Under the hood, it is powered by MetricFlow, the open-source SQL generation engine that compiles metric definitions into efficient SQL, guaranteeing accuracy, consistency, and performance while enforcing the rules that usually get lost in translation (i.e., grain, aggregation behavior, join paths, and semantic constraints). And dbt Fusion ensures that SQL executes consistently and efficiently on your warehouse.
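As a concrete sketch, here is roughly what a MetricFlow semantic model and metric look like in a dbt project (model and column names are illustrative):

```yaml
# models/marts/orders_semantics.yml -- names are illustrative
semantic_models:
  - name: orders
    model: ref('fct_orders')
    defaults:
      agg_time_dimension: order_date
    entities:
      - name: order_id
        type: primary
      - name: customer_id
        type: foreign          # encodes the valid join path to customers
    dimensions:
      - name: order_date
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total
        agg: sum               # aggregation behavior is explicit, not inferred
        expr: order_amount

metrics:
  - name: revenue
    label: Revenue
    description: "Sum of completed order totals."
    type: simple
    type_params:
      measure: order_total
```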
This means AI retrieves both the correct meaning and the correct computation of metrics like “revenue” without the model wasting tokens trying to infer missing logic.
That matters because it keeps metric results consistent even when the query shape changes across tools or AI-generated prompts. This becomes especially valuable when the same metric needs to show up reliably across BI, embedded analytics, and AI experiences—without three teams quietly maintaining three “almost identical” versions.
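For intuition, the compiled output for "revenue by month" has roughly the shape below; the exact SQL MetricFlow generates varies by dialect, joins, and metric type, so treat this as a sketch:

```sql
-- Illustrative shape of MetricFlow-compiled SQL for "revenue by month";
-- the actual generated SQL varies.
SELECT
  DATE_TRUNC('month', order_date) AS metric_time__month,
  SUM(order_amount) AS revenue
FROM analytics.fct_orders
GROUP BY 1;
```

Every consumer that asks for `revenue` gets this same computation, regardless of how the question was phrased.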
Why the dbt Semantic Layer matters for AI-enabled analytics
- A single, auditable definition of truth: Metrics live in code, get reviewed, and ship through the same workflow as your models. When someone asks, “Why is revenue down?” you can point to one definition and its lineage, not reconcile competing interpretations.
- Lower-cost, more stable query behavior: MetricFlow compiles metrics into SQL that’s optimized and semantically correct. When agents generate lots of queries (and retries), optimization and consistency matter: fewer full scans, fewer “wrong shape” queries, fewer corrective loops.
- An interface you can reuse everywhere: Instead of teaching every tool (or every agent) how to compute metrics, you expose metrics through a consistent central interface. When you add a new AI surface, you can reuse governed definitions rather than rebuilding logic.
Together, these give AI systems something they usually lack: semantics that are trustworthy, portable, and operationally maintained.
How dbt and Snowflake Semantic Views work together
Teams often want two things at the same time:
- A Snowflake-native semantic object that Cortex Analyst/Snowflake Intelligence can consume directly
- A code-first system of record for semantics that stays governed and reusable across more than one consuming surface
Those map cleanly to two complementary layers:
dbt is the system of record
The dbt Semantic Layer is where semantics get defined, reviewed, tested, and maintained over time, alongside the models they depend on. That’s what keeps meaning stable as the warehouse changes and new consumers get introduced.
It also means you can surface the same governed context beyond Snowflake Intelligence. For example, via the dbt MCP server, any AI agent outside of Snowflake can discover which metrics exist, read documentation, understand lineage, and check trust signals before it attempts query generation. This makes agent interactions more reliable and reduces token overhead.
Snowflake Semantic Views are the Snowflake-native consumption layer
Semantic views express business concepts in a format Snowflake Intelligence expects. They describe metrics, dimensions, relationships, and context so Cortex Analyst can interpret questions and generate SQL reliably.
In practice, many teams will author and govern the source of truth in dbt, then publish the right shape of semantics into Snowflake where Cortex can use it directly.
Here's a practical workflow:
- Define metrics and semantic models in dbt
- Test and validate them with tests/contracts/CI and review
- Deploy curated models to Snowflake
- Create Snowflake Semantic Views directly within dbt as part of your DAG
- Cortex Analyst consumes those views to generate SQL (see the query sketch below)
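The last step amounts to SQL like the following, using Snowflake's `SEMANTIC_VIEW` query syntax against the hypothetical view sketched earlier:

```sql
-- What a Cortex-generated (or hand-written) query against the
-- semantic view can look like; names match the earlier sketch.
SELECT *
FROM SEMANTIC_VIEW(
    analytics.sales_semantic_view
    METRICS orders.total_revenue
    DIMENSIONS customers.region
  );
```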
dbt provides the governance, testing, and version control that makes Snowflake Intelligence trustworthy. Semantic Views provide the Snowflake-native interface that Cortex Analyst reads.
Where to start
If you are deciding where to start, here is a lightweight way to think about it:
| If you need | Start here | Add later |
|---|---|---|
| Snowflake Intelligence only | dbt (governance + testing + curated models) | Semantic Views (as the Snowflake-native surface area) |
| Consistent metrics across BI and AI | dbt Semantic Layer (define metrics once) | Semantic Views for Cortex / Snowflake consumption |
| Portability across multiple systems | dbt Semantic Layer (define metrics once) | OSI (as it matures) + warehouse-specific surfaces (optional) |
When Cortex operates inside the guardrails defined by dbt's structured context layer, you unlock:
- Correctness: Cortex queries follow official metric and schema definitions
- Governance and security: Access, contracts, and trust signals are enforced end-to-end
- Cost efficiency: Smaller prompt payloads, fewer warehouse scans, fewer retries, and more optimized SQL paths reduce AI operational costs
- Explainability: AI answers can be traced back to models, tests, and metadata
- Consistency across tools: The same semantics power BI, AI, and embedded use cases
Future-proofing with OSI: Building beyond Snowflake Intelligence
As teams adopt more BI and AI tools, semantic definitions tend to get duplicated across systems. The Open Semantic Interchange (OSI) aims to reduce that duplication by providing a vendor-neutral spec for sharing semantic definitions between systems.
OSI doesn't replace platform-native semantics like Snowflake Semantic Views. It provides a portable layer that can help you avoid re-authoring semantics from scratch each time you add a new consuming surface. This portability matters because as teams add BI tools, embedded analytics, and other AI-enabled applications, OSI lets them define semantics once in dbt, then generate the appropriate native representations for each consumer—whether that's Snowflake Semantic Views, semantic models for Tableau, etc.
dbt is in the process of building a dbt Semantic Layer to OSI converter so teams can export governed definitions into an OSI-compliant artifact. Once mature, this will enable smoother interoperability between dbt semantic definitions and Snowflake Semantic Views, allowing teams to define metrics in dbt and generate Snowflake-native semantic views with less manual work.
Bringing structured data to Snowflake Intelligence without a rewrite
Most teams don’t start from a blank slate. You can converge on a combined approach incrementally.
If you started with Snowflake Semantic Views
- Bring the underlying tables under dbt ownership: Use dbt to build and maintain the curated tables/views your Semantic Views depend on, with documentation, testing, and controlled releases.
- Manage Semantic View changes like code: Define Snowflake Semantic Views as part of your dbt DAG using the dbt_semantic_view package. This ensures Semantic View changes are version controlled, reviewable, and promoted across environments alongside your dbt deployments.
- Add the dbt Semantic Layer where you need to reuse beyond Snowflake Intelligence: For metrics that need to stay consistent across BI + AI + embedded surfaces, define them once in the dbt Semantic Layer and reuse them rather than re-implementing logic across tools. Note for teams using Fusion: Semantic Views are Snowflake-specific SQL. To keep them in your dbt project (and benefit from dbt governance), set `static_analysis: off` for those files so Fusion doesn't attempt to analyze them (see the sketch below).
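A minimal sketch of that Fusion setting in `dbt_project.yml`, assuming the semantic-view models live under a `semantic_views/` folder (the path and project name are illustrative):

```yaml
# dbt_project.yml -- path and project name are illustrative
models:
  my_project:
    semantic_views:
      +static_analysis: "off"   # quoted so YAML doesn't parse it as a boolean
```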
If you started with the dbt Semantic Layer
- Keep dbt as the system of record for governed metric definitions: Define metrics in code, manage change through review, and keep definitions consistent over time across tools and teams.
- Publish to Snowflake Semantic Views for Snowflake Intelligence (choose one path): If you want change control and release discipline, manage Semantic Views in dbt; if you want the fastest setup with Snowflake-managed generation, use Autopilot.
- Recommended path: Manage Semantic Views in dbt: Use the dbt_semantic_view package to create and deploy the Semantic Views you need, so the Snowflake-facing contract ships with the same CI/CD, review, and environment promotions as the rest of your dbt project.
- Alternative path: Snowflake Semantic View Autopilot: Use the new Semantic View Autopilot tool to generate Snowflake Semantic Views directly from your dbt Semantic Layer definitions. This Snowflake-managed tool reads your dbt Semantic Layer definitions and writes the corresponding Snowflake Semantic View. It can be a quick way to get Snowflake-native objects without authoring them by hand, but the resulting view's lifecycle is driven by the Snowflake tool rather than your dbt deployments.
- Start narrow, represent the subset needed for Snowflake Intelligence in Semantic Views, then expand: Use the dbt_semantic_view package to publish the metrics, dimensions, relationships, and context required for Cortex Analyst and Snowflake-native consumption first, and broaden coverage as usage grows and patterns stabilize.
- Connect Custom Applications to dbt & Cortex via MCP for richer agent behavior: Use the dbt MCP Server to expose dbt semantics—metrics, metadata, lineage, and trust signals—to any AI client application. This allows AI systems to ground their responses in governed definitions and provides transparency around data quality. For example, you can build a custom Streamlit application within Snowflake that calls both Cortex agents and the dbt MCP Server simultaneously. This enables users to ask data questions and immediately see whether the underlying data is tested, fresh, and trusted, making AI outputs more explainable and actionable.
This video walks through an end-to-end workflow for powering reliable Cortex and Snowflake Intelligence experiences with dbt and Snowflake Semantic Views:
Building reliable AI with dbt and Snowflake
Snowflake Intelligence brings AI experiences directly into your warehouse. dbt provides the structured context that makes those experiences traceable and reliable.
Together, they make AI outputs more accurate, auditable, and cost-efficient. And as your AI strategy expands beyond Snowflake Intelligence (new agents, new interfaces, new applications), dbt keeps semantics centralized and reusable so you’re not rebuilding meaning in every new system.
The result: accurate, governed, and cost-efficient AI that works for everyone.
If you're interested in learning more about bringing structured context to Snowflake Intelligence with dbt, get a demo today or speak with your account representative.