What’s new in dbt - December 2025

Sara Gawlinski

last updated on Dec 19, 2025

The year is wrapping up 🎁 and so is another big season of shipping at dbt. We’re coming off an action-packed Coalesce, where we shared major updates for the dbt Fusion Engine, AI, and our vision for open data infrastructure.

Before we all unplug and recharge, here’s a final roundup for 2025 of what’s new in dbt, including improvements to Fusion, dbt Catalog, the dbt MCP Server, dbt Core 1.11, and new partnership updates that are already shaping how teams will work in 2026. Let’s finish the year strong.

ICYMI: Coalesce 2025

At Coalesce this year, we showed how dbt is rewriting expectations for how data work gets done. In our biggest keynote ever, we unveiled faster development with Fusion, cost-saving pipelines through state-aware orchestration, and how governed AI experiences can be powered by the dbt MCP Server. We also announced our plans to set the standard for open data infrastructure. Together, these updates mark a major step toward faster, more intelligent, and more interoperable analytics in the AI era. Check out our recap blog for all the details.

Fusion

A lot has landed in Fusion recently: Custom materializations, Iceberg, snapshots, exposures, and big performance wins. If you want to stay in the know as new features and improvements land, be sure to follow the Fusion Diaries newsletter. Fusion is currently in Private Preview for eligible projects. Review this checklist to get your projects Fusion-ready.

  • Iceberg support: You can now materialize models in the Iceberg table format and use dbt's catalogs.yml abstraction, making it easier to adopt open, performant table standards alongside Fusion’s modern architecture (see the configuration sketch after this list).
  • Custom materializations: Fusion now supports custom materializations, bringing it closer to parity with dbt Core and giving teams the flexibility to define and extend how their models are built while still benefiting from Fusion’s faster parsing and richer context. Because Fusion cannot know whether a custom materialization will modify the schema of the persisted object, it treats these nodes like introspective queries and disables static analysis for them.
  • Exposure support: Exposures are now supported in Fusion, enabling full lineage visibility and governance across downstream dashboards, reports, and apps (example after this list).
Exposures shown in Fusion-powered lineage
  • [Enhancement] New selector method for exposures: You can now select exposures directly via selectors, which makes it easier to target or test the analytics assets that depend on your dbt project (see the selector sketch after this list).
  • Auto-upgrading packages with dbt-autofix: dbt-autofix now automatically upgrades packages for you to reduce the manual cleanup required to make your project Fusion-compatible.
  • Custom snapshot support: Fusion now supports custom snapshot strategies, allowing teams to carry over advanced snapshot logic while benefiting from Fusion’s stateful execution model (sketch after this list).
  • Dynamic tables and materialized views: Fusion now supports Snowflake Dynamic Tables and Databricks materialized views, so teams can use the same near-real-time materializations they use in dbt Core, with Fusion's faster parsing and richer execution context.
  • Event logging improvements: We’ve improved event logs for Fusion runs to give teams clearer, more actionable logs that make debugging and observability easier across complex projects.
  • Performance improvements: Fusion delivers significant memory reductions in the language server (LSP), which makes developer experiences in VS Code and the CLI faster, lighter, and more stable, especially for large projects.
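
To make a few of these concrete, here are some configuration sketches. First, Iceberg: writes are configured through a catalogs.yml file at the project root, and models opt in via a catalog config. The integration name, external volume, and model below are placeholders, and exact keys can vary by adapter and version, so treat this as an illustrative sketch rather than a canonical reference.

```yaml
# catalogs.yml (project root) -- illustrative sketch; keys vary by adapter/version
catalogs:
  - name: analytics_iceberg
    active_write_integration: snowflake_iceberg
    write_integrations:
      - name: snowflake_iceberg
        catalog_type: built_in              # assumed: Snowflake's built-in Iceberg catalog
        table_format: iceberg
        external_volume: my_external_volume # assumed: an external volume you've already set up

# models/marts/fct_orders.yml -- point a model at the catalog defined above
models:
  - name: fct_orders
    config:
      materialized: table
      catalog: analytics_iceberg
```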
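
Exposures themselves are declared the same way they are in dbt Core, in a properties YAML file; the dashboard, owner, and refs below are made up for illustration.

```yaml
# models/marts/exposures.yml -- example exposure (names and URL are illustrative)
exposures:
  - name: weekly_revenue_dashboard
    label: Weekly revenue dashboard
    type: dashboard
    maturity: high
    url: https://bi.example.com/dashboards/revenue
    description: Executive view of weekly revenue, refreshed after each dbt run
    owner:
      name: Analytics Team
      email: analytics@example.com
    depends_on:
      - ref('fct_orders')
      - ref('dim_customers')
```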
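
The new selector method lets you target everything an exposure depends on, either ad hoc (for example, dbt build --select +exposure:weekly_revenue_dashboard) or as a reusable YAML selector. A minimal sketch, using the hypothetical exposure above:

```yaml
# selectors.yml -- reusable selector for the assets behind one exposure
selectors:
  - name: revenue_dashboard_upstream
    description: Everything the weekly revenue dashboard depends on
    definition:
      method: exposure
      value: weekly_revenue_dashboard
      parents: true   # include upstream models, sources, and snapshots
```

You can then run dbt build --selector revenue_dashboard_upstream to build or test just that slice of the project.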
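
Custom snapshot strategies follow the dbt Core convention: you reference the strategy by name in the snapshot config, and dbt resolves it to a macro named snapshot_&lt;strategy&gt;_strategy. The strategy and source names below are hypothetical.

```yaml
# snapshots/orders_snapshot.yml -- snapshot using a custom strategy (illustrative)
snapshots:
  - name: orders_snapshot
    relation: source('jaffle_shop', 'orders')
    config:
      schema: snapshots
      unique_key: id
      strategy: status_change   # resolved to a macro named snapshot_status_change_strategy
```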

dbt Semantic Layer + AI

Here’s what’s new in the dbt Semantic Layer and dbt MCP Server as we deepen our investments in AI-assisted development and governed agentic workflows.

  • User personal access token (PAT) authentication for Semantic Layer queries enables user-level authentication and reduces the need to share tokens between users. When you authenticate with a PAT, queries run using your personal development credentials.
  • The dbt Semantic Layer GraphQL API now has a queryRecords endpoint, which lets you view query history for both Insights and Semantic Layer queries.
  • New Admin API tools via MCP: You can now automate key dbt workflows directly through the dbt MCP Server—list jobs, trigger runs, retrieve run details, cancel or retry runs, and manage artifacts.
  • New project introspection tools: The dbt MCP Server now supports a suite of introspection tools for local and remote projects: get_model_lineage_dev, get_macro_details, get_seed_details, get_semantic_model_details, get_snapshot_details, and get_test_details. This makes it easier to build intelligent agents and automated development workflows on top of dbt.

dbt Catalog

Here are the latest improvements to dbt Catalog as we expand its capabilities for multi-project exploration and governed data discovery.

  • Cross-project lineage expansion: Users can now traverse lineage across projects by opening project nodes directly in the lineage view. This lays the groundwork for more seamless cross-project exploration without navigating large, browser-crashing DAGs.
Cross-project lineage expansion
  • Favorites functionality: Now that we have global navigation, there's much more to find than in our previous project-only view. With favorites, users can mark projects and assets as favorites so they appear at the top of the file tree, keeping global navigation organized.
  • State-aware orchestration status visibility: Reused statuses now appear in DAGs for Fusion projects that use state-aware orchestration. These are considered in our health signals and in lineage lenses (with more improvements coming).
State-aware orchestration status visibility
  • More granular source health signals: Rather than simply indicating when sources are stale, we’ve expanded source health signals based on user feedback. Stale sources now break into two additional statuses—unconfigured and expired/not run recently—to give users more actionable insight into why a dependent model has a cautionary status due to its sources.
More granular source health signals

Platform

We’ve also shipped some major improvements to security, connectivity, and admin configuration in the dbt platform.

  • New private connectivity options across multi-tenant (MT) and single-tenant (ST) instances.
  • Improved configuration for platform metadata credentials to make it easier to manage and rotate them.
  • Enhanced SSO configuration options for smoother enterprise administration.
  • All multi-tenant accounts now use static subdomains (e.g., abc123.dbt.com) to improve reliability and simplify network allowlisting.

dbt Core 1.11

Here’s a preview of what’s new in dbt Core 1.11—moving to GA this week—a release packed with quality-of-life improvements, new UDF capabilities, and several bug fixes courtesy of “Debug-cember”. Check out the upgrade guide to take advantage of these new capabilities.

  • First-class UDFs (now with more use cases!): Building on what we introduced in beta at Coalesce 2025, the final Core 1.11 release expands UDF functionality with support for Python UDFs, default arguments, and richer configuration options, making functions more powerful and more portable across your dbt projects.
  • Deprecation warnings by default: JSON schema validation warnings for YAML configs are now enabled out of the box to help teams catch outdated or incorrect configurations earlier in development.
  • A lot of bug fixes (hello, Debug-cember!): The community and dbt Labs team spent December hammering through long-standing issues across parsing, execution, logging, error messages, and more. (To see the full list, head over to #dbt-core-development in Slack.) Core 1.11 rolls up a ton of these fixes into one release to improve overall stability and developer experience.
  • Adapter-specific improvements:
    • BigQuery: Batched source freshness for improved performance and reduced API overhead.
    • Snowflake: Improved support to materialize Iceberg tables via a Glue catalog–linked database.
      • cluster_by support in dynamic tables: You can now specify the cluster_by configuration when working with dynamic tables, giving teams more control over how data is organized and optimized (see the sketch after this list). For more information, see Dynamic table clustering in the docs.
    • Spark: New profile configurations have been added to enhance retry handling for PyHive connections.
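
For the Snowflake dynamic table items, the materialization takes its usual configs plus the new cluster_by key. A minimal sketch, with placeholder model, warehouse, lag, and column names:

```yaml
# models/marts/properties.yml -- Snowflake dynamic table with clustering (illustrative)
models:
  - name: fct_orders_rt
    config:
      materialized: dynamic_table
      snowflake_warehouse: transforming_wh   # warehouse that refreshes the dynamic table
      target_lag: '15 minutes'               # maximum staleness tolerated downstream
      cluster_by: ['order_date']             # new in 1.11: clustering keys for dynamic tables
```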

Partnerships

  • Microsoft Fabric: dbt jobs in Microsoft Fabric are now available in public preview, bringing deeper performance optimizations and native integrations across the Fabric ecosystem. This first phase of the integration uses dbt Core to orchestrate transformation workflows directly within Fabric. Microsoft and dbt Labs are actively collaborating to extend this experience to the dbt Fusion engine, planned for 2026.
  • Databricks: A new dbt platform Task Type is now available in beta for Databricks Lakeflow. This makes it easier for joint users to orchestrate dbt models as first-class tasks within Lakeflow pipelines. This improves reliability, observability, and end-to-end data workflow management across the Databricks Data Intelligence Platform. Learn more about this task type in the announcement blog post.

What’s next

We’re excited to hear your feedback on these new features! In the meantime, don’t miss our upcoming webinar, Maximizing the business value of your data platform with dbt, on January 20th and 21st. Save your spot for the live session to learn how top teams cut rework, reduce costs, and turn technical wins into real business value.

Stay tuned for more dbt updates in the new year! ❄️✨

