dbt Core v1.11 is GA

Grace Goheen, Sara Gawlinski

last updated on Feb 10, 2026

The dbt Core team wrapped 2025 with a final release of v1.11.

This release continues to drive forward the language of dbt, adding new features and fixes for teams who want to standardize logic across their data stack, understand when they're adhering to the dbt standard and when they're doing something custom, and benefit from better debugging and performance optimizations.

Check out our upgrade guide for more information, and read on for a deeper dive into the community efforts and conversations that helped bring these features to life. If you’d rather hear it straight from the team, we’ll also be hosting a live virtual release event to walk through v1.11, share roadmap context, and answer questions. Save your seat here.

Let’s get into what’s new.

⭐️ The star of the show: User-defined functions (UDFs)

With v1.11, dbt introduces support for defining and managing user-defined functions (UDFs) directly within your dbt project.

For years, the community has used macros to encapsulate reusable logic. Macros are powerful, but they live only inside dbt.

UDFs extend the same principle of DRY ("don't repeat yourself") code by allowing you to register custom functions as objects inside your warehouse, so the same transformation logic can be used everywhere in your data ecosystem.

Check out our FAQ page here for a deeper breakdown of when to use a UDF instead of a macro.

Out-of-the-box support for UDFs in dbt has been a long time coming. Our CEO Tristan Handy opened up the original feature request back in 2016.

The original feature request for UDFs in 2016

Since then, community members have come up with a variety of workarounds to half-bake UDFs into dbt - pre-hooks, on-run-start hooks, even SQL headers. A couple of community members took it one step further and developed a strategy of defining UDFs as models via a custom materialization (looking at you @Brabster and @anaghshineh).

Stories like this really highlight the magic of dbt. Its extensibility allows the community to experiment, share their findings, and have the feature organically grow in adoption and maturity. The most successful experiments eventually make it into the dbt standard, just like UDFs now have in v1.11.

After almost a decade of letting this discussion bake, as different community members figured out what worked best and where the thorny edges were, it became clear these workarounds weren’t quite right. It was time to make UDFs an official part of the dbt standard.

In the months leading up to the v1.11 release, we had great community discussions on GitHub and live on Zoom, all of which helped to shape the feature.

Thank you to everyone who participated. We’re excited to finally deliver dbt-managed UDFs to all of you.

What UDFs enable

When you define a function in dbt, it becomes a node in your DAG. That means dbt manages building and updating your UDF in the warehouse before the model(s) that references it.

Let's say you want to create a function that checks if a string represents a positive integer. (The regular expression here is fairly simple, but we know there are more-complex regexes hanging out in many dbt projects.)

First, create a file in the functions/ directory that contains your logic:

-- functions/is_positive_int.sql

REGEXP_INSTR(a_string, '^[0-9]+$')

Second, define your function's name, configs, and properties in a corresponding YAML file:

# functions/schema.yml

functions:
  - name: is_positive_int
    description: >
      My UDF that returns 1 if a string represents a naked positive integer (like "10"; "+8" is not allowed).
    config:
      schema: udf_schema
      database: udf_db
    arguments:
      - name: a_string
        data_type: string
        description: The string that I want to check if it's representing a positive integer (like "10")
    returns: 
      data_type: integer

Now, you can reference your function in a dbt model:

-- models/my_model.sql

select
  maybe_positive_int_column,
  {{ function('is_positive_int') }}(maybe_positive_int_column) as is_positive_int
from {{ ref('a_model_i_like') }}

When compiled, the {{ function('is_positive_int') }} call is replaced by the fully qualified UDF name udf_db.udf_schema.is_positive_int.
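For illustration, the compiled SQL ends up looking something like this (the relation that ref('a_model_i_like') resolves to depends on your project; analytics.a_model_i_like below is just a placeholder):

-- compiled SQL (illustrative)

select
  maybe_positive_int_column,
  udf_db.udf_schema.is_positive_int(maybe_positive_int_column) as is_positive_int
from analytics.a_model_i_like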

Finally, run dbt build to materialize your UDF in the warehouse before the model(s) that references it.

UDF as part of a DAG

User-defined functions bring warehouse-level extensibility into the dbt workflow, reduce duplication, and enable cross-tool logic reuse.

What’s new for UDFs since beta

If you tuned in to Coalesce, you already saw a demo (on rollerskates) of UDFs in action.

Jeremy and Grace on skates on stage demoing UDFs

So what's new since October?

In the final release of v1.11, you can now define UDFs that run Python logic, when supported by your data warehouse and adapter. Simply define your logic in a Python file in your functions/ directory, and provide additional Python-specific configurations such as runtime_version and entry_point. This makes it possible to reuse complex transformations, calculations, or logic that would be difficult or verbose to express in SQL.
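As a rough sketch of what that can look like (the function below and its YAML entry are hypothetical, and property names beyond runtime_version and entry_point may vary by adapter):

# functions/slugify.py

import re

def slugify(a_string: str) -> str:
    # Lowercase the input and collapse non-alphanumeric runs into single hyphens
    return re.sub(r"[^a-z0-9]+", "-", a_string.lower()).strip("-")

# functions/schema.yml

functions:
  - name: slugify
    description: Converts a string into a URL-friendly slug.
    config:
      runtime_version: "3.11"   # illustrative; use a Python runtime your warehouse supports
      entry_point: slugify      # the function inside the .py file to register
    arguments:
      - name: a_string
        data_type: string
    returns:
      data_type: string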

Additionally, you can now configure:

  • default_values for arguments
  • volatility to describe how predictable the function output is
  • aggregate functions that operate on multiple rows and return a single value
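As a loose illustration, the first two could look something like this on a function definition (the exact keys and accepted values shown here are assumptions; check the UDF documentation for the authoritative spec):

# functions/schema.yml (illustrative)

functions:
  - name: is_positive_int
    config:
      volatility: immutable      # assumption: signals the same input always yields the same output
    arguments:
      - name: a_string
        data_type: string
        default_value: '0'       # assumption: used when the argument is omitted at the call site
    returns:
      data_type: integer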

Are there other UDF capabilities you want dbt to support? Open up an issue in the dbt Core repo! JavaScript UDFs and PyPI packages are already on our radar :)

UDF support expectations

UDF support varies by adapter and execution environment. UDFs are currently supported across BigQuery, Snowflake, Redshift, Postgres, and Databricks, with some limitations depending on the platform.

Python UDFs are currently only supported in Snowflake and BigQuery when using dbt Core.

Read more about UDFs, including prerequisites and limitations, in the UDF documentation.

The authoring experience is getting stricter (in a good way)

A focus of v1.11 is to make dbt authoring more explicit and predictable. The goal isn’t to be pedantic; it’s to make sure projects fail early and clearly instead of succeeding quietly and behaving strangely later.

If you’ve ever found yourself asking, “Why did dbt accept this config but not actually do what I expected?”, this work is for you.

To get technical about it for a moment, the dbt language spec is now codified by a set of strongly typed JSON schemas. These are automatically generated and define what counts as acceptable “dbt code”.

We introduced many deprecation warnings for invalid code in v1.10 (see GitHub discussion), with a subset of those warnings (the ones that use the new JSON schemas) gated behind a behavior change flag to give users a migration window to update their project code.

But starting in v1.11, dbt now warns you by default when your project code doesn't align to the standard dbt spec.

These warnings help you proactively catch and clean up things like:

  • misspelled or deprecated config keys
  • invalid top-level properties
  • missing + prefixes

If you’ve ever had a project where a subtle config typo didn’t show up until late in development (or only appeared as wonky downstream behavior), you’ll feel this improvement immediately.
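For example, a dbt_project.yml snippet like the following would now surface warnings at parse time (project and folder names are hypothetical, and which specific deprecation class fires for each case may vary):

# dbt_project.yml (hypothetical)

models:
  my_project:
    staging:
      materialized: view     # missing the + prefix -> flagged by MissingPlusPrefixDeprecation
      +materialised: table   # misspelled config key -> flagged as an unrecognized config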

Importantly, this does not mean dbt is taking away customization. That kind of flexibility is what enabled our community to experiment with UDFs as a custom materialization for the past decade ;)

You can still add custom configuration in dbt; those values just need to live under the meta key moving forward.

models:
  - name: my_model
    config:
      meta:
        my_custom_thing: is_awesome

If you’re not yet ready to resolve these issues, you can use the warn_error_options configuration to silence specific categories temporarily while you update your code on your own timeline.

flags:
  warn_error_options:
    silence:
      - CustomTopLevelKeyDeprecation
      - CustomKeyInConfigDeprecation
      - CustomKeyInObjectDeprecation
      - MissingPlusPrefixDeprecation
      - SourceOverrideDeprecation

Note: dbt Core v1.11 surfaces invalid code as warnings. In dbt Fusion, they’re treated as errors. This gives teams a safe migration window to clean up code before they upgrade to Fusion. Check out the dbt-autofix tool to autofix many of these!

Adapter-specific improvements

Our adapter releases also bring the ecosystem forward. Highlights include:

BigQuery: batched metadata-based source freshness

dbt can now issue a single batch query when calculating metadata-based source freshness (instead of one query per source) for BigQuery. This reduces overhead and can dramatically speed up freshness checks for projects with lots of sources.

Enable it with:

flags:
  bigquery_use_batch_source_freshness: true

Thanks @adamcunnington-mlg for the help stress-testing this new approach.

Snowflake: Iceberg + dynamic table clustering

Snowflake users get improved support for:

  • basic Iceberg table materialization via a Glue catalog–linked database
  • cluster_by support on dynamic tables, closing an annoying gap for teams adopting dynamic tables who want more control over how data is organized and optimized (see the sketch below)
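A minimal sketch of a dynamic table model using cluster_by (the model name, target_lag, and warehouse values are placeholders):

-- models/orders_dynamic.sql (illustrative)

{{ config(
    materialized='dynamic_table',
    target_lag='15 minutes',
    snowflake_warehouse='transforming',
    cluster_by=['order_date']
) }}

select * from {{ ref('stg_orders') }}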

Spark: better retry handling for PyHive

New profile configs improve reliability for async query polling and connection issues:

  • poll_interval
  • query_timeout
  • query_retries
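These land in profiles.yml alongside your existing Spark connection settings, something like this (the host, schema, and values shown are placeholders):

# profiles.yml (illustrative)

my_spark_profile:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift             # PyHive-based connection method
      host: spark.example.com
      port: 10000
      schema: analytics
      poll_interval: 5           # seconds to wait between async query status checks
      query_timeout: 3600        # fail a query that runs longer than this many seconds
      query_retries: 3           # number of times to retry on transient failures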

Databricks: snapshots and real-world deletes

Snapshots now support hard_deletes='new_record', which lets users track hard deletes in source data as new, auditable rows in the snapshot.
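Configured on a snapshot, that looks something like this (the snapshot and column names are placeholders):

# snapshots/orders_snapshot.yml (illustrative)

snapshots:
  - name: orders_snapshot
    relation: ref('stg_orders')
    config:
      unique_key: order_id
      strategy: timestamp
      updated_at: updated_at
      hard_deletes: new_record   # insert a new, auditable row when a record is hard-deleted upstream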

Shoutout to @randypitcherii for the initial implementation.

Bug bashing during ‘Debug-cember’

A healthy open source project isn’t just about shipping new features, it’s also about doing the unglamorous work of making things more stable and predictable.

In December, the dbt community and maintainers leaned into “Debug-cember,” working through long-standing issues across parsing, execution, logging, and error messages. Thanks for the contributions from community members @edgarrmondragon, @asiunov, @rjspotter, @avasireddi3, @mattogburke, @D3nn3, and @mjsqu!

Core v1.11 rolls up 35 Debug-cember fixes into one release.

If you want to see the running commentary (and celebrate the wins), check out #dbt-core-development in the Community Slack.

Debug-cember post in the dbt community slack

Upgrading to v1.11

A few reminders as you plan your upgrade:

  • dbt Labs is committed to backward compatibility for all 1.x versions. Any behavior changes are accompanied by behavior flags to provide a migration window.
  • If you’re on dbt platform release tracks, Latest and Compatible already have access to the newest Core features.
  • Read through our upgrade guide for more information.

What’s next

If you want to read more about what’s next for dbt Core, check out our latest roadmap post over on GitHub (musical-theater references guaranteed).

To hear directly from the people doing this work, you’re invited to join the dbt Core product, engineering, and developer experience teams for a live virtual event on February 19. We’ll cover what shipped in v1.11 and why it matters in practice, share roadmap context, and answer your questions live. Save your seat here.

And as always, we want to hear from you: What custom solutions are you maintaining today that you wish dbt handled out of the box? What workflows still feel painful, even though you deal with them all the time?

Because the pattern is the same as it’s always been: you experiment, you share, we learn, and the standard evolves.

To everyone who contributed to a discussion, filed an issue, proposed a PR, tested a beta: it’s time to celebrate. dbt isn’t dbt without you.
