One of the perks of my job as an analytics engineering manager at dbt Labs? I get to dogfood the latest and greatest from the dbt platform. That means playing with new features before they hit general availability, and seeing firsthand how small changes can add up to big wins.
Lately, that’s included a trio I’m pretty excited about:
- dbt’s new cost management dashboard
- The dbt Fusion engine
- And Fusion-powered state-aware orchestration
Together, they’ve helped us build the habit of cost-aware data development: a mindset where optimization isn’t an afterthought, it’s just how we work.
This post explores how we’re using the dbt platform to free up 25% of our annual spend, speed up workflows, and make cost optimization part of our everyday work.
It starts with visibility

With cost monitoring directly in the dbt platform, we finally have a clear view into which projects, environments, and models are using the most compute, and which ones to focus on.
Before this dashboard, we were flying in a fog, only optimizing the obvious big models. We used to rely on query tagging and homegrown data pipelines just to track costs, essentially incurring overhead and spend just to manage our spend.
Now that entire burden has shifted to the dbt platform, eliminating the need for those extra models and pipelines. We uncover hidden inefficiencies and prioritize the models that truly matter. Instead of trying to optimize everything, we can see exactly which transformations drive the most value (core metrics, key dashboards, AI inputs) and which ones silently consume resources without delivering proportional value.
Visibility shifts the game. We stop guessing where the cost lives. Instead, we spot issues we wouldn’t otherwise notice: long-tail jobs, legacy models, or transformations we haven’t touched in months. Without that signal, they quietly run every day and quietly burn compute.
Now, we don’t need to wait for a fire drill to investigate. We review cost impact regularly, prioritize what matters, and keep the entire pipeline healthy.
I saw things I couldn’t unsee, and I had to fix them.
Remediation: how do we fix what we’ve identified?
With better visibility, we can focus on real opportunities for efficiency. In April, we made a lot of changes based on what the dashboard was telling us. I have to admit, this part is fun: identify, fix, repeat. I’ll give three examples of areas that really moved the needle for us; together, these remediations brought our spend down by 8% annually with just a few hours of work.
Model tuning
Materializations represent one of the most significant cost drivers, with incremental materializations in particular emerging as a big expense bucket for us. When we examined our data platform costs through the cost management dashboard, we discovered that these operations were consuming a disproportionate amount of compute resources, especially for large event-based models.
The fix in many of these turned out to be a fairly deep dbt concept: incremental predicates. Incremental predicates are a configuration you add to incremental models. They limit the portion of the existing target table that gets scanned during a merge operation, based on conditions like only considering rows from the last week.
We use incremental predicates to scan only the rows we need. Same results, less compute, less build time. Just a few of these changes provided ~3.5% in annual savings.
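Here’s a minimal sketch of what that looks like in practice. The model and column names are hypothetical, and the predicate syntax assumes Snowflake’s merge, where dbt aliases the existing table as DBT_INTERNAL_DEST:

```sql
-- models/fct_events.sql (hypothetical model)
{{
    config(
        materialized = 'incremental',
        unique_key = 'event_id',
        incremental_strategy = 'merge',
        -- Only scan the last 7 days of the existing table during the merge,
        -- instead of the whole table. DBT_INTERNAL_DEST is dbt's alias for
        -- the target table inside the merge statement.
        incremental_predicates = [
            "DBT_INTERNAL_DEST.event_date >= dateadd(day, -7, current_date)"
        ]
    )
}}

select
    event_id,
    event_date,
    user_id,
    payload
from {{ source('events', 'raw_events') }}

{% if is_incremental() %}
  -- Standard incremental filter: only pull rows newer than what we already have
  where event_date >= (select max(event_date) from {{ this }})
{% endif %}
```

The predicate gets templated into the merge statement, so the warehouse can prune partitions of the target table instead of scanning it end to end.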
Deprecations
It’s difficult to understand ROI without visibility into both production costs and consumption. Another feature of the cost management dashboard is the ability to see consumption queries: the total number of queries against a given resource across all usage in the warehouse (including BI/analytics tools, query consoles, etc.).

This visibility, combined with downstream lineage, allowed us to find some very clear deprecation candidates: expensive models with little to no consumption and no downstream dependencies. By examining both consumption metrics and lineage, we could confidently identify models that were costly but provided no value. Simply disabling or deleting these unused models in the DAG (see the sketch below) saved an additional ~2% of our yearly warehouse costs with zero impact on business operations.
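In practice, retiring a model can be as light-touch as disabling it in configuration, so the files stay around if anyone comes asking. A minimal sketch, with hypothetical project and model names:

```yaml
# dbt_project.yml: disable unused models without deleting the files,
# so they drop out of the DAG but remain easy to re-enable
models:
  my_project:
    legacy:
      old_attribution_model:
        +enabled: false
```

Once we’re confident nothing depends on them, deleting the files is the final step.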
Optimized testing
Another relatively expensive area is testing, particularly for huge event tables. Luckily, the dashboard again makes it easy to identify the worst culprits.

We ended up finding around 15 tests that were costing a lot unnecessarily. Another 3% annually saved.
The fix? You guessed it, another obscure but powerful feature: the where clause in tests. For large, incremental event tables, we don’t need to test every single row on every single run; we can just test the last few days of data. This can be achieved with the where test config, as shown in the sketch below.
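Here’s a sketch of what this looks like in a model’s YAML. The names are hypothetical, the dateadd syntax assumes Snowflake, and the data_tests key assumes a recent dbt version:

```yaml
# models/schema.yml
models:
  - name: fct_events
    columns:
      - name: event_id
        data_tests:
          - unique:
              config:
                # Only test rows from the last 3 days instead of scanning
                # the full event table on every run
                where: "event_date >= dateadd(day, -3, current_date)"
          - not_null:
              config:
                where: "event_date >= dateadd(day, -3, current_date)"
```

The tests still run on every build; they just read a few days of data instead of the whole table.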
Fixing existing issues is great, but what if there were a way to avoid excess costs and inefficiencies altogether?
Avoidance: Fusion-powered state-aware orchestration
If this sounds like rocket science, it sort of is. Lucky for me, turning it on is as simple as flipping a setting.
State-aware orchestration is a feature powered by the dbt Fusion engine that intelligently determines which models to build based on detecting changes in code or data. It significantly reduces compute costs and runtime by only building models that will actually change.

Key principles:
- Real-time shared state: Jobs write to a shared model-level state, allowing dbt to rebuild only changed models across all jobs
- Model-level queueing: Prevents "collisions" by queueing at the model level, avoiding unnecessary rebuilds
- Flexible support: Works with both dynamic (state-aware) and explicit (state-agnostic) job building approaches
- Sensible defaults: Works out-of-the-box with optional advanced configurations
This feature is currently in beta and available to Enterprise and Enterprise+ customers using the dbt Fusion engine. For us, turning it on will save 9%+ of our annual dbt warehouse workload costs and result in 25% fewer excess models built.
There’s a lot more we can do with state-aware orchestration. Next up, we’ll start using the advanced configurations to set SLAs, letting us run a single dbt build command that intelligently orchestrates only what it needs to. This would reduce the number of jobs we have and significantly cut maintenance.
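For illustration, here’s a hedged sketch of what such an SLA could look like as a model-level freshness configuration. The feature is in beta, so the exact keys (freshness, build_after) may evolve, and the model name here is hypothetical:

```yaml
# models/schema.yml: a sketch of a rebuild SLA for state-aware
# orchestration (beta; exact configuration keys may change)
models:
  - name: fct_events
    config:
      freshness:
        build_after:
          count: 4      # rebuild at most every 4 hours...
          period: hour  # ...and only when upstream code or data has changed
```

With SLAs like this in place, one scheduled dbt build across the project lets the engine decide, model by model, whether anything actually needs to run.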
What’s next
Building the habit of cost-aware engineering isn’t just good for your budget: it’s good for your workflows. It leads to faster builds, leaner DAGs, and fewer surprises. With the dbt platform, we’re no longer guessing where inefficiencies live. We’re acting on real signals and building with intention.
If you’re on an eligible data platform and plan, here are a few steps to take:
- Join the Preview for the cost management dashboard
- Migrate to the dbt Fusion engine to lay the foundations for smarter orchestration and better performance
- Ask to join the Beta for Fusion-powered state-aware orchestration to stop running what doesn’t need to be run
Each step gets you closer to a more efficient, scalable data platform and gives your team more time to focus on what actually matters. There’s a lot more coming in this area of the platform, so stay tuned.
Cost awareness isn’t a side quest. It’s just good engineering.