Smarter pipelines, 29% more efficient: How the dbt Fusion engine optimizes data work
Upcoming

🌐 Global-friendly sessions on December 3 & 4

Data leaders face a no-win situation: accelerate data development to support strategic business initiatives and risk ballooning cloud costs, or rein in cloud spend at the likely cost of reduced data quality, eroded trust, and limited AI opportunities.

The new dbt Fusion engine eliminates that tradeoff. It enables teams to build and scale faster while achieving measurable efficiency gains of 29% or more across data workflows.

In this live virtual event, we’ll show how teams are rethinking their build strategies with state-aware orchestration—running and testing only what’s changed for smarter, more efficient operations.
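If you haven’t seen state-aware selection in action, here’s a minimal sketch of the underlying idea using dbt’s existing state-comparison selectors; the paths and invocation below are illustrative assumptions, and the Fusion engine manages this comparison for you rather than requiring manual commands:

  # Compare the current project against artifacts saved from the last production run
  # (the artifacts path is a hypothetical example)
  dbt build --select state:modified+ --state ./prod-run-artifacts
  # Only models and tests downstream of actual changes run; unchanged ones are skipped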

You’ll see how Fusion brings intelligence and flexibility to your data workflows:

  • Cut redundant work: Skip unchanged models and tests automatically with state-aware orchestration and efficient testing.
  • Simplify complex jobs: Codify SLAs and build logic directly in your project to reduce coordination overhead.
  • Optimize performance: Gain measurable efficiency without compromising data freshness or reliability.
  • Build flexible foundations for what’s next: See how upcoming capabilities, like Iceberg support and semantic-aware CI, extend dbt Fusion’s advantages across your stack.

If you’re running dbt Core, this session shows what you’re missing. If you’re already on the dbt Platform, it’s your guide to operating more efficiently at scale.

Save your seat to see how dbt Fusion helps data teams run smarter, deliver faster, and spend less time rebuilding.