ETL is over: faster, smarter pipelines with dbt

Kathryn Chubb

on Sep 09, 2025

In data analytics, there's always been a trade-off between speed, data trust, adoption, and cost. Today, AI has completely reset expectations around the time it takes to deliver insights.

Everyone, but especially executives, now expects insights in minutes, not days. They want to know their decisions are based on trusted business and operational data. They want self-service. And they want to know that asking deeper questions leads to deeper insights, not to data teams scrambling to rebuild pipelines. Even if you’re not embracing AI today, your data teams are under pressure to move faster than ever.

We’ll look at how dbt and GenAI-enabled data migration tools from Tredence can help you meet those demands for speed without sacrificing trust, adoption, or cost efficiency.

Why legacy ETL is failing

Unfortunately, the old extract-transform-load (ETL) paradigm can’t meet the need for speed, trust, adoption, and collaboration in today’s AI-driven analytics landscape.

Data engineers spend more time fixing broken pipelines than building robust new ones. Analysts, distanced from the ETL process, either doubt the metrics or lose momentum waiting for slow transformations.

Even when they trust the data, sluggish delivery limits adoption. Teams may resort to pulling their own extracts, fragmenting insights, and weakening the company’s overall intelligence.

When engineers do keep pace, the sheer data volume inflates cloud costs, and pipeline rework triggers unpredictable spikes. And when trust erodes or speed lags, stakeholders stop asking the big questions, allowing your faster‑moving competitors to turn data into insights first.

ETL pipeline challenges

Here are some of the common challenges you may face with a legacy ETL pipeline process:

Performance and scalability bottlenecks: Typical ETL pipelines are hardware‑ and resource‑intensive, slow to run, and costly to scale, often requiring expensive infrastructure upgrades just to keep pace with demand.

Collaboration gaps: Without modern version control and CI/CD integration, teams struggle to share, review, and test changes in real time. This slows delivery and increases the risk of errors.

Data quality and validation challenges: Manual checks for every job slow down developers and stall pipeline creation. The lack of integrated version control and automated analytics code deployments further hinders accuracy and rapid deployment.

High total cost of ownership (TCO): The higher demand for performance and scalability, combined with the need to process data further from the source, results in costly hardware and added licensing fees. That, in turn, drives up infrastructure overhead and TCO.

Cloud cost volatility: Inefficient pipelines, frequent rework, and poor visibility make monthly cloud spend unpredictable. That translates into sudden spikes that are hard to forecast or control.

Legacy ETL forces teams into trade‑offs between speed, trust, adoption, and cost — trade‑offs modern data teams can no longer afford. By contrast, dbt eliminates those compromises by embedding software‑engineering best practices like CI/CD, version control, modular code, and automated testing directly into your modern analytics workflow, so trust is engineered in from the start.
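
As a minimal sketch of what engineered-in trust looks like in practice, here is a dbt schema file that attaches version-controlled tests to a model. The model and column names are hypothetical, not taken from the webinar:

```yaml
# models/marts/schema.yml -- hypothetical model and column names
version: 2

models:
  - name: fct_orders                # defined in models/marts/fct_orders.sql
    description: "One row per order."
    columns:
      - name: order_id
        tests:
          - unique                  # fails the build if duplicates appear
          - not_null
      - name: customer_id
        tests:
          - relationships:          # referential integrity against another model
              to: ref('dim_customers')
              field: customer_id
```

Because this file lives in Git next to the SQL it describes, every proposed change can re-run these checks in CI before anything reaches production.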

How dbt and Tredence overcome legacy ETL challenges

By addressing the trust, speed, and cost constraints of legacy ETL, dbt creates the foundation for faster, more collaborative, and more cost‑efficient analytics. Getting there from a legacy stack is where the Tredence migration framework and T‑Converter accelerator tool come in.

T‑Converter automates the conversion of legacy ETL jobs into dbt‑native models that run directly in your cloud warehouse. This enables you to adopt dbt’s test-driven, version-controlled workflows, minimizing the need for costly hardware overhauls.

T-Converter’s AI-powered accelerator parses ETL metadata, generates dbt-native SQL and YAML, and integrates with Git-based CI/CD for automated testing, documentation, and deployment. The result: clean, trusted data ready for analysis in a fraction of the time.
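
T-Converter’s exact output will vary by source platform, but a converted job lands as an ordinary dbt model: a SQL file that runs inside the cloud warehouse. A hedged sketch of that shape, with hypothetical source and column names:

```sql
-- models/staging/stg_orders.sql
-- Hypothetical shape of a legacy ETL mapping expressed as a dbt-native model.
with source as (

    -- the raw table is declared once in a sources YAML file
    select * from {{ source('erp', 'orders') }}

),

renamed as (

    select
        order_id,
        customer_id,
        cast(order_date as date) as order_date,
        amount_usd
    from source

)

select * from renamed
```

Because the output is plain SQL plus YAML in a Git repository, a converted pipeline immediately inherits dbt’s review, testing, and documentation workflow.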

In our “ETL is Over: Faster, Smarter Pipelines with dbt” webinar, Devang Pandya, VP of Growth Partnerships at Tredence, showcased the company’s five-step migration methodology and demonstrated the value of the T‑Converter accelerator. This proven approach, successfully deployed across multiple enterprises, speeds migration from legacy ETL platforms such as Informatica and SAP DS to Snowflake and dbt, reducing time to value.

Exploring the Tredence migration journey

Tredence breaks migration into five steps, each designed to move you from legacy ETL to a modern, dbt‑powered cloud stack with speed and confidence.

1. Discover and define: Assess the current data landscape, map dependencies, and set the migration roadmap using the Tredence T‑Analyzer to accelerate discovery.

2. Data migration: Move historical and incremental data into the target cloud platform (e.g., Snowflake), optimizing architecture along the way.

3. Convert and build: Use T‑Converter to translate legacy ETL jobs from platforms such as Informatica and SAP DS into dbt-native SQL/YAML, enabling test‑driven, version‑controlled workflows (see the example after this list).

4. Retrofit and optimize: Validate outputs, tune performance, and align processes for cost‑efficient operation.

5. Sustain: Embed governance, monitoring, and continuous improvement to protect long‑term value.
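
To make the migration steps concrete, here is one way a converted job can cover both the historical backfill and the ongoing loads using dbt’s incremental materialization. This is an illustrative sketch with hypothetical table and column names, not T‑Converter’s literal output:

```sql
-- models/marts/fct_orders.sql
-- First run backfills all history; later runs process only new rows.
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    order_date,
    amount_usd
from {{ ref('stg_orders') }}

{% if is_incremental() %}
  -- on incremental runs, only rows newer than what's already in the table
  where order_date > (select max(order_date) from {{ this }})
{% endif %}
```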

In Tredence’s broader modernization framework, two additional stages extend the value. The Enable AI/ML Ops step integrates machine learning workflows to support advanced analytics. Finally, Validate and Optimize fine‑tunes workloads for performance, scalability, and cost efficiency while ensuring outputs match legacy baselines.

The value of the Tredence approach

Using this methodology and its AI-powered accelerators, Tredence has helped customers accelerate the time-to-value of their data migrations, improving accuracy and trust while reducing costs. An analysis by Pandya highlighted the following benefits:

  • 80% faster data discovery
  • 30% productivity improvement
  • 50% faster data migration
  • 80% savings in platform costs in one year

One global travel and hospitality company, with a large Informatica and SAP DS footprint, worked with Tredence to modernize onto a Snowflake and dbt stack. Tredence ingested all legacy jobs into its converter, analyzed the complexity of the existing landscape, and built a migration roadmap identifying components to retire, consolidate, redesign, or directly convert. The result was 50% faster delivery, a 90% reduction in errors by eliminating manual code rewrites, and a 30% productivity gain.

Tredence’s approach provides customers with a modern, cloud-native foundation that is faster, more accurate, and less costly than their legacy ETL environment. Paired with dbt’s collaboration, testing, and version control, teams can maintain trusted data, adapt quickly, and prepare for the broader advantages dbt offers at scale.

The dbt platform vision for modern data pipelines

dbt helps data teams go from raw data to reliable insights faster, with tools to help you build robust transformations, manage production pipelines, and analyze data securely at scale. Using dbt, you can bring software engineering best practices to analytics with the Analytics Development Lifecycle (ADLC).

Once you’ve migrated to dbt with a partner like Tredence, you no longer face the traditional ETL trade-offs between speed, trust, adoption, and cost.

Speed

Speed is about more than just writing SQL faster or producing more transformations. It’s about removing blockers so everyone, from engineers to analysts, can work with confidence.

dbt acts like a baton that passes context, trust, and automation along the workflow to accelerate the whole team. The team gets shared definitions, integrated workflows, and consistent experiences that reduce rework and increase velocity.

Unlike traditional ETL or BI tools built for a single persona, dbt provides a shared context and common language for all roles, preserving both trust and speed across the workflow.

Trust

One of dbt’s founding principles is that data teams should work like software teams. That’s why we built features like CI/CD, version control, modular code, and testing into the framework from the start.

Today, these practices are formalized in the ADLC, helping teams reduce rework, collaborate across roles and organizations within a company, and move fast and iterate without fear of breaking things. Trust isn’t an afterthought — it’s engineered into every step.
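
As one illustration of these practices in action, a team might wire dbt into CI so that every pull request builds and tests the project before merge. The workflow below is a hypothetical GitHub Actions sketch; the adapter, secrets, and connection setup are assumptions, not an official dbt-provided pipeline:

```yaml
# .github/workflows/dbt_ci.yml -- hypothetical CI sketch
name: dbt CI

on:
  pull_request:                          # run on every proposed change

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake   # assumed adapter; use your warehouse's
      - run: dbt deps                    # install package dependencies
      - run: dbt build                   # run models and tests together
        env:                             # assumes a profiles.yml on the runner
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
```

Teams often narrow this further with state-based selection so CI only rebuilds the models a pull request actually changed.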

Adoption and scale

dbt began as the standard for data transformations, turning raw data into analysis‑ready insights. Today, we offer a unified data control plane that integrates orchestration, observability, cost management, catalog, and semantics into a single interface powered by the dbt Fusion engine.

Processing still happens in‑database, and now increasingly across multiple warehouses and engines, thanks to our data mesh architecture. The platform supports a wide range of personas, including data engineers, analytics engineers, analysts, and business stakeholders.
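
For instance, in a multi-project dbt mesh setup, a downstream team can build directly on another team’s public model with a two-argument ref. The project and model names here are hypothetical:

```sql
-- models/finance/fct_revenue.sql
-- Cross-project reference: builds on a public model owned by another team
select
    order_id,
    amount_usd
from {{ ref('core_platform', 'fct_orders') }}
```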

We’ve recently introduced the dbt MCP Server, which exposes dbt project context to AI agents via the Model Context Protocol. This breadth makes it possible for virtually anyone in the company to participate in the analytics workflow.

Cost

dbt doesn’t just improve developer productivity. It helps your whole organization become leaner by reducing duplication, waste, and confusion across the stack.

One of the most significant expenses for modern data teams is platform cost. dbt’s cost optimization features help drive that down. With observability and automation embedded directly in the transformation layer, teams can spot inefficiencies, prevent unnecessary compute costs, and align data investments with business impact. ROI tooling makes it easier to measure and demonstrate these savings.

How leading companies win with dbt

Don’t just take our word for it. Organizations across industries are demonstrating what’s possible when speed, trust, scale, and cost efficiency come together in a single transformation framework. With dbt, teams move fast while preserving trust in their data, driving adoption across roles, and keeping control over business logic, pipelines, and cloud spend.

Proven results include:

  • Enpal: 30X faster delivery
  • Pepperstone: 80% decrease in inconsistent reports
  • Siemens: 300,000 employees and 70,000+ data consumers powered by dbt at massive scale
  • Roche: 70% cost reduction by consolidating tools like Informatica, Talend, and Microsoft into a single, globally adopted transformation framework

Today’s dbt — with AI copilots, intelligent observability, and conversational interfaces — is the control plane for modern data teams, uniting transformation, governance, and AI to supercharge every step of the data workflow.

Overcome your ETL challenges: watch the webinar or request a demo of dbt today.
