How dbt and Tableau bring better governance to analytics

Kathryn Chubb

on Jul 11, 2025

With great self-service comes great responsibility. While easier access to data has broken down barriers to information, it’s also led to a “data wild west” that undermines data trust.

dbt and Tableau have partnered to address three critical challenges facing modern analytics teams. Organizations that use them in concert have improved data governance, increased data trust, and accelerated productivity.

Deputy, an HR management software company, is a prime example. The company faced numerous challenges with its existing data architecture that eroded trust in data. In this article, we’ll look at these challenges, how the dbt and Tableau integration improves data governance, and the remarkable results that Deputy enjoyed after moving to dbt as their data control plane.

The modern data trust crisis

Only 57% of data analytics leaders say they have complete confidence in their data's accuracy. This trust deficit stems from what’s become known as the "wild west" of self-service analytics.

Tools like Tableau have democratized analytics by giving business users self-service access to data for BI. Unfortunately, that freedom also introduces new challenges.

Organizations often run hundreds of dashboards, some built with a myriad of tools, others on custom SQL queries that connect directly to data warehouses. This freedom to connect anywhere creates environments with no standardization, massive duplication of effort, and unclear data sources.

Many analytics teams struggle with data existing across multiple platforms—Google Sheets, BigQuery, Snowflake—and often duplicated across systems. Building data models in Tableau becomes unnecessarily complicated when it's unclear which source contains production-ready data. The lack of consistent documentation can make tracing data back to its source nearly impossible.

Deputy's breaking point

Deputy's situation exemplified these challenges at scale. As the company grew rapidly, its already complicated architecture became increasingly convoluted. Simple requests took far too long to complete. Developers created complicated workarounds in dashboards because the architecture wasn't designed with BI in mind.

The breaking point came when even the data team—the experts—could no longer efficiently navigate their own system. Deputy's data director, Huss Azfal, admitted that "leadership had nothing but horror stories about working with the data team."

This wasn't a people problem. It was an architecture problem that made everyone's jobs harder than necessary.

The three core challenges

The problems Deputy faced illuminate three core challenges with data development:

No consistent approach to building analytics pipelines. There's no single place to find data. For data producers, this leads to duplicated effort and redundant code. For data consumers, it makes high-quality, polished datasets hard or impossible to find.

Siloed data leads to inconsistencies. Without visibility into what other teams are doing, everyone ends up doing the same thing, but differently. We've seen different teams define the same metric in different ways to suit their particular use case or dashboard. That leads to mass duplication of effort, which further erodes data trust.

Poor visibility into data quality. We’ve all looked at a dashboard and wondered if the numbers were accurate. Even if data consumers can find data, they lack the tools to verify the data’s quality, determine its freshness, or trace it back to its source.

How dbt transforms data pipeline development

dbt addresses these challenges by allowing analytics teams to prepare data in a modular, centralized, and governed way. It does this in three ways:

  • Bringing engineering practices to data analytics through the Analytics Development Lifecycle (ADLC), a process that accelerates data velocity while improving data maturity
  • Centralizing all data transformations in a single data control plane, enabling data discovery and collaboration
  • Using automated testing and version control to increase data quality, collaboration, and traceability

From monolithic to modular

This transformation is immediately visible in how analytics code is structured.

Let’s look at an example. Legacy SQL scripts feature table references scattered throughout, embedded Common Table Expressions (CTEs), and nested subqueries, making them difficult to read and debug.

With dbt, these monolithic scripts break down into smaller, manageable "models" that reference each other. You could, for example, structure an order items model into different stages derived from multiple upstream models: two staging models and one intermediate model.
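As a rough sketch, with hypothetical model names rather than anything from a real project, the intermediate model might join the two staging models using dbt's `ref()` function, which is how models reference each other and how dbt infers build order and lineage:

```sql
-- models/intermediate/int_order_items.sql
-- Hypothetical model names for illustration. {{ ref() }} resolves to the
-- upstream model's table or view and records the dependency in dbt's graph.

with order_items as (
    select * from {{ ref('stg_order_items') }}
),

products as (
    select * from {{ ref('stg_products') }}
)

select
    order_items.order_id,
    order_items.product_id,
    order_items.quantity,
    products.unit_price,
    order_items.quantity * products.unit_price as line_total
from order_items
join products
    on order_items.product_id = products.product_id
```

A downstream order items mart would then simply select from `{{ ref('int_order_items') }}`, so a fix to either staging model propagates everywhere it's used.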

The modular approach creates more readable code, with better use of white space and a clearer logic flow. It's also more reusable: models can be referenced by multiple downstream models, eliminating code duplication.

When a staging orders model needs updating, developers only need to change it once, rather than hunting down every instance across multiple scripts. This approach saves analysts time, eliminates redundancy, and accelerates the development lifecycle.

Built-in quality and collaboration

dbt's GitHub integration enables parallel development, allowing multiple developers to work simultaneously on different models. More importantly, dbt enables specific tests on individual models. Developers can easily test, for example, whether a column contains unique values or has nulls, thereby identifying errors early and preventing them from cascading downstream.
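As a minimal sketch (model and column names are hypothetical), such tests live in a schema YAML file alongside the model, and `dbt test` fails the build when a check breaks:

```yaml
# models/staging/stg_orders.yml -- hypothetical example
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique      # every order_id appears exactly once
          - not_null    # no missing keys sneak downstream
```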

Automated lineage is another game-changer. dbt generates data lineage as a byproduct of those model references, showing how a metric like gross profit traces back through revenue and supply cost calculations to source data. This lineage provides complete transparency without manual effort.

How dbt and Tableau work together

dbt and Tableau have worked closely together to enable access to key features of dbt from Tableau reports. Using dbt’s Semantic Layer and data health tiles, Tableau users can access standardized metrics and track overall data health. This results in less redundancy and greater data trust.

Semantic Layer: Centralizing business logic

The Semantic Layer is one of dbt's most powerful features for ensuring consistency across an organization. It serves as an abstraction layer that translates raw data into business terms that companies use daily.

Typically, when a company defines a metric such as "total revenue," the definition lives in multiple places, such as individual dashboards or analyst queries, often with subtly different logic in each.

The semantic layer defines entities (primary and foreign keys), dimensions, measures, and complex metrics, such as gross profit (revenue minus cost). In dbt, total revenue would live in the Semantic Layer, providing a consistent, discoverable, and accessible result to anyone who requires it.
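As an illustrative sketch in dbt's MetricFlow-style YAML (the model, column, and metric names are assumptions, not from the article), a semantic model declares the entities, dimensions, and measures, and a metric is then defined on top of a measure:

```yaml
# models/marts/orders_semantic.yml -- illustrative names
semantic_models:
  - name: orders
    model: ref('fct_orders')
    entities:
      - name: order_id
        type: primary          # primary key of this semantic model
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum
        expr: amount
      - name: supply_cost
        agg: sum
        expr: cost

metrics:
  - name: total_revenue
    label: Total revenue
    type: simple
    type_params:
      measure: revenue
```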

Unlike pre-aggregated views, Semantic Layer metrics are calculated dynamically at query time. This approach enables non-additive metrics—percentages that can't simply be summed to get accurate totals at different aggregation levels.

For example, if you have a percentage in a view or a table, you can't sum or average it to get the correct value at a different aggregation level; you have to recalculate it from its components. Because a Semantic Layer metric is defined independently of any one table or grain, it can be recalculated correctly on the fly for whatever dimensions a query requests.
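A ratio metric can encode that recalculation once. This sketch assumes a gross_profit metric is defined (as a derived metric, revenue minus cost) alongside the total_revenue metric from the example above:

```yaml
metrics:
  - name: profit_margin
    label: Profit margin
    type: ratio                    # numerator and denominator are each
    type_params:                   # re-aggregated at query time, so the
      numerator: gross_profit      # percentage is recalculated, never
      denominator: total_revenue   # summed or averaged incorrectly
```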

Semantic Layer integration with Tableau

The integration between dbt's Semantic Layer and Tableau ensures consistency across all reports. After downloading the dbt Semantic Layer connector from Tableau Exchange, analysts connect to predefined metrics rather than raw data.

When building a chart showing total revenue per customer, analysts can't change the aggregation from sum to average in Tableau; it's already defined in dbt. This ensures that the many developers and analysts working with the same Semantic Layer use consistent definitions across all business reports.

Data health tiles: building stakeholder confidence

Data health tiles provide real-time visibility into data quality through embedded tiles in Tableau dashboards. These tiles show two key checks: data freshness (stale vs. fresh) and quality assessment (passed/failed tests).

The implementation uses dbt exposures defined in YAML files. Developers specify which models a dashboard depends on; dbt automatically runs freshness checks and displays test results. The health tile appears as an embedded URL in the Tableau dashboard, requiring minimal setup for maximum impact.
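A minimal exposure might look like the following, where the dashboard name, URL, model, and owner are all placeholders:

```yaml
# models/exposures.yml -- illustrative example
version: 2

exposures:
  - name: revenue_dashboard
    label: Revenue dashboard
    type: dashboard
    maturity: high
    url: https://tableau.example.com/views/revenue
    description: Executive revenue reporting in Tableau
    depends_on:
      - ref('fct_orders')          # models the dashboard reads from
    owner:
      name: Analytics team
      email: analytics@example.com
```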

Data consumers frequently question whether the data they're viewing is accurate. The health tile provides immediate confidence that data is both fresh and has passed quality tests. Developers also benefit from this during production pushes, as they can verify that new data has surfaced correctly to prod. The real value, however, comes from giving data stakeholders peace of mind about data reliability.

Deputy: From horror stories to skyrocketing productivity

To reap these benefits for themselves, Deputy rebuilt their data architecture with dbt, focusing particularly on modularization and reducing code and metrics redundancy.

The numbers demonstrate the significant improvement Deputy's implementation of dbt Cloud with Tableau achieved across multiple dimensions. Dashboard development time plummeted from months to just one week. Computing costs decreased by 20%. Most impressively, their data team earned a +100 Net Promoter Score (NPS) rating from internal stakeholders, meaning people would actively recommend working with the data team.

The architectural benefits went beyond metrics. Deputy unified their data sources into a single source of truth. They improved governance through standardized development practices. Data freshness checks, previously nonexistent, became automatic. The modularization reduced redundancy throughout their architecture.

Productivity was now "skyrocketing," Deputy’s data director said, rather than generating horror stories from data stakeholders. Gaining a 360-degree view of its data earned Deputy a 180-degree turnaround in how both its data producers and data consumers viewed working with data.

Getting started with dbt and Tableau

The partnership between dbt and Tableau addresses fundamental trust issues that plague modern analytics. By applying software engineering practices to analytics development, organizations create reliable and scalable data products that stakeholders can actually trust.

Deputy's success proves that transformation is achievable with concrete, measurable benefits. Their journey from "horror stories" to "skyrocketing productivity" shows what's possible when data teams have the right tools and practices.

For teams inspired by Deputy's transformation, several practical resources exist. The dbt Learn platform provides free courses, including certification paths for both dbt Developer and dbt Architect exams. dbt Quickstarts offer hands-on guides for connecting to Snowflake, Databricks, and other platforms.

To discuss your dbt and Tableau integration needs in more detail, ask us today for a demo.
