Table of Contents

  1. Data transformation
  2. Data testing
  3. Implementations + deployment
  4. Documentation + metadata
  5. The modern data stack
  6. Data dream teams

dbt at the centre of all pipelines

Jonathan is the Data Engineering Lead at Nearmap. His team is responsible for the ingestion, processing, and provisioning of data to internal stakeholders across all departments. He had a past life as an analyst at several companies before becoming an engineer. Outside of work, Jonathan is a keen snooker player with a high break of 70 and a long-time Arsenal supporter.

Originally presented on 2020-12-14

We use dbt not just for data transformation but also for data movement in and out of Snowflake. This makes dbt more akin to a generic scheduling and orchestration tool for us, and it lives at the centre of our data pipeline. In my presentation, I'd like to discuss why we do it this way and the pros and cons of bastardising dbt like this. I may also touch on our earlier migration to Snowflake, which is what allowed us to use dbt in this way.
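The abstract doesn't show how data movement is wired into dbt, but one common way to implement the pattern it describes is with dbt's `pre_hook` and `post_hook` configs, using Snowflake's `COPY INTO` to load and unload data around a model. A minimal sketch under that assumption — the model name, schemas, stages, and file format here are hypothetical, not taken from the talk:

```sql
-- models/raw_events.sql
-- Hypothetical dbt model: the pre-hook pulls files from an external stage
-- into a landing table, the model transforms them, and the post-hook
-- unloads the result back out of Snowflake.
{{ config(
    materialized='table',
    pre_hook="COPY INTO raw.events_landing
              FROM @raw.s3_events_stage
              FILE_FORMAT = (TYPE = 'JSON')",
    post_hook="COPY INTO @exports.s3_events_out
               FROM {{ this }}
               FILE_FORMAT = (TYPE = 'PARQUET')"
) }}

select
    value:event_id::string        as event_id,
    value:occurred_at::timestamp  as occurred_at
from raw.events_landing
```

Because the hooks run as part of `dbt run`, the load, transform, and unload steps all share dbt's dependency graph and scheduling, which is what makes dbt behave like a general orchestrator in this setup.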

Browse this talk’s Slack archives

The day-of-talk conversation is archived here in dbt Community Slack.

Not a member of the dbt Community yet? You can join here to view the Coalesce chat archives.