Table of Contents

  1. Data transformation
  2. Data testing
  3. Implementations + deployment
  4. Documentation + metadata
  5. The modern data stack
  6. Data dream teams

dbt at the centre of all pipelines

Jonathan’s team at Nearmap is responsible for the ingestion, processing, and provisioning of data to internal stakeholders across all departments. He had a past life as an analyst at several companies before becoming an engineer. Outside of work, Jonathan is a keen snooker fan with a high break of 70. He is also a long-time Arsenal supporter.

We use dbt not just for [data transformation](https://www.getdbt.com/analytics-engineering/transformation/) but also for data movement in and out of Snowflake. For us, this makes dbt more akin to a generic scheduling and orchestration tool, and it lives at the centre of our data pipeline. In my presentation I'd like to discuss why we do it this way and the pros and cons of bastardising dbt like this. I may also touch on our earlier migration to Snowflake, which is what allowed us to use dbt in this role.
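The abstract itself includes no code, but a minimal sketch of what "data movement via dbt" can look like is a model whose hooks run Snowflake `COPY INTO` statements around an ordinary transformation. This is an illustration of the general pattern, not necessarily Nearmap's exact approach; the stage, table, and file format names below are hypothetical.

```sql
-- models/staging/stg_events.sql
-- Hypothetical sketch: dbt hooks doing data movement around a transformation.
-- All object names (raw.events, @raw.s3_events_stage, etc.) are assumptions.
{{ config(
    materialized = 'table',

    -- Move data INTO Snowflake before the model builds
    pre_hook = "copy into raw.events
                from @raw.s3_events_stage
                file_format = (format_name = 'raw.json_format')",

    -- Move results OUT of Snowflake after the model builds
    post_hook = "copy into @exports.s3_export_stage/events/
                 from {{ this }}
                 overwrite = true"
) }}

-- The ordinary dbt transformation over the freshly loaded data
select
    event_id,
    event_type,
    loaded_at
from raw.events
```

Because the hooks are just SQL scheduled by dbt's DAG, ingestion, transformation, and export all inherit dbt's dependency ordering, which is what turns it into a general-purpose orchestrator.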

Browse this talk’s Slack archives

The day-of-talk conversation is archived here in dbt Community Slack.

Not a member of the dbt Community yet? You can join here to view the Coalesce chat archives.
