Mistakes I made as the head of analytics (and what I’d do differently now)

Kyle Salomon

last updated on Apr 09, 2026

Before I joined dbt as a solutions architect, I spent time leading an analytics and data engineering team. I managed the business analytics (BA) and data engineering (DE) teams, helped build out our data platform, and made a lot of decisions I thought were right at the time.

Spoiler alert: some of them were not.

Now that I sit on the other side—helping customers adopt and scale dbt—I see the patterns everywhere. The same mistakes I made are the ones I watch teams make every single day. So consider this my confessional. These are the things I got wrong, what I wish I'd done instead, and what I'd tell any data leader who's willing to listen before they learn the hard way.

A quick note: Jack Kennedy deserves a ton of credit here. Jack was the brains behind implementing dbt at our company. I was his manager; my job was to remove blockers and give him room to operate. A lot of the sharpest insights in this piece come directly from conversations with him about what we would've done differently.

Mistake #1: I didn’t push my team to stay ahead of the platform

This is the one that stings the most because it was entirely within my control.

When we adopted dbt, we learned what we needed to learn to get the job done. And then… we stopped. New features would ship, new paradigms would emerge, and we'd collectively shrug and say "we'll look at that when we need it." Classic mistake.

What I should’ve done:

  • Made continuous learning non-negotiable. I should have pushed every person on my team to regularly dig into newly released features and the roadmap. Not just read the changelog—actually think about how those features could solve their day-to-day problems.
  • Set up monthly knowledge-sharing sessions. Each person owns a topic, learns it, and teaches the rest of the team. Simple, cheap, high-impact.
  • Gotten the team through SA-level training. The depth of understanding you get from that kind of enablement is a game-changer. It would have completely shifted how my team thought about the platform.

And this isn’t just about dbt:

I'm telling this story through the lens of dbt because that's where I live now, but this lesson applies to every tool in your stack. Your cloud data warehouse, your BI platform, your orchestration layer, your ingestion tools—all of them are shipping new capabilities constantly. I know for a fact we left a lot of meat on the bone with dbt, but I'd bet we were underutilizing our data warehouse and other tools just as badly. We simply never created a structured way for the team to investigate what each vendor was offering and learn from it.

If I could do it over, I'd assign ownership of vendor relationships across the team. Someone owns staying current on dbt. Someone else owns the warehouse. Someone else owns the BI tool. Each person is responsible for knowing what's new, what's coming, and how it could help us. That kind of distributed awareness is how you stop leaving value on the table across your entire data stack—not just one platform.

The “faster horse” problem:

This scales beyond individual habits into how entire teams think about solutions.

It's the classic Henry Ford analogy. If you asked people what they wanted, they would've said "a faster horse." Ford didn't build a better horse—he built a car. A completely different solution to the same underlying problem. That's exactly what happens when a team is so locked into their current approach that they can't see a fundamentally better one sitting right in front of them. They keep breeding faster horses—optimizing workarounds, layering on complexity—instead of stepping back and asking whether the whole approach needs to change.

When your team knows what's on the roadmap and what's newly available, the question shifts. They stop asking "how do I make this workaround better?" and start asking "do I even need this workaround at all?" That shift in thinking is everything. It's the difference between a team that's perpetually catching up and one that's building ahead of the curve.

The mesh example:

Our team operated as a classic hub and spoke model—a Center of Excellence made up of data engineers and analytics engineers (the hub) supporting domain teams across Finance, Product, Marketing, Sales, and others (the spokes). On paper, it's a solid structure. In practice, we made it harder than it needed to be.

We ran everything out of one monorepo and one project. Every spoke team was working inside the same massive codebase as the hub. That meant they were exposed to things they didn't need to see, navigating complexity that wasn't theirs, and constantly dependent on the core team for changes that should've been within their own control.

If I had taken the time to learn dbt Mesh and multi-project architecture when it was introduced, we could have fundamentally changed this dynamic:

  • Simplified what the spoke teams saw. Each domain team could have had their own project scoped to the models and data that mattered to them—not the entire estate.
  • Given them more responsibility and autonomy. Most of those spoke teams wanted more ownership over their data. Mesh would have let us give it to them in a controlled, well-bounded way instead of the all-or-nothing access of a monorepo.
  • Set clear boundaries between hub and spoke. The core team could have published curated, contracted interfaces for the spokes to build on, instead of everyone reaching into the same tangled codebase.

Instead, we stayed in our old patterns—one repo, one project, one bottleneck—and missed an opportunity to fundamentally improve how we collaborated across domains.

Why this matters:

The dbt platform is evolving faster than most teams realize, and the ones who invest in staying current are the ones who get outsized value. The teams that treat dbt like a static tool are missing out on massive capability.

Mistake #2: We were too married to our original data estate

We built our data estate, and then we treated it like it was sacred. Every layer, every naming convention, every model structure—it was all inherited from the original design, and we never seriously questioned whether it still made sense.

It didn't.

What went wrong:

  • We should have changed the shape of our layers and our data as our needs evolved. Instead we kept bolting things onto a structure that wasn't designed for where we were headed.
  • Data discoverability suffered. When your architecture doesn't reflect how people actually need to find and use data, everything gets harder.

The lift-and-shift trap:

When we moved to dbt, the question was: do we lift-and-shift first and fix later, or do we rebuild from scratch?

Here's what I'd say now, with the benefit of hindsight:

  • If your team is coming from nothing—lift and shift into a controlled environment. Don't stress about perfection. Get into dbt, get version control, get testing. Then begin to fix.
  • But the follow-up is absolutely necessary. You must reorganize into proper architecture after the initial migration. Too many teams treat lift-and-shift as the finish line. It's the starting line.
  • Before you build anything net-new, make sure your business objects are well-defined and you have at least a base layer of documentation in place. Otherwise you're building a house on sand.

But first ask yourself: should we even move this?

This is the lesson Jack and I regret the most from our migration. Before you lift and shift anything, you need to ask two questions:

  1. Do we actually need this?
  2. Does someone own this?

If the answer to either question is no—do not bring it into your new environment. Full stop. Don't migrate garbage just because you can.

We learned this the hard way. One of our engineers wrote a script to lift and shift everything (it was pretty sweet)—every model, every object, regardless of whether it was actively used or owned by anyone. We knew a lot of it was garbage. We did it anyway. And then we spent an enormous amount of time dealing with the consequences: maintaining things nobody needed, debugging things nobody understood, and cluttering an environment that was supposed to be a fresh start.

The instinct during a migration is to say "let's just move it all and sort it out later." Resist that instinct. Be ruthless about what gets to come along. If it doesn't have a clear owner and a clear purpose, leave it behind. Your future self will thank you.

Why this matters:

If you’re thinking about migrating to the dbt platform or expanding your deployment, you need to hear this. The migration itself is just step one. The real value comes from what you do *after*: rethinking your architecture, improving discoverability, and being willing to break from the old design.

Mistake #3: Our job orchestration was stuck in the past

We ran our jobs the way everybody runs their jobs: daily schedules, hourly schedules, cron expressions, and a prayer that nothing breaks overnight.

It worked. Until it didn't scale.

What we should’ve done differently:

  • Used triggers instead of schedules. Instead of running everything on a timer, we should have leaned into triggering jobs off of completed upstream jobs. Take advantage of the API—let the work flow naturally instead of forcing it onto a clock. The tools to do this existed. We just defaulted to what was familiar: cron jobs and hope.
  • Communicated better with stakeholders. If my team of data analysts, who were closer to the business, had truly understood how orchestration worked under the hood, they could have set better expectations with the business. Instead, we were reactive—explaining delays after the fact instead of designing for reliability upfront.
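
As a concrete sketch of the trigger-based approach: the dbt Cloud Administrative API exposes a "trigger job run" endpoint, so an upstream process can kick off the downstream job the moment it finishes. The account and job IDs below are placeholders, and in practice the token would live in a secrets manager, not in code.

```python
import json
import urllib.request

API_BASE = "https://cloud.getdbt.com/api/v2"

def build_trigger_request(account_id: int, job_id: int,
                          token: str, cause: str) -> urllib.request.Request:
    """Build the POST request that triggers a dbt platform job run."""
    url = f"{API_BASE}/accounts/{account_id}/jobs/{job_id}/run/"
    body = json.dumps({"cause": cause}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# An upstream job's final step (or a webhook handler) would then send it:
# urllib.request.urlopen(build_trigger_request(12345, 67890, API_TOKEN,
#                        "Triggered by upstream ingestion job"))
```

The point isn't the specific client code; it's that the downstream job runs because its dependency finished, not because a clock struck midnight.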

And here's the kicker: just because Rapid Onboarding gets you started doesn't mean that's the entire window. There's a lot of value outside of that initial onboarding phase that teams completely overlook.

The SAO connection:

Now, I want to be clear—I left my previous role before Fusion and state-aware orchestration were even announced. There's no way we could have known SAO was coming. But that's almost the point. If we had built our orchestration the right way from the start—using triggers, thinking in terms of dependencies rather than schedules—the team I left behind would have been in a much stronger position to adopt SAO when it arrived. The mental model would've already been there. The architecture would've been aligned. Instead, they'd have to unwind years of schedule-based patterns before they could even begin to take advantage of it.

The lesson isn't "you should've predicted the future." The lesson is: build with the best patterns available today, and you'll be ready for whatever ships tomorrow.

Why this matters:

Orchestration is where a lot of hidden pain lives—wasted compute, late data, frustrated stakeholders. Adopting trigger-based orchestration now isn't just about solving today's problems, it's about being ready for where the dbt platform is headed.

Mistake #4: We never treated our dbt infrastructure as code

This one is squarely Jack's domain; he owned this and would be the first to say we should have done it sooner. I'm not a Terraform expert, but the lesson is clear.

What we missed:

  • We should have used Terraform with the dbt provider to manage our jobs, groups, and infrastructure as code from the start.
  • Instead of manually configuring jobs in the UI, we could have had our entire dbt platform infrastructure defined in a Terraform repo—1:1 mapping between what's in the repo and what's running in production.
  • This would have given us:
    • Reproducibility—spin up environments, replicate configurations, no more "who changed that job?"
    • Control—permissions, jobs, resource blocks, all managed in version-controlled code
    • Auditability—every change is a PR, every PR is reviewable
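
For a sense of what this looks like in practice, here's a minimal sketch using the dbt-labs/dbtcloud Terraform provider. The resource shape below follows the provider's `dbtcloud_job` resource, but treat the attribute names as an assumption and check the provider docs for your version; the IDs come from variables you'd define elsewhere.

```hcl
terraform {
  required_providers {
    dbtcloud = {
      source = "dbt-labs/dbtcloud"
    }
  }
}

# A production job defined as code instead of clicked together in the UI.
resource "dbtcloud_job" "daily_build" {
  project_id     = var.project_id
  environment_id = var.prod_environment_id
  name           = "Daily production build"
  execute_steps  = ["dbt build"]

  triggers = {
    github_webhook       = false
    git_provider_webhook = false
    schedule             = true
  }
}
```

Every change to a job is now a pull request against this repo, which is exactly the reproducibility and auditability we were missing.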

Why we didn’t do it:

Honestly? It was a capability that was introduced after we were already on the platform, and we fell into the same trap as Mistake #1—we didn't stay ahead of what was new and possible. By the time we realized Terraform could have transformed how we managed dbt, we had already accumulated a bunch of manual configuration debt.

Why this matters:

If you are scaling your dbt deployment or managing multiple projects and environments, Terraform is a game-changer. It's the kind of thing that sounds like overhead upfront but pays for itself ten times over as complexity grows.

Mistake #5: We ignored governance until it was too late

Nobody wakes up excited to talk about data governance. I get it. But ignoring it cost us more time and headaches than almost anything else on this list.

What went wrong:

  • We built data products without clear owners. If nobody owns it, nobody maintains it. And if nobody maintains it, it rots. We should have had a hard rule: don't build objects that don't have owners. Only pull in data products that have clear, accountable ownership.
  • We underutilized our resident architect. We had access to an RA during our early stages with dbt, and we used them for short-term tactical wins—firefighting, basically. We should have engaged them for long-term strategic value: architecture reviews, best-practice adoption, understanding where dbt was headed, and building a foundation that would scale.
  • We ignored the dbt style guide, especially around marts. This sounds minor, but when your marts layer is a mess, everything downstream suffers. Naming conventions, model organization, documentation—all of it compounds.
  • We had little to no documentation and context. This is one I knew was a problem from the day I took over. Our models were poorly documented, and instead of fully understanding what dbt, MCP, and the catalog could do for us, we reached for outside tools—Confluence, Google Docs, our company intranet—to try to fill the gap. We were patching documentation together across many different platforms when the capability to do it properly was sitting inside the tools we were already paying for. I just never took the time to research the best solution and understand what our vendors actually offered.
  • We let data testing slide. Documentation wasn't the only thing we neglected—data quality suffered right alongside it. We had some testing in place, but nowhere near enough. It was the same pattern: we knew it was a problem, we knew better solutions existed, and we still let it slide for far too long. Governance, documentation, and testing are a package deal—when you let one go, the others tend to follow.
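
To make the documentation-and-testing point concrete: in dbt, both live next to the models in the same YAML file, which is exactly the gap we were patching with Confluence and Google Docs. A minimal sketch (the mart and column names are made up):

```yaml
# models/marts/finance/_finance__models.yml
models:
  - name: fct_orders
    description: >
      One row per order (grain: order_id). Owned by the Finance domain team.
    columns:
      - name: order_id
        description: Primary key for the orders mart.
        data_tests:
          - unique
          - not_null
      - name: order_total_usd
        description: Order amount in USD, net of refunds.
        data_tests:
          - not_null
```

This is also precisely the context an LLM pointed at your project (via the dbt MCP server or the catalog) would have to work with; without it, the model is guessing.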

Why this matters:

Governance isn't sexy, but it's the difference between a data platform that scales and one that collapses under its own weight. Ownership, style guides, and proper RA engagement aren't "nice to haves"—they're the foundation.

And here's the part that makes this mistake feel even bigger in hindsight: AI is coming for every data platform, and it needs context to be useful. Everything I neglected—documentation, testing, clear ownership, well-defined processes—is exactly the context that AI needs to actually help. Without it, AI doesn't accelerate your team. It just confidently generates garbage faster.

Think about the dbt MCP server. If you expose it to an LLM today, that LLM is going to interact with your models, your definitions, your metadata. If none of that is documented, tested, or governed—what exactly is the LLM working with? It can't be trusted. Not because the tooling is broken, but because the foundation underneath it is hollow. The AI is only as good as the context you've given it, and if that context is incomplete, inconsistent, or missing entirely, you've just handed an LLM the keys to a house with no blueprints.

Of all the mistakes on this list, this might be the one that compounds the hardest going forward. I did no favors to the leaders who came after me. I left no context behind—no documentation to build on, no testing framework to trust, no governance structure to lean into.

And now, in a world where AI is supposed to unlock the next wave of productivity for data teams, the teams that ignored this work are the ones who will be the least prepared to take advantage of it. The ones who invested in governance, documentation, and testing? They'll plug AI into a well-documented, well-tested estate and actually get value from it. Everyone else will be starting from scratch—or worse, trusting outputs they shouldn't.

So don't be me. Get this right now—or better yet, yesterday.

Mistake #6: I stopped failing fast

This one is personal.

Early in my career as a web developer and data analyst, I lived by a simple principle: fail fast. Try things. Figure out quickly whether something is going to work or not. If it doesn't, pivot. Iterate. Move on to the next solution. Don't fall in love with your first attempt; fall in love with finding the right answer, however many attempts it takes to get there. Be AGILE!

That mindset was my edge. It's how I learned, how I solved problems, and how I built confidence in tackling things I'd never seen before.

And then I became the head of analytics, and I lost it.

What happened:

When I took over a larger team, I defaulted to what was safe and "what worked." I stopped taking swings. Instead of rapidly testing approaches and iterating toward the best outcome, I leaned on the familiar. If something was functioning—even if it wasn't great—I left it alone. I chose stability over experimentation, and I told myself that was the responsible thing to do as a leader.

But that's not leadership. That's risk avoidance dressed up as pragmatism.

What I should’ve done:

  • Kept the fail-fast mentality, even at scale. Leading a team doesn't mean you stop experimenting—it means you create an environment where the team can experiment safely. Timeboxed spikes. Low-risk proof-of-concepts. Small bets with big learning potential.
  • Modeled the behavior I valued. If I wanted my team to be bold and iterate quickly, I needed to show them what that looked like—not retreat into safe, predictable decisions.
  • Stayed true to who I am. This is the real lesson. When you step into a bigger role, the instinct is to become someone else—someone more cautious, more "executive." But the traits that got you there are the ones you need to keep. You have to bring your whole self to the job, not a watered-down version that's afraid to break things.

Why this matters:

No one should lose their edge when they take on more responsibility. If anything, a bigger team needs that willingness to try, fail, learn, and pivot more than a small one does. The problems are bigger, the stakes are higher, and the cost of staying stuck in "what works" only compounds over time.

Be true to yourself. Create the culture. Take the chances you believe in. And if they don't work out—good. You just learned something faster than everyone who's still playing it safe.

The meta-lesson

Every single one of these mistakes boils down to one thing: we didn't invest time in understanding what was possible before we needed it.

We were always reactive. Always catching up. Always learning about a feature or a pattern after we'd already built around the old way of doing things.

If I could go back and give myself one piece of advice, it would be this:

Carve out time—real, protected, recurring time—for your team to explore what the tools at your disposal can do for you. Not what they do for you today, but what they could do tomorrow. That's where the compounding value lives.

Thanks to Jack Kennedy for being the real architect behind all of this—both literally and figuratively. Any of the smart ideas in here are probably from conversations with him. The mistakes are all mine.
