Data ins and outs for 2026: what data teams are keeping, cutting, and reconsidering

Kathryn Chubb

last updated on Feb 20, 2026

In the latest episode of The View on Data, hosts Jerrie Kenney, Erica “Ric” Louie, and Faith McKenna do a New Year reset. But instead of setting perfect goals they’ll forget by February, they share their “ins and outs” for 2026: what they want more of in their work, what they’re done tolerating, and what they’re trying to do differently as data teams keep scaling (and as AI keeps… being AI).

🎧 Listen & subscribe: Spotify | Apple Podcasts | Amazon Music | YouTube

In: simplifying your work without dumbing it down

Jerrie opened with a theme that’s easy to say and harder to do: simplify.

Not because simplicity is trendy, but because complexity has a way of multiplying when you’re busy, growing, or adopting new tools. Over time, “just ship it” turns into a maze: more models, more layers, more documentation, more handoffs, more ways for someone new to get lost.

The group’s point wasn’t “do less.” It was closer to “make it easier to understand what’s happening, and why.”

In practice, that looked like:

  • Trimming unnecessary bloat in projects (especially long chains of intermediate models that nobody can explain)
  • Writing documentation that answers the questions people actually ask, instead of documenting everything just to prove you did it
  • Cleaning up metadata so future-you (and future teammates) can navigate without reverse engineering your entire stack
  • Choosing one shared way of working across teams so collaboration doesn’t feel like translating between two systems

This ties back to the three things most data teams are judged on: trust, speed, and cost. When your work is understandable, it's easier to trust. When your processes are consistent, it's easier to ship. When your stack is less chaotic, you spend less time rebuilding the same logic in five places.

In: frameworks that stop you from solving the same problem 12 times

Ric brought up something that comes up in almost every scaling data org: you can fix individual problems forever, or you can build a framework that prevents the problems from repeating.

A broken dashboard is a problem. A revenue model no one trusts is a problem. A messy intake process that sends every request into a black hole is also a problem.

But if those issues keep showing up, the real problem is usually the workflow around them.

Ric talked about the friction that happens when teams work closely together but operate in completely different modes (like Kanban in one corner, sprints in another). Sometimes that split works. Sometimes it just guarantees a constant series of “wait, who owns this?” conversations.

The “in” here was alignment: fewer parallel processes, clearer paths from request to production, and shared habits that help teams move faster without creating governance nightmares later. It’s very ADLC (Analytics Development Lifecycle) in spirit: treat data like software, and build a workflow that supports quality over time, not just speed today.

In: making work fun again (seriously)

Faith shared an idea she’s been thinking about a lot as someone who teaches technical topics: learning is only engaging when it feels alive. She jokingly framed it as “learning is boring unless it’s gossip,” but the underlying point was real.

People remember what feels interesting, human, and relevant. They forget what sounds like a monotone tutorial they’re forcing themselves through.

They also talked about how “fun” can show up in normal work, not just in training. PR reviews that feel like humans wrote them. A little personality in collaboration. Team culture that doesn’t treat every interaction like a compliance audit.

If you’re building enterprise software and the whole experience feels joyless, it’s harder to stay curious. And if curiosity dies, you get stagnation, burnout, and a team that stops experimenting.

In: using AI to get unstuck, not to outsource your thinking

AI came up in a way that felt grounded: it’s helpful, and it also makes it easier to create nonsense at scale.

Jerrie shared a workflow that a lot of people will recognize. When you can’t turn your thoughts into a clear plan, it’s often easier to talk it out than it is to write it. So she’ll “narrate” the problem to ChatGPT and ask it to organize the mess into something usable.

That’s the good version of AI support: getting momentum when you’re stuck, or turning scattered thoughts into a structured first draft.

The group was also clear that this only works if you keep ownership of what you’re saying. If you let AI generate documentation (or solutions) that you don’t actually understand, you’re not saving time. You’re pushing complexity downstream to whoever has to debug it later.

Out: over-engineering (especially when AI makes it easier)

When they moved to “outs,” the biggest one was the obvious opposite of simplify: over-engineering.

They called out the specific flavor of over-engineering that’s getting worse right now: AI-generated complexity. You ask for a solution to a simple problem, and suddenly you have a query that looks like it was designed to win an argument on the internet rather than run reliably in production.

The underlying warning was pretty simple:

  • If you can’t explain what it does, you shouldn’t ship it
  • If it takes five layers to get to something that should be straightforward, there’s probably a clearer approach
  • If you’re building clever abstractions mostly because they’re clever, you’re creating future maintenance work for someone (possibly you)

It’s also a cost issue. More complexity often means more compute, more rebuilds, and more time spent on rework instead of delivering useful data products.

Out: “That’s too technical for me”

Faith’s “out” was personal and honestly pretty relatable: she’s done saying “that’s too technical for me.”

Her point wasn’t that everyone needs to become a platform engineer overnight. It was that this phrase can turn into a reflex that stops you from even trying. And in data, almost everything feels technical until you’ve had a few reps with it.

It also turned into a broader conversation about culture: some corners of the data internet make it easy to feel behind, especially when the loudest voices are early adopters who live for the newest tooling.

The counterpoint the group offered was healthier:

  • You can learn it, even if it takes longer
  • You’re allowed to ask questions without knowing the perfect term
  • “I don’t know yet” is a normal state, not a personal failure

Episode takeaways

A few things the episode kept circling back to:

  • Simplification is work, but it pays you back every time someone new touches your project.
  • If your team keeps tripping over the same problems, you probably need a shared framework more than you need another patch.
  • AI can help you think and draft faster, but it can also help you create bad complexity faster.
  • You don’t have to be the most technical person in the room to be effective. You do have to stay curious and keep learning.
  • Saying “I don’t know” is often the most competent thing you can say, as long as you follow it with “and here’s how I’ll find out.”

As they wrapped, the hosts challenged listeners to write their own work ins and outs for 2026 and share them. What are you keeping because it makes you better, calmer, and faster? What are you dropping because it creates noise, rework, or insecurity?

