Operational analytics is the practice of producing datasets for customer-facing operations teams (customer service, logistics, sales, support).
You know you’re practicing operational analytics when you find yourself wanting to implement a reverse ETL tool.
Reverse ETL tools solve the last-mile problem of operational analytics by syncing data back to where it's needed day-to-day: from an analytics warehouse out to tools like HubSpot, Intercom, Salesforce, and their ilk.
Ownership of reverse ETL tooling generally follows the model of ETL tooling:
The process can be owned by a data engineer, who will step in to write any necessary integration scripts.
Generally, though, off-the-shelf reverse ETL tools (Census, Hightouch, Omnata, and others) are sufficient to do the job.
These tools can be configured by an analytics engineer or a data engineer; it depends on how your team is structured, and on whether you've centralized ownership of data sync workflows under data engineering.
What happens next (downstream dependencies) is generally owned by operational users (the marketing or sales ops team, for example).
Operational analytics results in modeled data being populated in operational tools (HubSpot, Intercom, Salesforce, etc.).
Many of these reverse ETL syncs kick off customer-facing workflows, like generating custom audiences for advertising campaigns.
This makes our analytics engineering output even more critical — money is being spent and customers are being contacted based on the outputs of our models.
So how might our tooling evolve to support this new criticality?
Let’s take customer segmentation as one example: if a customer is having a completely different experience (different messaging, ads, sales experience) based on our segmentation models, how can we play with sensitivities before putting models into production?
One thing we’re noticing is that notebooks + exploratory analysis tools (more on these in the next section) play really well with reverse ETL workflows — because they allow an entire team (data + business people) to play with the downstream impacts of modeling decisions.
Does the implementation live up to the hype?
We caught up with Athena Casarotto from Drizly's data team to find out; they recently implemented Census to sync data, primarily to Salesforce.
Thanks to Dennis Hume, who’s a data engineer at Drizly, for introducing us!
The way that my stakeholders and I use tools like Census is basically to expose information that's relevant to them in the places where it's relevant to them.
Our team has spent a lot of time and effort setting up Looker, so that the whole company would be able to visualize data and have access to it.
But the reality is that not every person at Drizly spends most of their time in Looker.
They have other tools that they use, and one of those is Salesforce. And so in that instance, because my primary stakeholders spend so much time in this one particular tool, it is not the most efficient workflow for them to bounce back and forth between Looker, in the Explores I've set up for them, and Salesforce, and still not have one place with all that information.
And so we use Census to get the metrics that they care about, at the granularity that they want to see, in the place that they are working most often. And so that is how I use the tool, and how I work with my stakeholders to make sure that they have access to the things that they're looking for.
Before we started moving data into other tools, all of the stakeholders just kind of accepted that the workflow was okay.
Have multiple windows open, click between them.
Say, okay, I care about this account, and that account has this Drizly store ID. And so I plug this Drizly store ID into this visualization tool and I pull the metrics I want here. And then I go back to my original tool and use that information to inform my workflow there.
Whereas now it is much more streamlined.
I have enjoyed working with this tool. I feel like it provides a lot of value to my stakeholders, but if you ask my stakeholders, they would be infinitely more excited because of the benefits it's given them.
I feel like every time I leave a meeting and I say, “Oh, I can get that into Salesforce for you.”
Grins across the board. They're so thrilled because it's simplifying their workflow. It's making their lives easier.
And since my job is to hopefully make their lives easier, it's great that I have this relatively straightforward way to do that: just transferring information into the place where it's most useful.
So how might operational analytics + reverse ETL look for your team?
To help expand your vision, we’ve pulled together example use cases that Fishtown Analytics’ reverse ETL technology partners have seen in the wild:
These are real use cases that teams using their tools have implemented.
Sales Ops Workflows
Shared by Tejas Manohar of Hightouch
Based on changes in customer behavior, an account team may want to fire off customer communications to reach out to:
This requires analytics engineers to calculate lead scores (or product-qualified lead scores) within transformations, so that sales ops folks setting up downstream workflows don’t need to know SQL.
Sales reps and customer success teams at Retool, Spendesk, CloudBees and Zeplin use custom lead scores in this way.
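In practice, a lead-scoring transformation often reduces to summing weighted usage signals. Here is a minimal Python sketch of the idea; the signal names, thresholds, and weights are invented for illustration and aren't any of these teams' actual models:

```python
# Hypothetical product-qualified lead (PQL) score.
# All signals and weights below are illustrative assumptions.
def pql_score(signals: dict) -> int:
    """Score a lead from 0 to 100 based on product usage signals."""
    score = 0
    if signals.get("invited_teammates", 0) >= 2:
        score += 40  # collaboration is often a strong buying signal
    if signals.get("active_days_last_30", 0) >= 10:
        score += 30  # sustained engagement
    if signals.get("integrations_connected", 0) >= 1:
        score += 20  # deeper product adoption
    if signals.get("hit_plan_limit", False):
        score += 10  # usage limits often precede an upgrade
    return score

print(pql_score({"invited_teammates": 3, "active_days_last_30": 12}))  # 70
```

Precomputing this score as a column in a warehouse table means the downstream sync tool only has to map a field, and sales ops can filter on `pql_score` without writing any SQL themselves.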
Customer Service Personalization
Shared by James Weakley of Omnata
Losing a loyal customer because of poor customer service is an unnecessary shame.
And a warranty claim being filed is a particularly fraught moment — something has gone wrong enough with the product that the customer is seeking compensation.
We can use operational analytics to prioritise our customer service queue, and make sure that our best customers aren’t falling through the cracks.
To get set up, we’ll want to calculate a customer VIP score in our transformations, based on equal weightings of:
We can use Omnata to pop this importance score onto the case record in Salesforce, and use it to prioritise the processing of high-value cases first. Agents can also live-query historical orders from within the Salesforce UI, if they need additional info on a customer.
This would mean that a loyal customer, who’s likely to come back to support the business, would have their warranty claim processed as quickly as possible, because the agent doesn’t have to toggle between tools to do their job.
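The equal-weighting idea above can be sketched as follows. Since the specific factors aren't spelled out here, this example assumes three illustrative ones (lifetime revenue, orders in the last year, and tenure); the factors and the normalization caps are assumptions, not Omnata's or any team's real formula:

```python
# Hypothetical equal-weighted customer VIP score.
# Factors and caps are illustrative assumptions.
def vip_score(lifetime_revenue: float, orders_last_year: int, tenure_years: float,
              max_revenue: float = 10_000.0, max_orders: int = 50,
              max_tenure: float = 10.0) -> float:
    """Return a 0-1 VIP score; each factor is capped, normalized, and weighted equally."""
    factors = [
        min(lifetime_revenue / max_revenue, 1.0),
        min(orders_last_year / max_orders, 1.0),
        min(tenure_years / max_tenure, 1.0),
    ]
    return sum(factors) / len(factors)  # equal weighting across factors

print(vip_score(10_000, 50, 10))  # 1.0 -- a maxed-out VIP
```

Materialized as a column on the customer table, this score is then a plain number the sync tool can write onto the Salesforce case record for queue prioritization.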
Marketing Personalization at Scale
Shared by Sylvain Giuliani of Census
Marketers need answers to tough questions like who to target, when, and why.
Marketing teams need product usage data to answer those questions, which is where we at Census come in.
A couple use cases we commonly see on marketing teams:
Optimizing ad campaign budget by automatically creating audience segments based on product usage data. This avoids wasting budget on already-highly-engaged users.
Personalizing product activation campaigns by setting up alerts for when different milestones are hit, so you send the right message at the right time.
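The suppression logic in the first use case can be sketched like this; the field names and the engagement threshold are assumptions for illustration, not Census's actual configuration:

```python
# Hypothetical ad-audience builder: suppress already-highly-engaged users
# so campaign budget targets the users who still need a nudge.
# Field names and the threshold of 20 sessions are illustrative assumptions.
def build_ad_audience(users: list[dict]) -> list[str]:
    """Return the user IDs to include in the ad audience."""
    audience = []
    for user in users:
        if user["sessions_last_30"] >= 20:
            continue  # already highly engaged: no need to spend budget here
        audience.append(user["id"])
    return audience

users = [
    {"id": "u1", "sessions_last_30": 25},  # engaged -> suppressed
    {"id": "u2", "sessions_last_30": 3},   # low engagement -> targeted
]
print(build_ad_audience(users))  # ['u2']
```

In a real setup, this logic would live in a warehouse transformation, with the resulting user list synced out to the ad platform as a custom audience.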
And a couple more ideas for the road!
Thanks again to Athena and Dennis, Tejas, Chris and Syl for sharing your reverse ETL + operational analytics stories.