Plentific implements a robust, scalable data workflow with dbt Cloud
This is the story of how Plentific uses dbt Cloud to bring accuracy and stability to their data pipelines and reports
- Zero data pipeline breaks since implementing automated end-to-end testing
- Data team growth over the last 3 years
- Consumable models built from 1,000s of raw tables
A real-time property management orchestration platform
Plentific is pioneering real-time property operations for real-world impact. Its software as a service (SaaS) platform seamlessly connects owners, operators, service providers, and residents in one place, making operations simpler, faster, and more efficient. Plentific works with clients to streamline operations, unlock revenue, enhance the resident experience, and stay compliant, empowering them with data-driven insights that drive action.
Plentific is dedicated to building stronger communities where people can thrive, with its growing network of 1.5M+ properties and 25,000+ service providers worldwide.
Providing recommendation models and customer-facing data products
To recommend the best contractors for each service, Plentific uses machine learning models. Data not only informs revenue analysis but also generates direct revenue for Plentific. With their flagship data product, Advanced Analytics, customers can view in real time everything about their property's repairs, maintenance, inspections, and compliance jobs.
A lack of data development standardization
Like many start-ups, Plentific spent its first years without a data warehouse or dedicated data infrastructure, relying instead on ad-hoc SQL scripts run directly against its production databases. This led to:
- Siloed data knowledge: Business users needed help from SQL-proficient technical stakeholders to extract or interpret data.
- Pipeline instability: Without pull requests or version control, new product code would break data pipelines. There were no alerts or tests in place.
- Lack of observability: The data team could not visualize the status of their cron jobs or easily perform root cause analysis.
As Plentific grew, the team addressed these data productivity and quality issues by reassessing its data stack and implementing scalable infrastructure.
Selecting the right tools
Plentific’s data stack reassessment began with selecting a new data warehouse (Snowflake) and business intelligence tool (Looker). But they were still missing a piece in their architecture puzzle: a middle layer for data transformation.
“We needed to transform approximately 1000 tables in a normalized form to 15-20 tables consumable by non-technical users,” explained Raúl Aviles Poblador, Head of Data Engineering at Plentific.
“We searched for tools for this use case and found there wasn’t any real competition to dbt. Airflow was doing something similar, but it wasn’t SQL-specific. Looker had a semantic layer, but it was at the time more focused on business intelligence.”
To assess whether dbt could meet their transformation requirements, the Plentific data team got started with dbt’s open source offering, dbt Core, before later migrating to dbt Cloud.
Reaping the data quality benefits of a modern data stack
Improving data stability with automated testing
The team implemented an automated testing system to prevent pipeline breaks caused by product changes. Now, whenever there's a new pull request on GitHub, a Jenkins job checks the JSON generated by dbt to assess whether the new code could impact existing data pipelines.
Since implementing these new processes, Plentific hasn't had a single broken data pipeline. The new system boosts the product team's confidence, as they know their changes won't break anything downstream.
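Plentific's Jenkins job itself isn't public, but the core idea can be sketched by diffing dbt's `manifest.json` artifact between the main branch and the pull request: flag models whose checksums changed, then walk dbt's `child_map` to find everything downstream. A minimal illustration, assuming only the documented `nodes[*].checksum` and `child_map` fields of the manifest:

```python
def changed_models(old_manifest: dict, new_manifest: dict) -> set:
    """Return unique_ids of models that are new or whose checksum differs
    between the two dbt manifest.json artifacts."""
    old_nodes = old_manifest.get("nodes", {})
    changed = set()
    for uid, node in new_manifest.get("nodes", {}).items():
        old = old_nodes.get(uid)
        if old is None or old.get("checksum") != node.get("checksum"):
            changed.add(uid)
    return changed


def downstream_impact(manifest: dict, changed: set) -> set:
    """Walk the manifest's child_map to collect every node downstream
    of a changed model -- the pipelines a PR could break."""
    child_map = manifest.get("child_map", {})
    impacted, stack = set(), list(changed)
    while stack:
        uid = stack.pop()
        for child in child_map.get(uid, []):
            if child not in impacted:
                impacted.add(child)
                stack.append(child)
    return impacted
```

A CI job could fail (or warn) whenever `downstream_impact` returns models that back customer-facing reports.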
“We’re very proud of this system because it protects the integrity of our data products and obviously, both we and our clients highly value that,” emphasized Raúl.
“dbt brought this new structure into place. It made it easier for us to test our changes across machine learning and advanced analytics,” added Bruno Lopes, Principal Data Engineer at Plentific.
Consolidating revenue reporting
Plentific’s revenue reports, built on Looker, are widely used—accessed by 1 in 3 employees across the company.
“Each financial management vendor reports data differently, so we need to consolidate all revenue data into a single, accurate model that can be consumed and understood by all stakeholders,” explained Raúl.
Data from NetSuite and Xero is transformed and joined in dbt, simplifying the creation, management, and maintenance of the revenue reports.
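The actual consolidation happens in dbt's SQL staging models, but the pattern is simple: map each vendor's record shape onto one shared revenue schema before unioning. A minimal sketch in Python with hypothetical, simplified field names (real NetSuite and Xero schemas are richer than this):

```python
def normalize(record: dict, source: str) -> dict:
    """Map a vendor-specific revenue record onto one shared schema,
    mirroring what per-vendor dbt staging models do in SQL.
    Field names here are illustrative, not the real vendor APIs."""
    if source == "netsuite":
        return {"invoice_id": record["tranId"], "amount": record["total"], "source": source}
    if source == "xero":
        return {"invoice_id": record["InvoiceID"], "amount": record["Total"], "source": source}
    raise ValueError(f"unknown source: {source}")
```

With every vendor normalized to the same columns, a single downstream model can aggregate revenue without caring where each invoice originated.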
Honoring Advanced Analytics SLAs with increased stability
Beyond reporting, dbt is used to create the consumer-facing models in Plentific’s flagship data product: Advanced Analytics. These models are updated in near real time and accessed by power users at large property manager organizations in the UK, Germany, and the US.
“dbt professionalized all transformation steps and enabled us to scale, minimize risk, and increase stability,” said Raúl.
Automated documentation with dbt Cloud, GitHub, and Confluence
Plentific set up an automated workflow that creates a Confluence page whenever new code is published to GitHub. Business and data users can now leverage this automatic documentation to answer their questions and reduce dependency on data and engineering stakeholders.
“It’s very quick for everyone to see what tables and sources are being used in the Confluence docs. And, because now we centralize all our data on dbt, this workflow was easy to implement,” shared Bruno.
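The details of Plentific's workflow aren't published, but a sketch of the idea is straightforward: render a table/source listing from dbt's `manifest.json`, then build the JSON payload Confluence's REST API expects for page creation (`POST /wiki/rest/api/content`). A minimal illustration, assuming only the manifest's documented `resource_type`, `name`, and `depends_on.nodes` fields:

```python
def render_model_docs(manifest: dict) -> str:
    """Render an HTML list of dbt models and their upstream dependencies,
    suitable for a Confluence page body."""
    rows = []
    for uid, node in sorted(manifest.get("nodes", {}).items()):
        if node.get("resource_type") == "model":
            deps = ", ".join(node.get("depends_on", {}).get("nodes", []))
            rows.append(f"<li><b>{node['name']}</b>: depends on {deps or 'nothing'}</li>")
    return "<ul>" + "".join(rows) + "</ul>"


def build_confluence_page(title: str, space_key: str, html_body: str) -> dict:
    """Build the payload for Confluence's page-creation endpoint;
    a CI step would POST this with an API token after each merge."""
    return {
        "type": "page",
        "title": title,
        "space": {"key": space_key},
        "body": {"storage": {"value": html_body, "representation": "storage"}},
    }
```

A post-merge CI step could regenerate the manifest with `dbt compile`, then send this payload to keep the Confluence docs in sync with the code.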
What’s next for Plentific: enhancing the platform with more machine learning and AI
Moving forward, Plentific will keep using its data stack and workflow to support the development of all data products. Next on the list: building and improving new revenue-impacting machine learning models, like pricing recommendation algorithms, and generative AI that enhances work order accuracy and diagnostics.