With over 50 million happy users globally, Miro is a poster child for product-led success.
Not only have they been successful in delighting end-users, but Miro has also attracted a sizable customer base from large enterprises through their Product-Led Sales motion.
For the uninitiated, Miro makes it easier for remote teams to collaborate digitally. Their digital whiteboard product connects teams in real time so you can co-work on all kinds of projects with colleagues around the globe.
As you can imagine, with millions of users across hundreds of thousands of workspaces, that is a lot of data to collect, organize and make available to the go-to-market team.
We chatted with Aaron Bannin, Analytics Engineer at Miro, to learn more about how he and the team at Miro handle all of this data and their approach to working with tools like Pocus to make data insights actionable for the go-to-market team.
In this article, you’ll learn about:
- Miro’s data team and their charter
- Challenges faced by the data team at Miro
- How to set up your data stack for product-led
- Why data teams like Miro’s love Pocus
Meet Aaron, Analytics Engineer at Miro
Aaron Bannin is an Analytics Engineer on Miro's Data & Analytics team. His role, and his team's charter, is to design strategies for how the company moves and consumes data to make decisions. Aaron works on the key integrations from Miro’s data stack to external systems for various workflows and reporting, including Pocus.
Aaron spends his days living in Snowflake, Miro's data warehouse, and dbt, their transformation tool. In terms of data maturity, Miro has built an impressive data stack over the years, and the data team has invested in a thoughtful strategy for how data is managed and consumed by other tools. With a Product-Led Sales (PLS) motion generating high volumes of data, it was not feasible for all of this data to live in Salesforce; a data warehouse was a necessity.
The challenge: making data actionable for GTM
Go-to-market teams are probably some of the largest data consumers within an organization. In Aaron’s role, he is responsible for understanding the data needs of go-to-market and making that data available across various tools.
However, with so many tools across the go-to-market stack, data silos and reliability can become issues. When data points and data definitions begin to differ across tools, go-to-market trust in that data begins to erode.
A perfect example of this issue is how CRMs have become a huge data silo, making it increasingly difficult for GTM teams to access the data they need. In Aaron’s experience, this is especially a problem for product-led teams that need access to product usage data.
Existing solutions are not built for GTM
The data team had tried to push insights to the sales team with BI dashboards, but found that the UX was not ideal for GTM teams and the output was not always in the format they needed.
“BI tools are all about aggregating data. Visualizing tabular data in an actionable format for sales teams was difficult.”
Miro’s go-to-market team needed a single source of truth to reduce wasted time and effort from their reps digging through various tools to get a 360 view of insights about customers.
Aaron’s team and Pocus were very aligned on the need to solve this problem.
“We have many different tools, and within those tools, there are lots of different places the data lives. Solving this is obviously very aligned to the Pocus vision of having a unified and consistent view of the data. So I was happy to work with Pocus on making this happen.”
Setting up your data warehouse
Aaron and the team at Miro are quite mature in their data setup today. Almost all data is piped into the data warehouse and transformed with dbt before it powers other tools like Pocus. Yet some data silos and data reliability concerns still exacerbated the problem. This prompted the team to clean up and standardize the data tables in the warehouse before connecting Pocus, reducing hassle and rework for the data team once GTM began experimenting with data in Pocus.
To facilitate the standardization project, Aaron and the team had alignment meetings internally to figure out what data they had, how that data behaved, and the best way to store it for go-to-market use cases.
The result: Aaron and his team built dbt models that Pocus ingested seamlessly.
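To make that concrete, here is a minimal sketch of the kind of roll-up such models perform: aggregating raw per-user events into one row per workspace, the shape a GTM tool can consume directly. This is an illustrative Python sketch, not Miro's actual dbt code, and all field names are hypothetical.

```python
from collections import defaultdict

# Hypothetical raw usage events, one row per user action.
raw_events = [
    {"workspace_id": "ws_1", "user_id": "u_1", "event": "board_created"},
    {"workspace_id": "ws_1", "user_id": "u_2", "event": "board_opened"},
    {"workspace_id": "ws_2", "user_id": "u_3", "event": "board_created"},
    {"workspace_id": "ws_1", "user_id": "u_1", "event": "board_opened"},
]

def standardize(events):
    """Roll raw events up into one row per workspace with
    GTM-friendly metrics (distinct active users, total events)."""
    per_workspace = defaultdict(lambda: {"active_users": set(), "events": 0})
    for e in events:
        row = per_workspace[e["workspace_id"]]
        row["active_users"].add(e["user_id"])
        row["events"] += 1
    # Convert the user sets into counts for the final, consumable table.
    return {
        ws: {"active_users": len(row["active_users"]), "events": row["events"]}
        for ws, row in per_workspace.items()
    }

print(standardize(raw_events))
# {'ws_1': {'active_users': 2, 'events': 3}, 'ws_2': {'active_users': 1, 'events': 1}}
```

In a real dbt project this logic would live in SQL models materialized as tables in the warehouse, but the principle is the same: raw events in, standardized account-level rows out.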
The solution: a collaborative platform for data and GTM
Data teams are often nervous about the tools the go-to-market team brings to the table, and rightfully so: these tools often create an additional burden for the data team to set up and maintain while creating yet another silo of data.
Data and GTM teams love Pocus because it is a collaborative platform where data and GTM can get alignment that benefits both teams.
The go-to-market team benefits from a unified single source of truth they can trust and rely on to drive sales playbooks without coming back to the data team every time.
On why data teams benefit from Pocus, Aaron puts it best:
“We have more control over data and better data governance consumed by the go-to-market team. All with less work from our team, freeing up our time to work on other mission-critical tasks."
On the go-to-market side: “Pocus is highly attuned to how sales organizations think and their expectations. The biggest advantage of using Pocus is the time spent thinking about the UX: surfacing data in an actionable way that reps will actually want to use. Plus, the team will continue to iterate and refine; it would be hard to replicate that internally with a data or engineering-led effort.”
How the data team interacts with Pocus
We often get asked by data and RevOps teams how exactly Pocus works with their existing stack of tools. The answer largely depends on the maturity of the stack, the go-to-market motion, and a number of other factors.
In Miro’s case, here is an inside look at how we worked with the data team to get set up, and the ongoing work required by the data team to keep end users in Pocus happy.
Then it’s time to get data into the right shape to make it actionable for go-to-market teams. Pocus has flexible data tools to make this easy for resource-strapped data teams who would rather not own all data modeling and transformation, but we’re equally happy to let your data team take the lead.
Here are three approaches for how data teams can work with Pocus:
#1 Let Pocus handle all the data modeling and prep
For teams that combine multiple data sources and don't have the data team resources internally, use Pocus’ data tools to build the model. In this approach, you can use our in-app modeling tools to bring together data from the data warehouse, CRM, and other sources. We can work with your data team to identify the right raw data sets and build all the metrics needed to power GTM team use cases.
#2 Data teams handle the prep, and Pocus connects the dots with GTM
For data teams who want to manage transformations in the data warehouse themselves (like Miro), Pocus will work with the go-to-market team to define the metrics and data needed. With this approach, your data team will manage all of the prep, typically pointing us to tables in the data warehouse or creating new dbt models.
#3 Mixture of both approaches
Some customers take a combination of these two approaches. In this scenario, foundational data is managed and prepped by the data team: think key tables for users, metrics around active users, and payments. The team can still use Pocus’ data transformation features to do “last-mile” transformations as requests come up from the go-to-market team (think account whitespace = employees - active users).
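A last-mile metric like account whitespace is usually just simple arithmetic over fields that already exist. Here is a hedged sketch, assuming hypothetical account fields (`employees`, `active_users`) rather than any actual Pocus schema:

```python
def account_whitespace(employees: int, active_users: int) -> int:
    """Whitespace = employees who could be using the product but aren't.
    Clamped at zero so accounts with more users than listed employees
    don't go negative."""
    return max(employees - active_users, 0)

# Hypothetical account records; field names are illustrative.
accounts = [
    {"name": "Acme", "employees": 500, "active_users": 120},
    {"name": "Globex", "employees": 80, "active_users": 95},
]
for a in accounts:
    print(a["name"], account_whitespace(a["employees"], a["active_users"]))
# Acme 380
# Globex 0
```

The point of doing this as a last-mile transformation is that the data team doesn't need to ship a new warehouse model every time GTM wants a derived number like this.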
Pocus has first-class support for data warehouses, CRMs, and other common repositories for product usage and customer data, so we handle the complexity here. All we need are credentials for the various tools (scoped to the necessary data); in Miro’s case, this was a set of specific tables built in Snowflake for Pocus consumption. Unlike other tools, where the data team would also be responsible for keeping an eye on the integration, Pocus is responsible for all data pipelines, maintenance, and data quality monitoring.
Once the initial setup is complete, ongoing work from Miro's data team is only required when net new metrics need to be calculated. As mentioned above, if your team uses Pocus’ last-mile modeling, you will need even less ongoing maintenance.
“At this point, only net new data. Other than that it is pretty low lift for the team. I’ve spent way more time thinking about how to get the same data we push to Pocus into other tools.”
The outcome: less time wasted reconciling data discrepancies
In the end, Aaron and his team have saved hours per month on average by using Pocus. They have also saved the cost of building and maintaining a solution internally.
“Simply put, it would have taken a team of three or four engineers a year to get anywhere near feature parity with what Pocus built. There’s a reason you don’t rebuild Salesforce or Snowflake. Pocus has expertise in the PLS use cases, understanding the end user (sales reps), and how they want to work.”
Learn more about building vs. buying a Product-Led Sales platform.
Advice for early-stage companies
Earlier-stage companies either have smaller data teams or no data/analytics resources whatsoever, and may find the explainer above daunting.
Don’t worry, Pocus can work with your data too.
Larger companies with mature data teams and data stacks may opt to connect Pocus and build solely on top of the data warehouse, but the reality for earlier-stage companies is much messier.
Most customers use Pocus to combine data from a variety of tools including the data warehouse, but also pull in data from CRM (Salesforce or HubSpot) and anywhere else important business data lives.
Pocus has flexible data tools to make it easy for resource-strapped data teams who would rather not own all data modeling and transformation, but we’re equally happy to let your data teams take the lead. Which option is right for your team will depend on your data maturity, resource availability, and what your data team wants to own.