Panel Discussions
March 6, 2025

Inside the ReOps Maturity Matrix

On March 6, Lauren Gibson, Content Marketing Manager at Rally, hosted an exclusive in-person panel and breakout sessions featuring the six key contributors behind the ReOps Maturity Matrix. Together, they shared insights on why they built the matrix, how it came together, and how teams can use it to assess and improve their Research Ops maturity.

Missed the event? Or want to revisit the key takeaways? Read the full recap below or watch the live panel discussion here.

Who are the six ReOps contributors?

👋 Andrew Warr: “I’m Andy, a Product Manager at Dropbox. I was very passionate about contributing to the ReOps Maturity Matrix because it’s something I wish I had earlier in my career.”

👋 Brad Orego: “My name is Brad. I use they/them pronouns and I’m based in Brooklyn. I lead the research team at Webflow. I’ve been here for about seven months and spent a few years building Research from scratch at Auth0 before that. Super excited to chat with you all today. This matrix was a great collaboration, and I’m excited to talk about how we built it.”

👋 Bruno Daniele: “I’m Bruno! I’m officially a UX Program Manager at Google, but I lead UX Research Operations internally. I focus on everything from sourcing participants to managing logistics and incentives, so leveraging a framework like this Matrix is near and dear to my heart. I’m based in Dublin, Ireland but I’m originally from Brazil.”

👋 Crystal Kubitsky: “I’m the Research Operations Lead at MongoDB. Like Bruno, I also work on broader UX Research Operations. I live in the greater Philadelphia area with my husband and two sports-loving kids. I transitioned from research leadership into Research Ops at MongoDB and it’s been an exciting journey. I’ve worked in large organizations with democratized research, compliance needs, and red tape. The Matrix is the tool I wish I had when I started.”

👋 Graham Gardner: “I’m Graham Gardner, joining from Chicago. I work at U.S. Bank in Research Operations. My background spans Research Operations in different forms, from social sciences to a long stint at IDEO, where I focused on ethnographic research and participant management. I’m especially interested in tools and automation, so this has been a really fun project.”

👋 James Villacci: “I’m James Villacci, calling in from New York. I’m Head of Global UX Research at HelloFresh. I started here six years ago as the sole UXR and built a research function from the ground up. My newest focus has been ramping up a new Research Platform team, which focuses on enabling and operationalizing research across our entire company.”

Why did we choose a matrix instead of a traditional maturity model? 

Brad: Initially, Lauren & the Rally team presented a stage-based model (Stage 1 through Stage 5), but when I looked at it, I thought, I can’t assess my entire research practice on a linear scale.

We might be really, really strong in data governance but terrible in process and scope of practice. Or vice versa: maybe our tooling is really far along, with robust support, but our training is lacking. A rigid stage-based model doesn’t capture that nuance.

It didn’t reflect the reality of how Research Operations function in organizations. So through a lot of conversation, we got to this point where we asked: How many levels make sense? How many different verticals should we include?

We needed something flexible, but not so complex that it became impossible to use. If you have eight different verticals and twelve different levels, that’s just too much. So we found a balance, pulling from existing frameworks, like the Eight Pillars of Research Ops, and distilling them into the six key areas we have now.

This approach lets teams evaluate themselves across multiple dimensions rather than trying to fit into a one-size-fits-all progression. It also helps identify where to invest. For example, maybe your recruiting is great but your knowledge management is struggling. That’s something you wouldn’t get from a traditional model.

Every company I’ve worked with has had strengths and weaknesses in different areas depending on its history and priorities. A startup might not care as much about data governance as a publicly traded company. That’s just the nature of the beast. The goal was to build something that works for everyone, regardless of size, structure, or maturity.

At the end of the day, Research Operations have always been a bit chaotic and evolving – we’re all doing the best we can. We want our practices to be ethical, scalable, and impactful, but what that looks like varies by organization. The matrix allows for that variation while still providing guidance. That was the driving force for me.

Crystal: I think we all had a similar reaction when we first saw the traditional staged model. We had to start somewhere. You take a look at it, examine it from your own context, and figure out: Can I actually use this?

For me, I think we struck the right balance in the matrix. We show some leveling, but we don’t make it all about that. Instead, we divide it into the necessary groupings where teams can lean into specific spaces and figure out the right step forward based on their position.

What I love is that this structure allows you to look at your own environment and assess what matters most right now. It gives teams a way to level set and re-examine priorities in a way that actually makes sense for them.

So when we were building this, I kept thinking, Is this going to be useful for someone in my position, in my org? Will it help us make better decisions? And I think the answer is yes. It’s a tool that can grow with us as we continue to develop our Research Operations practice.

What was your inspiration behind your contribution to the matrix?

Graham: Similar to Brad, I think the idea of a rigid framework, where you input the things, you get a number, and you say, great, I’m an A-plus or I’m a C, is oversimplified in a lot of ways, and obviously, that’s not what it looks like.

And even in some ways, the matrix itself is a little simplified, because within an organization – like the bank where I work – each part of the organization might be on its own journey across the different pillars, stages, and levels. So you can’t really look at it organization-wide.

There’s also the question: How are you going to use the matrix to be better? And is it a tool that’s going to work well internally in your organization?

I think some people might bristle at the idea of being told that they’re “not mature,” especially if what they’re putting out in the world is really high quality. There’s a little bit of a disconnect between, Hey, we have great design, great products, great services, and the fact that we can still make improvements internally.

And how do you position that? Like, if I’m a leader internally, how do I tell people, Hey, you’re not mature?

So that was a little bit of my thought process. My hope with my input was focused on, “How can we make this feel more like a compass rather than a map?” Because everybody’s going to have their own complexity.

How have you applied the matrix in your organization? 

James: Honestly, when I first saw the draft, I thought, We’ve been doing this for years, we just didn’t have a name for it! What I love about the matrix is that it provides insight into how other companies approach research operations. It helps us learn from industry best practices and mistakes.

At HelloFresh, we enable research across 18 geographies and 8+ product lines. Seeing how others approach similar challenges has been invaluable.

What’s really exciting about this is that, across industries, we tend to not share information as much as we could. There’s always this underlying sense of competition, whether it’s “trade secrets” or just company policies. But this matrix gives us a window into what other companies are doing: what’s working for them and how we compare.

I’ve seen a lot of value in using this matrix to validate what we’re already doing well and to surface areas where we can improve. It also helps highlight, Oh, I never actually thought about this specific thing before! For us, moving to a global approach to research means that every new best practice we identify helps thousands of people do research better. And that’s a big deal.

How do you approach discussions about maturity with your team & cross-functional stakeholders?

Bruno: I like to start discussions with problem statements, not with the framework itself. If you bring in a structured framework too early, people might react like, Oh, here comes the theory guy, or, This feels too academic.

So instead, I start with what people are already experiencing day to day. What’s frustrating them? What’s slowing them down? What’s a problem they’d love to solve? Then, I introduce the matrix as a way to contextualize solutions.

One thing I really like about this framework is that you can then position how another company might interpret a problem differently. Maybe another company is doing something at a more advanced level and we can learn from that.

I also love that the matrix shows interconnectivity between different parts of Research Ops. For example, if we’re talking about repository management, we can’t just focus on storing research; we also have to consider engagement and activation. Otherwise, why are we doing all that work if no one is using the insights?

So I like to start with a real-world problem and then use the matrix to frame the discussion. That way, people don’t feel like I’m introducing a theoretical framework for the sake of it.

What’s the best mindset for evaluating maturity?

Graham: Organizations are unique and different. But I think from a Research Operations philosophy standpoint, the job is really about making strategic decisions about what should be standardized across the organization and what should be left flexible to grow in the ways that make sense. So in some ways, I’d approach a maturity matrix in the same way.

What are the things that should be consistent and standardized? And what are the things where we should allow teams to adapt and move in their own direction?

I think you have to be both flexible and realistic about what’s achievable. Not every organization is going to be a top-tier example in every category and that’s okay.

Brad: I don’t have a specific strategy necessarily, but the mindset is: Don’t pretend. It doesn’t help you to say, Oh yeah, we’re really great at this thing if you’re actually not. You have to get into the right headspace to be honest about where your org stands.

When I first started thinking about this question, what first popped into my head was: Pour yourself a coffee, go for a walk, take a long hard look in the mirror, and then get to work. You have to be realistic and ask yourself: Do we have templates for this? Do we have a documented process for that?

One good example is GDPR compliance. Do you actually have a system in place for handling the right to be forgotten? Probably not. Can you actually purge every single piece of customer data across all your tools, surveys, interviews, recordings, and insights repositories? Again, probably not. And that’s okay. But you have to be realistic about where you stand so you can actually improve.

James: Another way to think about it is research debt. We talk a lot about tech debt in the industry like, what’s the legacy code we need to update? Or design debt: What UX problems have accumulated over time? Well, research has debt too.

So as you go through the matrix and see where you’re not as strong, think about it as identifying research debt – what gaps need to be addressed and how can you plan a roadmap to fix them?

What’s next for the matrix?

Andy: I’ll start off by saying when I introduced myself earlier, I mentioned that this is something I wish I had. And that was echoed by a lot of folks here. If I can reference a movie (The Matrix 😉) quote: “It’s the question that brought you here. You know the question, just as I did. What is the Matrix?”

And now we have the matrix. So, what’s next?

We hope this becomes a tool for self-reflection – for yourself, your management chain, your peers, and for the discipline of Research Ops at large.

I’ve worked at both mature organizations and less mature ones, and in the past, I was always figuring things out as I went: learning from experience, talking to others, and just making it up.

But now, we have something concrete to build upon. And I think the real power comes from this community because the best way for this matrix to evolve is through feedback and shared experiences.

Brad: The big question is: Is this actually useful for you?

Because we can tweak the levels, change the categories, whatever, but at the end of the day, it only matters if it’s actually helping teams plan roadmaps, assess their needs, and advocate for resources.

So if it’s not working, tell us. We want this to be as actionable as possible.

Breakout room 1

In this breakout session with Andrew Warr and Bruno Daniele on insights activation and enablement management, we explored a challenge nearly every research team faces: How do we ensure insights aren't just stored, but actually used? 

Too often, research reports get buried and forgotten. As Andy put it: “A research report is a coffin for insights.” Even when teams invest in repositories, they frequently become static storage spaces rather than dynamic tools for decision-making. Bruno pointed out that scaling insights and improving access is still a major challenge. It’s not just about storing knowledge, but ensuring it actually guides decisions. So how do we change that?

Bringing insights to where people already work

One of the biggest takeaways from the discussion was that expecting people to “go find research” doesn’t work. Instead, insights need to be delivered where decisions are happening.

  • Andy shared that at Dropbox, the research team built a repository in Airtable, only to realize that their users lived in Slack, not Airtable.
  • The solution? Push insights into the platforms teams already use, whether that’s Slack, Jira, Notion, or dashboards (see the sketch below).
  • Bruno said that research activation and organizational alignment go hand in hand. A repository alone won’t solve anything if research teams aren’t also advocating for its value.

💡Instead of forcing people to adopt a new system, embed research into their existing workflows and actively evangelize its importance. 
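
For teams that want to try this, here’s a minimal sketch of pushing an insight into a Slack channel the team already watches, using Slack’s incoming webhooks (which accept a simple JSON payload with a text field). The webhook URL and the insight content are hypothetical placeholders, not anything from the panel.

```python
import json
import urllib.request

# Hypothetical webhook URL – create a real one in your Slack workspace settings.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_insight_to_slack(title: str, summary: str, repo_link: str) -> None:
    """Push a research insight into a Slack channel via an incoming webhook."""
    payload = {
        "text": f"*New research insight: {title}*\n{summary}\n<{repo_link}|Read the full study>"
    }
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # Slack replies with "ok" on success

# Illustrative usage – the insight content here is made up.
post_insight_to_slack(
    title="Onboarding drop-off",
    summary="Trial users stall at the billing step of setup.",
    repo_link="https://example.com/repo/onboarding-study",
)
```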

Measuring impact: More than just storage

A repository alone isn’t a solution. The real value comes from how often insights are referenced and applied.

  • At Dropbox, Andy’s team measured success by tracking how often past research was cited in new executive narratives. 
  • Bruno pointed out that organizations need better ways to measure how insights are being consumed and acted upon, not just whether they exist in a repository.
  • Some of the most impactful research happens when insights surface at the right time, in the right place, instead of sitting in a system no one checks.

The future of insights: From storage to action

One of the biggest challenges teams face isn’t just organizing insights; it’s activating them.

  • AI and better search capabilities could help surface relevant insights when they’re needed, making repositories more dynamic.
  • Bruno mentioned that research shouldn’t stop at discovery; it should naturally lead to actionable workflows.
  • Imagine a world where insights don’t just inform decisions, but also automatically transition into feature requests, product roadmaps, or strategic recommendations.

At Dropbox, a company-wide leadership offsite focused on “rebooting customer focus” helped surface research insights. Without leadership support, research often remains an afterthought.

👉Key takeaways👈

🔑 Meet teams where they are by integrating research into existing workflows.

🔑 Measure research impact by how often it’s referenced, not just stored.

🔑 Make insights actionable. Don’t just store them, embed them in decision-making.

🔑 Advocate for research at a leadership level to drive real adoption.

🔑 Leverage AI for smarter discovery. Insights should find you, not the other way around.

The discussion made one thing clear: Research is only valuable when it’s used. As teams look ahead, the focus should be on activation, integration, and enablement – not just documentation.

Breakout room 2

As research teams scale, the need for automation and governance becomes more pressing. How can we leverage tools and AI to streamline research operations without losing human oversight? And as more teams beyond Research Ops start conducting their own studies, how do we ensure governance keeps pace with democratization?

In this breakout session with Graham Gardner and Crystal Kubitsky, we tackled these questions, exploring the role of AI in research, the challenges of scalable governance, and how to automate wisely without compromising quality.

The role of AI: Support, not replacement

AI is making its way into research processes, but where should it fit, and where should it not?

  • AI has the potential to automate repetitive tasks, making research teams more efficient.
  • Some teams are experimenting with synthetic users to simulate real user behavior, but this sparked debate and raised valid concerns.
  • Rally’s CEO, Oren Friedman, who moderated the discussion, said AI should support research, not replace human-to-human connection.

💡AI can be useful for streamlining research processes, but organizations must be intentional about where it fits to avoid losing the depth and nuance of real user insights.

Scaling governance as research expands

As more teams outside of traditional research functions start conducting their own studies, governance becomes a challenge.

  • One attendee raised the question: How do we create scalable governance best practices as other teams take on research?
  • The group discussed the importance of clear research guidelines to ensure consistency in consent, ethics, and data handling.
  • AI may help standardize research governance, but it requires human oversight to ensure quality and compliance.

💡Governance can’t be an afterthought. Organizations need structured yet scalable guidelines to keep research ethical and effective.

Smart automation: Knowing what to automate

Not everything in research can (or should) be automated. But how do teams decide what to automate?

  • Graham suggested a simple exercise: Ask AI, “Here’s my research process: What should I automate?” (see the sketch below).
  • The most effective automation supports research without compromising human judgment.
  • Teams using Slack can leverage pre-built workflows to streamline repetitive research tasks.

💡 Automation should enhance research efficiency, not replace critical thinking. The goal is to eliminate bottlenecks, not eliminate human insight.
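
As a concrete version of Graham’s exercise, here’s a minimal sketch that sends a written-out research process to an LLM and asks what to automate. It assumes the OpenAI Python SDK with an OPENAI_API_KEY environment variable; the model name, process description, and prompt wording are illustrative placeholders, not anything the panel prescribed.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A plain-text description of your current research process (illustrative).
research_process = """
1. Intake request arrives via a form
2. Draft a screener and recruit participants by email
3. Schedule sessions manually across calendars
4. Take notes, synthesize in a doc, share a report link
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"Here's my research process:\n{research_process}\n"
                   "What should I automate, and what should stay human-led?",
    }],
)
print(response.choices[0].message.content)
```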

👉Key takeaways👈

🔑 AI should enhance research, not replace human insights.

🔑 Governance must scale as research democratization grows.

🔑 Automation works best when it eliminates bottlenecks, not human judgment.

🔑 Standardized research processes help maintain ethical and high-quality research.

🔑 Intentional automation leads to efficiency without sacrificing insight depth.

Breakout room 3

Research teams are constantly balancing business alignment, resource constraints, and strategic impact. How do you make research an indispensable part of decision-making rather than an afterthought? How do you scale insights across teams without overwhelming them?

In this breakout session with Brad Orego and James Villacci, we tackled these challenges head-on. The discussion covered how to align research with business goals, optimize resource management, and ensure insights don’t just get produced but actually drive change.

Aligning research with business objectives

One of the biggest takeaways was that research teams need to speak the language of the business. If leadership sees research as disconnected from the bottom line, it will always be deprioritized.

  • James shared that smaller-scale research can be a great entry point for getting stakeholder buy-in.
  • His team at HelloFresh created UX Research insight shows for stakeholders, which were especially effective during company reorganizations.
  • Brad said that strategic alignment is more important than volume. Not all research requests should be pursued.

💡Research teams should actively connect insights to business objectives to ensure their work remains relevant and valued.

Making research discoverable & scalable

Research repositories are only valuable if people can find and use the insights.

  • One attendee shared the challenge that their team struggled to find insights stored in Confluence. 
  • James suggested thinking about the front-end experience of knowledge management, not just the backend storage. Could a chatbot or AI-powered search help teams access insights faster? (See the sketch below.)
  • Brad stressed that training teams to navigate research repositories is just as important as having a structured system.

💡Research Ops isn’t just about storing knowledge; it’s about enabling easy discovery and use.
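
To make the chatbot/AI-search idea concrete, here’s a minimal sketch of semantic search over a handful of insight summaries. The library choice (sentence-transformers) and the example insights are assumptions for illustration; the panel didn’t prescribe a specific stack.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative insight summaries, as might be exported from a repository.
insights = [
    "Trial users abandon onboarding at the billing step.",
    "Enterprise admins want audit logs for research consent.",
    "Participants prefer async unmoderated tests on mobile.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model
insight_vectors = model.encode(insights, normalize_embeddings=True)

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the stored insights most semantically similar to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = insight_vectors @ query_vector  # cosine similarity (vectors are normalized)
    return [insights[i] for i in np.argsort(scores)[::-1][:top_k]]

# A natural-language question finds the onboarding insight without exact keywords.
print(search("why do users drop off during signup?"))
```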

Resource constraints and how to do more with less

Many teams, especially those in growing or resource-limited organizations, struggle with bandwidth. The discussion centered on how to maximize impact when headcount and budget are limited.

  • James suggested that when headcount growth isn’t an option, focus on knowledge building instead of conducting more research.
  • Brad said researchers should question the relevance of incoming requests, prioritize work that aligns with business impact, and figure out how to operationalize along the way (instead of doing it up front).
  • An attendee shared that as a team of one, balancing research and operations has been a constant struggle. James recommended bringing in interns or temp employees as a way to grow capacity over time. 

💡To scale research effectively, teams should focus on refining processes, strengthening knowledge management, and building a strong case for additional resources.

Leading research in complex organizations

For teams in large companies or heavily regulated industries, navigating internal bureaucracy can be a challenge.

  • An attendee who works on an emerging fintech-like team at a bank shared that balancing speed and compliance is an ongoing struggle.
  • Brad’s advice: Figure out what will actually get you fired vs. what’s just a slap on the wrist, and work within that space.
  • Some teams have successfully used “wrist-slap” moments to highlight inefficiencies and advocate for process changes.

💡 Understand organizational boundaries and use them as leverage for change.

👉Key takeaways👈

🔑 Align research with business objectives to increase stakeholder buy-in.

🔑 Make research insights easy to find and reference. Storage alone isn’t enough.

🔑 Prioritize research requests strategically to focus on impact.

🔑 Maximize resources by investing in knowledge management and scalable processes.

🔑 Use internal barriers as a tool for advocating change.

The discussion reinforced a critical truth: For research to be impactful, it must be aligned, accessible, and strategic. Teams that invest in business alignment, knowledge management, and thoughtful resource allocation will be best positioned to scale their impact.

Get your copy of the ReOps Maturity Matrix

If you haven’t already, grab a copy of the ReOps Maturity Matrix and get access to our assessment template here.