Casey Gollan on building systems for scaling research and product impact
Casey Gollan, Manager, Product Excellence at IBM, joined us for a Rally AMA on November 7 to discuss building systems for scaling research and product impact.
If you missed our live event, or want to revisit the highlights, read our recap below. If you’d like to watch a recording of the full AMA, follow this link.
Who is Casey?
I’m an engineering manager on what we call the Product Excellence team within IBM Software. You could think of it as a mix of product operations, design operations, and research operations. It’s part of a larger organization, which encompasses everything from design systems to accessibility to how we work together.
My focus is on the systems-building side of things: how can we build internal platforms to scale, automate, and integrate cutting-edge practices into the day-to-day work of our product teams?
When you first joined IBM, was your role similar to what it is now? What has that evolution been like?
It’s definitely been an evolution. When I joined about three years ago, we were officially called a Research Ops team. My background spans operations, UX, and engineering, so I’ve worked across many areas of Research Ops, from tool selection and procurement to research data management.
At IBM, the team was ahead of its time in starting a research repository – it was already live when I got there, and advancing that initiative has been a continuous thread in my work. Over time, I’ve focused more on the internal platforms and systems we build to scale and operationalize practices.
We have an incredible team spanning the gamut from software developers to operational experts to change management leaders and service designers. All of these people and skills work together to create a huge impact.
How does research fit within IBM, and where do you sit organizationally?
At IBM, research sits within design. There’s always the question: should research be a peer to design, or are researchers just another type of designer? IBM has swung back and forth on this, even in my time here.
Ultimately, the bigger question is: are teams set up to collaborate? To me, the shape of the org chart feels less important than making sure cross-functional collaboration is effective.
IBM’s design history is fascinating. About 15 years ago, there was only one designer for every 72 engineers. Design as we know it today barely existed. Then, a leader was brought in to really establish design at IBM. Around 2013, IBM announced plans to hire 3,000 designers over a few years, and transform the organization with design thinking.
Today, IBM has one of the largest design organizations in the world. We have a robust design system called Carbon, which is open-source and used by companies even beyond IBM. We also have a mature accessibility initiative within our design org. It’s amazing how deeply design is embedded into IBM now.
What’s the most important factor for ensuring strong collaboration?
One of the most important factors is creating a shared understanding of collaboration. For example, at IBM, we have a one-page diagram for our Product Development Lifecycle (PDLC). It shows how cross-functional partners contribute at every stage of product development.
It’s not a linear handoff – it shows that product managers, researchers, designers, developers, and marketers all play a role throughout the process. Visualizing collaboration in this way helps make it tangible and actionable.
How can teams make research insights actionable and relevant for product teams?
When I joined IBM, my team had already created a research register as a way to track who’s working on what within the organization. That’s the first step: understanding what work is happening.
From there, it’s about connecting insights to the product lifecycle. Some of the most effective researchers at IBM were already manually and painstakingly creating slides that mapped their insights directly to PMs’ epics. They’d show how insights aligned with product goals and track the status.
What we’re working on now is automating this. Instead of researchers manually cross-checking systems, we want them to push their insights into the system and have the progress tracked automatically. This frees up researchers to focus on their next insights while our systems handle the impact accounting.
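To make that idea concrete, here’s a minimal sketch of what such an automation could look like. Everything in it is a hypothetical stand-in – the endpoint URLs, field names, and the shape of the payloads are illustrative, not IBM’s actual systems. The point is simply: read insights that researchers have linked to product epics, pull each epic’s current status from the work tracker, and write it back so the impact accounting happens without manual cross-checking.

```python
import requests

# Hypothetical endpoints standing in for a research repository
# and a product work tracker (where PMs manage epics).
INSIGHTS_API = "https://research-repo.example.com/api/insights"
TRACKER_API = "https://tracker.example.com/api/epics"


def sync_insight_status(api_token: str) -> None:
    """For each insight linked to an epic, pull the epic's current
    status back into the research repository automatically."""
    headers = {"Authorization": f"Bearer {api_token}"}

    # 1. Fetch insights that researchers have linked to product epics.
    insights = requests.get(INSIGHTS_API, headers=headers, timeout=30).json()

    for insight in insights:
        epic_id = insight.get("linked_epic_id")
        if not epic_id:
            continue  # Not yet mapped to product work; nothing to track.

        # 2. Look up the epic's current status in the tracker.
        epic = requests.get(
            f"{TRACKER_API}/{epic_id}", headers=headers, timeout=30
        ).json()

        # 3. Write the status back so progress is tracked automatically.
        requests.patch(
            f"{INSIGHTS_API}/{insight['id']}",
            json={"product_status": epic["status"]},
            headers=headers,
            timeout=30,
        )
```

A job like this can run on a schedule, so researchers link an insight to an epic once and the system keeps the status current from then on.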
How can teams embed research earlier in the product development process?
It starts with shared practices, not tools. Tools can make things easier, but practices are the foundation. For example, we’ve adopted the concept of a “discovery backlog.” This runs parallel to a delivery backlog and is a space for PMs and researchers to collaborate. It’s where product teams can identify areas where they need more confidence or clarity, and researchers can help determine whether an opportunity is worth pursuing – or if it’s better to pivot.
Another practice we’ve implemented is the “insights debrief.” Instead of waiting until the end of a study to share findings, we hold collaborative checkpoints where researchers and stakeholders assess preliminary insights. This enables insights to flow into product planning even before a study is fully concluded. It’s about integrating research into the rhythm of decision-making.
How do you handle resistant stakeholders?
Resistance, for me, is a cue to look at your organization through the lens of change management. Get curious and understand why a stakeholder is resistant and what’s important to them. What are their goals, and how can research align with and help accelerate those?
Stakeholder mapping is also helpful. Do you have leadership buy-in? Is there grassroots support among the people doing the work? Understanding these dynamics can guide your approach. Ultimately, it’s about relationships. Insights don’t live in tools; they live in the connections between people.
What aspects of research or operations are most suitable for automation?
Automation shines in areas where people feel frustrated doing repetitive, time-consuming tasks. For example, scheduling status syncs or chasing updates – these are perfect targets for automation. I like Simon Willison’s framing of language models as a “calculator for words.” AI isn’t a replacement for skilled knowledge work like concise insight articulation, but it can assist a skilled and busy product manager with overwhelming tasks like summarizing long research reports or extracting answers to specific questions.
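As an illustration of that “assist, don’t replace” framing, here’s a minimal sketch of using a language model to pull a targeted answer out of a long report. It assumes an OpenAI-style chat API; the model name and prompts are placeholders, and a real setup would need chunking or retrieval for documents longer than the model’s context window.

```python
from openai import OpenAI

client = OpenAI()  # Assumes OPENAI_API_KEY is set in the environment.


def answer_from_report(report_text: str, question: str) -> str:
    """Ask a model to answer a specific question using only the
    report's contents -- assistance for a busy PM, not a replacement
    for the researcher's own insight articulation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Placeholder; use whatever model you have access to.
        messages=[
            {
                "role": "system",
                "content": "Answer strictly from the provided research report. "
                           "If the report doesn't address the question, say so.",
            },
            {
                "role": "user",
                "content": f"Report:\n{report_text}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to the report’s contents, and telling it to say when the report doesn’t answer the question, keeps this in “text calculator” territory rather than letting it invent findings.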
Automation is also no substitute for human relationships. Something as simple as sending a generic scheduling link instead of asking how somebody’s day is going may seem efficient, but it can feel impersonal – and that’s a missed opportunity for connection. Strong relationships are key to impact, and they will always be between people, not bots.
What’s been the most successful automation initiative for your team?
The automation I’m most excited about is connecting research insights to product roadmaps. By automating the linkage between insights and roadmap milestones, we’re saving hundreds of hours per year while improving data quality. This increases the speed and quality of product discovery while also helping us track which research efforts lead to product improvements and revenue impacts, creating a clearer picture of research’s value.
What tools do you recommend for process automation, and what metrics do you use to measure success?
For tools, it depends on what’s available within your organization. In highly regulated industries, you may not have the freedom to pick and choose. But in general, low-code or no-code tools have been game-changers. They allow non-developers – like researchers and operations folks – to build automations that directly address their pain points.
For metrics, I focus on productivity and satisfaction. Are we saving people time? Do they like using the system? Tools that frustrate people aren’t really successful, even if they’re technically effective. I want our team to build tools that spark joy.
What are the biggest challenges in knowledge management?
Change management is a big one. Asking researchers to log their studies and insights, or asking stakeholders to search a research repository every time they’re doing discovery, adds steps to everyone’s workflows. You need to understand people’s real-world needs, and the value of these systems needs to speak for itself at every touchpoint.
Another challenge is information overload. At IBM’s scale, we generate thousands of studies and insights. It can be both overwhelming and surprisingly siloed. We’re focusing now on driving research reuse: ensuring insights are easy to find and can create second-order benefits, like looking across studies from all product areas to improve our enterprise-wide design system and impact hundreds of products.
How can teams retroactively organize their knowledge management systems?
First, decide how far back to go. You don’t need to update everything! Focus on the last few years or the most critical investment areas.
Also, identify the minimum amount of information needed to make the system useful. Just because you can add a field doesn’t mean you should. Prioritize what’s actionable and relevant.
Why do you think knowledge management is often overlooked for research teams?
I think the problem is rooted in measures of success that are too short-sighted. Immediate, tangible impacts, like delivering a research insight to inform a product decision, are relatively easy to see, appreciate, and measure. Long-term benefits, like the ROI of research reuse, are harder to justify because they’re distributed, lagging, and often remain invisible.
Matt Duignan from Microsoft has a great concept called the “Quadrant of Doom.” It’s about the tension between work that is high-impact but low-recognition. For example, reuse of an insight over 5 years might benefit 10 teams you’ll never meet. It’s impactful, but it doesn’t provide the immediate satisfaction of delivering a single insight to your own team tomorrow. That lack of recognition can make knowledge management feel unrewarding.
At IBM, we’ve been fortunate to have strong and growing executive-level support for the ROI of reuse. This allows our team to focus on driving productivity by enabling product teams to get more value – even recurring value – from insights the company has effectively already paid for, and oftentimes already paid off. High-quality insights are a gift that can keep giving, but only if they can keep being found, shared, and integrated long into the future.
What strategies have you found effective for advocating for the value of Research and Research Operations?
The first step is recognizing that research ops is a real job. It can’t thrive as the side project of your most organized and passionate researcher. At enterprise scale, you need a dedicated team to drive these efforts.
At IBM, we’ve also adjusted how we position ourselves. We rebranded from “Research Ops” – first to “Productivity” and now to “Product Excellence” – to reflect our broader impact on product development. Sometimes, the term “ops” is misunderstood as overhead, and we wanted to make it clear that we’re builders too. Framing your work in terms of its business value is key to growing support.
This transition has been unfolding over a few years. Initially, we focused heavily on access to users and insights – things like recruitment and research repositories. While those are still important, our scope has expanded.
Now, we’re thinking cross-functionally: how do we make entire product teams more effective? It’s a big shift, but it aligns research with broader organizational goals. For me, it’s about getting out of silos and thinking about how research de-risks and accelerates the entire product development process.
Beyond product outcomes, how do you measure research impact?
One of the most important metrics we look at is whether teams are doing enough discovery work. Are product teams engaging with research to inform their decisions? That’s a key indicator of research influence.
We’re also exploring speed of product discovery, meaning how quickly teams can gain the confidence they need to make decisions. Research that helps teams decide to pivot or avoid pursuing bad ideas can save significant time and resources. We can now capture and highlight that kind of impact.
What are common characteristics of an organization that lead to successful strategic support systems?
Leadership needs to value excellence not just in outcomes, but also in the processes that drive those outcomes. This creates an environment where systems and practices can sustain long-term success.
Dan Hill’s book Dark Matter and Trojan Horses talks about the “dark matter” of organizations – the invisible forces that shape outcomes. Thinking about the broader context, like how teams collaborate and how decisions are made, is essential for building successful systems.
What non-research-related domains or areas inspire your perspective on research and operations?
I find inspiration in systems thinking. Donella Meadows’ article “Leverage Points: Places to Intervene in a System” is a classic. It explores different ways to drive change, from changing the rules of the game to shifting paradigms. Her highest leverage point is changing your own mental models: recognizing that “all paradigms are, themselves, paradigms,” and that the way we approach problems and solutions is neither fixed nor inevitable.
I’m also inspired by the metaphor of firefighting. Many teams describe themselves with frustration as always “putting out fires,” but what if we designed our teams like actual fire squads? They’re equipped, trained, and ready to respond quickly. Thinking about how to build a team that’s prepared and agile enough to respond to the ceaseless change in large organizations, leading-edge technologies, research, and ops has been a fun organizational puzzle.
What principles guide your approach to selecting tools for research and operations?
I use the acronym PIES: Portable, Integrated, Extensible, and Sustainable.
- Portable: Can you get your data in and out of the system easily? If you switch tools, will you lose everything?
- Integrated: Can the tool connect to other systems, either directly or through third-party integrations?
- Extensible: Can you build on top of the tool? Instead of forcing everyone into one platform, can the tool do what it does best while integrating with other systems?
- Sustainable: Is the company or team behind the tool stable? The research tooling space is competitive, and tools often merge or shut down. Long-term viability matters.
Ops teams often face constraints such as budget limitations, regulatory requirements, or company-wide initiatives to reduce vendor spend. So while it’s great to evaluate tools, it’s equally important to work diplomatically and realistically within your constraints.
Any advice for teams looking to incorporate service design principles into their work?
Hire service designers! But also: you don’t need a formal background in service design to get started. One of our previous leaders ran the entire team through an online course called Service Design for the Real World. It’s self-paced and provides a great foundation. There are also plenty of books and resources available.
Service design is about thinking holistically – understanding the journey of your product teams, their pain points, and the touchpoints where they interact with your systems. As you build that understanding, you can design processes and tools that truly meet their needs.
Connect with Casey
If you enjoyed Casey’s AMA:
- Follow Casey on LinkedIn and say hello!
- Read Casey’s article on integrating UX insights into product planning.
- Dive into more from Casey by checking out his Medium.
Thank you, Casey!
We loved having Casey join us and are so grateful he was willing to share his time, insights, and expertise with us. If you’d like to watch the full AMA, follow this link. The above article is personal and does not necessarily represent IBM’s positions, strategies, or opinions.