AMAs
August 22, 2024

Kate Moran on training researchers & non-researchers alike

Kate Moran, VP of Research & Content at NN/g, joined us for a Rally AMA on August 22 to discuss how to elevate your team by training researchers and non-researchers alike.

If you missed our live event, or want to revisit the highlights, read our recap below. If you’d like to watch a recording of the full AMA, follow this link.

Who is Kate?

I’m a VP at Nielsen Norman Group (NN/g). For nearly 30 years, NN/g has been a resource for UX professionals, providing practical research guidance balanced with rigor. My role focuses on identifying industry trends, crafting guidance, and producing resources like articles, templates, and training. I’ve been in UX for 14 years, including almost a decade with NN/g, training teams to do better, faster, and more reliable research.

What strategies are most effective for upskilling researchers and staying on top of trends?

Research is an interesting field because, in many ways, it doesn’t change much – human beings themselves don’t change much. The essential tactics and philosophies of working with users to figure out what they need have remained constant for decades. However, the tools we use and the things we study are always evolving. For example, now we’re researching VR, AR, and other realities, which weren’t a focus years ago.

For upskilling, the biggest opportunity is learning to use the tools that accelerate workflows. Tools like Rally or User Interviews for participant management, or analysis tools like Dovetail, have replaced the days when I was taking notes and analyzing qualitative data in Excel. Speed matters, and tools help.

Another big piece of upskilling is ongoing, incremental learning. I always recommend lunch-and-learns because they work. People are busy, and they don’t have time for massive training sessions, but small, digestible pieces of content help teams absorb new skills without feeling overwhelmed. It’s not revolutionary advice, but it’s effective.

Any tips for engaging fully remote teams in training or research?

Remote research and remote training require intentionality to be effective. I’ve loved remote research since before it was cool (pre-pandemic) because it stretches your budget, allows for diverse participants, and can be more convenient for everyone involved. But it’s harder to engage stakeholders remotely, and you lose the serendipity of in-person debriefs. To address that:

  • Assign observer roles: Give specific, meaningful jobs to observers, like taking notes on content or usability issues. This keeps them engaged and accountable.
  • Schedule buffer time: After sessions, build in time for reflection. Discuss interesting moments, like when a participant misunderstood a task, and ask observers how they would respond in real time. This builds their instincts and adaptability.
  • Encourage active participation: During sessions, pause and ask observers, “What would you do here?” or “What follow-up questions would you ask?” This helps them think critically and practice skills that experienced researchers have developed over time.

What are the most common mistakes novice researchers make?

The most common mistake is underestimating how difficult research is. It seems simple: “I’ll just talk to participants and ask questions,” but once you’re in the session, you realize how nuanced it is. Poorly phrased questions or tasks can derail the whole study.

Pilot tests are invaluable for catching issues early. Run a test session with real participants (not Dave from Accounting) to uncover problems in your setup. However, pilot tests won’t address the larger issue: a lack of exposure to proper planning and execution. This is why democratization can be controversial. Without training, people don’t realize the complexity of designing and conducting good research.

Done correctly, though, democratization can educate stakeholders and enhance buy-in, ultimately making research more impactful.

How do you address situations where stakeholders question research methods or results?

This happens all the time, especially with qualitative research. You present findings from 12 participants, and a stakeholder says, “That’s not enough people.”

Sometimes, you can explain: “Studies show that five to eight participants uncover 85% of usability issues.” But education isn’t always effective. Instead, treat stakeholders like customers. Ask yourself, “What do they need to feel confident in the data?” Maybe they need quant data to complement the qual insights. In that case, leverage analytics to provide supporting evidence, like error reports or clickstream data.

If skepticism arises after the research is complete, focus on presenting findings in ways that resonate. Show how the insights align with stakeholders’ goals or address risks they care about.

How do you tailor research communication and deliverables for different stakeholders?

This is so important. Early in my career, I made the mistake of delivering 100-page reports because I thought everything was interesting. Stakeholders don’t feel the same way – they need concise, actionable insights.

For leadership, use a layered approach:

  • Start with bullet points summarizing key findings and recommendations.
  • Provide more detailed slides or documents for those who want to dig deeper.
  • Use research repositories like Dovetail to embed tagged clips or raw data for validation.

This way, you cater to busy stakeholders while still offering depth for those who want it. Always think about what’s relevant to your audience.

How can researchers address biases when involving stakeholders in research?

Bias is a huge risk, especially in strategic or discovery research. When stakeholders are involved, it’s easy for preconceived ideas to influence how research is conducted or interpreted.

One way to mitigate this is through training. Teach stakeholders about common biases and how to avoid them. During planning, focus on neutral language and framing for tasks and questions. 

When analyzing data, encourage a collaborative workshop where multiple perspectives are considered. This reduces the risk of one person’s bias dominating the interpretation.

Ultimately, the goal is to balance inclusivity with rigor. Stakeholders bring valuable context, but researchers must guide the process to ensure the integrity of the findings.

How can organizations move beyond feature requests to deeper user insights?

Moving beyond feature requests requires shifting research upstream. Instead of waiting for requests to pile up, start discovery research earlier to identify root causes and broader user needs.

It also requires breaking silos. Stitch together insights across teams to create a unified user journey. This level of integration often depends on top-down buy-in, so it’s a cultural shift as much as a process one.

How do you see AI complementing research methods?

AI is reshaping research in fascinating ways. Tools can analyze large datasets faster or simulate personas based on aggregated data. 

For example, “synthetic personas” can summarize desk research by impersonating user groups like “American teenagers.” While useful for initial exploration, these tools shouldn’t replace real user research.

AI also needs guardrails. Without thoughtful design, AI can produce unpredictable or harmful outcomes.

How do you see AI shaping the future of research?

AI is already having a massive impact, and it’s only the beginning. Tools like synthetic personas simulate user insights based on aggregated data. While they’re not a replacement for real users, they’re useful for desk research and summarizing initial findings. 

Looking ahead, I’m excited about AI tools that will work with your organization’s own data to create dynamic personas or synthesize insights. Imagine being able to “talk” to a persona built from your research repository, with the ability to ask it questions or simulate scenarios. That said, we must use AI responsibly. Researchers play a critical role in defining the guardrails to ensure AI enhances – not replaces – human-centered design.

How do you deal with AI hallucinations in desk research?

AI is like early Wikipedia: helpful but not always accurate. Always verify its outputs, especially for critical decisions. If the stakes are high, validate findings with primary research. For low-risk tasks, like generating ideas, AI can save time, but don’t trust it blindly.

How do you involve non-researchers, like designers, in research projects? What should they do or avoid?

Involving non-researchers in research requires an open mindset. Sometimes, we gatekeep and assume certain roles, like designers or developers, aren’t suited for research. That’s not fair or accurate. For example, I’ve trained people fresh out of school with no UX or research experience who have turned out to have excellent instincts and facilitation skills.

However, there are limits. Some people may not naturally excel at research, and that’s okay. You can still involve them in low-risk ways, like planning, note-taking, or observing sessions. Planning is especially valuable because it helps stakeholders understand how much effort goes into research. When they see the complexity, they’re more likely to respect the process and results.

Avoid pigeonholing people based on their roles. Instead, assess individual strengths and find ways to leverage them in the research process.

How can democratization of research be done effectively without diminishing researchers’ roles?

Democratization is a double-edged sword. Done poorly, it can lead to poorly executed research or even undermine the value of professional researchers. However, when done intentionally, it can amplify a team’s impact and help stakeholders better understand research.

The key is to think strategically about what types of research to democratize. Tactical and evaluative studies, like usability testing, can often be handed off to designers or other cross-functional team members with some training. Discovery and strategic research, however, typically require deep expertise and should remain in the hands of experienced researchers.

Also, remember that democratization doesn’t mean eliminating researchers – it’s about extending their influence. By training others to take on smaller tasks, researchers can focus on more strategic initiatives. This can free up time for discovering insights that shape the organization’s direction.

What’s your perspective on research maturity models?

Research maturity is complex and highly dependent on organizational culture. Moving up the maturity ladder involves not just conducting more research, but integrating it earlier in the product lifecycle and across teams. This requires top-down support and cross-functional collaboration.

One challenge is that maturity isn’t linear. You can backslide. Many organizations are currently struggling due to budget cuts and shifting priorities. Progress takes time, and it’s important to celebrate small wins along the way. Focus on what’s within your control and build on that.

Thank you, Kate!

We are extremely grateful to Kate for sharing her time, energy, and insights with us – we easily could have continued the discussion for several more hours. If you’d like to watch the full AMA, follow this link.