So you use AI to help with customer insights?
If your AI-generated customer insights feel generic, you may be optimising for speed and quantity, not impact and quality.
(This post forms a neat reading pair with Repositories are a systems problem and not a tooling one)
In that post, I explored how repositories are systems, not just tools: they reflect the kind of knowledge you build. The same applies to how teams use AI. Whether AI improves or dilutes insights depends on how you integrate it into your workflow.
In this post, I will focus on how to use AI effectively in customer research, particularly qualitative studies, where nuance and context matter. The key is recognising how AI can evolve from tool, to teammate, to teacher.
The maturity curve of AI in the insights process
The temptation is to treat AI as a utility: summarise this, cluster that, rewrite those. To truly use AI on the path to customer insights, you need to think of it on a maturity curve:
Tool → Teammate → Teacher.
Tool: Fast, shallow. Good for cleaning, summarising, and organising.
Example: You are trying to narrow down the top five most-mentioned friction points in the onboarding interviews.
AI prompt: “You are a researcher with attention to detail. Please extract the top recurring friction points from these 15 onboarding interviews. Present them as a simple list, with quotes where possible.”
Teammate: Starts to help you identify patterns, ask counter-questions, and generate alternatives. (develops breadth)
Example: You are trying to surface contradictions between what users say they want and what they actually do.
AI prompt: “Compare user-stated preferences (e.g. ‘I like simple onboarding’) with behavioral indicators from notes and transcripts (e.g. where they got stuck or confused). Identify mismatches or contradictions with supporting quotes.”
Teacher: Offers pushback, creates strategic reframing, and identifies new pathways. Teaches you what you didn't know to ask. (develops depth)
Example: You are trying to identify unmet needs or hidden goals that might explain unexpected user behaviour.
AI prompt: “Given these interview transcripts, infer potential goals, unmet needs, or motivations that users may have but do not state directly. What deeper needs or use cases might users be solving for without explicitly saying so? Explain your rationale and reference the passages that support it.”
Each phase requires a different level of intent, input structure, and review rigour.
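To make the Tool stage concrete, here is a minimal sketch of how a team might batch-run the friction-point prompt over a folder of transcript files. It assumes the OpenAI Python SDK and an API key in the environment; the model name, directory path, and extract_friction_points helper are illustrative placeholders, and any LLM client would work the same way.

```python
# A minimal sketch of the "Tool" stage: batch-summarising transcripts.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY
# set in the environment; model name and paths are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

TOOL_PROMPT = (
    "You are a researcher with attention to detail. Please extract the top "
    "recurring friction points from these onboarding interviews. Present "
    "them as a simple list, with quotes where possible."
)

def extract_friction_points(transcript_dir: str) -> str:
    """Concatenate interview transcripts and ask the model for friction points."""
    transcripts = "\n\n---\n\n".join(
        p.read_text() for p in sorted(Path(transcript_dir).glob("*.txt"))
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model your team has access to
        messages=[
            {"role": "system", "content": TOOL_PROMPT},
            {"role": "user", "content": transcripts},
        ],
    )
    return response.choices[0].message.content

print(extract_friction_points("interviews/onboarding"))
```

The Teammate and Teacher prompts slot into the same pattern; what changes is the intent of the prompt and how much scrutiny you apply to the output.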
Broad vs. Deep: Why sequencing is everything
Most teams default to one of two extremes when using AI for research synthesis.
Casting a wide net
“Can you summarise 20 interview transcripts and give me the top 5 themes?”
Why is a wide net bad? You get surface-level themes but miss what’s underneath.
Or drilling into specifics
“Can you read this interview and identify pain points we missed because they don’t pertain to the research question?”
Why is drill-down-only bad? You miss patterns, connecting threads, and the sum of the parts.
In practice, the real value is in sequencing breadth and depth intentionally. Start broad to detect themes, then go deep to find nuances. Or go the other way: start with a small set of complex cases, then widen the lens to see whether the patterns recur.
Example sequence:
Broad prompt: “Summarise all user-reported friction points across 15 interviews. Group by task and frequency.”
Deep follow-up: “In interviews where friction was high during Task A, what emotionally charged language did users use?”
Reframing: “What assumptions about user expertise or familiarity are embedded in Task A? How did you arrive at that conclusion?”
The point isn’t to choose one mode over the other. It’s to design the sequence based on your research goals, craft expertise, and data availability.
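As a sketch of what that sequence looks like in practice, the snippet below chains the broad, deep, and reframing prompts in a single conversation, so each step can build on the previous answer rather than starting from scratch. It again assumes the OpenAI Python SDK; the model name and run_sequence helper are hypothetical.

```python
# A sketch of intentional sequencing: each stage's answer stays in the
# conversation history, so the deep and reframing prompts build on the
# broad pass. Assumes the OpenAI Python SDK; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SEQUENCE = [
    "Summarise all user-reported friction points across these 15 interviews. "
    "Group by task and frequency.",
    "In interviews where friction was high during Task A, what emotionally "
    "charged language did users use?",
    "What assumptions about user expertise or familiarity are embedded in "
    "Task A? How did you arrive at that conclusion?",
]

def run_sequence(transcripts: str) -> list[str]:
    """Run the broad -> deep -> reframe prompts as one growing conversation."""
    messages = [{"role": "user", "content": f"Interview transcripts:\n{transcripts}"}]
    answers = []
    for prompt in SEQUENCE:
        messages.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = response.choices[0].message.content
        # Feed the answer back in so later prompts can interrogate it.
        messages.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers
```

Reordering SEQUENCE is how you flip between broad-first and deep-first designs; the mechanics stay the same, only the research intent changes.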
Where not to use AI
With years of observation, instinct, and training behind you, you know when to break the rules of the process.
You know when a “noteworthy” user quote matters more than the average sentiment. AI doesn’t.
There are moments where AI adds little value or is actively unhelpful:
During (most) live interviews: AI is a distraction; its novelty and experimental feel can bias participants. That said, there are contexts where AI may outperform humans, e.g. sensitive, private, or triggering topics.
The ‘never seen or heard before’ observations: These can seed future explorations and deep dives, but AI will skim past them because they don’t form a pattern.
In final decision-making: AI can propose redesigns, but innovative and nuanced decisions require understanding organisational constraints, market dynamics, and brand ethos.
For team alignment: The sense-making conversations between research, product, development, and design are where shared understanding is built. AI can inform, but not replace them.
Moral or strategic decisions: AI can map the options, but it can’t weigh them against your company’s values.
Knowing where not to use AI is just as important as knowing where it helps. Used well, AI lets you spend more time interpreting and less time organising. Used poorly, it accelerates the production of low-quality output.
Don’t just aim to move fast or with low effort. Move meaningfully.
I currently partner with product teams on their continuous discovery and research programmes, helping them use AI to its fullest potential.
If you want an advisor or an interim partner and want to see how I can help, get in touch.