The More You Know: AI in Market Research - Conference Edition

What I Learned About AI In Market Research This Year

This year I co-hosted the Ignite: AI conference from the Insights Association and attended TMRE, and both offered a useful look at how AI is actually being used in research today.

  • Ignite was smaller and more focused, with in-depth conversations and more honest discussions about what works. I also moderated a brand-side panel that shed light on the different maturity stages of AI usage.

  • The size and breadth of TMRE gave me a good sense of how large companies are experimenting with AI and where adoption varies by industry.

Below are the distilled takeaways and the questions I’m still exploring.

Synthetic Respondents

Key Takeaways

  • AI is not strong at lived experiences or emotional nuance; it performs better on forward-looking questions, where human recall is unreliable anyway.

  • Synthetic respondents answer slowly and do not anchor on past responses the way people do.

  • They often amplify niche groups at the edges of a distribution.

  • They act as representatives of a segment, not replicas of individual people.

Questions I’m Exploring

  • In a segmentation, how well do synthetic respondents perform on attributes that are not strong differentiators for the product or industry?

  • Are there industries or audiences where synthetic data consistently performs better or worse?

  • Since synthetic respondents do not carry typical respondent biases and spend more time on each question, how should we compare their output to human responses in a fair way?

AI Tools in Research

Key Takeaways

  • Image generation is starting to show up in qualitative activities.

  • Some teams use AI trained on historical ad testing data for fast creative iteration.

  • Disclosing AI use in marketing can still reduce perceived credibility, though this may fade.

  • Voice response options in surveys tend to produce richer feedback than text.

  • “Human in the loop” is not only about accuracy. It also helps connect insights to stakeholder decisions.

Questions I’m Exploring

  • What other creative uses of AI could make workshops and brainstorming more productive without over-structuring them?

  • Which tools are genuinely improving research quality, and which are adding noise because they carry the AI label?

AI Moderators

Key Takeaways

  • AI moderators can support global qual by reducing cost and speeding up multilingual studies.

Questions I’m Exploring

  • When is an AI moderator appropriate, and when does the topic or audience require a human?

  • How accurate are translations, and where might nuance get lost?

  • What happens to rapport, and does it matter for the learning you expect?

Agentic AI

Key Takeaways

  • With agentic systems, the starting prompt, data, and constraints shape the entire output.

  • There is no mid-course correction. The structure and questions must be right from the beginning.

Questions I’m Exploring

  • How strict do prompts and guardrails need to be if the goal is to make insights more accessible to business partners?

  • Which parts of the workflow need human ownership, and which can be delegated safely to an agent?

  • How do we keep the output grounded in real data rather than assumptions or over-interpretation?

Where I Want to Explore Next

  • Comparing synthetic respondents to real segment data to understand convergence and drift.

  • Testing agentic workflows in low-risk spaces to identify which steps benefit most from human guidance.

  • Designing human-in-the-loop processes that account for emotional context and not just accuracy.

  • Exploring AI-supported workshop tools that encourage creativity without overwhelming teams.

If any of these areas connect to challenges you’re facing or decisions you’re working through, I’m always open to talking through them.
