AI After the Hype: Notes from IIEX North America 2026
A reflection on five sessions from IIEX North America 2026, held April 29–30 in Washington, DC, and what they reveal about the next chapter of AI in market research.
Key Takeaways
- IIEX North America 2026 marked the industry’s shift from AI hype to AI integration. The question is no longer whether to use AI in market research, but where it helps and where it hurts.
- The conference’s dominant theme was cognitive dissonance: the same speakers showcasing AI capabilities also defended the irreducible human element of research.
- Jess Vande Werken of Rivian introduced “whole human insights”: research that captures both rational decisions and emotional reality.
- Mary Beth Jowers of Heineken reframed change management as “consumer diagnosis for coworkers,” applying insights skills inside the organization.
- Mario Carrasco of ThinkNow drew a critical distinction: generative statistical models are not LLMs and require different industry conversations.
- Research buyers should now ask specific questions about where AI is used, what human controls exist, how regulatory obligations are handled, and how trust is preserved.
This year’s IIEX North America felt different from last year’s. The conversation about AI in market research has moved past the hype.
In 2025, that conversation was dominated by hype, uncertainty, and a fair amount of theater. Speakers wrestled with whether AI was a threat or an opportunity. Vendors made bold claims they couldn’t fully back up. Buyers asked questions no one had clean answers to.
In 2026, the eagle has landed. AI is no longer a question to debate; it’s a reality to integrate. And what made IIEX 2026 worth the trip wasn’t the boldness of the AI demos, which were largely incremental. It was the unmistakable pattern that ran underneath nearly every session, sometimes within the same talk by the same speaker:
The push toward more automation, side-by-side with the insistence that the human element is the part of research that cannot be automated away.
That tension isn’t a contradiction. It’s the actual frontier.
The dissonance was the message
In session after session, presenters showed off how much faster, cheaper, and more scalable AI is making parts of the research workflow. And in nearly every one of those same sessions, the presenters then pivoted, sometimes within the same slide, to a defense of the human work that AI cannot do.
It was easy to read as cognitive dissonance. I think it’s something more useful: the industry has finally moved past believing AI is a yes-or-no decision and is starting to grapple with the much harder question of where, specifically, AI helps and where it actively hurts.
Five sessions made the case from different angles.
“Whole human insights”
Jess Vande Werken, Senior Lead of Design Strategy & Research at Rivian, opened the conference with a keynote about what she called whole human insights: research that captures the rational AND the emotional reality of how people live with a product. Her thesis: research can lead us down one path while the lived reality is somewhere else entirely. We’re good at studying how people make decisions; we’re not nearly as good at studying why.
It was a useful reminder for anyone in our industry who has been tempted to believe that more behavioral data, processed faster by smarter algorithms, will close that gap. It probably won’t. The “why” lives in territory that AI is not currently equipped to explore, and the design choices that flow from it are still profoundly human work.
“Consumer diagnosis for coworkers”
Mary Beth Jowers, VP Consumer & Market Insights at Heineken, argued on the Future of Insights panel that insights professionals should start treating internal change management as the same diagnostic exercise we apply to consumers. Segment your stakeholders. Identify their unmet needs. Design use cases that meet them where they are. Build a story with enough emotional weight to displace the status quo.
It reframed something many of us have puzzled over for years: why some research drives action while equally good research sits in a deck. The work that lands has been designed for the human reading it, not just the question being asked. AI can help generate the analysis. The translation from analysis to action, the work of moving an organization, remains stubbornly human.
“We’re not debating AI. We’re navigating an identity crisis.”
Mario Carrasco of ThinkNow used his Beyond the Hype session to draw a line that is getting lost in much of the current industry conversation: generative statistical models are not LLMs. They are different tools for different problems. Generative statistical models have specific, defensible use cases (sample frame development, weighting correction, hard-to-reach populations), and conflating them with the broader chatbot-driven AI conversation does the entire industry a disservice.
His point, in effect: precision about what AI is actually doing in any given workflow is table stakes for credibility now. Generic claims that “we use AI” are no longer impressive, and may soon be a red flag.
Speed without trust is not a winning strategy
One panel offered the most pointed observation of the conference: AI means more research can be produced faster. And faster, in our industry, often comes at the expense of trust.
The breakdown happens when individual steps in the research process get optimized in isolation. Each step gets faster, the integrated whole gets weaker, and the data that emerges may carry less weight with the decision-makers who are supposed to use it. The path forward requires integration: quality controls, process overlap, and AI supporting human judgment rather than replacing it at the joints where judgment matters most.
This is the unglamorous work of the next era. The firms that win it won’t be those with the most AI tools. They’ll be the ones that build the most trustworthy systems around them. It’s a discipline we’ve spent nearly four decades practicing at ADRG, long before AI made it the conversation of the year.
The regulatory layer is catching up
Howard Feinberg’s session on the regulatory landscape was a sober counterweight to the technical sessions. The Insights Association is advocating for a new federal privacy bill, the Secure Data Act, which addresses audience measurement and other industry concerns. ISO 27001 and 27002 certifications are increasingly important credentials for firms handling sensitive data. The Decennial Census and American Community Survey remain essential infrastructure for reliable research, and the administration is, for the first time in years, supporting increased funding.
Transparency in AI is a growing regulatory focus, particularly around chatbots, training data, and disclosure. Non-compliance carries potential legal consequences.
The takeaway: the human scaffolding around AI use isn’t optional, and it isn’t only an internal quality concern anymore. It is becoming a legal and contractual one.
What this means for research buyers
Looking across these sessions, the picture for research buyers is actually clearer than it has been in a few years.
The era of “do you use AI?” as a meaningful question for vetting research partners is over. Every credible firm uses AI somewhere. The better questions, increasingly, are these:
- Where, specifically, do you use AI in our work, and where do you deliberately not?
- What human controls and quality checks are in place at the joints where AI hands off to people, and vice versa?
- How are you handling the regulatory and disclosure obligations that are arriving alongside the new tools?
- How do you make sure faster doesn’t quietly become less trustworthy?
These are not gotcha questions. They are the questions a serious research partner should welcome, because they push the conversation toward the work that actually matters in this new era: the integration, the judgment, the trust. They’re also the questions we’ve been answering for ourselves over the past year: through our platform migration, our analytics buildout, and the quality controls we’ve redesigned around the new tools.
The eagle has landed
If 2025’s IIEX was about whether AI was real, 2026’s was about what to do with it now that it is.
The honest answer, the one that kept showing up at this year’s conference whether or not anyone said it out loud, is partnership. AI doing what AI is good at. Humans doing what humans are good at. And a great deal of careful work along the seams between them.
That partnership is what research is going to be for the next several years. The firms that get it right will earn the trust their clients need to act on the work. The ones that don’t will produce more research, faster, that nobody can quite bring themselves to use.
At American Directions, we’ve been clear from the start about which side of that line we want to be on.
Frequently Asked Questions
What was the dominant theme of IIEX North America 2026?
The dominant theme was the integration of AI into research workflows alongside a renewed insistence on the irreducible human element. Where 2025’s IIEX debated whether AI was a threat or an opportunity, 2026’s IIEX focused on where AI helps and where it actively hurts research quality. Trust, integration, and human judgment at workflow handoff points emerged as the central issues.
When and where was IIEX North America 2026 held?
IIEX North America 2026 was held April 29–30, 2026, at the Ronald Reagan Building and International Trade Center in Washington, DC. The conference was organized by Greenbook and featured more than 130 sessions on topics including sample quality, behavioral economics, AI and data collection, and AI ethics and governance.
What is the difference between generative statistical models and LLMs in market research?
Generative statistical models and large language models (LLMs) are different tools for different problems. Generative statistical models have specific, defensible use cases in market research, including sample frame development, weighting correction, and reaching hard-to-reach populations. LLMs are general-purpose language tools. Mario Carrasco of ThinkNow argued at IIEX 2026 that conflating the two does the research industry a disservice and undermines client conversations about how AI is actually being used.
What questions should research buyers ask vendors about AI?
Four questions matter most. First, where specifically does the vendor use AI in client work, and where do they deliberately not? Second, what human controls and quality checks are in place at the joints where AI hands off to people, and vice versa? Third, how is the vendor handling the regulatory and disclosure obligations arriving alongside the new tools? Fourth, how does the vendor make sure faster doesn’t quietly become less trustworthy?
What is the Secure Data Act?
The Secure Data Act is a proposed federal privacy bill advocated by the Insights Association, the trade body representing the U.S. market research and analytics industry. The bill addresses audience measurement and related data privacy concerns. As discussed at IIEX North America 2026, the legislation represents the research industry’s effort to shape its own regulatory framework rather than have one imposed from outside.
Who is Kevin M. Kelly?
Kevin M. Kelly is Chief Executive Officer of American Directions Research Group, a U.S.-based market research and data collection firm with nearly 40 years of industry experience. He attended IIEX North America 2026 in Washington, DC.