How are you measuring the ROI of AI?


All the studies I've seen, including NewMR's own studies, show that AI is being widely adopted across the research industry. But is it generating ROI? Indeed, how should we measure ROI? This second question is the main topic I will examine in this post.
1. Introduction: The Investment Imperative and the Measurement Gap
The adoption of Artificial Intelligence (AI) by businesses represents a discontinuity in the history of technological adoption. Organisations across the globe are responding with aggressive capital allocation. According to Deloitte’s 2025 survey, investment in AI is rising precipitously across industries, with 85 per cent of organisations having increased their investment in the past 12 months and 91 per cent planning to increase it further in the coming year.
Studies of whether this investment is producing ROI paint strikingly different pictures. Some show rapid, significant returns. Others, such as Deloitte's above, show a good return on investment, but over three to four years. Still others show a failure to reach ROI at all.
One of the core concepts we need to grasp is the jagged frontier of productivity. In this jagged frontier, proficiency in one task (e.g., coding) may be counterbalanced by weaknesses in other areas, such as security vulnerabilities or even technical debt. This complicates the calculation of the actual net value.
In this post, I will briefly examine the broader professional knowledge work sector and then delve more deeply into the field of market research and insights.
2. Part I: The ROI of AI in Professional and Knowledge Work
Knowledge work, defined as the manipulation of symbols, strategy, and information, has historically failed to achieve the productivity gains observed in manufacturing. AI changes this by treating language and logic as computable assets. However, measuring the return on this change requires moving beyond simple "headcount reduction" models toward a more nuanced framework of value creation.
2.1 The Productivity Paradox and the "Jagged Frontier"
To understand ROI in knowledge work, one must first understand the non-linear nature of AI capability. Research from the Digital Data Design Institute at Harvard Business School introduces the critical concept of the "jagged frontier".
In traditional automation, a machine is generally superior to a human at a specific task (e.g., a loom weaving cloth). In contrast, current AI models exhibit a jagged capability profile. For tasks inside the frontier, such as summarising documents, drafting routine emails, or generating boilerplate code, productivity gains are massive. The Harvard study found that AI-equipped consultants completed tasks 25.1% faster and produced 40% higher quality results than their non-AI counterparts.
However, for tasks outside the frontier, those requiring nuanced ethical judgment, high-context problem solving, or navigating "edge cases", AI performance degrades below that of a human. This problem can be compounded because humans often fail to recognise when a task crosses this invisible frontier, leading to over-reliance and errors. This creates a "verification tax". The time saved in drafting is partially consumed by the time required to verify and debug the output. Therefore, ROI is not a fixed percentage; it depends on task selection. Applying AI to the wrong class of knowledge work results in negative ROI due to the high cost of error correction and "value destruction".
2.2 Frameworks for Measuring Value: The Four Pillars
A robust ROI framework must capture the multi-dimensional impact of AI. Here is a four-pillar framework, drawing on many sources.
Pillar I: Financial and Efficiency ROI (The "Hard" Returns)
This pillar focuses on the direct monetisation of time and labour. It is the easiest to measure, but it often underestimates the total value.
Time Allocation and Labour Savings:
Organisations can quantify the hours saved per employee on repetitive tasks. For example, if an AI tool saves 2 hours per week for 500 employees at an average cost of $75/hour, the calculation is straightforward: 2 hours × 500 employees × $75/hour = $75,000 of labour value recovered per week.
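As a minimal sketch of this calculation (the 48-week working year is my assumption, not a figure from the example):

```python
# Hypothetical Pillar I labour-savings calculation (illustrative figures).
hours_saved_per_week = 2
employees = 500
hourly_cost = 75
working_weeks_per_year = 48  # assumption; adjust to your organisation

weekly_savings = hours_saved_per_week * employees * hourly_cost
annual_savings = weekly_savings * working_weeks_per_year

print(f"Weekly labour value recovered: ${weekly_savings:,}")   # $75,000
print(f"Annual labour value recovered: ${annual_savings:,}")   # $3,600,000
```

Note that this is gross labour value; the verification tax and TCO discussed later still need to be subtracted before it becomes ROI.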
Cost Avoidance:
This measures external spend that is no longer necessary. For instance, using internal AI agents to draft marketing copy or translate documents eliminates the need for external agencies or freelancers.
Asset Utilisation:
In software engineering, AI increases the "velocity" of code production.
Google reported in 2024 that over 25% of its new code is now AI-generated. If a developer costs $200,000 annually and AI increases their output by 20%, the "imputed value" of the AI is $40,000 per seat.
Pillar II: Revenue and Growth ROI (The Top Line)
AI is increasingly a driver of revenue generation rather than just cost containment.
Conversion Rate Uplift:
In sales, AI tools used for predictive lead scoring and personalised outreach have been reported to increase win rates by 76% and deal sizes by 70%. The ROI is calculated by attributing the difference in revenue between AI-assisted and non-assisted sales cohorts.
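The cohort-attribution approach can be sketched as follows (all figures are hypothetical, chosen only to illustrate the mechanics):

```python
# Hypothetical cohort comparison for revenue attribution (illustrative figures).
ai_cohort_revenue = 1_760_000       # revenue from AI-assisted sales reps
control_cohort_revenue = 1_000_000  # revenue from a matched non-AI cohort
ai_tooling_cost = 120_000           # licenses and training for the AI cohort

# Revenue attributed to AI is the difference between the two cohorts.
uplift = ai_cohort_revenue - control_cohort_revenue
roi = (uplift - ai_tooling_cost) / ai_tooling_cost

print(f"Attributed uplift: ${uplift:,}")
print(f"ROI: {roi:.0%}")
```

The key design choice is the matched control cohort; without it, uplift from seasonality or market growth gets misattributed to the AI.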
Customer Lifetime Value (CLV):
AI-driven personalisation engines (e.g., in e-commerce or content platforms) increase engagement and retention, with reported revenue lifts of up to 17% directly attributable to AI recommendations.
Speed to Market:
In industries like pharmaceuticals or software, time is currency. AI accelerates R&D cycles, enabling the rapid generation of drug candidates or software features. If a product launches three months early due to AI acceleration, the ROI includes the three months of additional revenue captured.
Pillar III: Quality and Experience ROI (The "Soft" Returns)
While harder to quantify, these metrics are leading indicators of future financial performance.
Employee Net Promoter Score (eNPS):
By automating drudgery (data entry, scheduling), AI can improve employee satisfaction. In Deloitte’s study mentioned earlier, 84% of AI ROI leaders position AI as a tool to "augment" rather than replace staff, leading to higher engagement. However, this requires managing the anxiety of displacement.
Customer Satisfaction (CSAT/NPS):
AI agents that provide instant, accurate answers improve customer experience. In one survey, sales teams expected AI initiatives to drive NPS scores from 16% to 51%.
Error Reduction:
In fields like finance or law, the cost of a mistake is high. AI can act as a "second pair of eyes," flagging anomalies or compliance risks. The ROI here is measured in "risk mitigation": the cost of the lawsuit or fine that didn't happen.
Pillar IV: Strategic and Innovation ROI (The Future Value)
This measures the capabilities that were previously impossible for the organisation to execute.
Business Model Reimagination:
Leading organisations use AI to create new revenue streams, such as monetising proprietary data.
Scalability:
AI allows organisations to scale services (e.g., personalised financial advice) to millions of customers without scaling headcount linearly. The ROI is the marginal profit from serving new customers at near-zero marginal cost.
2.3 Calculating the Total Cost of Ownership (TCO)
A potential failure in ROI analysis is underestimating the denominator: the cost. It is insufficient to count only the software license fees. A credible TCO analysis must include:
Data Hygiene:
Bad data leads to bad insights. Organisations can spend significantly more on cleaning and organising data to make it AI-ready than on the AI itself.
The Human-in-the-Loop:
As noted in the "Jagged Frontier" study, human oversight is mandatory. If a lawyer uses AI to draft a brief, they must verify every citation. This labour cost must be subtracted from the time savings.
Governance:
The cost of compliance, legal reviews, and potential liability for AI errors (e.g., copyright infringement, hallucinations).
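The TCO argument can be made concrete with a sketch like the one below. All figures are hypothetical; the point is the structure of the denominator, not the numbers:

```python
# Hypothetical TCO-aware ROI calculation (all figures illustrative).
gross_time_savings = 500_000   # value of hours saved (the numerator)

# The full denominator, not just the license fee:
license_fees   = 100_000
data_readiness = 150_000       # cleaning and organising data to be AI-ready
training       = 50_000        # prompt engineering and staff training
verification   = 120_000       # human-in-the-loop review of AI output
governance     = 30_000        # compliance, legal review, liability cover

tco = license_fees + data_readiness + training + verification + governance
roi = (gross_time_savings - tco) / tco

# Counting only the license fee would suggest a 400% ROI;
# the full TCO view shows roughly 11%.
print(f"TCO: ${tco:,}; ROI: {roi:.0%}")
```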
3. Part II: The ROI of AI in Market Research and Insights
The Market Research (MR) industry stands at a precipice. Traditionally defined by the "Iron Triangle" of Project Management, where one can only have two of Fast, Cheap, or Good, AI promises to break this constraint. By automating the collection, analysis, and even the generation of data, AI offers the potential for insights that are faster, cheaper, and better simultaneously (or the illusion of it, some would say).
However, the risks in MR are unique. In knowledge work, a poor email draft is an annoyance; in market research, a "hallucinated" consumer insight can lead to a failed multimillion-dollar product launch.
3.1 The Rise of Synthetic Data and Respondents
The most disruptive application of AI in MR is perhaps the use of synthetic data to simulate consumer personas (Synthetic Respondents) or to generate data that mimics real-world outcomes.
3.1.1 The ROI of Synthetic Respondents
The economic argument for synthetic respondents is overwhelming. Rising costs, low engagement and fraud plague traditional human panel research.
Cost Efficiency:
Synthetic data can be (and often is) much cheaper than data from real respondents.
Speed:
Human fieldwork requires days or weeks to recruit and field. Synthetic data generation is potentially measured in minutes. This velocity enables product teams to iterate rapidly on concepts, testing 50 variations in the time it used to take to test one.
Accessibility:
Small organisations that previously could not afford primary research can now conduct "directional" testing using synthetic panels.
3.1.2 The Validity Question
Does the cost saving come at the expense of truth? The industry is divided.
The Pro Case:
Studies have shown high correlations (up to 95%) between synthetic personas and human control groups for specific tasks like concept testing.
The Anti Case:
Those opposed to synthetic data point to studies showing model collapse, performance weaknesses, and the lack of a formal statistical foundation for synthetic data.
Bias Amplification:
Synthetic data reflects the biases of the internet data it was trained on. Some argue that this can lead to "diversity washing", creating the statistical illusion of a diverse sample without the genuine lived experience of those demographics.
3.1.3 The known savings and the unknown costs
Whilst it is relatively easy to calculate the price of using synthetic data, and therefore the savings, the probability of errors, and consequently their costs, are much harder to estimate.
3.2 Automated Qualitative Analysis
Qualitative research (focus groups, in-depth interviews, open-ended surveys) provides the most profound insights, but it is also generally the most expensive and time-consuming. AI is being used to automate the coding process.
3.2.1 Automated Coding
For simple tasks, such as coding open-ended responses, AI is proving faster, reducing reliance on external coding services, and is often superior in terms of consistency (albeit less nuanced).
3.2.2 AI Moderation
Some platforms offer what they call "Qualitative at Scale."
Scalability:
An AI moderator can conduct 1,000 depth interviews simultaneously via chat or voice.
Cost:
Reduces moderation costs by up to 80%.
Limitation:
AI lacks the "anthropological eye." It misses nonverbal cues (e.g., sarcasm, hesitation) and struggles with deep emotional empathy. The ROI is high for functional topics (e.g., "Why did you buy this toothpaste?") but low for emotional topics (e.g., "How does this medication make you feel?").
4. Basic Rules for Conducting ROI Analysis on AI
To navigate the complexities of the "Jagged Frontier" and avoid mirages, organisations should perhaps adhere to these five fundamental rules.
Rule 1: Establish a Granular Baseline (The "Before" State)
You cannot calculate the return if you do not know the current cost. Most organisations lack detailed metrics on task-level duration.
Action:
Conduct a study or use digital tools to determine precisely how long it takes to "write a code block," "summarise a report," or "code a survey."
Metric Example:
"Current Cost per Qualitative Interview = $250 (Recruit) + $150 (Incentive) + $300 (Moderation/Analysis) = $700."
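Once a baseline like this exists, the comparison against an AI-assisted workflow becomes mechanical. A sketch, using the baseline figures above and applying the 80% moderation saving quoted earlier for AI moderation (the upper bound, used here purely as an assumption):

```python
# Hypothetical baseline vs AI-assisted cost per qualitative interview.
recruit, incentive, moderation_analysis = 250, 150, 300
baseline_cost = recruit + incentive + moderation_analysis   # $700, as above

# Assumption: AI moderation cuts moderation/analysis cost by up to 80%,
# while recruitment and incentive costs are unchanged.
moderation_saving = 0.80
ai_cost = recruit + incentive + moderation_analysis * (1 - moderation_saving)

print(f"Baseline: ${baseline_cost}; AI-assisted: ${ai_cost:.0f}")
```

Without the $700 baseline, the AI-assisted figure would have nothing to be compared against, which is the point of Rule 1.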
Rule 2: Calculate the Full Total Cost of Ownership (The Denominator)
Do not use the software license fee as the only cost.
Action:
Include the cost of "Data Readiness" (cleaning databases), "Prompt Engineering" (training staff), and "Verification" (human review).
Rule 3: Segment by Task Type (The "Jagged Frontier" Rule)
Do not apply a blanket ROI expectation across all departments.
Action:
Categorise tasks into "Inside the Frontier" (High ROI: Summarisation, Coding, Data Entry) and "Outside the Frontier" (Low/Negative ROI: High-stakes ethical decisions, novel invention).
Strategy:
Aggressively automate the former; use AI only as a "co-pilot" for the latter.
Rule 4: Value the Intangible via Proxies
Do not ignore "soft" benefits; quantify them via proxy metrics.
Action:
If AI improves employee satisfaction, measure the reduction in "Attrition Rate." Every retained employee saves the organisation 50%-200% of their salary in replacement costs.
Action:
If AI improves customer experience, measure the increase in "Repurchase Rate" or "NPS-to-Revenue" correlation.
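The attrition proxy in Rule 4 can be sketched as follows (all figures are hypothetical; the 50% replacement factor is the low end of the range quoted above):

```python
# Hypothetical attrition-proxy valuation (all figures illustrative).
headcount = 1_000
old_attrition = 0.15          # annual attrition before the AI rollout
new_attrition = 0.13          # after automating drudge work
avg_salary = 80_000
replacement_factor = 0.5      # low end of the 50%-200% replacement-cost range

retained = headcount * (old_attrition - new_attrition)   # ~20 employees
savings = retained * avg_salary * replacement_factor

print(f"Retained employees: {retained:.0f}; value: ${savings:,.0f}")
```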
Rule 5: Implement Governance as a Hedge (The "Safety Valve")
A single compliance fine or PR disaster can wipe out ROI.
Action:
Adhere to guidelines (such as Esomar’s) for market research transparency. Ensure that clients are informed when synthetic data is used.
Action:
Implement "Human-in-the-Loop" for all high-stakes outputs to mitigate hallucination risk.
5. Conclusion
The verdict on AI ROI is favourable but conditional. It is not a magic wand that universally lowers costs. It is a lever that can magnify capability. When applied to processes that benefit from scale and speed, the returns are significant. When applied to processes requiring deep human nuance or absolute truth, the returns tend to be negative.
In this post, I cite several studies that highlight positive gains, but many cases present a different picture. The lesson is that you need to measure your ROI. You can’t assume that just because others are getting a good ROI that you will too.
A popular, and perhaps correct, view is that the future of professional work and market research belongs to the "Centaur": the hybrid model in which AI provides the raw horsepower of data processing and simulation, and humans provide the steering, the ethics, and the final verification of truth.
There is a special case! When the ROI is massive, don't waste too much time measuring it; roll it out and get the benefit while you can. This is the case for many organisations in the software industry, it was the case for many companies 25 years ago with the internet, and there will be other examples.