Deep Research with LLMs: How ChatGPT, Gemini & Perplexity Super-Charge Insight Gathering
About
In this Talking AI episode, co-founders Ray and Will Poynter break down the rise of Deep Research—an AI workflow that combines live web search, long-context reasoning and citation-linked summarisation.
What You'll Learn
- What “Deep Research” Means
  - Why multiple vendors (ChatGPT, Gemini, Perplexity) use the same term
  - How it differs from standard chat prompts or Canvas sessions
- Speed vs. Depth Trade-offs
  - 178 web hits & 30 sources in < 10 min: when the wait is worth it
  - When iterative Canvas-style prompting is still faster
- Source Quality & Hallucination Control
  - Forcing reputable domains, spotting blocked sites (e.g. BBC, pay-walled press)
  - Using the final summary first, then drilling into citations
- Practical Use-Cases
  - Entering new verticals (e.g. canned-coffee market in Japan, TfL transport data)
  - Generating monthly polling digests, executive briefings, podcast scripts
- Limitations & Work-arounds
  - Verbosity, missing premium sources, daily search caps on free tiers
  - Future outlook: personalised memory, task-based scheduled research, dashboard feeds
Key Takeaways
- Deep Research = AI research assistant on steroids—ideal for zero-to-sixty topic ramp-ups.
- Quality in, quality out—specify sources and always sanity-check citations.
- Iterate smartly—use summaries to steer; don’t wade through 20 pages blind.
- Free to start—Perplexity’s free tier offers one robust run per day; ChatGPT-4o currently leads on depth and reasoning but costs more.
Presenters

Ray Poynter
Founder

Will Poynter
Founder