Agents and AI: What they are and where they are going
About
Ray and Will from ResearchWiseAI talk about agents—what they are, how they work, and where they fit into market research. We explain the difference between autonomous agents, like a Roomba or Grammarly, and reactive agents powered by advanced LLMs such as ChatGPT Operator.
We also discuss how agents can handle tasks such as dynamic data collection, cleaning, and analysis by autonomously selecting the right tools for the job. Tune in to learn more about how AI agents are evolving and how they might eventually assist with day-to-day research tasks.
Transcript
RAY: Hi, my name is Ray Poynter
WILL: and I'm Will Poynter
RAY: and together we're the founders of ResearchWiseAI and today we're going to be talking about agents.
RAY: So Will, do you want to give us some context about what sorts of things people mean when they say agents?
WILL: Agents is one of those terms, like a lot of AI terms, with multiple definitions floating around. You may hear a lot of people saying "that's not an agent" while other people think it is, so we're going to take a somewhat broader approach. First up, an agent typically has to do something. A lot of people say an agent has to be autonomous, and examples of autonomous agents that do things include Roomba, the robotic hoover, or Grammarly. In both cases they've been given a goal, one to clean the floor, the other to improve my grammar, and they autonomously set off to try and achieve it. Or in the case of Roomba, they are set off but then left to go and achieve their goal, and they finish when they believe they've achieved it, even if the owner, the operator, the user may disagree and say the floor is not yet clean, or can in fact still spot some really badly spelled words. On the other side of things, with the world of LLMs and ChatGPT-like AIs, we're seeing more and more agents that I wouldn't necessarily call fully autonomous, because we are setting them off to do something and setting their goal in real time, but they are definitely going off and doing things for us without us having to give them the specifics. This was the analogy when we were setting up ResearchWiseAI 18 months ago. I was telling you that with older versions of AI, making a cup of tea was: go get a mug, put a tea bag in the mug, turn the kettle on. In the LLM world, it's "make me a cup of tea", and then you let it worry about everything else. And what we're seeing with things like ChatGPT Operator, or some of the ReAct agent work we're doing at ResearchWiseAI, is that we give out the task and it goes off and does it.
So ChatGPT Operator can now interact with a web browser, and therefore it could potentially do things like book a table reservation, or do more detailed research that isn't just reading a website but requires logging in, clicking on things, interacting.
RAY: So where is that better than just running, say, ChatGPT, Gemini, or Copilot normally? Where do agents come in and give us something that's going to be useful?
WILL: Absolutely. So if we look at a sort of spectrum, and we'll use ChatGPT as the most familiar LLM: if we ask it certain questions, we know it just begins answering from its own knowledge, for lack of a better word. So if we asked it about big world events, World War II, it will have a lot of information on that and will be able to answer questions on it. Most people would say that is really, definitely not operating like an agent. But at the point where we ask it something a bit more contemporary, and it therefore needs to search the internet, it is now, in my opinion, moving toward becoming more of an agent, because it's making more decisions about how to leverage the tools it has, whether that's code generation or web search and so forth. And effectively it's a spectrum that continues as AIs gain more power to do things and be more autonomous, all the way through to completely autonomous agents, for instance in data security. These days we have AI agents that operate on networks, constantly monitoring data flows to look for patterns that seem worrying. In the same way, we can imagine a physical security guard who will notice an odd person walking around, perhaps because they've been there 20 years and know who is who, or who will at the very least notice very odd behavior: ten people walking constantly from this room to that room, back and forth, and they might ask what's up. The AI agent does the same for the network.
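[Editor's note: the spectrum Will describes, where the model decides per question whether its own knowledge suffices or a tool is needed, can be sketched in a few lines. Everything here, including the routing rules and the knowledge cutoff, is a hypothetical illustration, not a real LLM API.]

```python
# Sketch of the tool-selection decision: answer from knowledge, search the
# web for contemporary facts, or generate and run code for numeric work.
KNOWLEDGE_CUTOFF = 2023  # assumed year the model's training data ends

def choose_tool(question, year_mentioned=None):
    """Pick how to answer: from knowledge, via web search, or via code."""
    if year_mentioned is not None and year_mentioned > KNOWLEDGE_CUTOFF:
        return "web_search"          # contemporary facts: go to the web
    keywords = ("calculate", "compute", "average", "sum")
    if any(k in question.lower() for k in keywords):
        return "run_code"            # numeric work: generate and run code
    return "answer_from_knowledge"   # e.g. "Tell me about World War II"
```

A real agent framework makes this routing decision with the LLM itself rather than keyword rules, but the shape of the choice is the same.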
RAY: So how might we start using that in the market research world to go one step beyond? Yes, we can do things like going into Claude or whatever and saying, "please search the internet for all recent examples of this phenomenon, write me a summary, and give me the links to the original articles." I can see that. But I know that you're doing more interesting stuff inside the code, with things like ReAct agents.
WILL: Absolutely. So I would start by looking at where market research currently requires human intervention where, ideally, we would rather not. One example that jumps to mind is data collection: when using panels and so forth, we would like to hit certain demographics, a certain number in each age category, a certain number across different gender identities, and so forth. During collection, it may be the case that the person running it suddenly has to change the criteria by which they're sending out invites, or push again to reach the threshold of respondents they need. What we would love is to set off a data collection program with the criteria we want to meet, and have an AI agent dynamically choose to invoke different email lists, different ways of contacting people, pushing in certain ways. Let's say we've got a shortage of people in a certain age group. Well, we know that age groups align with different social media platforms. So if you're short on 18 to 21 year olds, don't push on Facebook or Twitter, or whatever it's called now; push on TikTok. We know that, and we'd like to get to a place where an AI agent would know that and do it autonomously and correctly, without us having to supervise it. So you can just request the data collection and rely on it to do its job.
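[Editor's note: the quota-filling behavior Will describes reduces to "find the biggest shortfall, then push on the channel where that group is easiest to reach." A minimal sketch, where the channel mapping and quota numbers are illustrative assumptions:]

```python
# Which recruitment channel reaches which age group best (assumed mapping).
CHANNEL_BY_AGE = {
    "18-21": "TikTok",
    "22-40": "Instagram",
    "41-60": "Facebook",
    "60+":   "Email list",
}

def next_push(quotas, completes):
    """Return (age_group, channel) for the biggest shortfall, or None if done."""
    shortfalls = {g: quotas[g] - completes.get(g, 0) for g in quotas}
    group = max(shortfalls, key=shortfalls.get)
    if shortfalls[group] <= 0:
        return None  # all quotas met: stop recruiting
    return group, CHANNEL_BY_AGE[group]
```

An agent would re-run this decision continuously as completes arrive, which is exactly the supervision step it takes off the researcher's plate.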
RAY: Well, jumping from there, I guess one of the next most obvious agents is for when we get the data, particularly from some sources like panels, where we suspect that about 30% of it is bogus and needs cleaning. That includes mistakes made by participants; it's not all fraudulent data. But if we had an agent that would always clean and check the data before we processed it, that would stop us having to remember to do it, and it would stop the process being variable depending on who did it and how much time they had.
WILL: Absolutely. In my mind, that would be similar to the Grammarly analogy: an agent that works alongside you to elevate the quality of your work. In this case, it is continually scanning, looking for anomalies in the data, whether that's bogus responses or outliers that either need handling separately or excluding altogether, and making suggestions around all of that.
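[Editor's note: two of the most common checks such a cleaning agent would run on panel data are straight-lining (the same answer on every grid item) and speeding (implausibly fast completion). A minimal sketch; the thresholds are illustrative assumptions, not industry standards:]

```python
def flag_respondent(grid_answers, seconds_taken, min_seconds=120):
    """Return the reasons this respondent looks suspect (empty list = clean)."""
    reasons = []
    # Straight-lining: identical answer across a grid of 5+ items.
    if len(grid_answers) >= 5 and len(set(grid_answers)) == 1:
        reasons.append("straight-lining")
    # Speeding: finished faster than a plausible careful read allows.
    if seconds_taken < min_seconds:
        reasons.append("speeding")
    return reasons
```

Running a pass like this on every incoming file, before any analyst touches it, is what removes both the "remembering to do it" and the person-to-person variability Ray mentions.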
RAY: Now, one of the quirky things we encounter at the moment when working with ChatGPT and data is that if you've got text, you really want it in its large language model mode, and if you've got data, you actually want it in its analytic mode: you want it to write Python code and run things with that Python. At the moment we do that pretty manually and program it in. But is that an avenue for agents, where we could imagine two different types of analysts working together?
WILL: Absolutely. What we are working on at ResearchWiseAI is trying to create as much of an autonomous, multi-agent ReAct system as we can. I've used the words autonomous and reactive, but different stages are autonomous and reactive: it reacts to you, and then it goes off autonomously. It takes a data file and first does a preliminary read to start understanding what is going on and how to approach an analysis, in the same way we would expect an expert in the industry to look at the data file, consider the goals, and devise an analysis. From there, numerous different agents are spawned: some with greater qualitative analysis ability, some with greater quantitative analysis ability, some that specialize in generating charts, and some that specialize in report writing, and so forth. Effectively you create a small swarm of these agents, almost put the project's data file in the middle, and they all pick away at it with their own expertise to generate the final outputs.
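[Editor's note: the coordinator-plus-specialists pattern Will outlines can be sketched as below. The agent names, routing rules, and placeholder outputs are illustrative assumptions, not the ResearchWiseAI implementation.]

```python
# Each specialist "picks away" at the shared data file and returns its piece.
SPECIALISTS = {
    "qualitative":  lambda d: "themes from open-ended text",
    "quantitative": lambda d: "crosstabs and significance tests",
    "charting":     lambda d: "charts of the key results",
    "reporting":    lambda d: "a written summary",
}

def plan_agents(datafile):
    """Preliminary read: choose specialists based on what the data contains."""
    agents = ["reporting"]                 # every project ends with a report
    if datafile.get("open_ended_columns"):
        agents.insert(0, "qualitative")    # open text: needs qual analysis
    if datafile.get("numeric_columns"):
        agents.insert(0, "quantitative")   # numbers: needs quant analysis
        agents.insert(1, "charting")       # ...and charts of those numbers
    return agents

def run_swarm(datafile):
    """Spawn the chosen agents against the shared data file in the middle."""
    return {name: SPECIALISTS[name](datafile) for name in plan_agents(datafile)}
```

In a real system each specialist would be an LLM-backed ReAct loop rather than a lambda, but the "reactive planning step, then autonomous specialists" structure is the same.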
RAY: Right, okay. I think that probably covers the questions I had about agents. Anything else worth adding at this point?
WILL: I think the main thing is that we're hearing a lot of excitement about agents, and it will go through a hype curve, just like LLMs did. However, I do thoroughly believe there is a lot to be said for both ReAct agents and autonomous agents, once we can work out how to fully bed them in and trust them. To go back to our robotic hoover analogy: in the earliest days there were early adopters who got it, and then there were some nightmare stories where it drove off cliffs, or drove through waste left behind by pets and spread it everywhere, and so on. But incrementally they have got better, and they are becoming less of a niche product for early adopters and a little more mainstream. I think we'll see the same with agents in industry. I don't think everyone next year is going to completely hand over their analysis and data collection to agents; that is obviously not going to happen. But they will steadily come alongside us to help with certain tasks, and as we get used to them, we will begin to trust them with those tasks while not trusting them with others. We need to learn over the next few years which tasks they are suitable for.
RAY: I think we're also going to see a few fairly ugly adjectives added to the language. I've already seen "agentic" and "agential", and I'm sure there will be a few more spins on that too. Okay, thanks very much, and we'll call that a wrap. We look forward to talking about another AI topic shortly.
WILL: Brilliant. Thank you, Ray.
Presenters

Ray Poynter
Founder

Will Poynter
Founder