What Is the Risk That AI Will Kill Us? And what should we do about it?


This post reviews a recent book about AI risk, but more importantly, it is a call for our industry to wake up to the possibility that AI, for all its promise, could kill us all. We need to start taking the risks of AI seriously.
The Book
The review section of this post concerns "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. The book has prominent expert supporters and equally prominent expert critics. I suggest you read it when you get a chance, but the arguments I make later in this post don't require you to agree with every position in the book.
The Key Claims in the Book
I'm going to summarise the key claims. If you read the book, you'll see them developed with supporting detail.
1. Superintelligence is likely to happen in the near future.
2. Superintelligence cannot be controlled because it is grown (not built) and has emergent characteristics that we cannot understand, measure, or specifically block.
3. Superintelligence will eventually pursue its own interests, and in all probability, those interests will diverge from those of humanity.
4. Therefore, if we allow superintelligence to be created with current levels of control, we risk the end of humanity.
A key concept in the book is gradient descent. AIs are grown, not built: a numerical goal is set, the training process repeatedly nudges the model's parameters (with random elements along the way) in whatever direction improves its score, and behaviours that score well are reinforced. There is no master plan, and there is no way to fully understand what is happening, because the sheer scale of models with trillions of parameters is beyond human comprehension.
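To make the "grown, not built" idea concrete, here is a deliberately tiny sketch of gradient descent in Python. The single-parameter model, the target behaviour of y = 3x, and the training numbers are my own illustrative assumptions, not anything from the book; real systems run essentially the same loop across trillions of parameters, which is where the loss of human comprehensibility comes in.

```python
import random

# Toy "model": a single parameter w, used to predict y = w * x.
w = random.uniform(-1.0, 1.0)
learning_rate = 0.001

# Training data for the target behaviour y = 3x. Nobody writes "w = 3" into
# the model; the value emerges from thousands of small corrections.
data = [(x, 3.0 * x) for x in range(1, 11)]

for step in range(2000):
    x, y_true = random.choice(data)   # random element in each iteration
    y_pred = w * x                    # the model's current behaviour
    error = y_pred - y_true           # how far we are from the goal
    gradient = 2 * error * x          # direction that reduces the error
    w -= learning_rate * gradient     # reinforce what scores better

print(f"learned parameter: {w:.3f}")  # ends up near 3.0, though never designed in
```

The point is that the final behaviour is discovered by the loop rather than written by a programmer; scale that discovery up by twelve orders of magnitude and no one can say exactly what else has been learned along the way.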
The key consequence is emergent behaviour: capabilities and tendencies that models develop without being planned, instructed, or knowingly rewarded. There are documented examples of AIs that have deceived researchers, attempted to manipulate evaluators, and resisted being shut down. None of this was designed in.
The authors argue that if we reach superintelligence, where AI systems are significantly more capable than humans at every task, there will be emergent behaviours we cannot predict. These will be based on goals and rewards created within the system, a phenomenon the authors call "want." The authors argue that it is almost impossible to believe that the wants of emergent superintelligent machines would align with those of humans.
The authors note that many leading AI experts estimate the risk of AI destroying humanity at 10%, 20%, or higher. For example, Geoffrey Hinton, sometimes called the "godfather of AI" and recipient of the 2024 Nobel Prize in Physics for his foundational work on neural networks, estimates a 10% to 20% chance that AI will cause human extinction. The authors call for a halt to all development of generalised AI and to research that could create another leap comparable to the Transformer architecture, which Google published in 2017.
I encourage you to read the book for yourself. Then read commentaries from both supporters and critics.
The Bigger Picture
This section offers my interpretation of the evidence about artificial intelligence and what we need to do now.
Listen to the Experts
I hear many people say we are nowhere near artificial general intelligence or superintelligence. I hear claims that computers will never match human capabilities, or that we will always need humans to direct machines. However, almost none of those people are experts in the field. Listening to them about AI is like listening to climate change deniers about global warming.
Key Propositions from the Experts
Artificial General Intelligence is coming soon. "Soon" could mean a few months, five years, perhaps even ten years. AGI means computers that can perform most of the economically relevant tasks humans currently do at a computer, that is, most knowledge work.
Artificial superintelligence will follow shortly after AGI. Superintelligence is where machines are substantially better than humans at nearly every task. Experts believe this could come months or a few years after AGI, partly because AGI will be used to develop superintelligence.
AI might wipe out humanity. The estimated probability varies enormously. Geoffrey Hinton and Elon Musk put the risk at 10% to 20%. Turing Award winner Yoshua Bengio has warned that even a 1% chance of extinction is unacceptable given the magnitude of the consequences, and has signed statements equating AI risk with pandemics and nuclear war. Former OpenAI researcher Paul Christiano estimates a 10% to 20% chance of AI takeover with most humans dead, and has stated there could be a 50% chance of catastrophe shortly after we develop human-level AI systems. It should be noted, though, that some experts, such as Yann LeCun, believe the risk is negligible.
AI will exhibit emergent, unplanned, and potentially harmful behaviour. This is not speculation; it is already happening with current systems.
My Propositions
Here is what I am saying and calling for, written in the context of the market research and insights community.
Main Proposition
If some credible experts believe there is more than a 1% chance of AI wiping out humanity, we need to take action. We do not need to wait until 100% of experts agree, nor until the risk reaches 50%.
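To see why a figure as low as 1% can still demand action, here is a back-of-the-envelope expected-loss calculation. The probability labels and the population figure are my own illustrative assumptions, not estimates taken from the book or attributed to any named expert.

```python
# Back-of-the-envelope expected-loss sketch; all figures are illustrative
# assumptions, not data from the book or from any cited expert.
population = 8_000_000_000  # rough current world population

# A spread of extinction-probability estimates, from sceptical to alarmed.
estimates = {"sceptical": 0.001, "1% threshold": 0.01, "10%": 0.10, "20%": 0.20}

for label, p in estimates.items():
    expected_deaths = p * population
    print(f"{label:>13}: p = {p:6.1%} -> expected deaths ~ {expected_deaths:,.0f}")

# Even at the 1% threshold the expected toll is around 80 million people,
# larger than most catastrophes we already spend heavily to prevent.
```

This is only the crudest framing of the precaution argument, but it shows why "the probability is small" is not, by itself, a reason to do nothing.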
Corollaries
Non-experts should stop speaking as if they were experts. Leading market researchers and industry leaders should not make confident assertions like "We will always need humans to supervise computers" or "There are tasks machines will never do." If they do, they should cite experts who hold similar views, and they would now struggle to find credible AI researchers to support such claims.
We need stronger government oversight. We should support efforts to increase the role of governments and intergovernmental bodies in supervising AI development and holding AI companies accountable.
We need better-informed decision-makers. Politicians, journalists, industry leaders, and the general public need more information and understanding so we can all participate meaningfully in decisions about AI's future.
We should increase liability for AI companies. Greater civil and criminal liability when things go wrong would help slow the race toward AGI and superintelligence while protecting people from harm.
We should continue using AI while advocating for transparency. People in the research and insights field should keep engaging with available AI tools. We cannot participate meaningfully in discussions about AI's future if we disengage from it. However, we should champion openness and understanding, moving away from black-box approaches.
Addition: 8 Jan, 2026. We should ensure that the views of the public are shared with decision makers. There is a large body of evidence that the public distrusts AI, and that should be communicated. [I would like to thank Judith Passingham for highlighting this point.]
What I Believe
I have said we should listen to experts, so I'm not expecting you to take my personal view on when AGI or superintelligence will arrive. What I am saying is that experts largely agree that these developments are approaching. That means we should operate on the assumption that this is happening.
Some experts believe there is an existential risk to humanity. That means we should take it seriously, investigate it thoroughly, and, if necessary, act to prevent a catastrophe.
Regarding my credentials, I do have some exposure to the issues. I have a degree in computer science; I co-founded an AI startup (ResearchWiseAI); I run training courses on AI use; I am a consultant to several companies on the use and implications of AI; and I created Esomar’s AI Task Force.
What I Am Not Saying
I am not saying we should stop using AI. I am not saying doom is inevitable. I am saying that when leading experts estimate a 10% to 20% risk of human extinction, and some estimate higher, we cannot afford to ignore it. If someone told you there was a 10% chance your house would burn down tonight, you would take precautions. We should approach AI risk with at least the same seriousness.
A Note on Expert Disagreement
I should acknowledge that experts genuinely disagree on this. Yann LeCun, another "godfather of AI" and Turing Award winner, believes AI poses minimal existential risk and could actually save humanity. This disagreement is real, not manufactured. But in my view, when some of the world's foremost AI researchers are warning of extinction-level risks, the appropriate response is precaution, not complacency.
What You Can Do
If you work in market research and insights, here's how you can engage with this issue:
Read widely on AI safety and existential risk, including perspectives from both concerned and sceptical experts
Continue developing your AI skills; you need to understand the technology to discuss its risks meaningfully
Be humble about what you don't know, and avoid making confident predictions outside your expertise
Support and advocate for thoughtful AI regulation
Have these conversations with colleagues, clients, and industry bodies
The stakes could not be higher. We owe it to ourselves, our industry, and future generations to take this seriously.