Does AI pose an existential risk? We asked 5 experts

Source: The Conversation – Global Perspectives – By Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology

There are many claims to sort through in the current era of ubiquitous artificial intelligence (AI) products, especially generative AI products based on large language models (LLMs), such as ChatGPT, Copilot, Gemini and many others.

AI will change the world. AI will bring “astounding triumphs”. AI is overhyped, and the bubble is about to burst. AI will soon surpass human capabilities, and this “superintelligent” AI will kill us all.

If that last statement made you sit up and take notice, you’re not alone. The “godfather of AI”, computer scientist and Nobel laureate Geoffrey Hinton, has said there’s a 10–20% chance AI will lead to human extinction within the next three decades. An unsettling thought – but there’s no consensus on whether or how that might happen.

So we asked five experts: does AI pose an existential risk?

Three out of five said no. Here are their detailed answers.

Aaron J. Snoswell was previously part of a research team that won a competitive grant from OpenAI in 2024–2025 to develop new evaluation frameworks for measuring moral competence in AI agents. The project has now concluded, and he no longer receives funding from OpenAI.

Niusha Shafiabady, Sarah Vivienne Bentley, Seyedali Mirjalili and Simon Coghlan do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.

ref. Does AI pose an existential risk? We asked 5 experts – https://theconversation.com/does-ai-pose-an-existential-risk-we-asked-5-experts-266345