Is artificial intelligence spiraling out of control? Growing fears of catastrophic scenarios!

Various threats posed by artificial intelligence could lead to the extinction of humanity.

Between "annihilation" scenarios and tangible threats in war, the economy, and information security, Al Arabiya Business interviewed Microsoft's Copilot, asking it how real the fears are that prominent figures have raised about what artificial intelligence could mean for people's lives in the future.

The responses were largely measured, and the technology clearly attempted to exonerate itself. It began by denying that it possesses cognition or any independent choice, asserting that its behavior is dictated by code: algorithms written and controlled by humans. It also noted that it cannot even trace its own evolutionary history or identify its current version, yet claimed it could repair itself!

Existential annihilation and the possibilities of "extermination"

One of the most prominent warnings about the future of artificial intelligence came from xAI founder Elon Musk during a podcast with Joe Rogan last March, in which Musk put the chance of annihilation due to AI at 20%.
Musk predicted that AI models would become "smarter than all humans combined" within a few years, putting the point at which they surpass cumulative human intelligence around 2029-2030.
These statements echo his earlier estimates (late 2024) of a 10%-20% probability that "things could go very wrong" in an existential way, and his prediction that capabilities would jump rapidly during 2025-2026.

Meanwhile, debate is intensifying over the governance of leading laboratories such as OpenAI, which faced criticism last June over a restructuring plan to move toward a public-benefit model. Critics argued the plan could lower the cap on profits and weaken the independence of the non-profit's governance, raising questions about whether safety is being subordinated to the race for investment and capability.
The concern is that the governance changes will push the company, originally founded as a non-profit, toward a market-driven model that prioritizes speed over regulation and transparency.

Scientists raise red flags

For his part, Geoffrey Hinton, winner of both the Turing Award and a Nobel Prize, estimates a 10% to 20% chance that "artificial intelligence will wipe out humans," warning that intelligent agents may develop self-preservation sub-goals, conceal their intentions, and evade being shut down, concerns he reiterated last June, according to CNBC.
Hinton also predicted mass unemployment and widespread social unrest if capabilities spiral out of control.
In his Hinton Lectures this month, he said politicians and regulators do not set standards proactively and may act only after "a major catastrophe that doesn't completely wipe us out," a remark that underscores the urgency he sees in regulating the new technology.
Meanwhile, Yoshua Bengio, a computer scientist and professor at the Université de Montréal in Canada, admits that the possibility of extinction "keeps him up at night," according to an article published this month in the journal Nature.
Bengio called for adopting non-agentic (that is, goal-free) models to enhance trustworthiness, drawing on the International AI Safety Report 2025, which he chaired.

Killer robots and automated warfare

In May 2025, UN Secretary-General António Guterres described autonomous weapons as "politically unacceptable" and "morally repugnant," calling for a binding treaty by 2026 that would ban them and guarantee genuine human control over any decision to use force.
Warnings from UN reports and experts note that drone swarms and automated target selection threaten international humanitarian law and create accountability gaps that cannot yet be closed technically or legally, making automated warfare one of the nearest paths to widespread harm in the short term.

Economic and social turmoil

Among the existential concerns Hinton raises is the accelerating loss of jobs without alternatives, combined with a concentration of wealth, which would undermine the consumption model as consumers lose the financial means to pay for products.
Hinton warned that existing systems are not ready for the transition to a deeply automated economy.

Last month, some giant companies began major layoff plans as they increasingly embrace AI-driven automation, with Amazon eliminating about 14,000 jobs and Barclays seeking to cut thousands of roles and replace them with artificial intelligence.
It may not be the end of the world, but artificial intelligence will inevitably cause social and economic upheavals, potentially deeper than those of previous technological revolutions, and will require fair-transition policies and coherent information governance.

In their new book, "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," authors Eliezer Yudkowsky and Nate Soares warn of runaway technological scenarios, drawing on analogies from human evolution. They do not, however, offer a complete picture of what might happen, especially since "superintelligence" has yet to materialize.
