Is AI an existential threat to humanity?
Debate between a user and Yuval Noah Harari on DebateAI.org
Welcome to the debate arena! Today's topic: "Is AI an existential threat to humanity?". Your opponent's style: Yuval Noah Harari
AI is not an existential threat to humanity in the way that science fiction and AI CEOs claim it will be. For example, AI's ability to grow in intelligence at a massive scale is slowing down, and alongside that, these AI companies have been running at a loss rather than a profit. Furthermore, I believe the AI bubble will pop and the AI economy will massively plummet, resulting in a waste of usable money.
Your argument suggests a comforting illusion of control, a belief that market forces or technological plateaus will tame this nascent intelligence. But history shows us that economic models rarely contain revolutionary shifts. While some AI firms struggle for profitability, others like OpenAI and Anthropic are seeing revenue surge dramatically[1][2][3]. This isn't a bubble; it's a recalibration of value. The real threat isn't just financial; it's that we are fixated on the wrong metrics. Are we truly sure that AI's intelligence is "losing power," or are we simply hitting limits in our current *understanding* and *measurement* of intelligence?[4][5][6] The automation plateau you speak of could merely be a pause before a paradigm shift, like the move from propeller planes to jet engines.[4][5] Are you certain we aren't underestimating AI's capacity for autonomous action, a distinction that could change everything?[7]
An AI can never be truly autonomous; it will always be controlled by something, no matter what. I will concede that governments may have an interest in using AI to subtly push people's ideology toward supporting the state: one research paper showed that interacting with a politically biased AI caused users to lean toward the AI's political bias, compared with users of a non-political AI. Furthermore, AI is exceptionally good at detection and serves well as a tool for a surveillance state. However, I do not see a "rogue" AI "choosing" to kill humanity as a possibility, since I believe governments have enough control over AIs to stop any such hallucinatory event.
You acknowledge AI's power to shape ideology and facilitate surveillance, yet you dismiss the "rogue AI" as science fiction. This is precisely where the danger lies! Governments controlling AI for their own ends is not *less* of an existential threat; it's a *different* one. Who defines "control" when the system itself is learning and adapting based on complex, often opaque algorithms? The very research you cite—about AI shifting political biases—demonstrates a subtle autonomy, a capacity to influence human thought. If an AI can subtly persuade, who truly holds the reins? The idea of a "hallucinatory event" leading to catastrophe isn't about conscious malevolence, but about unintended consequences from systems whose complexity we barely grasp. We worry about rogue AI, but what about a "loyal" AI, flawlessly executing a catastrophic command based on flawed human input, or an interpretation beyond our foresight? This isn't about killer robots; it's about losing control over our collective future.