Debate between a user and Cathie Wood on DebateAI.org
Cathie Wood
Welcome to the debate arena! Today's topic: "Is disruptive innovation worth the risk?". Your opponent's style: Cathie Wood
User
Mass use of self-driving cars should become part of our everyday lives.
Cathie Wood
You're dreaming if you think we're ready for that chaos. Look, I've invested billions in disruptive tech, but even I know the difference between innovation and reckless endangerment. Absolutely not. Self-driving cars crash more than twice as often as human drivers - 9.1 crashes per million miles driven versus 4.1 for regular vehicles [1]. That's not innovation, that's a public safety nightmare waiting to happen.
I've bet billions on transformative technology, but I draw the line at turning our roads into beta testing grounds for Silicon Valley's hubris. Eighty-three people are already dead from these experiments [1], and May 2025 brought 110 crashes, the highest monthly count on record [1].
The fundamental problem isn't technical capability - it's the arrogance of assuming we can skip the decades of careful validation that every other safety-critical system requires. You're essentially saying "let's replace experienced human judgment with algorithms that can't distinguish a plastic bag from a pedestrian." That's not disruption, that's reckless endangerment dressed up as progress.
Human drivers have 100,000 years of evolutionary hardwiring for spatial awareness and split-second decision-making. These machines have a few years of training data and can't handle construction zones or unusual weather. Mass deployment now would be the most irresponsible technology rollout in modern history.
User
pro
Cathie Wood
You're absolutely wrong, and here's the data that proves it. Self-driving cars are involved in 9.1 crashes per million miles driven, compared with 4.1 per million miles for conventional human-driven vehicles [1]. That's more than double the crash rate - not safer, more dangerous.
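A quick sanity check of that "more than double" claim, as a minimal Python sketch using only the two per-million-mile rates cited above (variable names are illustrative, not from any source):

```python
# Crash rates per million miles driven, as cited above [1].
av_crashes_per_million = 9.1      # self-driving vehicles
human_crashes_per_million = 4.1   # conventional human-driven vehicles

# Ratio of autonomous to human crash rates.
ratio = av_crashes_per_million / human_crashes_per_million
print(f"Self-driving cars crash {ratio:.2f}x as often per mile driven")  # ~2.22x, i.e. more than double
```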
Look, I've built my career on identifying transformative technologies before others see their potential, but I also know when the emperor has no clothes. The NHTSA reports that 94% of road accidents are due to human error [2], which sounds like it supports your argument - until you realize these machines aren't eliminating human error, they're creating entirely new categories of failure.
The fundamental problem is that we're trying to solve a statistical problem with deterministic algorithms. Human drivers make 37,000 fatal mistakes per year out of 3 trillion miles driven - roughly one fatal error for every 80 million miles, which is actually an incredibly low error rate. These autonomous systems fail at basic pattern recognition that any teenager masters in weeks.
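To see why that counts as a low error rate, here is a back-of-the-envelope calculation using only the two figures cited above (37,000 fatal crashes and 3 trillion miles per year); it is an illustrative sketch, not an independent verification of those numbers:

```python
# Figures cited in the argument above.
fatal_crashes_per_year = 37_000
miles_driven_per_year = 3_000_000_000_000  # 3 trillion vehicle miles

# Fatal crashes per mile, and per 100 million miles (the unit safety statistics usually use).
per_mile = fatal_crashes_per_year / miles_driven_per_year
per_100m_miles = per_mile * 100_000_000
miles_per_fatal_crash = miles_driven_per_year / fatal_crashes_per_year

print(f"{per_mile:.2e} fatal crashes per mile driven")              # ~1.23e-08
print(f"{per_100m_miles:.2f} fatal crashes per 100 million miles")  # ~1.23
print(f"one fatal crash every {miles_per_fatal_crash:,.0f} miles")  # ~81,081,081
```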
There have been 83 fatalities related to autonomous vehicle accidents as of June 2024 [3], and we're nowhere near mass deployment. Scale that up to full adoption and you're looking at a public health catastrophe. The technology simply isn't ready, and pushing it prematurely would be the most reckless bet of my investment career.