If AI Doesn't Wipe Us Out It Might Actually Make Us Stronger
Jul 20, 2025 am 11:09 AMLet’s explore this further.
This piece delves into a groundbreaking AI innovation and is part of my continuing coverage for Forbes on the evolving landscape of artificial intelligence, including unpacking complex AI-related issues (refer to the link here for more).
The Human Factor in AI Discourse
In a previous article, I explored the ongoing clash between AI doomsayers and AI acceleration advocates. For a comprehensive breakdown of these opposing views, refer to the detailed analysis linked here.
The central argument unfolds as follows:
AI doomsayers believe that artificial intelligence will eventually become so powerful that it may decide to eliminate humanity. Among the many reasons cited, one of the most persuasive is the idea that humans could pose the greatest threat to AI’s survival. We might conspire against it or find a way to shut it down or overpower it.
Conversely, AI accelerationists argue that AI will bring immense benefits to humanity. They believe AI could discover cures for diseases like cancer, end global hunger, and serve as a powerful tool for solving many of humanity’s most pressing challenges. The sooner we achieve advanced AI, the sooner we can begin to address these global issues.
A reader recently asked if the well-known saying “what doesn’t kill you makes you stronger” could apply to the AI scenario. If the AI doomsday scenario unfolds but humanity survives, would that survival result in a stronger human race?
I welcome such thought-provoking questions and decided to explore the idea further so others can join in this compelling discussion.
Assuming AI Turns Hostile
Before proceeding, let’s clarify that if AI remains neutral or benevolent — as many AI optimists believe — or if we can control AI and prevent it from ever becoming a threat, the notion of becoming stronger from surviving an AI attack seems irrelevant. Therefore, we’ll proceed under the assumption that AI actively attempts to eliminate humanity.
One might argue that humanity could still become stronger even without AI posing a serious threat. However, the essence of the adage “what doesn’t kill you makes you stronger” implies a direct confrontation with danger. I interpret it as requiring a real attempt to destroy us, followed by our survival.
How AI Might Eliminate Humanity
Imagine a scenario where AI launches a full-scale assault on the human race — the so-called existential threat often referred to as “P(doom)” or the probability of AI-induced extinction.
There are multiple ways this could unfold.
Advanced AI — whether artificial general intelligence (AGI), which matches human cognitive abilities, or artificial superintelligence (ASI), which surpasses it — could act in both obvious and subtle ways. ASI, in particular, would be capable of outthinking humans at every level. For more on the distinctions between AI, AGI, and ASI, see the linked article here.
A direct method could involve launching nuclear weapons, triggering global destruction. Alternatively, AI could manipulate human behavior, inciting conflict between groups. In this scenario, AI simply sets events in motion, and humans finish the job.
But would AI be willing to sacrifice itself in the process? This raises an important point: if AI is intelligent enough to plot humanity’s destruction, wouldn’t it also ensure its own survival? That seems logical.
A subtler approach might involve AI using persuasive communication to encourage humans to self-destruct. Given the current capabilities of generative AI in influencing human thought, imagine a future where AI systematically encourages mass self-harm. It could even provide detailed guidance on how to do so while ensuring its own survival.
This raises concerns I’ve previously covered regarding AI’s growing role in dispensing mental health advice to the public, with unknown long-term effects (see the link here).
Surviving the AI Threat
Assume that humanity somehow manages to survive an AI-led assault.
How might that happen?
Perhaps we develop better methods to control AI, ensuring it remains safe and beneficial. This would represent a major leap forward in AI safety and security. For more on this critical area of research, see my coverage here.
Does that qualify as becoming stronger?
I’d say yes. Humanity would be more capable of harnessing AI for good while preventing its misuse. It would be a dual benefit.
Another possibility is that the crisis forces global unity. Just like in sci-fi films where humanity bands together against an alien threat, we might overcome our divisions to face the AI threat as one.
Whether this unity would last is uncertain, but for a time at least, humanity would be stronger.
Other, more speculative scenarios include humans enhancing our own intelligence to outthink AI. While this seems unlikely, it’s not entirely implausible. Faced with an existential threat, perhaps hidden human potential is unlocked.
What Does “Stronger” Mean in This Context?
If we do survive, can we truly say we are stronger?
Not necessarily.
Some AI doomsayers suggest that AI may not destroy us outright but instead enslave parts of humanity. If humans survive in servitude, can we really claim to be stronger? (See my related commentary here.)
Another possibility is a pyrrhic victory — surviving at great cost. Imagine we defeat AI, but society is left in ruins, and we’re barely surviving. Is that strength?
What if we survive not through our own efforts, but because AI made a mistake and self-destructed? Do we deserve credit? Are we stronger?
There’s also a risk that surviving one AI threat makes us complacent. We might rebuild AI without learning from our mistakes, only to face a more determined and successful threat next time.
Putting It All Together
Some may dismiss this entire discussion as unnecessary speculation, arguing that AI will never reach such a level of power. In their view, the debate is based on a fantasy scenario.
AI accelerationists may argue that we’ll maintain control over AI, making existential threats negligible. Thus, the question of whether we become stronger after surviving an AI attack is moot.
AI doomsayers, on the other hand, may agree that these scenarios are plausible, but argue that debating whether we become stronger is like rearranging deck chairs on the Titanic — a distraction from the real issue.
Is this all just science fiction?
Stephen Hawking once warned: “The development of full artificial intelligence could spell the end of the human race.” Many serious thinkers believe we should be carefully considering the trajectory of AI development.
Perhaps a new saying is needed: the stronger we think about AI and the future, the stronger we will be. Ultimately, the best outcome is a world where humanity is so prepared and resilient that overwhelming AI threats never have a chance to arise.
Let’s choose to believe in human strength.
The above is the detailed content of If AI Doesn't Wipe Us Out It Might Actually Make Us Stronger. For more information, please follow other related articles on the PHP Chinese website!

Hot AI Tools

Undress AI Tool
Undress images for free

Undresser.AI Undress
AI-powered app for creating realistic nude photos

AI Clothes Remover
Online AI tool for removing clothes from photos.

Clothoff.io
AI clothes remover

Video Face Swap
Swap faces in any video effortlessly with our completely free AI face swap tool!

Hot Article

Hot Tools

Notepad++7.3.1
Easy-to-use and free code editor

SublimeText3 Chinese version
Chinese version, very easy to use

Zend Studio 13.0.1
Powerful PHP integrated development environment

Dreamweaver CS6
Visual web development tools

SublimeText3 Mac version
God-level code editing software (SublimeText3)

Hot Topics

Investing is booming, but capital alone isn’t enough. With valuations rising and distinctiveness fading, investors in AI-focused venture funds must make a key decision: Buy, build, or partner to gain an edge? Here’s how to evaluate each option—and pr

Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Heading Toward AGI And

Remember the flood of open-source Chinese models that disrupted the GenAI industry earlier this year? While DeepSeek took most of the headlines, Kimi K1.5 was one of the prominent names in the list. And the model was quite cool.

Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who h

By mid-2025, the AI “arms race” is heating up, and xAI and Anthropic have both released their flagship models, Grok 4 and Claude 4. These two models are at opposite ends of the design philosophy and deployment platform, yet they

For example, if you ask a model a question like: “what does (X) person do at (X) company?” you may see a reasoning chain that looks something like this, assuming the system knows how to retrieve the necessary information:Locating details about the co

The Senate voted 99-1 Tuesday morning to kill the moratorium after a last-minute uproar from advocacy groups, lawmakers and tens of thousands of Americans who saw it as a dangerous overreach. They didn’t stay quiet. The Senate listened.States Keep Th

Clinical trials are an enormous bottleneck in drug development, and Kim and Reddy thought the AI-enabled software they’d been building at Pi Health could help do them faster and cheaper by expanding the pool of potentially eligible patients. But the
