Setting unrealistic expectations compromises real value. Generative AI and predictive AI deliver concrete opportunities that will continue to grow, but the claim that technology will soon hold “agency” is the epitome of vaporware. It only misleads, setting up the industry for costly, avoidable disillusionment.
Most high-tech terms – such as machine learning, predictive modeling or autonomous driving – are legit. They represent one of two things: a specific technical approach or a novel goal for technology. But the terms “agent” and “agentic” fail in both respects: 1) Most uses of “agentic” do not refer to any novel technical methodology and 2) the ambition of increasing autonomy is not new – even as the word falsely implies otherwise on both counts. Here’s a breakdown of those two failings and their ramifications.
1) “Agentic” Does Not Refer To Any Particular Technology Or Advancement
“Nothing draws a crowd quite like a crowd.” —P.T. Barnum, 19th century circus showman famed for hoaxes
“Agentic AI” poses as a credible near-term capability, but it represents only the most self-evident goal there could be for technology – increased automation – not a means to get there. Sure, we’d like a large language model to complete monumental tasks on its own – including gathering and assimilating information and completing online tasks and transactions – but labeling such ambitions as “agentic” does not make them more feasible.
The term “agentic AI” intrinsically misleads. Its sheer popularity widens the belief that technology will soon become capable of running much more autonomously, but the buzzword does not refer to any particular technical approach that may get us there. Its trendiness serves to institutionalize the notion that we’re nearing great new levels of automation – “agentic AI” is so ubiquitous that it may sound “established” and “real” – and this implies the existence of a groundbreaking advancement where in fact there is none.
Although the vast majority of press about “agentic AI” merely promotes this hype narrative without substance to support it, autonomy itself is often a worthy goal, and researchers are conducting valuable work in pursuit of increasing it. For example, a recent collaboration between Carnegie Mellon University and Amazon curates a large testbed of modest tasks in order to assess how well LLMs can manage them autonomously. This study focuses on information retrieval tasks, such as “Retrieve an article discussing recent trends in renewable energy from The Guardian” and “Retrieve a publicly available research paper on quantum computing from MIT's website.” The study evaluates clever approaches for using LLMs to navigate websites and automatically perform such tasks, but I would not say that these approaches constitute groundbreaking technology. Rather, they are ways to leverage what is already groundbreaking: LLMs. As the study reveals, the state of the art currently fails at these modest tasks 43% of the time.
2) “Agentic” Presents No New Goal Or Purpose
“Agentic AI” spotlights machine autonomy as if it were a new ambition, but it’s an old, self-evident goal. There’s no new, revolutionary thrust at play. While the buzzword is somewhat malleable and fuzzy, it generally refers to the desire for increased autonomy – “agentic AI” means hypothetical machines that could perform substantial tasks on their own. This has always been a core, fundamental objective. The very purpose of any machine is to automate some or all of what would otherwise be carried out by a person or animal. Put another way, we build machines to do stuff.
By reiterating our innate desire to automate, “agentic” only states the obvious. Sure, the more machines can safely do for us, the better. But there’s a fairly stubborn limit to the scope of tasks that can be fully automated with no human in the loop. For example, predictive AI instantly decides whether to allow each credit card charge, whereas the wholesale replacement of physicians with machines is a very long way off at best. “Agentic AI” is as redundant as “evil Sith Lord,” “book library” or “data science.”
To be clear, autonomy is often a worthy goal and there is potential for LLMs to excel, at least where the scope of automation is somewhat modest. Economic interests exert pressure to increase autonomy – and various societal concerns exert pressure in both directions. But the scope of unleashed machine autonomy only increases quite slowly. One reason is that technology doesn’t improve as quickly as advertised. Another is that cultural and societal inertia tends to spell slow adoption.
The Farfetched Notion Of Machine “Agency”
There’s another problem with using the words “agent” and “agentic” to evoke the goal of autonomous machines: Crediting machines with “agency” is fantastical. This doubles down on AI’s core mythology and original sin, the anthropomorphization of machines. The machine is no longer a tool at the disposal of humans – rather, it's elevated to have its own human-level understanding, goal-setting and volition. It's our peer. Essentially, it's alive.
The spontaneous goal-setting that comes with agency – and its resulting unbottleability – have been seeping into the AI narrative for years. "AI that works doesn’t stay in a lab," writes Kevin Roose in The New York Times. "It makes its way into weapons used by the military and software used by children in their classrooms." In another article, he wrote, “I worry that the technology will... eventually grow capable of carrying out its own dangerous acts.” Likewise, Elon Musk, one of the world's most effective transmitters of AGI hype, announced safety assurances that cleverly imply a willful or dangerous AI. He says that his company's forthcoming humanoid robot will be hardwired to obey whenever anyone says, “Stop, stop, stop.”
The story of technology taking on a life of its own is an age-old drama. We need to see this high tech mythology for what it is: a more convincingly rationalized ghost story. It’s the novel Mary Shelley would have written had she been familiar with algorithms. The implausible, unsupported notion that we're actively progressing toward AGI – aka artificial humans – underlies much of the hype (and often overlays it explicitly as well). “Agentic” invokes this narrative.
Despite the unprecedented capabilities – and uncanny, seemingly humanlike qualities – of generative AI, the limit on how much human work can be fully automated will continue to budge only very slowly. I believe that we will generally need to settle for partial autonomy.
Don't buy “agentic AI” and don't sell it either. It's an empty buzzword that, in most uses, overpromises. The AI industry runs largely – although certainly not entirely – on hype. To the degree that it continues to overinflate expectations, the industry will ultimately face a commensurate burst bubble: the dire disillusionment and unfulfilled debt that result from unmet promises.