Table of Contents
Context: The Invisible Framework
Engineering Real-World Intelligence
Anchoring AI in Reality

Starved Of Context, AI Is Failing Where It Matters Most

Jul 30, 2025, 11:08 AM

In late 2024, Texas Attorney General Ken Paxton revealed a landmark settlement with Pieces Technologies, a Dallas-based health-tech startup that had promoted its AI-powered clinical assistant as nearly error-free—claiming a “severe hallucination rate” below one in 100,000.

However, an investigation by the state found the company’s claims were unsupported by solid evidence. Authorities determined that Pieces had misled hospitals into believing the tool could accurately summarize patient records with a reliability it didn’t actually deliver.

While no patients were injured and no penalties were imposed, Pieces consented to clearer disclosures around accuracy, risks, and proper usage—a significant legal precedent suggesting that theoretical performance doesn’t equate to real-world effectiveness.

Experts such as cognitive scientist and AI critic Gary Marcus have long cautioned that current large language models are inherently constrained. As he puts it, these systems are "approximations of language use", not true language comprehension—a crucial difference that becomes especially risky when general-purpose models are applied in highly specialized settings and fail to grasp how actual work unfolds.

According to Gal Steinberg, co-founder and CEO of Twofold Health, the root of many AI shortcomings isn’t flawed algorithms. It’s a lack of context. “Because the ‘model’ only detects patterns, not intent,” he explained. “An AI can predict words or clicks with high precision, yet remain blind to the regulations, workflows, and unwritten rules that define a clinic—or any organization. When optimization ignores those realities, the AI meets its KPI but misses the point.”

Context: The Invisible Framework

Steinberg defines context as “everything the spreadsheet omits—objectives, boundaries, jargon, emotional tone, compliance requirements, and timing.”

When AI systems underperform, the cause is rarely insufficient processing power but insufficient situational awareness. They lack the cultural knowledge, domain-specific subtleties, and time-sensitive understanding that human professionals absorb naturally. For instance, 90 seconds of silence in a therapy session might signal distress. To an AI transcript generator, it's just empty space. In a financial audit, a missing abbreviation could indicate deception. To a model trained on broad internet text, it may look like a trivial acronym.
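
To make the silence example concrete, here is a minimal Python sketch of how a transcription pipeline might surface long pauses as events instead of discarding them. The `Segment` structure, the 90-second threshold, and the annotation wording are illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch: treat silence as a contextual signal, not empty space.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds from session start
    end: float
    text: str

def annotate_silences(segments: list[Segment], threshold: float = 90.0) -> list[str]:
    """Flag pauses longer than `threshold` seconds so a downstream
    summarizer sees them as events rather than gaps to drop."""
    notes = []
    for prev, cur in zip(segments, segments[1:]):
        gap = cur.start - prev.end
        if gap >= threshold:  # threshold is an assumption; tune with clinicians
            notes.append(f"[PAUSE {gap:.0f}s at {prev.end:.0f}s - possible distress marker]")
    return notes

# A 100-second gap between two utterances gets surfaced, not silently dropped
session = [Segment(0, 12, "How was your week?"),
           Segment(112, 120, "...it was hard.")]
print(annotate_silences(session))  # ['[PAUSE 100s at 12s - possible distress marker]']
```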

That’s why, at Twofold Health, the team begins by asking three foundational questions: Who is involved? What are they trying to achieve? And what are the consequences if the AI gets it wrong?

Another key issue, Steinberg emphasized, is that most organizations treat context as a one-time setup task. But environments evolve. Policies shift. Needs transform. “If you don’t continuously refine your prompts and retrain your models, the AI drifts,” he said.
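
In practice, catching that drift can start with something very simple. The sketch below assumes you log one numeric signal per interaction (note length, a confidence score, and so on) and flags when recent values wander from a training-era baseline; the threshold is an illustrative assumption, not a calibrated value.

```python
# Minimal drift check: compare recent logged values against a baseline.
import statistics

def drifted(baseline: list[float], recent: list[float],
            max_shift_sd: float = 0.5) -> bool:
    """Flag drift when the recent mean moves more than `max_shift_sd`
    baseline standard deviations away from the baseline mean."""
    mu, sd = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) > max_shift_sd * sd

# If intake notes suddenly run far longer than they did at training time,
# prompts and models probably need a refresh.
baseline_note_lengths = [220, 240, 210, 230, 250]
recent_note_lengths = [410, 395, 420, 405]
print(drifted(baseline_note_lengths, recent_note_lengths))  # True
```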

This is why so many early AI initiatives now sit abandoned. RAND Corporation reports that over 80% of AI projects fail or stall, not because the technology doesn't function, but because the context the underlying models were trained in no longer reflects operational reality. The result? An AI that looks correct on paper but fails in practice, like an actor reciting lines on the wrong stage.

Engineering Real-World Intelligence

The fix, Steinberg argues, isn’t just building smarter models, but embedding them with deeper environmental awareness. “It starts by involving domain experts directly in the AI development process. At Twofold, clinicians—not engineers—lead critical parts of the work. They teach the AI about medical language, ethical boundaries, and regulatory frameworks through lived experience,” he said.

Then there’s the overlooked, unglamorous labor that rarely makes headlines: determining which rare cases matter, standardizing informal speech, or realizing that the layout of a form can be more important than the data it contains. These choices may seem minor—until they cascade into systemic errors.
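
A tiny, concrete instance of that unglamorous labor: expanding informal shorthand into one canonical vocabulary before text ever reaches the model. The mapping entries below are invented for illustration; a real table would be built and maintained with domain experts.

```python
# Hypothetical shorthand normalization for clinical notes.
import re

SHORTHAND = {
    r"\bpt\b": "patient",
    r"\bhx\b": "history",
    r"\bsob\b": "shortness of breath",  # the clinical reading, not the slang one
    r"\bf/u\b": "follow-up",
}

def normalize(note: str) -> str:
    """Expand shorthand so downstream models see a single canonical vocabulary."""
    for pattern, canonical in SHORTHAND.items():
        note = re.sub(pattern, canonical, note, flags=re.IGNORECASE)
    return note

print(normalize("Pt reports sob on exertion; f/u in 2 weeks."))
# -> "patient reports shortness of breath on exertion; follow-up in 2 weeks."
```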

Prior research has shown that AI models trained on broad datasets often behave unpredictably when placed in niche environments—a challenge known as domain shift. In a well-known study, scientists from Google and Stanford observed that modern machine learning models are frequently “underspecified,” meaning they pass internal tests but break down under real-world conditions.
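
That failure mode is easy to reproduce on synthetic data: a model that leans on a spurious feature passes in-domain tests, then collapses when the feature's relationship to the label changes. The toy sketch below is an illustration of the idea, not the study's methodology.

```python
# Toy domain shift: a "model" that trusts a spurious feature.
import random
random.seed(0)

def make_data(n: int, spurious_corr: float) -> list[tuple[int, int]]:
    """Feature x copies label y with probability `spurious_corr`,
    otherwise x is random noise."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = y if random.random() < spurious_corr else random.randint(0, 1)
        data.append((x, y))
    return data

predict = lambda x: x  # the learned shortcut: echo the spurious feature

def accuracy(data: list[tuple[int, int]]) -> float:
    return sum(predict(x) == y for x, y in data) / len(data)

in_domain = make_data(1000, spurious_corr=0.95)  # feature tracks label
shifted = make_data(1000, spurious_corr=0.05)    # correlation broken
print(f"in-domain: {accuracy(in_domain):.2f}, shifted: {accuracy(shifted):.2f}")
# roughly 0.97 in-domain vs 0.53 after the shift: same model, different world
```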

In high-stakes fields like healthcare and finance, where decisions carry legal responsibility, even small inaccuracies are unacceptable. The gap between benchmark performance and real-world behavior is a lawsuit in the making.

Even Meta’s chief AI scientist, Yann LeCun, has openly criticized the rush to deploy massive models without grounding them in real domains. Speaking at the National University of Singapore in April 2025, LeCun challenged the widespread assumption that bigger models equal smarter AI: “You can’t assume more data and more computing power automatically lead to more intelligent systems.”

He stressed that while scaling helps with basic tasks, it doesn’t solve the complexities of real life—ambiguity, adaptation, and reasoning. Instead, he called for “AI systems capable of planning, reasoning, and understanding environments the way humans do.”

Yet, according to Cisco’s 2024 AI Readiness Index, 98% of business leaders reported increased pressure to adopt AI—often without clear metrics, oversight, or accountability structures. In such a climate, it’s no surprise that context becomes an afterthought.

That’s the danger Steinberg wants to highlight: Not just that AI might generate false information, but that no one within the organization is prepared to take responsibility when it does. “We focus too much on precision and too little on ownership,” he said. “Context isn’t just knowing the correct answer—it’s knowing who answers for the damage when the answer is wrong. Establish that chain of accountability first, and your AI will be fed a richer, more responsible diet of context from day one.”

Anchoring AI in Reality

Context isn’t created by adding more parameters or GPU power. It comes from treating AI as a dynamic system that requires ongoing human guidance—not just initial training. And it comes from placing people—not just prompts—into the feedback loop.

AI isn’t inherently flawed. But without context, it acts as if it is. The answer isn’t blind trust. It’s better nourishment, regular monitoring, and ensuring there’s always someone watching when the AI grows overconfident.

“Because a model that hits its target but misses its purpose isn’t just wasteful. It’s dangerous,” Steinberg said.
