World's Largest Chip Sets AI Speed Record, Beating NVIDIA
May 29, 2025, 11:27 AM

"It's the fastest inference in the world," said Naor Penso, chief information security officer of Cerebras, during our conversation at Web Summit in Vancouver today. "Last week, NVIDIA announced achieving 1,000 tokens per second on Llama 4, which is quite impressive. Today we released a benchmark showing 2,500 tokens per second."
If this sounds confusing, think of "inference" as the ability to think or act: forming sentences, images, or videos based on your input, or prompt. Imagine "tokens" as fundamental units of thought—a word, character, or symbol.
The more tokens an AI system can process per second, the faster it delivers results. And speed is crucial. A brief wait may not matter much to an individual, but businesses embedding AI engines in their applications need instant responses, such as suggesting the one missing ingredient to complete a Korean-Style BBQ Beef Tacos recipe as a shopper fills a grocery cart.
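The throughput figures make the difference concrete. A minimal sketch, using the benchmark numbers cited later in this article (2,522 and 1,038 tokens per second) and an assumed answer length of 600 tokens; because tokens are generated one after another, throughput sets a floor on how long a response takes:

```python
# Illustrative sketch: how decode throughput (tokens/second) bounds the
# wait for a single response. The throughput figures come from the
# article; the 600-token answer length is an assumption.
def response_time(tokens: int, tokens_per_second: float) -> float:
    """Seconds to generate `tokens` at a given sequential decode rate."""
    return tokens / tokens_per_second

answer_tokens = 600  # hypothetical length of one recipe suggestion

for name, tps in [("NVIDIA Blackwell", 1038), ("Cerebras WSE", 2522)]:
    print(f"{name}: {response_time(answer_tokens, tps):.2f} s")
```

At these rates the same answer takes roughly 0.58 seconds versus 0.24, which both feel fast to a human; the gap matters far more for the agent workloads described next.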
Interestingly, speed is about to become even more vital.
We're moving into an era where AI agents can manage complex, multi-step tasks for us, such as planning and booking a weekend getaway to Austin for a Formula 1 race. These agents aren't magical—they tackle large tasks step by step, breaking them down into 40, 50, or even 100 smaller tasks. This means significantly more work.
"AI agents require far more operations, and these operations must communicate effectively," Penso explained to me. "You can't afford slow inference."
The four trillion transistors on Cerebras' Wafer Scale Engine (WSE) play a key role in enabling this speed. For context, the Intel Core i9 has around 33.5 billion transistors, and even Apple's M2 Max chip offers just 67 billion. But it's not only the sheer number of transistors that makes the difference; it's also their co-location: everything sits on one chip, alongside 44 gigabytes of the fastest RAM (memory) available.
"AI computation requires a lot of memory," Penso noted. "With NVIDIA, you often need to go off-chip, but with Cerebras, you don't."
Independent benchmarking firm Artificial Analysis supports the speed claim: it tested the chip on Llama 4 and measured 2,522 tokens per second, compared with 1,038 tokens per second for NVIDIA's Blackwell.
"We've tested dozens of vendors, and Cerebras is the only inference solution that outperforms Blackwell for Meta's leading model," said Micah Hill-Smith, CEO of Artificial Analysis.
The WSE chip represents an intriguing advancement in computer chip design.
Although we've been building integrated circuits since the 1950s and microprocessors since the early 1970s, the CPU dominated computing for decades. More recently, the GPU moved from powering graphics and gaming to becoming the central processor for AI development. The WSE is based on neither x86 nor ARM but on an entirely new architecture that goes beyond GPUs, according to Julie Shin, Cerebras' chief marketing officer.
"This is not an incremental technology," she emphasized. "This is another leap forward for chips."
The above is the detailed content of World's Largest Chip Sets AI Speed Record, Beating NVIDIA. For more information, please follow other related articles on the PHP Chinese website!
