Obscure EU AI Startup Unveils Chip Claimed to Outpace Nvidia’s HBM Flagships
A secretive EU AI startup just announced a chip claiming 16,384 processors, 1TB of memory, and 8PB per second of bandwidth, numbers that could outpace Nvidia’s HBM-powered flagships.
An obscure EU AI startup just threw the tech world into chaos. The company unveiled a monster chip packing 16,384 SIMD processors and a jaw-dropping 1TB of memory. If the claimed specs hold up, it could smoke Nvidia’s top HBM-equipped hardware. The news is lighting up X, Reddit, and every tech Slack channel you know.
Massive AI Chip Drops – And It’s Not From Nvidia
This isn’t another incremental update. Euclyd, a little-known European AI hardware startup, just announced the UBM, a chip whose claimed specs are wild enough to make Nvidia sweat:
- 16,384 SIMD processors on a single die
- 1TB of memory built-in
- 8PB per second bandwidth (yes, petabytes)
- 32PF FP4 compute
To be clear: those are Euclyd’s own numbers. Several hardware analysts are poring over them, but nobody outside the company has verified them yet. The chip surfaced at a closed-door AI summit in Berlin on October 1, and the announcement quickly leaked onto TechRadar and hardware forums.
*Image: an AI chip with thousands of processors.*
EU Startup Euclyd – The Secret Power Players
Euclyd was a ghost until yesterday. Now it’s front-page news. The company claims several high-profile backers (names not yet disclosed) with deep ties to European supercomputing and defense. The UBM chip is designed for AI model training at scale: think generative AI, real-time robotics, and next-gen scientific computing.
What’s wild: Euclyd says it can run massive models natively, with no need to split jobs across clusters. That could change how big labs and cloud providers build their AI stacks.
- Target customers: supercomputing centers, national labs, AI startups
- Use cases: LLM training, digital twins, synthetic biology, real-time simulation
Technical Details – Is This for Real?
Hardware blogs are already tearing apart the specs:
- SIMD architecture means huge parallelism, perfect for AI workloads.
- 1TB of on-die memory would be unprecedented. Even HBM stacks sit off-die on the package, and Nvidia’s H200 tops out at 141GB of HBM3e.
- 8PB/s of bandwidth blows past every current industry record. Nvidia’s H200 hits “just” 4.8TB/s, roughly a 1,600x gap.
- 32PF FP4 (petaFLOPS at 4-bit floating point) is extreme for both inference and training, though FP4 is mostly an inference format today.
The chip’s architecture could let it run LLMs with trillions of parameters—no off-chip memory swap needed. That’s never been done at scale before.
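A quick back-of-envelope check shows the capacity claim is at least arithmetically plausible. Here is a minimal sketch in Python, using only the announced 1TB figure; the 20% overhead reserved for activations and KV cache is our own assumption, not anything Euclyd has stated:

```python
# Back-of-envelope: how many 4-bit parameters fit in 1TB of on-chip memory?
# The 1TB figure is Euclyd's claim; the overhead fraction is an assumption.

MEMORY_BYTES = 1e12          # claimed 1TB of on-die memory
BYTES_PER_PARAM_FP4 = 0.5    # 4-bit weights = half a byte per parameter
OVERHEAD = 0.20              # assumed share reserved for activations/KV cache

usable_bytes = MEMORY_BYTES * (1 - OVERHEAD)
max_params = usable_bytes / BYTES_PER_PARAM_FP4
print(f"~{max_params / 1e12:.1f} trillion FP4 parameters")  # ~1.6 trillion
```

In other words, a model on the order of 1.5 trillion 4-bit parameters could sit entirely on chip for inference; training would need extra room for gradients and optimizer state, so the practical ceiling there is lower.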
- Analysts say this could be a generational leap for AI hardware.
- Some skeptics want to see working silicon and published benchmarks before calling it a win.
Why This Changes the Game for Nvidia, AMD, and AI Labs
Nvidia’s HBM memory and GPU clusters have dominated the market. But UBM’s massive parallelism and on-chip memory could rewrite the playbook.
| Hardware | Memory | Bandwidth | Peak compute | Launch year |
|---|---|---|---|---|
| Euclyd UBM (claimed) | 1TB on-die | 8PB/s | 32PF FP4 | 2025 |
| Nvidia H200 | 141GB HBM3e | 4.8TB/s | ~2PF FP8 | 2024 |
| AMD MI300X | 192GB HBM3 | 5.3TB/s | ~2.6PF FP8 | 2023 |

(H200 and MI300X have no native FP4 mode; dense FP8 peaks are listed instead.)
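To put those rows in perspective, the short sketch below compares bandwidth and the compute-to-bandwidth ratio (peak FLOPs per byte moved, a rough proxy for how often a chip is memory-bound). The H200 and MI300X figures come from public spec sheets; the Euclyd numbers are the unverified claims from the table:

```python
# Compare Euclyd's claimed UBM specs with published H200/MI300X figures.
# All Euclyd numbers are vendor claims, not measured results.

chips = {
    # name: (bandwidth in TB/s, peak compute in PFLOPS)
    "Euclyd UBM (claimed)": (8000.0, 32.0),  # 8PB/s, 32PF FP4
    "Nvidia H200": (4.8, 2.0),               # ~2PF FP8 dense
    "AMD MI300X": (5.3, 2.6),                # ~2.6PF FP8 dense
}

for name, (bw_tbs, pflops) in chips.items():
    # Peak FLOPs per byte of bandwidth: a lower ratio means the chip has
    # abundant bandwidth relative to compute, i.e. it is rarely memory-bound.
    flops_per_byte = (pflops * 1e15) / (bw_tbs * 1e12)
    print(f"{name:<22} {bw_tbs:>7.1f} TB/s  {pflops:>5.1f} PF  "
          f"{flops_per_byte:>6.1f} FLOPs/byte")
```

If the claims hold, the UBM would offer roughly 1,600x the H200’s bandwidth while needing only ~4 FLOPs of work per byte moved, meaning memory would almost never be the bottleneck for large-model inference.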
The impact:
- Cloud giants could rethink their entire AI infrastructure
- Startups might stop relying on Nvidia’s chips
- Scientific labs could run bigger models, faster
Big question: Will Euclyd scale production, or is this a proof-of-concept?
What Happens Next – Hype vs Reality
Euclyd’s press contacts have gone silent. Investors are already circling. Hardware giants are scrambling to verify specs. Analysts expect more details at European supercomputing shows later this month.
What to watch:
- First working demo – will it match the hype?
- Pricing – can anyone afford it?
- Availability – is this vaporware or about to ship?
- Third-party analysis – tech blogs and security researchers will dissect the chip for months
If the specs hold up, UBM could spark a new arms race in AI hardware. Nvidia, AMD, and Intel may need to respond fast.
Bottom line:
This chip could upend the AI hardware market overnight. If Euclyd’s UBM delivers on its specs, it’s the biggest leap in silicon since HBM first arrived on GPUs. Every lab, cloud, and AI startup will be watching, and maybe shopping, for what comes next.
Photo by Thomas Jensen on Unsplash