NVIDIA’s D.C. Blitz: Lilly’s AI Factory, Nokia 6G, Uber AVs & DOE’s “Solstice”
Lilly × NVIDIA: Pharma’s largest AI factory, >1,000 Blackwell Ultra GPUs, to accelerate discovery-to-delivery.
Palantir × NVIDIA: CUDA-X + Nemotron models embedded in Palantir AIP for decision intelligence at the edge.
Nokia × NVIDIA: Strategic AI-RAN/6G pact + $1B NVIDIA equity investment at $6.01/share.
Uber × NVIDIA (+ Stellantis): Uber to build one of the world’s largest AV networks on NVIDIA DRIVE; initial 5,000 L4 vehicles from Stellantis.
DOE × NVIDIA (+ Oracle): DOE’s largest AI supercomputer (“Solstice”) using 100,000 Blackwell GPUs; DOE has formally announced the partnership.
Quantum × NVIDIA: NVQLink introduced to directly connect QPUs with GPU supercomputers for microsecond-class hybrid workflows.
1) Eli Lilly x NVIDIA — Building Pharma’s Most Powerful AI Factory
What’s official:
Lilly is deploying an NVIDIA Blackwell-based DGX SuperPOD with >1,000 Blackwell Ultra GPUs to power an “AI factory” for discovery, clinical, manufacturing and enterprise agents.
Lilly’s own release frames it as “the industry’s most powerful AI supercomputer” for medicine discovery and delivery.
Why it matters: Brings foundation-model scale (molecules, modalities, imaging) inside a regulated, proprietary environment—shortening hypothesis-to-trial loops and enabling federated collaborations (via Lilly TuneLab) without exposing IP.
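To make the federated angle concrete, here is a minimal sketch of the standard federated-averaging pattern, assuming partners train locally and share only model updates; it is illustrative only, not Lilly’s or TuneLab’s actual implementation, and every name in it is hypothetical.

```python
import numpy as np

def local_update(global_weights: np.ndarray, private_data: np.ndarray,
                 lr: float = 0.01) -> np.ndarray:
    """One partner's local training step on data that never leaves its site.
    A toy least-squares gradient step stands in for real model fine-tuning."""
    X, y = private_data[:, :-1], private_data[:, -1]
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights: np.ndarray, partner_datasets: list) -> np.ndarray:
    """Central coordinator averages partner updates; raw data and IP stay local."""
    updates = [local_update(global_weights, d) for d in partner_datasets]
    return np.mean(updates, axis=0)

# Toy run: three partners, each holding its own private dataset.
rng = np.random.default_rng(0)
partners = [rng.normal(size=(100, 6)) for _ in range(3)]
weights = np.zeros(5)
for _ in range(50):
    weights = federated_round(weights, partners)
```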
2) Palantir x NVIDIA — Operational AI From Data to Decisions
What’s official:
Palantir is integrating NVIDIA accelerated computing (CUDA-X) and open-source Nemotron models into the Palantir AI Platform’s Ontology core.
Palantir’s blog confirms the partnership unveiled in D.C. and explains how NVIDIA models will be delivered through AIP and at the edge.
Why it matters: It marries Palantir’s structured, mission-critical Ontology with NVIDIA’s training/inference stack for supply chains, logistics, and real-time ops—with early adopters like Lowe’s highlighted.
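For a feel of what consuming a Nemotron model looks like from application code, here is a rough sketch assuming a self-hosted NVIDIA NIM endpoint (NIM for LLMs exposes an OpenAI-compatible API); the URL, key, prompt, and model id are placeholders, and this is not how Palantir AIP wires the integration internally.

```python
from openai import OpenAI

# Hypothetical self-hosted NIM endpoint serving a Nemotron model; the base_url,
# api_key, and model id below are placeholders, not Palantir AIP's internals.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # example Nemotron model id
    messages=[
        {"role": "system", "content": "You are a supply-chain operations assistant."},
        {"role": "user", "content": "Which inbound shipments are at risk of missing SLA this week?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```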
3) Nokia x NVIDIA — AI-RAN on the Path to 6G (+ $1B Equity)
What’s official:
Strategic partnership to add NVIDIA-powered AI-RAN products to Nokia’s RAN portfolio for AI-native 5G-Advanced → 6G.
NVIDIA will invest $1B in Nokia via 166,389,351 new shares at $6.01 each (subject to customary conditions); a quick arithmetic check of those figures follows after this section.
NVIDIA’s own newsroom mirrors the 6G/AI-RAN framing and the investment headline.
Why it matters: Positions AI as a first-class workload inside the RAN—enabling programmable, inference-capable basebands and a glidepath to 6G with distributed edge AI.
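A quick sanity check on the investment terms above, using nothing beyond the disclosed share count and per-share price:

```python
# The disclosed share count times the disclosed price lands at ~$1.0B.
shares = 166_389_351
price_usd = 6.01
print(f"${shares * price_usd:,.2f}")  # -> $999,999,999.51
```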
4) Uber x NVIDIA (with Stellantis) — Scaling a Global L4 Robotaxi Network
What’s official:
Uber IR: building one of the world’s largest AV networks on NVIDIA DRIVE AGX Hyperion 10; Stellantis to supply 5,000 L4 vehicles.
NVIDIA newsroom: partnership to support Uber’s global expansion; DRIVE AV + Hyperion 10 is the reference stack.
Note on “100,000 fleet”: the 100k figure is widely reported by media but does not appear in the official releases above; treat it as reported guidance, not committed inventory, until an official release confirms it.
5) U.S. Department of Energy x NVIDIA (+ Oracle) — “Solstice” & Friends
What’s official:
DOE newsroom confirms a public-private partnership with NVIDIA and Oracle to build the DOE’s largest AI supercomputer.
NVIDIA newsroom: “Solstice” to feature ~100,000 Blackwell GPUs, alongside a smaller system, “Equinox” (~10,000 GPUs).
About “seven supercomputers”: the figure of seven has been cited in media coverage, but DOE has formally published only the flagship partnership; the additional systems have not yet been detailed on DOE’s site.
6) Quantum + NVIDIA — Meet NVQLink
What’s official:
NVIDIA introduced NVQLink, a high-speed interconnect and open platform to link quantum processors (QPUs) with GPU supercomputers (Brookhaven, Berkeley Lab, Oak Ridge, PNNL, etc.).
Why it matters: Hybrid quantum-classical workflows (error mitigation, VQE-style solvers, surface-code research) need microsecond-latency, synchronized data paths between QPUs and GPUs; NVQLink is positioned as that missing bus. (Industry press echoes the positioning.)
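To make the latency requirement concrete, here is a schematic of the tight loop such workflows run; both functions are hypothetical stand-ins for whatever QPU control stack and GPU-side optimizer an interconnect like NVQLink would bridge, not an NVQLink API.

```python
import numpy as np

# Schematic hybrid quantum-classical loop (VQE-style). In a real system the
# circuit evaluations run on a QPU and the parameter update runs on GPUs, and
# each round trip between them must land in a microsecond-class budget to stay
# inside qubit coherence and control timing windows.
def evaluate_on_qpu(params: np.ndarray) -> float:
    """Placeholder for running a parameterized circuit and measuring an
    expectation value; a toy cost surface stands in for real hardware."""
    return float(np.sum(np.cos(params)))

def classical_update_on_gpu(params: np.ndarray, energies: np.ndarray,
                            base: float, lr: float = 0.1) -> np.ndarray:
    """Placeholder for the GPU-side optimizer: a finite-difference gradient
    step built from this iteration's QPU measurements."""
    grads = (energies - base) / 1e-3
    return params - lr * grads

rng = np.random.default_rng(1)
params = rng.uniform(0.0, np.pi, size=4)
for _ in range(100):
    base = evaluate_on_qpu(params)                        # QPU measurement
    shifted = params + np.eye(len(params)) * 1e-3         # one shift per parameter
    energies = np.array([evaluate_on_qpu(p) for p in shifted])  # more QPU calls
    params = classical_update_on_gpu(params, energies, base)    # GPU update
print(f"final energy estimate: {evaluate_on_qpu(params):.3f}")
```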
Status Checks on Items You Listed
Samsung partnership (new): No new Samsung×NVIDIA press release in the last 48 hours. Samsung’s most recent official NVIDIA-adjacent networking item is its AI-RAN work (MWC’25 timeframe); I’ve linked Samsung’s newsroom entry for that context.
Hyundai partnership (new): No new Hyundai×NVIDIA release in the last 48 hours. Hyundai’s official newsroom shows a January 2025 NVIDIA AI collaboration announcement, and NVIDIA’s partner page reflects ongoing software-defined vehicle (SDV) work.
“$500B in expected revenue through 2026”: Official channels don’t state this as revenue. Reuters attributes $500B to chip bookings/backlog, not top-line revenue. Treat as orders, not P&L guidance.
Strategic Takeaways
AI Factories Are Crossing Sectors: From Lilly’s drug discovery to DOE science and Nokia’s AI-RAN, the same NVIDIA compute stack (Blackwell + CUDA-X + NIM) is becoming a common substrate across healthcare, telecom, and government.
Edge & Network-Native AI Rise: Palantir AIP at the edge and AI-RAN in the network make inference a built-in capability of physical infrastructure, not an add-on.
Hybrid Quantum-Classical Is Getting Real: NVQLink formalizes the QPU↔GPU data path required for practical, near-term quantum workflows.
What to Watch Next
Lilly: commissioning timeline + model zoo details on TuneLab; validation studies that show cycle-time compression from hit-to-lead.
Palantir: sector-specific AIP blueprints with Nemotron; performance benchmarks vs. prior AIP stacks.
Nokia: initial AI-RAN field trials and CSP rollouts; 6G research milestones tapping NVIDIA’s stack.
Uber: pilot deployments, safety KPIs, cost/ride vs. human drivers; expansion beyond the first 5,000 Stellantis units.
DOE: “Solstice” acceptance tests, workload mix (climate, fusion, nuclear stewardship), energy envelope.
NVQLink: first lab demos with named QPU vendors (latency, fidelity, throughput).

