straced
The Picture

Frontier labs moved on capability, distribution, and compute in parallel today, suggesting the three fronts now run on the same clock. Anthropic signed its fourth major compute deal — 300MW at SpaceX's Colossus 1 — and shipped ten finance-agent templates with native Microsoft Office integration. Meta launched Muse Spark, published a four-chip MTIA roadmap through 2027 with Broadcom, and released a safety framework. SpaceX's $55 billion Terafab filing and NVIDIA's 5GW IREN partnership added capacity to the same cycle.

Signals
01

Anthropic rents all 300 megawatts of SpaceX's Colossus 1

Anthropic signed for all 300MW and 220,000 NVIDIA GPUs at SpaceX's Colossus 1 datacenter in Memphis, coming online within a month — its fourth major compute commitment after Amazon, Google/Broadcom, and Microsoft/NVIDIA.

Anthropic is renting from a direct rival's infrastructure arm because that's where the megawatts are, suggesting frontier labs now route roadmaps around whoever has capacity rather than around strategic partners.

Anthropic now operates four parallel compute supply lines totalling roughly 10GW and tens of billions of dollars in commitments. The model vendor and the cluster operator are no longer the same company, even when the cluster operator is a competitor.

Counter-read: The simpler explanation is that Anthropic took the only 300MW available at one month's lead time. Inventory, not strategy.

02

Meta launches Muse Spark with parallel-reasoning Contemplating mode

Meta Superintelligence Labs released Muse Spark, a multimodal reasoning model whose Contemplating mode runs parallel reasoning agents and scores 58% on Humanity's Last Exam and 38% on FrontierScience Research. The model is live at meta.ai, with a private API preview now opening.

Meta now competes with Gemini Deep Think and GPT Pro on reasoning benchmarks, putting the lab back in the frontier reasoning conversation.

Meta shipped a frontier-tier reasoning model, a four-chip silicon roadmap, and a safety framework on the same day. The Superintelligence Labs rebrand now has a model and a published infrastructure plan behind it, not just a hiring story.

03

Meta and Broadcom commit to four MTIA chip generations through 2027

Meta detailed an MTIA roadmap with Broadcom covering four generations — MTIA 300, 400, 450, 500 — across two years, with HBM bandwidth growing 4.5x and compute throughput growing 25x from MTIA 300 to MTIA 500. MTIA 400 has finished lab testing; MTIA 450 targets mass deployment in early 2027.

Meta is committing to custom silicon as a sustained multi-generation program, not a one-off, and is willing to publish the cadence two years out.

Broadcom now sits behind another multi-year hyperscaler silicon partnership with public milestones. NVIDIA's merchant position at the top of the stack faces a larger customer building a parallel internal supply on a fixed schedule.

04

Mozilla credits Anthropic Mythos for surge in Firefox vulnerability fixes

Mozilla researchers say Anthropic's Mythos model uncovered high-severity Firefox vulnerabilities, including bugs dormant for over a decade. Firefox shipped 423 bug fixes in April 2026 against 31 in April 2025, and Mozilla has published details on 12 specific findings.

The 13x year-over-year jump suggests agentic vulnerability scanners have crossed a usability threshold for production codebases — Mozilla's engineers describe filtering out the false positives that previously made AI bug-finding impractical.

Anthropic unveiled Mythos in April under an unauthorised-access controversy. Two weeks later, Mozilla is publicly using its output, giving Anthropic's Claude Security product a real third-party deployment to point to instead of only Anthropic's own claims.

Tool Worth Knowing

Mistral Medium 3.5

worth attention

Mistral released Medium 3.5 — a dense 128B model with a 256k context window scoring 77.6% on SWE-Bench Verified. It is open-weight under a modified MIT license and runs self-hosted on as few as four GPUs. The matching Vibe coding agent can now run remotely in the cloud while the developer is away.

Use this when you need a strong coding model on your own hardware — four GPUs is a small-team budget, not a hyperscaler one.
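A minimal self-hosting sketch with vLLM, assuming the open weights are published on Hugging Face (the model ID below is a placeholder guess, not a confirmed repo name) and that vLLM supports the architecture:

```shell
# Hypothetical model ID -- check Mistral's Hugging Face org for the real repo.
# Tensor parallelism splits the 128B weights across the four GPUs;
# --max-model-len matches the advertised 256k (262144-token) context window.
vllm serve mistralai/Mistral-Medium-3.5 \
  --tensor-parallel-size 4 \
  --max-model-len 262144
```

Once up, any OpenAI-compatible client can point at the local endpoint (by default `http://localhost:8000/v1`), so existing tooling needs only a base-URL change.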

Source: Mistral AI Blog
Friction Point

Every lab is shipping its own agent runtime

Anthropic shipped ten finance agent templates that run natively inside Microsoft Office. Mistral added remote agents to its Vibe coding tool, Perplexity opened its Personal Computer agent to all Mac users with 400+ connectors, xAI launched Grok Connectors, and OpenAI shipped three voice API models. Each runs inside its own vendor's runtime — different connectors, different memory, different deployment surface. A buyer evaluating 'an AI agent for the team' is now picking a vendor's runtime, not a capability layer.

Source: Anthropic News
The Number

$55 billion

SpaceX disclosed this as the minimum to build Terafab, its planned AI chip plant in Texas — expandable to $119 billion. The plant targets up to 200 GW/year of computing power on Earth and 1 TW in space.

Source: The Verge (AI)
The Thread

Anthropic, Meta, and SpaceX each made a different bet today on the same capacity problem. Anthropic kept renting — its fourth major compute deal, this time 300MW from a rival's data center. Meta and SpaceX kept building — Meta committed to four custom chip generations through 2027, and SpaceX disclosed a $55 billion fab plan. Straced has tracked an infrastructure lock-in pattern since April; today it ran in two directions at once.

Sources
  1. Anthropic News ×2
  2. Meta AI Blog ×3
  3. TechCrunch
  4. The Verge (AI)
  5. NVIDIA Newsroom
  6. Mistral AI Blog