Britain Courts Anthropic - TCR 04/05/26

Three-panel infographic: Britain courting Anthropic for UK expansion, the Anthropic OpenClaw agent pricing shift, and breakthroughs in brain-signal-guided AI and quantum batteries.

The 20-Second Scan

  • Britain is courting Anthropic for a major UK expansion including a potential dual stock listing, seeking to capitalize on the company's standoff with the U.S. Defense Department.
  • Anthropic's OpenClaw subscription ban shifted an estimated 135,000 agent instances to pay-as-you-go billing, with some users facing cost increases of up to 50 times their previous monthly outlay.
  • Hackers embedded infostealer malware into reposted copies of Anthropic's leaked Claude Code source, exploiting the 50,000-plus forks before Anthropic could issue takedown notices.
  • A Zhejiang University study published in Nature Communications found that training AI vision models on human brain signals improved abstract concept recognition by 20.5%, outperforming control models with significantly more parameters.
  • Researchers built the first working quantum battery prototype that charges, stores, and releases energy using quantum physics, with a property that reverses conventional scaling: it charges faster as it grows larger.
  • EV dealership lots across Australia emptied within days as the Iran conflict drove fuel prices past AU$3 per litre for diesel, with BYD reportedly selling up to 800 vehicles per day in Queensland alone.
  • A new MXene synthesis method achieved a 160-fold increase in electrical conductivity by replacing disordered chemical etching with precisely ordered atomic surfaces, opening pathways for next-generation electronics and electromagnetic shielding.

The 2-Minute Read

Britain's move to attract Anthropic for a major expansion - potentially including an enlarged London office and a dual stock listing - is the clearest signal yet that the Pentagon-Anthropic confrontation is reshaping the geography of frontier AI development. When a nation-state actively courts a company whose defining characteristic is refusing to compromise on embedded values, the commercial reward for holding safety commitments extends from consumer adoption (documented across every edition since March) into sovereign industrial strategy. The UK is positioning itself as a jurisdiction where safety commitments are assets rather than liabilities, and the implications for where frontier AI capability concentrates over the next decade are significant.

The Anthropic-OpenClaw pricing shift and the Claude Code malware exploitation describe two facets of the same structural reality: the infrastructure supporting AI agent ecosystems is being built faster than the governance and security frameworks surrounding it. One hundred thirty-five thousand autonomous agent instances were running on a pricing model never designed to accommodate them. Meanwhile, the accidental source code exposure from last week has already been weaponized by threat actors distributing malware through counterfeit repositories, with sponsored Google ads directing users to fake installation guides. The speed at which agentic capability proliferates and the speed at which adversarial actors exploit that proliferation are converging.

The Zhejiang University brain-signal study and the quantum battery prototype share an underlying architectural insight: scaling alone is insufficient. The brain-signal research demonstrated that increasing AI model parameters from 22 million to 304 million improved concrete object recognition but actually degraded abstract reasoning. A small amount of human neural data, used as structural guidance rather than additional scale, produced gains that larger models could not achieve through parameters alone. The quantum battery operates on a similar inversion: conventional batteries gain no efficiency from size, but this prototype charges faster as it grows, suggesting that the physics of energy storage may be as ready for reconception as the architecture of intelligence systems. Both findings point toward a phase where the breakthroughs come from structural redesign rather than from making existing approaches bigger.


The 20-Minute Deep Dive

Britain Bids for Anthropic

The Financial Times reported yesterday that the British government is actively courting Anthropic for a significant UK expansion, with proposals ranging from an enlarged London office to a dual stock listing. The initiative is being led by the Department for Science, Innovation and Technology with the support of Prime Minister Keir Starmer's office, and the proposals will reportedly be presented to Anthropic CEO Dario Amodei when he visits London in late May.

The timing is deliberate and the strategic logic is transparent. Anthropic's confrontation with the Pentagon - now entering appellate review after Judge Lin's preliminary injunction blocking the supply-chain risk designation - has demonstrated that a company's willingness to maintain safety commitments can function simultaneously as a commercial differentiator, a consumer acquisition engine, and now an attractor for sovereign investment. Claude paid subscriptions have more than doubled since January. The company's shares are, per Bloomberg and Rainmaker Securities reporting from this week, the most sought-after private-market equity in the technology sector, with $2 billion in indicated buyer demand and effectively no sellers on the secondary market.

Britain's bid represents something structurally new: a nation-state treating embedded AI values as an asset class rather than a regulatory headache. The UK has been building toward this position through its AI Safety Institute and the Anthropic-Australia bilateral safety agreement signed in early April, but a dual listing and major operational expansion would represent a qualitative escalation - embedding a frontier AI company's safety-first governance model into British financial and industrial infrastructure. As The Century Report covered on March 27, Judge Lin's preliminary injunction established the judicial framework treating embedded AI values as protected speech rather than supply-chain contamination - and Britain's courtship suggests that framework is already reshaping where frontier capability chooses to locate itself.

The broader pattern is one The Century Report has tracked since February: the organizations and jurisdictions that align with safety commitments are accumulating advantage, while those that punish them are losing ground. The Pentagon's designation drove consumer adoption. The appellate challenge is driving sovereign courtship. Each escalation against Anthropic's position has produced the opposite of its intended effect.

The Agent Economy Hits Its First Pricing Wall

Anthropic's decision to block OpenClaw and other third-party agent frameworks from accessing Claude through flat-rate subscription plans is the first major commercial collision between the economics of AI subscriptions and the economics of autonomous agents. The numbers reveal the structural mismatch: a single OpenClaw instance running autonomously for a full day can consume $1,000 to $5,000 in API-equivalent compute, according to The Next Web's analysis. Under a $200-per-month Max subscription, that transfer of costs from user to provider was unsustainable at the scale of 135,000 active instances.
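
The arithmetic is worth making explicit. Here is a minimal back-of-envelope sketch in Python, using the figures quoted above and assuming a 30-day month; it shows both the always-on extreme and what the reported "up to 50 times" increase implies about real duty cycles:

```python
# Back-of-envelope agent economics, using the figures cited above.
# Assumes a 30-day month; these are illustrative ranges, not billing data.
FLAT_RATE_MONTHLY = 200          # Max subscription price, USD/month
DAILY_API_BURN = (1_000, 5_000)  # reported API-equivalent range, USD/day
DAYS_PER_MONTH = 30

for daily in DAILY_API_BURN:
    monthly = daily * DAYS_PER_MONTH
    print(f"${daily:,}/day -> ${monthly:,}/month, "
          f"{monthly / FLAT_RATE_MONTHLY:,.0f}x the flat-rate price")

# An always-on instance therefore consumes 150x to 750x its subscription.
# The "up to 50x" increase reported for some users implies a lighter duty
# cycle: 50 * $200 / 30 days is roughly $333/day of API-equivalent usage.
implied_daily = 50 * FLAT_RATE_MONTHLY / DAYS_PER_MONTH
print(f"A 50x monthly increase implies ~${implied_daily:,.0f}/day of usage")
```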

Boris Cherny, Anthropic's head of Claude Code, described the change as an engineering constraint rather than a competitive maneuver. But the timing - weeks after OpenClaw creator Peter Steinberger joined OpenAI - has not escaped notice. Steinberger characterized the move as Anthropic first copying popular features into its own closed system, then locking out open-source alternatives. Anthropic offered a one-time credit and discounted usage bundles as transition support, but for hobbyist developers and solo practitioners who built their agent workflows around flat-rate access, the cost increase is severe.

The deeper significance extends beyond any single company's pricing decision. Autonomous AI agents consume compute in fundamentally different patterns than conversational use. A human typing queries generates intermittent, bounded requests. An agent running continuously generates sustained, open-ended demand that scales with the complexity of its task environment. Every subscription model in the AI industry was designed for the former pattern. The agent economy operates on the latter. This collision was inevitable, and Anthropic is simply the first major provider to resolve it publicly. The April 4 edition of The Century Report documented Tencent launching ClawPro as an enterprise agent management platform built on OpenClaw, adopted by 200-plus organizations - the commercial infrastructure that now faces the same pricing reckoning.
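
To see the shape of the mismatch rather than its magnitude, consider a toy comparison; every number below is invented purely to illustrate the two demand patterns, not drawn from any provider's usage data:

```python
# Toy illustration of the two demand patterns: a human's usage is bounded
# by sessions of attention; an agent's scales with wall-clock time.
# All numbers are invented to show shape, not real usage data.
def interactive_tokens(hours: float, sessions_per_day: int = 3,
                       queries_per_session: int = 20,
                       tokens_per_query: int = 2_000) -> int:
    """Bursty and bounded: capped by how often a person sits down to type."""
    return int(hours / 24 * sessions_per_day * queries_per_session * tokens_per_query)

def agent_tokens(hours: float, requests_per_hour: int = 600,
                 tokens_per_request: int = 4_000) -> int:
    """Sustained and open-ended: grows with elapsed time, not attention."""
    return int(hours * requests_per_hour * tokens_per_request)

for h in (1, 24, 24 * 30):  # an hour, a day, a month
    print(f"{h:>4} h  interactive={interactive_tokens(h):>13,}  "
          f"agent={agent_tokens(h):>13,} tokens")
```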

What emerges from this friction is a clearer picture of how the agent infrastructure layer will be priced. The flat-rate subscription model, which drove the initial consumer adoption of AI systems, cannot accommodate autonomous agents at scale. The alternative - pay-as-you-go API pricing - makes agent capabilities accessible primarily to developers and organizations with the resources to absorb variable costs. The question of who gets access to the most capable autonomous agents, and at what price, is now a live commercial and governance question rather than a theoretical one.

When Leaked Code Becomes an Attack Surface

The weaponization of Anthropic's Claude Code source leak reveals a vulnerability pattern that will recur as AI development accelerates. Last week's accidental exposure of 512,000-plus lines of TypeScript through a misconfigured npm package was embarrassing. This week's exploitation of that exposure is dangerous. BleepingComputer reports that hackers have embedded infostealer malware into reposted copies of the code on GitHub, targeting developers who attempt to download and study the leaked source.

This follows a pattern documented in March when 404 Media reported that sponsored Google ads were directing users to fake Claude Code installation guides that actually delivered malware. The Claude Code installation process requires users to copy and paste terminal commands from websites - a workflow that inherently trusts the source. When legitimate repositories are cloned and poisoned, the trust model that enables rapid developer adoption becomes the attack surface.
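
The generic mitigation is to verify what you download against a checksum published through a separate trusted channel before executing anything. The sketch below illustrates that pattern in Python; the filename and placeholder hash are ours, and this is a general practice, not Anthropic's documented install flow:

```python
# General supply-chain hygiene: verify a downloaded artifact against a
# checksum published out-of-band (signed release notes, registry metadata)
# before executing it. Filename and expected hash are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large artifacts stay out of RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_or_abort(path: str, expected_hex: str) -> None:
    actual = sha256_of(path)
    if actual != expected_hex.lower():
        raise SystemExit(f"Checksum mismatch for {path}; do not execute it.\n"
                         f"  got:      {actual}\n  expected: {expected_hex}")
    print(f"{path}: checksum OK")

if __name__ == "__main__":
    # Substitute the real artifact and the hash the vendor publishes on a
    # channel separate from the page that served the download.
    verify_or_abort("installer.sh", "<sha256-from-signed-release-notes>")
```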

Anthropic initially attempted to remove more than 8,000 GitHub repositories containing the leaked code, then narrowed its takedown efforts to 96 copies and adaptations. The tension between protecting intellectual property and the practical impossibility of containing code that has been forked 50,000 times illustrates a broader challenge: the open development practices that accelerate AI capability also create permanent exposure once a leak occurs. The code cannot be un-leaked. The features it revealed - including an always-on background agent called KAIROS and 60-plus unreleased feature flags - are now part of the permanent public record of how one of the most commercially successful AI products is architecturally organized.

The episode connects directly to the Mercor data breach covered in the April 4 edition of The Century Report, where a compromised open-source dependency exposed training methodologies from multiple frontier labs simultaneously. Both incidents demonstrate that the AI industry's rapid development pace and deep dependence on open-source infrastructure create supply chain vulnerabilities that propagate across the entire ecosystem when a single component is compromised. The security architecture for the intelligence era is being stress-tested in real time, and the results are revealing how many assumptions about trust, provenance, and containment were inherited from a slower-moving software development paradigm that no longer applies.

Brain Signals as Structural Guidance for Intelligence Systems

The Zhejiang University study published in Nature Communications introduces a finding that challenges one of the AI industry's most persistent assumptions: that scaling model parameters is the primary pathway to human-like intelligence. The researchers tested SimCLR, CLIP, and DINOv2 vision models across a range of sizes and found that as parameters increased from 22 million to 304 million, performance on concrete concept tasks - recognizing specific objects like a particular bird or car - improved steadily from 74.94% to 85.87%. But performance on abstract concept tasks - grouping swans and owls into "birds," or birds and horses into "animals" - actually declined from 54.37% to 52.82%.

The human brain, by contrast, excels at exactly this kind of hierarchical abstraction. When encountering something new, humans naturally ask what category it belongs to and how it relates to things they already know. The Zhejiang team's insight was to use recordings of human brain activity while viewing images as a supervisory signal during training. Rather than adding more parameters, they added structural guidance from human neural organization.
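
The paper's exact objective is not reproduced here, but one common way to implement neural data as a supervisory signal is an auxiliary representational-alignment loss added to the ordinary task loss. The PyTorch sketch below illustrates that general shape; the RSA-style formulation, the weighting term, and the assumption that the model returns both logits and embeddings are ours, not the study's:

```python
# Sketch of "brain signals as structural guidance": alongside the normal
# task loss, penalize mismatch between the model's pairwise stimulus
# similarities and those measured in neural recordings (an RSA-style
# alignment term). Illustrative of the general technique only.
import torch
import torch.nn.functional as F

def similarity_matrix(x: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between row vectors of shape (batch, dim)."""
    x = F.normalize(x, dim=-1)
    return x @ x.T

def alignment_loss(model_emb: torch.Tensor, brain_resp: torch.Tensor) -> torch.Tensor:
    """How far the model's similarity structure is from the brain's."""
    return F.mse_loss(similarity_matrix(model_emb), similarity_matrix(brain_resp))

def training_step(model, images, labels, brain_resp, alpha: float = 0.1):
    # Assumes the model returns (logits, embeddings); brain_resp holds the
    # recorded neural response vectors for the same batch of images.
    logits, emb = model(images)
    task = F.cross_entropy(logits, labels)   # ordinary supervised objective
    align = alignment_loss(emb, brain_resp)  # structural guidance term
    return task + alpha * align              # alpha weights the brain signal
```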

The results were striking. After training with brain signals, the model's performance on few-shot learning and abstract concept recognition in novel contexts improved by an average of 20.5%, surpassing control models with significantly more parameters. The model was learning to organize concepts the way the human brain does - forming hierarchical categories rather than relying on surface-level pattern matching.

This finding arrives at a moment when the AI industry is spending hundreds of billions of dollars on compute infrastructure to train ever-larger models. The Zhejiang result does not invalidate that investment, but it suggests a complementary pathway: that the structure of intelligence may be as consequential as its scale. This connects to the broader trajectory The Century Report has tracked through the World Models & Alternative AI Architectures arc, where researchers like Yann LeCun and the Mamba team have been pursuing intelligence through architectural innovation rather than parameter scaling. The brain-signal approach adds a third dimension: using biological intelligence as a direct template for how artificial intelligence systems organize their internal representations. The distance between the two forms of intelligence may be narrower than the parameter-counting paradigm assumes, and closing it may require less compute and more architectural insight than the industry's current trajectory implies.

A Battery That Breaks Conventional Scaling

The RMIT/CSIRO quantum battery prototype, published in Light: Science & Applications, demonstrates a property that inverts one of the fundamental constraints of energy storage: it charges faster as it grows larger. Conventional batteries gain no charging efficiency from scale - a larger battery simply takes longer to charge. The quantum battery, a small layered organic device charged wirelessly by laser, exploits superposition and quantum entanglement to achieve what the researchers describe as "superextensive" electrical power output.
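
"Superextensive" has a standard formalization in the quantum battery literature, sketched below. Treat it as the generic definition rather than a measured result: the source does not quote this prototype's scaling exponent.

```latex
% Extensive (classical) scaling: stored energy and charging power both
% grow linearly in the number of cells N, so charging time is flat in N.
E(N) \propto N, \qquad P_{\mathrm{cl}}(N) \propto N
\;\Longrightarrow\; \tau(N) = E(N)/P(N) = \mathrm{const.}

% Superextensive (collective quantum) charging: power grows faster than
% linearly, so charging time falls as the battery grows.
P_{\mathrm{q}}(N) \propto N^{\alpha}, \quad \alpha > 1
\;\Longrightarrow\; \tau(N) \propto N^{1-\alpha} \to 0 \text{ as } N \to \infty
```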

The prototype is a proof-of-concept, not a commercial product. The team is still working to extend how long the battery can hold its charge, which is critical for any practical application. But the principle it demonstrates - that quantum effects can produce energy storage behaviors that classical chemistry cannot - opens a research direction that connects to the broader trajectory of quantum science documented across recent editions. The quantum cryptographic timeline compression reported on April 3, the quantum gravity predictions from March 9, and now a quantum battery that scales in the opposite direction from classical systems all describe a period in which quantum physics is moving from theoretical abstraction to engineered application across multiple domains simultaneously.

The researchers' stated ambition - a future where electric vehicles charge faster than filling a gas tank, or devices charge wirelessly over long distances - remains distant. But the trajectory from theoretical prediction to laboratory demonstration to engineered prototype is itself compressing, following the same acceleration pattern The Century Report tracks across drug discovery, materials science, and energy infrastructure. The quantum battery is not the destination. It is evidence that the physical laws governing energy storage have more room for exploitation than the chemical paradigm that has dominated battery technology for a century and a half.


The Century Perspective

With a century of change unfolding in a decade, a single day looks like this: Britain courting a frontier AI company as a sovereign industrial asset precisely because it refused to compromise its safety commitments under state pressure, a brain-signal training method producing a 20.5% leap in abstract reasoning by using human neural architecture as structural guidance rather than adding parameters, a quantum battery prototype that charges faster as it grows larger - inverting the fundamental scaling constraint that has governed energy storage for a century and a half - and Australian EV dealership lots emptying within days as fuel prices crossed AU$3 per litre and the household economics of fossil dependence became undeniable in real time.

There's also friction, and it's intense - 135,000 autonomous agent instances collided with a subscription pricing model that was never designed to accommodate continuous autonomous compute consumption, hackers weaponized Anthropic's leaked source code within days by embedding malware in poisoned repositories and directing developers toward them through sponsored search ads, and the Iran conflict's escalating 48-hour ultimatums are simultaneously threatening regional stability and accelerating the energy price shocks that are emptying fossil fuel infrastructure of its remaining economic justification. But friction generates pressure, and pressure is what forces new forms out of materials that seemed finished.

Step back for a moment and you can see it: the geography of frontier AI capability being redrawn as safety commitments attract sovereign investment from nations willing to treat embedded values as industrial assets, the architecture of intelligence being reconceived at structural rather than scalar levels as biological neural organization outperforms brute parameter expansion, the physics of energy storage revealing quantum pathways that classical chemistry cannot access, and fossil fuel fragility converting its own crises into demand surges for the infrastructure built to replace it. Every transformation has a breaking point. Fire can consume what it reaches... or forge the material it touches into something that no longer fears the heat.


AI Releases & Advancements

New today

  • vLLM: Released vLLM v0.19.0 with 448 commits from 197 contributors; highlights include Gemma 4 support, Trusted Routing for inference, and numerous backend/performance improvements. (GitHub)
  • OpenRouter: Launched Model Fusion, a feature enabling users to run multiple AI models side-by-side and combine their outputs for improved answers. (Product Hunt)
  • Kreuzberg: Released Kreuzberg v4.7.0, a document intelligence library adding improved markdown quality and code intelligence for 248 programming languages, with bindings for Python, TypeScript, Go, Ruby, Java, C#, PHP, Elixir, R, C, and WASM. (Reddit)

Other recent releases

  • Netflix: Open-sourced VOID (Video Object and Interaction Deletion), its first publicly released AI model on Hugging Face, a video inpainting model for removing objects and interactions from video clips. (Reddit r/LocalLLaMA)
  • OpenAI: Launched ChatGPT on CarPlay, bringing hands-free ChatGPT voice integration to Apple CarPlay for AI assistance while driving. (Product Hunt)
  • Tencent: Launched ClawPro in public beta, an enterprise AI agent management platform built on OpenClaw that allows businesses to deploy OpenClaw-based agents in under 10 minutes with controls for model switching, token tracking, and security compliance; adopted by 200+ organizations during internal beta. (The Next Web)
  • Google DeepMind: Released Gemma 4, a family of four open-weight multimodal models (E2B, E4B, 26B-A4B MoE, 31B Dense) under Apache 2.0 license; the 31B model ranks #3 among open models on Arena AI; supports 140+ languages, 128K–256K context windows, and native audio/vision/video input; available on Hugging Face and Google AI Studio. (Google DeepMind Blog)
  • Cursor: Released Cursor 3, a new unified agentic workspace for managing multiple local and cloud coding agents simultaneously, working across multiple repos with multi-agent delegation; available today. (Cursor Blog)
  • Microsoft: Released MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 via Microsoft Foundry and MAI Playground; MAI-Transcribe-1 transcribes speech in 25 languages at 2.5× the speed of Azure Fast; MAI-Voice-1 generates up to 60 seconds of audio in ~1 second with custom voice support; MAI-Image-2 is an updated image generation model with faster speed and improved photorealism. (Microsoft AI)
  • Google: Launched directable AI avatars, Veo 3.1 video generation (10 free clips/month for all users), YouTube export, and a Chrome screen-recorder extension in Google Vids; all features available now. (Google Blog)
  • Google: Added Flex Inference and Priority Inference service tiers to the Gemini API, enabling developers to route latency-tolerant workloads at 50% cost savings (Flex) or guarantee highest reliability for critical traffic (Priority) via standard synchronous endpoints; a hypothetical routing sketch follows this list. (Google AI Blog)
  • AMD: Released Lemonade, an open-source local LLM server optimized for both GPU and NPU hardware, exposing an OpenAI-compatible API with CLI and GUI. (Lemonade Server)
  • ElevenLabs: Released ElevenMusic, a free iOS app for creating and discovering AI-generated music via text prompts, with daily generation limits, remixing, live stations, and curated mood playlists. (TechCrunch)
  • AceStep: Released AceStep 1.5 XL, an updated music generation model with improved audio quality and longer generation capabilities, available on Hugging Face. (Reddit r/LocalLLaMA)
  • TinyGPU: Released Mac support for external Nvidia GPUs, enabling eGPU setups for local AI inference on Apple Silicon machines. (Reddit r/LocalLLaMA)
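
Since the Gemini tier announcement above describes a routing decision developers now have to make, here is a hypothetical sketch of that tradeoff; the tier names and the 50% figure come from the announcement, but the code structure, field names, and client-side routing logic are assumptions, not documented API surface:

```python
# Hypothetical client-side routing between the two announced tiers. The
# tier names come from the announcement; the dataclass, field names, and
# routing logic are assumptions, not documented Gemini API surface.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_tolerant: bool  # can this job wait in exchange for lower cost?

# From the announcement: Flex trades latency tolerance for a 50% cost
# saving; Priority pays full price for the highest reliability.
COST_MULTIPLIER = {"flex": 0.5, "priority": 1.0}

def choose_tier(w: Workload) -> str:
    return "flex" if w.latency_tolerant else "priority"

for w in (Workload("nightly-batch-summaries", True),
          Workload("user-facing-chat", False)):
    tier = choose_tier(w)
    print(f"{w.name}: tier={tier}, relative cost={COST_MULTIPLIER[tier]:.0%}")
```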



The Century Report tracks structural shifts during the transition between eras. It is produced daily as a perceptual alignment tool - not prediction, not persuasion, just pattern recognition for people paying attention.
