Seven signals to understand AI this week: model leaks, code engines, personnel management
Anthropic’s overall revenue run rate is estimated to reach $14 billion, with Claude Code’s individual run rate around $2.5 billion.
Author: Tara Tan / StrangeVC
Compiled by: Deep Tide TechFlow
Deep Tide Overview: This week’s report is dense, covering seven independent signals that highlight the most critical trends in the AI industry.
Most notably: a CMS configuration error led Anthropic to accidentally leak details of a new model tier codenamed “Capybara,” positioned above Opus.
Full text as follows:
In the past few months, we have clearly crossed some agentic threshold. A build that took four to six weeks five years ago now takes under five minutes; as recently as six months ago, the same task still took one to two hours plus significant debugging.
This is a significant phase change that we may not yet have fully digested. The collapse of the distance between ideas and runnable products will rewrite the entire industry. This represents a leap in the tools humans use to build, create, and solve problems.
Relatedly, OpenClaw has become noticeably more stable since its acquisition by OpenAI. It has a clear path to becoming one of the most important open-source projects in the AI space.
Moving on to this week’s content.
Anthropic’s Claude Mythos leak reveals new model hierarchy
Anthropic accidentally exposed details of an unreleased model named Claude Mythos due to a CMS configuration error. The leaked draft describes a new “Capybara” tier, positioned above Opus, with significant breakthroughs in programming, reasoning, and cybersecurity capabilities. Anthropic confirmed that it is testing this model with early access customers, calling it a “leap change” and “the most powerful model built to date.” (Fortune, The Decoder)
Why it matters: Beyond the model itself, two points stand out. First, the leaked draft warns that the model’s cybersecurity capabilities “far exceed any other AI model,” a claim that moved cybersecurity stocks within a single trading day. Second, the introduction of a fourth model tier (Capybara, above Opus) suggests Anthropic is building pricing headroom for enterprise customers, not just performance headroom on benchmarks.
Claude Code is becoming Anthropic’s core growth engine
Claude Code currently accounts for about 4% of all public GitHub commits and is expected to exceed 20% by the end of the year. Anthropic’s overall revenue run rate is estimated to reach $14 billion, with Claude Code’s standalone run rate around $2.5 billion. The tool’s user base has expanded from developers to non-technical users, who are learning terminal commands to build projects with it. (SemiAnalysis, Uncover Alpha, VentureBeat)
Why it matters: Claude Code has driven customer acquisition costs to nearly zero through organic developer adoption. Expansion to non-developer roles via Cowork pushes the addressable market far beyond the world’s 28 million professional developers.
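For readers unfamiliar with the term, the revenue figures above are run rates: the latest month’s revenue annualized. A quick sketch of the arithmetic (the monthly figures are back-calculated from the run rates cited above, not separately reported):

```python
def run_rate(monthly_revenue_usd: float) -> float:
    """Annualized revenue run rate: latest monthly revenue extrapolated over 12 months."""
    return monthly_revenue_usd * 12

# Implied monthly revenue behind the cited run rates (illustrative back-calculation):
anthropic_monthly = 14e9 / 12      # ~ $1.17B/month overall
claude_code_monthly = 2.5e9 / 12   # ~ $208M/month for Claude Code

# Claude Code's share of the overall run rate: roughly 18%
share = 2.5e9 / 14e9
```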
Cheng Lou’s Pretext: Text layout without CSS
Cheng Lou, one of the most influential UI engineers of the past decade (React, ReasonML, Midjourney), has released Pretext, a pure-TypeScript text measurement algorithm that bypasses CSS, DOM measurement, and browser reflows entirely. Demos include virtualized rendering of hundreds of thousands of text boxes at 120 frames per second, tightly packed chat bubbles with zero wasted pixels, responsive multi-column magazine layouts, and variable-width ASCII art. (X post)
Why it matters: Text layout and measurement have long been a hidden bottleneck for the next generation of UIs. CSS was designed for static documents, not the fluid, AI-generated, real-time interfaces that are becoming mainstream. If Pretext delivers on its demos, it removes one of the last fundamental constraints on how AI-native interfaces can look and feel.
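For intuition, here is a minimal sketch of what DOM-free text measurement looks like: a per-character advance-width table replaces browser measurement calls, and layout becomes plain arithmetic. The width values and the greedy word-wrap policy below are illustrative assumptions; Pretext’s actual algorithm has not been published in detail.

```python
# Hypothetical advance widths in pixels; real implementations read these
# from font metrics tables rather than hard-coding them.
CHAR_WIDTH = {" ": 4.0}
DEFAULT_WIDTH = 7.2

def measure(text: str) -> float:
    """Width of a text run, summed from per-character advances (no DOM, no reflow)."""
    return sum(CHAR_WIDTH.get(ch, DEFAULT_WIDTH) for ch in text)

def layout(text: str, max_width: float) -> list[str]:
    """Greedy word wrap driven only by the measurement function above."""
    lines: list[str] = []
    current = ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if measure(candidate) <= max_width or not current:
            current = candidate  # still fits (or a single word is unavoidably wide)
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

Because measurement is pure arithmetic over a table, it can run off the main thread, be memoized, or lay out hundreds of thousands of boxes without ever touching the browser’s layout engine — which is the property the demos exploit.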
Arm ships self-developed chips for the first time in 35 years
Arm has released the AGI CPU, a 136-core data center processor based on TSMC’s 3nm process and co-developed with Meta. This marks the company’s first sale of a finished chip rather than licensed IP. OpenAI, Cerebras, and Cloudflare are among the first partners, with bulk shipments expected to begin by the end of the year. (Arm Newsroom, EE Times)
Why it matters: Current AI data centers are predominantly GPU-based. GPUs handle training and running models, while CPUs primarily process data flows and scheduling. However, agentic workloads differ. When thousands of AI agents are running simultaneously, each coordinating tasks, calling APIs, managing memory, and routing data across systems, this orchestration work falls on the CPU. Arm claims this will drive a fourfold increase in CPU demand for every gigawatt of data center capacity. (HPCwire, Futurum Group)
NVIDIA and Emerald AI turn data centers into grid assets
NVIDIA and Emerald AI announced the formation of an alliance with AES, Constellation, Invenergy, NextEra, and Vistra to build a “flexible AI factory” that participates in grid balancing services by adjusting computational loads. The first facility, Aurora, is located in Manassas, Virginia, and is set to open in the first half of 2026. (NVIDIA Newsroom, Axios)
Why it matters: The biggest bottleneck in the expansion of AI infrastructure is not chips, but the timeline for grid access, which takes 3 to 5 years in most regions. Data centers that can demonstrate grid flexibility can gain faster access and face less regulatory resistance. This redefines the energy proposition for AI infrastructure investors: the winning argument is not “more power,” but “smarter power.”
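A toy sketch of the demand-response idea described above: the data center curtails only the flexible share of its load (training, batch jobs) in proportion to a grid stress signal, never the firm share (latency-sensitive inference). The signal format, the thresholds, and the 25% flexible share are hypothetical assumptions, not Emerald AI’s actual interface.

```python
def target_load_mw(capacity_mw: float, grid_stress: float,
                   flexible_fraction: float = 0.25) -> float:
    """Target site load given a grid stress signal in [0, 1].

    Firm load (latency-sensitive inference) is never curtailed; only the
    flexible share (training, batch jobs) scales down with grid stress.
    """
    grid_stress = min(max(grid_stress, 0.0), 1.0)  # clamp the signal
    firm = capacity_mw * (1.0 - flexible_fraction)
    flexible = capacity_mw * flexible_fraction * (1.0 - grid_stress)
    return firm + flexible
```

At zero stress the site runs at full capacity; at peak stress it sheds the entire flexible share — the “flexibility” a grid operator can count on when approving interconnection.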
China restricts Manus AI executives from leaving the country
Chinese authorities have restricted Manus CEO Xiao Hong and Chief Scientist Ji Yichao from leaving the country following Meta’s $2 billion acquisition of the Singapore-registered AI startup. The National Development and Reform Commission summoned the two executives to Beijing this month and imposed travel restrictions during the regulatory review period. (Reuters, Washington Post)
Why it matters: This is not a trade restriction, but a personnel restriction. China may be sending a signal that AI talent with mainland backgrounds is a controlled asset, regardless of where the company is registered.
A 400 billion parameter large model runs locally on the iPhone 17 Pro
An open-source project called Flash-MoE demonstrated a 400-billion-parameter mixture-of-experts model running entirely on-device on the iPhone 17 Pro’s A19 Pro chip, streaming weights from SSD to GPU. The model (Qwen 3.5-397B, 2-bit quantization, 17 billion active parameters) runs at 0.6 tokens per second while leaving 5.5GB of RAM free. (WCCFTech, TweakTown, Hacker News)
Why it matters: This is a proof of concept, not a product. The 400 billion parameter model can run on a phone with 12GB of memory because only a small portion of the model is active at any given time (mixture of experts), while the rest is streamed from the phone’s built-in SSD rather than resident in memory. However, applying the same technique to much smaller models—such as 7 billion or 14 billion parameters—on next-generation storage and faster mobile chips could yield truly usable, conversational-speed AI running entirely on-device without the cloud.
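The memory arithmetic behind the demo is easy to check: at 2-bit quantization, only the active expert weights need to be resident per token, and the full model streams from storage. Parameter counts and bit width are taken from the figures above; the byte conversion is the only addition.

```python
def weight_bytes(params: float, bits_per_param: float) -> float:
    """Storage footprint of a weight tensor at a given quantization width."""
    return params * bits_per_param / 8

# Full model parked on SSD: 397B params at 2 bits -> ~99 GB
total_gb = weight_bytes(397e9, 2) / 1e9

# Hot working set per token: 17B active params at 2 bits -> ~4.25 GB
active_gb = weight_bytes(17e9, 2) / 1e9
```

The ~4.25 GB active working set is what makes a 12 GB phone viable, while the ~99 GB total never needs to fit in RAM at once.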
AI agents autonomously complete a full set of particle physics experiments
MIT researchers have published a framework called JFC (Just Furnish Context), demonstrating that an LLM agent built on Claude Code can autonomously execute a complete high-energy physics analysis pipeline: event selection, background estimation, uncertainty quantification, statistical inference, and paper writing. The system operates on open data from the ALEPH, DELPHI, and CMS detectors. (arXiv 2603.20179)
Why it matters: This is one of the clearest demonstrations of agentic AI’s ability to automate end-to-end scientific workflows in fields with high methodological rigor. The direct investment implications point towards re-analyzing legacy datasets in fields such as physics, genomics, and materials science—decades of archived data that remain underutilized.
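The pipeline can be pictured as a chain of stages that each enrich a shared context object — the “just furnish context” idea in miniature. The stage internals below are placeholders (the real system drives a Claude Code agent against ALEPH, DELPHI, and CMS open data); only the stage names come from the paper.

```python
from typing import Callable

# A stage consumes the accumulated context and returns it enriched.
Stage = Callable[[dict], dict]

def event_selection(ctx: dict) -> dict:
    # Placeholder cut: keep events above an energy threshold.
    ctx["selected_events"] = [e for e in ctx["events"] if e["energy"] > ctx["threshold"]]
    return ctx

def background_estimation(ctx: dict) -> dict:
    # Placeholder estimate proportional to the selected sample size.
    ctx["background"] = 0.1 * len(ctx["selected_events"])
    return ctx

def run_pipeline(ctx: dict, stages: list[Stage]) -> dict:
    for stage in stages:
        ctx = stage(ctx)  # later stages see everything earlier stages furnished
    return ctx
```

The full sequence in the paper — event selection, background estimation, uncertainty quantification, statistical inference, paper writing — would simply be a longer stage list, with the agent deciding what each stage does from the context it is handed.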