Concentrated Intelligence: New Bonsai AI Model Family Enables High-Performance AI Beyond The Data Center
In Brief
PrismML emerged from stealth and launched Bonsai, a tiny open-source AI model that delivers strong intelligence for its size and runs on consumer hardware.
PrismML, which emerged from research conducted at Caltech, said its work focuses on maximizing “intelligence density,” a measure of the useful capability a model can deliver per unit of size and deployment footprint. This approach contrasts with traditional AI development, which typically emphasizes increasing model size and parameter count at the cost of deployability and efficiency.
The lab’s flagship model, 1-bit Bonsai 8B, features a full 1-bit design across all components, including embeddings, attention layers, MLP layers, and the output head, with no higher-precision fallback layers. At 1.15 GB, the model is approximately 14 times smaller than comparable 16-bit models in the same parameter class, yet PrismML reports that it maintains competitive performance across standard benchmarks. The reduced size enables deployment on devices such as iPhones, iPads, and Macs, as well as standard GPUs, delivering faster inference and lower memory usage than traditional large-scale models.
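The size figures above can be sanity-checked with back-of-envelope arithmetic. This is a sketch using the article's reported numbers (8B parameters, a 16-bit baseline, a 1.15 GB on-disk size); the exact bits-per-parameter overhead of the real model is an assumption.

```python
def model_size_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate model size in GB (decimal: 1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

# A 16-bit model with 8 billion parameters:
fp16_size = model_size_gb(8e9, 16)    # 16.0 GB
# The same parameter count at exactly 1 bit per parameter:
one_bit_size = model_size_gb(8e9, 1)  # 1.0 GB

print(f"16-bit baseline: {fp16_size:.1f} GB")
print(f"ideal 1-bit:     {one_bit_size:.1f} GB")
# The reported 1.15 GB is slightly above the 1-bit ideal, which is
# consistent with a small overhead (scales, metadata, tokenizer, etc.):
print(f"reported ratio:  {fp16_size / 1.15:.1f}x")  # ~13.9x, i.e. ~14x
```

The ~14× figure in the article follows directly: 16 GB / 1.15 GB ≈ 13.9, so the reported size implies only a modest per-parameter overhead beyond a pure 1-bit encoding.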
PrismML emphasizes that the breakthrough is not only about performance but also about where AI can operate. Smaller, efficient models allow for lower-latency applications, enhanced privacy through on-device computation, and continued functionality in offline or bandwidth-constrained environments.
Potential applications include persistent on-device agents, real-time robotics, enterprise copilots, and AI-native tools designed for secure or resource-limited settings. PrismML argues that concentrated intelligence expands the design space for AI, making systems more responsive, reliable, and broadly deployable.
Expanding Bonsai: Smaller 1-Bit Models Extend Efficiency And Intelligence To Edge Devices
In addition to Bonsai 8B, PrismML has introduced smaller models, 1-bit Bonsai 4B and 1.7B, which extend the same efficiency and intelligence density principles to reduced model sizes. Early demonstrations show high throughput, energy efficiency, and competitive benchmark accuracy across the family. The lab also noted that the models run effectively on current commercial hardware and that future devices optimized for 1-bit inference could deliver even greater efficiency gains.
PrismML’s release represents a broader shift in AI development, emphasizing concentrated intelligence and portability over sheer scale. The lab envisions a future in which advanced AI operates seamlessly across cloud and edge devices, making intelligent systems accessible wherever they are needed. The 1-bit Bonsai models are available under the Apache 2.0 license, supporting deployment across Apple devices, NVIDIA GPUs, and a range of other platforms.