Headline
Ethan Mollick on Superintelligence: If it can’t find things that humans can’t, then it’s not superintelligence.
Summary
Wharton School professor Ethan Mollick responded to a common criticism of artificial superintelligence (ASI): that ASI cannot find things that humans or algorithms miss. His reply is straightforward: the ability to find what humans cannot is precisely the definition of superintelligence. If a system is not cognitively superior to humans, it should not be called superintelligent.
This clarification addresses the confusion between AGI and ASI. AGI refers to general intelligence comparable to human levels, while ASI surpasses human capabilities. This distinction frequently arises in AI safety discussions because the risks associated with “as smart as humans” and “smarter in every aspect” are fundamentally different.
Analysis
Mollick leads the Generative AI Labs at Wharton and published "Co-Intelligence" in 2024; he has long focused on this field. He points out that the public often misuses cutting-edge AI concepts.
The discussions around AGI and ASI thus differ structurally.
The jump from AGI to ASI represents both a leap in capability and a watershed moment for risk. Some believe this is the point where AI surpasses human problem-setting ability; others worry that, by then, humans may no longer understand or control the goals and actions of such systems.
Related tweets appear to respond to a specific discussion about trading algorithms (the surrounding context is incomplete), but the core point is unchanged.
Impact Assessment
Conclusion: This discussion is still at the early stage of clarifying concepts and building consensus. It is most valuable for researchers and policy teams, who need to define capability assumptions and safety requirements. Traders and long-term investors, for now, are mainly observing and extrapolating, waiting for clearer technological and regulatory signals.