Ethan Mollick on Superintelligence: If it can't find things that humans can't, then it's not superintelligence.


Summary

Wharton School professor Ethan Mollick responded to a common objection to superintelligence (ASI): the claim that ASI, too, cannot find things that humans or algorithms miss. His reply is straightforward: finding what humans cannot is precisely the definition of superintelligence. A system that is not cognitively superior to humans should not be called superintelligent in the first place.

This clarification addresses a common confusion between AGI and ASI. AGI (artificial general intelligence) refers to general intelligence comparable to human level, while ASI (artificial superintelligence) surpasses human capabilities. The distinction recurs in AI safety discussions because the risks posed by a system that is "as smart as humans" and one that is "smarter in every respect" are fundamentally different.

Analysis

Mollick, who leads the Generative AI Labs at Wharton and published "Co-Intelligence" in 2024, has long focused on this field. He points out that the public often misapplies cutting-edge AI concepts:

  • Many people who object that "AI can't find what humans miss" are actually describing current systems, which are at best approaching AGI.
  • ASI, by definition, outperforms humans at all cognitive tasks — which of course includes making new discoveries and finding unconventional solutions.

Discussions of AGI and ASI therefore differ structurally:

  • AGI is more likely to collaborate with human researchers or traders, augmenting existing workflows.
  • ASI, in theory, could advance problems without human involvement, even changing the way we frame questions.

This marks both a leap in capability and a watershed for risk. Some see it as the point where AI surpasses human problem-setting ability; others worry that by then humans may no longer understand or control the goals and actions of such systems.

The related posts appear to respond to a specific discussion about trading algorithms (the full context is incomplete), but the core point stands:

  • If you doubt that superintelligence can outperform specialized AI and human experts, you are questioning the concept of ASI itself, not a specific assertion.

Impact Assessment

  • Importance: High
  • Category: Technical Insight, AI Research, AI Safety

Conclusion: This discussion is still at the early stage of clarifying concepts and building consensus. It is most valuable for researchers and policy teams, who need to pin down capability assumptions and safety requirements. Traders and long-term investors, for now, are mainly observing and extrapolating while they wait for clearer technological and regulatory signals.
