Ethan Mollick proposes using synthetic data to simulate Victorian scientists searching for the luminiferous ether
Summary
Core idea: fill gaps in the historical record with synthetic data, then run counterfactual historical simulations. Ethan Mollick (Wharton School, researching how AI changes work) floated a concept on social media: use AI-generated synthetic data to construct "missing historical records," allowing models to simulate counterfactual historical scenarios. His example is an AI agent playing a Victorian scientist still in pursuit of the "luminiferous ether," a technique reminiscent of how Neal Stephenson weaves real history together with fictional narrative.
The background issue is simple: some historical periods have sparse documentary records. If synthetic data can reasonably fill in these gaps, AI can simulate paths that did not occur but “could have occurred.” For the AI industry, this points to education and the humanities—fields where generative models are still seeking practical applications.
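The post itself contains no code; as one hedged illustration of what "simulating a counterfactual worldview" might look like in practice, the sketch below composes a system prompt that confines a chat model to a pre-1887 scientific frame. The function name, cutoff year, and prior statements are all assumptions for illustration, not anything Mollick specified.

```python
# Hedged sketch: framing the counterfactual-historian idea as a persona prompt.
# All names and wording here are illustrative assumptions, not from the post.

def build_persona_prompt(year_cutoff: int, priors: list[str]) -> str:
    """Compose a system prompt that confines the model to a historical worldview."""
    prior_lines = "\n".join(f"- {p}" for p in priors)
    return (
        f"You are a natural philosopher writing in {year_cutoff}. "
        f"You know nothing published after {year_cutoff}.\n"
        "Treat the following as established facts of your era:\n"
        f"{prior_lines}\n"
        "Reason strictly within this framework; never reference later physics."
    )

prompt = build_persona_prompt(
    1886,
    [
        "Light propagates as a wave in the luminiferous ether.",
        "The ether pervades all space and is at absolute rest.",
    ],
)
print(prompt)
```

The prompt string would then be passed to whatever model API the builder uses; the interesting design question is how tightly the priors can actually constrain the model's reasoning.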
Analysis
In his book "Co-Intelligence," Mollick positions AI as a collaborative tool. This new concept extends that idea into modeling history and the history of science:
The luminiferous ether is a well-chosen example. The mainstream 19th-century hypothesis held that light needed an invisible medium to propagate through space. The 1887 Michelson-Morley experiment found no evidence of motion through such a medium, undermining the hypothesis and clearing the way for Einstein's relativity. If an AI explored physical problems under the pre-1887 prior that "the ether definitely exists," it could retrace the chain of scientific reasoning before the paradigm shift, rather than dismissing the question from a modern vantage point.
Referencing Stephenson is not arbitrary. His "Baroque Cycle" trilogy folds historical figures like Newton and Leibniz into fictional narratives of science, cryptography, and finance. Mollick's vision is similar: have AI generate plausible trajectories of events that "could have happened but did not," then examine how those trajectories diverge from actual history.
The risks are equally evident: synthetic data can introduce hallucinations and bias. If the historical context being "filled in" is itself fictional, how does one distinguish useful inference from misleading fabrication? There are currently no recognized standards for validating and labeling such data, an obstacle that must be cleared before practical implementation.
Related Developments
Broader developments in applying AI to the humanities align with Mollick's idea: using AI to expand our capacity to understand the past (and its potential branches).
Bottom line: this is an early, methodology-driven narrative. For now it is best suited to builders willing to refine data and evaluation pipelines, such as edtech and digital humanities tool developers and research institutions. There is little here for anyone seeking short-term monetization; long-term observers can track it with a small position, but the real advantage will belong to the builders establishing standards and products.