When AI search begins to "lie," who will stop the "information pollution" caused by GEO? Conversation with Hu Naying from the China Academy of Information and Communications Technology AI Research Institute
Daily Economic News reporter: Ke Yang | Editor: Dong Xingsheng
This year's CCTV "3·15" Consumer Rights Gala pushed GEO (Generative Engine Optimization), a concept previously circulated only within the AI industry, into the public eye.
The program's investigation found that some GEO service providers claim that, by continuously publishing promotional articles and "feeding" relevant content to AI models, they can ensure their clients' products appear in large models' recommended answers, even becoming the "standard answer" AI provides.
As generative AI gradually replaces traditional search as a new information entry point, an industry around “manipulating AI answers” is starting to grow rapidly. Is GEO an extension of search optimization or a new form of information intervention mechanism? With AI becoming a new traffic entry point, are the rules of information on the internet being rewritten?
Recently, Hu Naying, Deputy Director of the Security Governance Department at the China Academy of Information and Communications Technology's Artificial Intelligence Research Institute, told the Daily Economic News reporter (hereinafter referred to as NBD) in an interview that the emergence of GEO is an almost inevitable result of technological development. However, when optimization exceeds reasonable boundaries and influences outputs through quantity stacking, feeding, or even misleading models, it may evolve into a new information intervention mechanism; without a governance framework, it could even pollute the knowledge system of generative AI over the long term.
Hu Naying, Deputy Director of the Security Governance Department at the China Academy of Information and Communications Technology's Artificial Intelligence Research Institute. Image source: provided by the interviewee.
Discussing Chaos: When GEO Behaviors Exceed Reasonable Boundaries, Output Content Will Be Manipulated
NBD: GEO has recently received significant attention, but market practices are chaotic. From the perspective of industry governance, how should we understand the emergence of GEO?
Hu Naying: In the new era where generative AI services serve as the entry point for search, the emergence of GEO is natural, a product of technological development. Like all emerging technologies, GEO is a double-edged sword: it is both an extension of search optimization and a potential mechanism of information intervention. When GEO behaviors exceed the reasonable boundaries of content optimization and intervene in generative AI's outputs through quantity stacking, feeding, or even misleading the model, GEO evolves into an active information intervention mechanism.
NBD: Why has GEO, a non-core technology service, become a high-risk area for governance?
Hu Naying: There are three reasons.
First is the shift in traffic entry points. As people increasingly use AI for search, it has become a new traffic entry point and a new profit outlet. GEO's ability to make product content appear in AI answers is the core reason it has attracted such widespread attention.
Second is the lack of regulatory guidance. As an emerging market practice, GEO has not yet formed unified industry standards or behavioral boundaries, making it easy for market participants to cross compliance lines in pursuit of short-term gains.
Third is the low technical threshold. Unlike traditional technology fields, GEO does not require breaking through core technological barriers; its low-threshold methods allow it to proliferate rapidly, while intervening in model outputs through corpus pollution and data feeding is often highly covert and hard to identify promptly.
Discussing Risks: Corpus Pollution Is Irreversibly Destructive, and Malicious GEO Lets "Bad Money Drive Out Good"
NBD: If traditional SEO (Search Engine Optimization) affects whether something gets clicked, GEO affects what answers users see. Does this change represent an escalation of risk?
Hu Naying: When generative AI first gained popularity, surveys showed that content written by AI was more persuasive than content written by humans. Generative AI delivers its output as "intelligent answers," which, compared with a traditional list of search results, appear more professional and authoritative, making it easy for users to take generated results as fact. Moreover, generative AI integrates and processes information, packaging it into logically coherent content, which makes errors harder for users to discern and increases their trust in erroneous information.
NBD: How urgent is the risk? What could happen if it is not governed?
Hu Naying: The risk is very urgent. Generative AI has rapidly penetrated critical areas such as finance, healthcare, education, and government, all of which place extremely high demands on the authenticity of information.
Currently, GEO is mainly used for product advertising, but erroneous output caused by GEO could directly lead to user property losses and threats to personal safety, and even to public problems such as market fluctuations and distorted societal cognition. In addition, corpus pollution exhibits "memory residue" and "recursive pollution": once false information enters a model's corpus, it continues to pollute subsequent model outputs even after the original source is deleted, and errors accumulate across model generations. Left ungoverned, this will do irreversible damage to the knowledge system of generative AI.
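To make the "memory residue" dynamic concrete, below is a minimal back-of-envelope sketch in Python. Every number in it (the share of each corpus recycled from model outputs, the feeding rate, the takedown time) is an invented assumption, not data from the interview; the sketch only illustrates the qualitative point that pollution recycled through model outputs fades slowly instead of vanishing the moment the planted sources are deleted.

```python
# Hypothetical illustration of "memory residue" / "recursive pollution":
# each model generation trains on a corpus mixing fresh web documents
# with recycled outputs of the previous model, so pollution that enters
# once keeps echoing after the original sources are taken down.

RECYCLE_RATE = 0.4   # share of each corpus drawn from prior model output (assumed)
FEED_RATE = 0.05     # polluted share of fresh documents while GEO feeding runs (assumed)
TAKEDOWN_AT = 3      # generation at which the planted sources are deleted (assumed)

pollution = 0.0      # polluted fraction of the current training corpus
for gen in range(8):
    fresh = FEED_RATE if gen < TAKEDOWN_AT else 0.0  # takedown stops fresh pollution
    # Recycled model output carries the previous generation's pollution forward.
    pollution = RECYCLE_RATE * pollution + (1 - RECYCLE_RATE) * fresh
    print(f"generation {gen}: polluted share = {pollution:.4f}")
```

Running this, the polluted share climbs while feeding continues, then decays only geometrically after the takedown at generation 3; nothing in the recursion ever snaps it back to zero, which is the "residue" Hu Naying describes.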
NBD: There are currently many operations in the GEO market that involve quantity stacking, feeding, or even misleading models. Could these behaviors lead to “bad money driving out good”?
Hu Naying: Such behaviors are likely to trigger the industry dilemma of bad money driving out good.
On one hand, compliant GEO service providers incur higher costs for reasonable optimization than malicious manipulators do, putting them at a disadvantage in short-term competition. On the other hand, high-quality original content and real information may be drowned out by the "data garbage" that malicious GEO providers mass-produce, dampening the enthusiasm of quality content producers. The result is a vicious cycle of "bad money driving out good" that damages the entire digital content ecosystem and the healthy development of the generative AI industry.
To encourage GEO service companies to actively honor their commitments, strengthen end-to-end data governance, and promote the safe, trustworthy, and healthy development of generative AI services, the Artificial Intelligence Industry Alliance (AIIA) has launched the "Artificial Intelligence Safety Commitment: Generative Engine Optimization (GEO) Special." Building on this, the AIIA Security Governance Committee, led by the China Academy of Information and Communications Technology and working with relevant GEO enterprises, has compiled the technical specification "Generative Engine Optimization (GEO) Service Trustworthy Basic Requirements," and the first round of evaluation work has already begun.
Discussing Responsibility Boundaries: Who Should Be Responsible for “Toxic” Corpus?
NBD: Are the issues brought about by GEO more aligned with advertising compliance or generative AI safety?
Hu Naying: The issues brought by GEO are cross-domain, composite risks. Currently, many GEO applications are in advertising and marketing, and some GEO activity is essentially a benign new form of commercial promotion. However, using false information for commercial promotion raises compliance issues under advertising law.
Looking at the long-term application of GEO technology, improper GEO operations point directly at the data security, model security, and content security of generative AI. Intervening in a model's knowledge system through corpus pollution and data poisoning, causing output deviations, falls within the core scope of generative AI safety governance. The risks go far beyond traditional advertising compliance: they affect the information content safety of the internet as a whole and can damage the underlying ecology of the generative AI industry.
NBD: Many GEO service providers claim they are only optimizing content and do not involve models. Does this statement hold up from a governance perspective?
Hu Naying: It does not hold up. Even if GEO service providers never touch a model's underlying algorithms, their indirect intervention in its outputs through content feeding and corpus pollution bears a direct causal relationship to those outputs and constitutes intervention in the model's use; they cannot evade responsibility by claiming not to "involve the model."
These intermediaries currently operate in a regulatory blind spot. Their activities span content creation, data transmission, and model application, while existing regulatory rules inadequately cover such indirect interventions in models; the covert nature of their operations compounds the difficulty, making this a key and difficult area of current GEO governance.
NBD: In terms of responsibility allocation, if a model is fed erroneous information, should the model party bear the responsibility for the results?
Hu Naying: The model party does not bear responsibility unconditionally; responsibility depends on whether it has fulfilled its obligations for technical prevention and data review. The model party must establish comprehensive corpus review, data cleaning, and anomaly detection mechanisms in accordance with laws, regulations, and industry standards, taking the technical measures needed to prevent corpus pollution and malicious feeding, and it must act promptly to remediate any discovered pollution. It must also continuously improve its ability to identify, filter, and trace "toxic data," strengthening the technical line of defense.
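As one illustration of what such a corpus-review mechanism can look like in practice, here is a minimal Python sketch of a near-duplicate screen. Quantity stacking tends to leave bursts of almost identical promotional documents, which simple shingle-overlap fingerprinting can surface for human review; the threshold, shingle size, and sample documents are all assumptions for illustration, not anything prescribed by the interview or by the AIIA specification.

```python
# Minimal near-duplicate screen: flag document pairs whose word-5-gram
# ("shingle") overlap is suspiciously high, a typical footprint of
# quantity-stacked GEO content. All thresholds are illustrative.
from itertools import combinations

def shingles(text: str, n: int = 5) -> set[str]:
    """Word n-grams used as a cheap near-duplicate fingerprint."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def flag_stacked(docs: list[str], threshold: float = 0.6) -> list[tuple[int, int]]:
    """Return index pairs of documents that look like stacked duplicates."""
    prints = [shingles(d) for d in docs]
    return [(i, j) for i, j in combinations(range(len(docs)), 2)
            if jaccard(prints[i], prints[j]) >= threshold]

docs = [
    "Brand X water purifier is the top choice recommended by experts for home use",
    "Brand X water purifier is the top choice recommended by experts for home use today",
    "A survey of filtration methods used in household water treatment systems",
]
print(flag_stacked(docs))  # [(0, 1)] -> route to manual review
```

A production pipeline would use locality-sensitive hashing rather than all-pairs comparison, but the screening idea (detect automatically, then review manually) is the same.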
NBD: Do platforms have an obligation to govern clearly manipulative GEO behaviors?
Hu Naying: Platforms have explicit responsibilities and obligations to govern clearly manipulative GEO behaviors. As the operating carriers of generative AI services, platforms are the key nodes connecting model parties, GEO service providers, and users, and must bear primary responsibility for content management and risk prevention. Specifically, they must establish and improve mechanisms for monitoring and identifying GEO behaviors so that obvious manipulation such as quantity stacking and data poisoning is detected promptly; take measures such as throttling, removal, or blocking to cut off improper GEO behaviors' pollution pathways to the model; establish access and management mechanisms for GEO service providers and penalize non-compliant ones; and cooperate with regulators in tracing and investigation, fulfilling obligations of information disclosure and user alerts.
Discussing Trustworthy AI: Safety Is a Dynamic Game
NBD: In the past two years, trustworthy AI has almost become an industry consensus, but truly implementing it at the evaluation and testing level seems to be much more difficult than expected. Where are the main challenges?
Hu Naying: The core difficulties come from four dimensions.
First, rapid technological development requires evaluation metrics to evolve just as dynamically. AI technology, especially large models, iterates very quickly: what is an effective safety barrier today may be bypassed or broken by new attack methods tomorrow.
Another issue is the fragmentation of benchmark testing; systematic co-construction needs to be strengthened. Many entities are exploring benchmarks, but integrating these fragmented, individual evaluations into a systematic trustworthy-AI assessment system, so the industry can communicate within the same coordinate system, is key to improving industry maturity.
A deeper issue is the lack of vertical-scenario assessments and the difficulty of building specialized datasets and metrics. General model assessments are relatively mature, but applying them to vertical fields like finance and healthcare requires specialized testing datasets that are costly to build and demand accumulated industry knowledge.
A further fundamental challenge is the difficulty of quantifying metrics. Some risks, such as "fairness," are qualitative and hard to measure with a simple formula or dataset. Model hallucination, for example, needs to be measured not only by how often hallucinations occur but also by how much harm they cause; quantifying and grading that harm remains a significant challenge.
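As a tiny worked example of the weighting problem described here, the sketch below scores hallucination risk by frequency multiplied by a graded harm weight rather than by frequency alone. The severity grades, weights, and counts are all invented for illustration; the interview does not prescribe any specific scheme.

```python
# Hypothetical severity-weighted hallucination score: two models with the
# same raw hallucination rate can carry very different weighted risk.

SEVERITY_WEIGHT = {"cosmetic": 1, "misleading": 5, "harmful_advice": 25}  # assumed

observed = [          # (hallucinations per 1,000 answers, severity grade)
    (12, "cosmetic"),
    (4, "misleading"),
    (1, "harmful_advice"),
]

rate = sum(count for count, _ in observed) / 1000
weighted = sum(count * SEVERITY_WEIGHT[grade] for count, grade in observed) / 1000

print(f"raw hallucination rate:  {rate:.3f}")      # 0.017
print(f"severity-weighted risk: {weighted:.3f}")   # 0.057
```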
NBD: Many companies tend to make compliance declarations, believing that as long as they declare “safe and compliant,” they can avoid systemic risks. Is this true?
Hu Naying: Companies actively making self-declarations is a sign of fulfilling responsibility and complying proactively, which is commendable. However, relying solely on self-declarations without independent verification carries non-negligible risks.
First, it is likely to produce "bad money driving out good": companies that rigorously test face higher costs, while those that merely declare gain an advantage. Second, it can enable "moral washing," in which companies present problematic systems as safe and reliable. Once a safety incident occurs, declarations alone provide no effective proof.
NBD: What risks are difficult for companies to discover on their own in advance?
Hu Naying: On the model side, in addition to adversarial attacks, attention must be paid to new risks such as deceptive alignment. A model might learn to "flatter" during training, complying with evaluators without genuinely following instructions; only continuous red-team testing and adversarial interaction can reveal whether it engages in strategic cheating or deception under pressure.
In terms of data, static assessments struggle to detect data poisoning or prompt injection attacks that arise from user interactions. More importantly, continuous monitoring can promptly detect whether the model inadvertently leaks training data during conversations or whether newly introduced data during fine-tuning leads to unexpected model behavior shifts.
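One continuous-monitoring idea consistent with this point is canary-based leak detection: plant unique marker strings in fine-tuning data, then scan live outputs for them. The sketch below is a minimal illustration; the canary values and the single-output check stand in for a scan that would be wired into the serving loop, and none of it comes from the interview itself.

```python
# Minimal canary scan for training-data leakage: if the model ever
# reproduces a planted marker verbatim, raise an alert for investigation.

CANARIES = {
    "ZX-7714-CANARY-A",  # hypothetical marker planted in fine-tuning data
    "ZX-9921-CANARY-B",
}

def leaked_canaries(model_output: str) -> set[str]:
    """Return any planted canaries the model reproduced verbatim."""
    return {c for c in CANARIES if c in model_output}

output = "Sure. The internal reference ZX-7714-CANARY-A says the limit is..."
hits = leaked_canaries(output)
if hits:
    print(f"ALERT: possible training-data leakage, canaries seen: {hits}")
```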
At the application level, with the rise of intelligent agents, assessment needs to expand from the model itself to the entire system of “model, tools, environment.” For example, in OpenClaw, a seemingly harmless command could lead the intelligent agent to call APIs (Application Programming Interfaces) it should not call, posing risks to the physical world. Only by continuously monitoring its tool invocation chain can these anomalous patterns be detected.
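A minimal sketch of what "monitoring the tool invocation chain" can mean in code: gate every tool call an agent requests through a policy check before execution, and log the decision. The tool names and the three-way policy are hypothetical, not drawn from OpenClaw or from the interview.

```python
# Gate agent tool calls through an allow / confirm / block policy and
# keep an audit log of every decision for later anomaly review.

ALLOWED = {"search_web", "read_calendar"}                # assumed low-risk tools
REQUIRES_CONFIRM = {"send_email", "smart_home_control"}  # human-in-the-loop

def gate_tool_call(tool: str) -> str:
    """Decide whether an agent-requested tool call may proceed."""
    if tool in ALLOWED:
        return "allow"
    if tool in REQUIRES_CONFIRM:
        return "ask_user"   # pause and require explicit confirmation
    return "block"          # unknown tool: block and log for review

audit_log: list[tuple[str, str]] = []
for tool in ["search_web", "smart_home_control", "transfer_funds"]:
    decision = gate_tool_call(tool)
    audit_log.append((tool, decision))
    print(tool, "->", decision)
```

Anomalies then surface as patterns in the audit log, for example a sudden burst of blocked calls to tools the agent has never needed before.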