People need a capitalist villain, so AI manufactured a food delivery rumor.
Written by: Curry, Deep Tide TechFlow
Last week, there was a pretty surreal incident.
Top executives at two major US food delivery giants, one worth $2.7 billion and the other running the world’s largest ride-hailing platform, stayed up past midnight on a Saturday writing public posts to clear their companies’ names.
The incident started with an anonymous post on Reddit.
The poster claimed to be a backend engineer at a large food delivery platform who, while drunk, had gone to a library to leak the information over public WiFi.
The post went roughly like this:
The company analyzes its drivers’ financial situations and assigns each one a “Despair Score”: the more desperate a driver is for money, the less likely they are to get good orders. The “priority delivery” option for food orders is fake; it works by deliberately delaying regular orders. And the “driver welfare fees” are never paid out to drivers at all; they fund lobbying in Congress against unionization…
The post ended very convincingly: “I am drunk, I am angry, so I am exposing this.”
It cast the poster as a whistleblower revealing that a “big company uses algorithms to exploit drivers.”
Within three days, the post racked up 87,000 upvotes and topped Reddit’s front page. Screenshots of it posted to X drew 36 million views.
The post never named the company, but with only a handful of major players in the US food delivery market, everyone started guessing which one it was.
DoorDash CEO Tony Xu couldn’t stay silent: he tweeted that this wasn’t DoorDash’s doing, and that anyone who dared do such a thing would be fired. Uber’s COO responded as well: “Don’t believe everything you see online.”
DoorDash even published a five-point statement on its official website rebutting the leak point by point. Two companies with a combined market value of over $80 billion were caught in a midnight PR scramble over an anonymous post.
Then, it was revealed that the post was actually AI-generated.
The person who exposed it was Casey Newton, a reporter at the US tech publication Platformer.
Newton contacted the poster, who sent over an 18-page “internal technical document” with an academic-sounding title: “AllocNet-T: High-Dimensional Temporal Supply State Modeling.” Every page was watermarked “Confidential” and attributed to Uber’s “Market Dynamics Group - Behavioral Economics Department.”
The document laid out how the “Despair Score” model from the Reddit post was supposedly computed, complete with architecture diagrams, mathematical formulas, and data flow charts…
(Screenshot of the fake paper; it looks very real at first glance)
Newton said the document fooled him at first. After all, who would go to the trouble of forging an 18-page technical document just to bait a reporter?
But now, things are different.
An 18-page document like this can be generated by AI in minutes.
The leaker also sent Newton a blurred photo of an Uber employee badge, as proof that they really worked there.
Out of curiosity, Newton ran the badge photo through Google Gemini, which determined the image was AI-generated.
It was detectable because Google embeds an invisible watermark called SynthID into content produced by its own AI models: imperceptible to the naked eye, but readable by machines.
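SynthID’s exact scheme isn’t public, but the core asymmetry, invisible to the eye yet trivially machine-readable, is easy to sketch. Here is a minimal toy in Python; it is not Google’s algorithm, just a classic least-significant-bit trick with a made-up payload, hiding a message in pixel values without visibly changing the image:

```python
# Toy illustration of an invisible, machine-readable watermark.
# NOT Google's SynthID (proprietary, embedded during generation, and far
# more robust); just a least-significant-bit (LSB) sketch of the same idea.

WATERMARK = "AI-GENERATED"  # hypothetical payload, chosen for illustration

def embed(pixels: list[int], message: str) -> list[int]:
    """Hide each bit of the message in the lowest bit of one pixel value."""
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this payload")
    marked = pixels.copy()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # changes each value by at most 1
    return marked

def detect(pixels: list[int], length: int) -> str:
    """Read the hidden bits back out and decode them into text."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(str(b) for b in bits[i : i + 8]), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode(errors="replace")

# A flat gray "image": 256 brightness values in 0..255.
image = [128] * 256
marked = embed(image, WATERMARK)

# The eye can't perceive a brightness shift of 1, but a detector that knows
# where to look recovers the payload instantly.
print(max(abs(a - b) for a, b in zip(image, marked)))  # -> 1
print(detect(marked, len(WATERMARK)))                  # -> AI-GENERATED
```

A real generation-time watermark like SynthID is designed to survive compression, cropping, and re-encoding, which this toy would not; the point is only the asymmetry: a human sees nothing, while the right detector reads the mark immediately.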
Even more absurd: the badge bore an “Uber Eats” logo.
An Uber spokesperson confirmed: “We do not issue Uber Eats employee badges; all badges carry only the Uber name.”
Clearly, this fake whistleblower didn’t even understand the company they were trying to smear. And when Newton asked for a LinkedIn or other social media account for further verification, the leaker simply deleted their account and vanished.
What we actually want to discuss here isn’t AI’s ability to fake things; that’s nothing new.
What we really want to ask is: why are millions of people willing to believe an anonymous leak post?
DoorDash was sued for using customer tips to offset drivers’ base pay and paid $16.75 million to settle. Uber built a tool called Greyball to evade regulators. These things really happened.
It’s easy to quietly internalize the judgment that these platforms are bad actors, and that judgment isn’t unfounded.
So when someone says “food delivery platforms exploit drivers with algorithms,” people’s first reaction isn’t “Is this true?” but “Of course.”
Fake news can spread because it looks like what everyone already believes.
What AI does is drive the cost of making something look that way down to nearly zero.
There’s another detail in this story.
Catching the forgery depended on Google’s watermark detection: the same company that builds the AI also builds the tools for detecting it.
But SynthID can only detect content generated by Google’s own models. The fake was caught this time because the forger happened to use Gemini. With a different model, they might have gotten away with it.
So this case isn’t so much a technical victory as it is something simpler:
the forger made a basic mistake.
An earlier Reuters survey found that 59% of people worry they can’t tell what’s real from what’s fake online.
The CEOs’ clarification tweets were seen by hundreds of thousands of people, but how many of them actually believed the denials, rather than writing them off as PR spin or outright lies? The fake post has been deleted, yet the comment sections are still full of criticism of the food delivery platforms.
The lie has traveled halfway around the world, while the truth is still tying its shoelaces.
Think about it: if this post had been about Meituan or Ele.me instead of Uber, alleging “Despair Scores,” “algorithmic exploitation of riders,” and “welfare fees never paid out,” wouldn’t your first reaction also be emotional agreement?
Remember that viral article, “Food Delivery Riders, Trapped in the System”?
So, the issue isn’t whether AI can fake content. The problem is: when a lie looks like something everyone already believes, does truth even matter?
As for the person who deleted their account and ran: what were they after? Who knows.
All they did was find an outlet for public anger and pour a bucket of AI-generated fuel onto it.
The fire is burning either way. Whether the firewood is real or fake, who cares?
In fairy tales, Pinocchio’s nose grows when he lies.
AI has no nose.