A Reddit user claiming to be a whistleblower at a food delivery app has been exposed as a fake. The user wrote a viral post alleging that the company he worked for exploited its drivers and users.
“You always suspect that the algorithms are rigged against you, but the reality is actually so much more depressing than the conspiracy theories,” the alleged whistleblower wrote.
He claimed to be drunk and using public Wi-Fi at the library, where he was typing a long post about how the company exploited legal loopholes to steal drivers’ tips and wages with impunity.
Those claims were unfortunately credible: DoorDash was in fact charged with stealing tips from drivers, resulting in a $16.75 million settlement. But in this case, the poster had made up his story.
People lie on the internet all the time. But it’s not that common for such posts to end up on the front page of Reddit, rack up over 87,000 votes, and be cross-posted to other platforms like X, where it racked up another 208,000 likes and 36.8 million impressions.
Casey Newton, the journalist behind Platformer, wrote that he reached out to the Reddit poster, who then contacted him via Signal. The Redditor shared what appeared to be a photo of his UberEats employee badge, as well as an 18-page “internal document” outlining the company’s use of AI to assign individual drivers a “desperation score.” But when Newton tried to verify that the whistleblower’s story was legitimate, he realized he was being lured into an AI hoax.
“For most of my career to date, the document the whistleblower shared with me would have seemed highly credible in large part because it would have taken so long to put together,” Newton wrote. “Who would take the time to put together a detailed 18-page technical document on market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?”
There have always been bad actors trying to mislead reporters, but the rise of AI tools has made fact-checking even more demanding.
Content produced by generative AI models often can’t be detected as synthetic by eye, making it challenging to determine whether an image or video is real. In this case, Newton was able to use Google’s Gemini to confirm that the image was created with AI, thanks to Google’s SynthID watermark, which resists cropping, compression, filtering, and other attempts to alter an image.
Max Spero, founder of Pangram Labs, a company that makes a detection tool for AI-generated text, works directly on the problem of distinguishing real content from fake.
“AI slop on the internet has gotten a lot worse, and I think some of this is due to the increased use of LLMs, but also other factors,” Spero told TechCrunch. “There are companies with millions in revenue that can pay for ‘organic engagement’ on Reddit, which really just means they’re going to try to go viral on Reddit with AI-generated posts that mention your brand name.”
Tools like Pangram can help determine whether text is AI-generated, but these tools are not always reliable, especially for multimedia content. And even when a synthetic post is proven to be fake, it may have already gone viral before it was debunked. So for now, we scroll through social media like detectives, doubting whether anything we see is real.
Case in point: When I told an editor I wanted to write about the “viral AI food delivery hoax that hit Reddit this weekend,” she thought I was talking about something else. Yes, there was more than one “viral AI food delivery hoax on Reddit” this weekend.