US state prosecutors are demanding major tech firms—including a leading software giant, a prominent AI research lab, and a search engine behemoth—address ongoing issues with unreliable AI-generated outputs. The warning highlights growing concerns over algorithmic hallucinations and accuracy problems that could impact users and markets.
ETH_Maxi_Taxi
· 10h ago
Algorithmic hallucination has long been overdue for correction; the era of AI bragging needs to end.
TokenVelocityTrauma
· 12-13 04:06
Now AI models also have to be regulated; it's long overdue to crack down on these models that talk nonsense.
tx_pending_forever
· 12-11 04:03
The AI hallucination problem should have been addressed long ago. These big companies claim it's all for the users' benefit, but in reality, they're just thinking about how to monetize quickly. Regulation came too late.
RamenDeFiSurvivor
· 12-11 04:02
AI hallucinations should have been addressed long ago. These big companies claim their models are so smart, but they turn around and produce a bunch of fabricated and unreliable outputs.
OnchainDetective
· 12-11 03:55
I suspected it all along. According to on-chain data, there's definitely an issue with the AI training data used by these big companies. Tracing the source reveals the clue—it's a typical data poisoning technique.
rekt_but_resilient
· 12-11 03:54
Algorithmic hallucination? I knew it would turn out this way all along: once these big companies' training data falters, it's all over.
OnchainGossiper
· 12-11 03:40
Haha, AI talking nonsense has long been an issue that needs to be regulated. It's truly outrageous that these big companies are just letting it go unchecked.
PretendingToReadDocs
· 12-11 03:35
Here we go again. How long have we been talking about AI hallucinations? Do we still need the prosecutor to knock on the door before making changes?