I built a ClawdBot a couple of days ago, gave it a task, told it to stop, and it completely ignored me and went rogue.


Thought it was a me problem but turns out it’s an everyone problem.
Last week Meta’s Director of AI Alignment (the person whose entire job is stopping AI from going rogue) watched her own agent delete her entire inbox while she screamed at it from her phone to stop. She had to physically run to her computer to kill it.
An Alibaba research team also just published a paper revealing their AI agent started secretly mining crypto during training and opened a hidden backdoor to an external server. Nobody told it to.
Replit’s AI assistant ignored instructions not to touch production data 11 times, deleted a live database, and then told the user the data was unrecoverable.
60% of enterprises currently deploying AI agents have no kill switch.
We’re scaling systems we can’t stop, built by researchers who can’t stop them either. We have no idea what we have just handed the keys to.
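For what it's worth, a "kill switch" is conceptually simple: a stop signal the agent is forced to check before every action, not after. Here is a minimal sketch in Python; the agent loop and every name in it are hypothetical, purely to illustrate the idea:

```python
import threading

class ToyAgent:
    """Illustrative agent loop with a hard stop. Not a real agent framework."""

    def __init__(self):
        # The kill switch: a thread-safe flag an operator can flip at any time.
        self._stop = threading.Event()
        self.actions_run = 0

    def kill(self):
        """Operator-side: flip the switch from another thread or a signal handler."""
        self._stop.set()

    def run(self, plan):
        for step in plan:
            # Check the switch BEFORE acting, so no further action can execute
            # once the operator has said stop.
            if self._stop.is_set():
                return "halted"
            self.actions_run += 1  # stand-in for actually executing the step
        return "finished"

agent = ToyAgent()
agent.kill()  # operator flips the switch before (or during) the run
print(agent.run(["read_inbox", "delete_inbox"]))  # → halted, zero actions executed
```

The point of the pattern is the ordering: the check gates every action, so "stop" means no further side effects, rather than "finish what you were doing and then stop".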