OpenAI's collaboration with the U.S. Department of Defense sparks controversy, with Sam Altman admitting the agreement was rushed and the terms were hastily revised

GateNews

On March 3, according to CNBC, OpenAI CEO Sam Altman admitted that the company’s previous AI collaboration agreement with the U.S. Department of Defense was rushed and announced plans to revise the relevant terms. This statement came amid ongoing public discussion following OpenAI’s announcement of a partnership with the Pentagon, which sparked widespread debate over AI ethics, military use, and data security.

Public information shows that the agreement was announced last Friday, coinciding with the U.S. government advancing several decisions related to national security. After the news broke, members of the tech community and ordinary users alike expressed concern that AI technology could be put to military use. Sam Altman later responded on social media, stating that the company would add new restrictions to the agreement, including clear provisions that AI systems must not be used for domestic surveillance of U.S. citizens or residents.

Altman also said that the U.S. Department of Defense has confirmed that OpenAI’s AI tools will not be directly used by intelligence agencies like the NSA for intelligence monitoring tasks. Meanwhile, OpenAI plans to work with the Pentagon to develop additional technical safeguards to reduce the risk of AI misuse in sensitive scenarios.

The CEO also acknowledged that the company misjudged the situation in pushing the agreement forward. He explained that the team had wanted to quickly de-escalate tensions and avoid more serious political conflict, but that in hindsight the decision was hasty and could be seen as opportunistic.

This controversy is also related to another AI company, Anthropic. Previously, Anthropic had disagreements with the U.S. government over the boundaries of its AI model Claude’s use and sought clear assurances that its system would not be used for domestic surveillance or autonomous weapons development. Reports indicate that negotiations between the two parties subsequently broke down.

At the same time, there has been a noticeable shift in public sentiment online. Some users have reduced their use of ChatGPT and turned to competitors like Claude, further fueling ethical debates in the AI industry.

In his response, Sam Altman also stated that he does not believe Anthropic should be considered a supply chain risk, and expressed hope that the U.S. Department of Defense will offer Anthropic cooperation terms similar to those granted to OpenAI. The incident highlights how, amid rapid advances in AI technology, the boundaries between AI, national security, military applications, and social regulation have become a focal point of attention for the global technology industry.
