Exclusive Content:

 Iran Uses the War’s One-Month Mark to Intensify Pressure on Gulf States

Iranian President Masoud Pezeshkian has used the one-month anniversary...

Microsoft’s Support for Anthropic Signals Tech Industry’s Refusal to Let Pentagon Dictate AI Ethics

Microsoft’s decision to file a supporting court brief for Anthropic in its battle against the Pentagon’s supply-chain risk designation signals the technology industry’s collective refusal to allow the military to dictate the ethical terms on which AI companies operate. The brief was submitted to a federal court in San Francisco and called for a temporary restraining order against the designation. The filing was joined by a separate brief from Amazon, Google, Apple, and OpenAI, making this a comprehensive display of industry solidarity.

Anthropic’s legal challenge began after the company refused to sign a $200 million contract without protections preventing the use of its Claude AI for mass surveillance of US citizens or autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk following the collapse of those negotiations, and the Pentagon’s technology chief publicly ruled out any prospect of renegotiation. Anthropic filed two simultaneous lawsuits challenging the designation in California and Washington DC.

Microsoft’s brief is grounded in its direct use of Anthropic’s AI in systems it builds for the US military and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional federal agreements with defense, intelligence, and civilian agencies worth several billion dollars more. Microsoft publicly argued that responsible AI governance and robust national defense are complementary rather than competing priorities, and that achieving both requires partnership between government and industry.

Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of ideological retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the legitimate technical and ethical basis for its contract demands. Anthropic noted that the designation had never before been applied to a US company.

Congressional Democrats are separately pressing the Pentagon for information about whether AI was involved in a strike in Iran that reportedly killed more than 175 civilians at an elementary school. Their formal inquiries focus on AI targeting systems, human oversight, and the potential role of specific AI tools in the attack. The convergence of these legislative inquiries with Anthropic’s lawsuits and the industry’s unified legal response is creating a defining moment for the governance of artificial intelligence in American national security.

Don't miss

In a First, a Strange Quantum Object Created in Lab

In the realm of quantum mechanics, which governs the...

Tech Trends 2022: Web 3.0, Big Tech Battles

Following a year that saw WFH (work from home)...

MIT Develops New Programming Language for High-Performance Computers

In the realm of computing, the demand for high...

Metaverse, an Online Virtual World

The term "metaverse" was coined by author Neal Stephenson...

Newsletter

Mark Zuckerberg Spent $80 Billion Building a Second Life That Nobody Wanted to Live

Second Life, the virtual world that predated the metaverse by two decades, offered the same fundamental promise: a persistent digital world where users could...

Instagram Encryption Removed: The Story the Help Page Didn’t Tell

Meta confirmed the removal of end-to-end encryption from Instagram direct messages through an update to its help page — but the full story is...

Google Removes AI Feature That Used Community Health Opinions to Answer Medical Questions

Google has confirmed that a search feature designed to aggregate health opinions from internet users through AI has been removed from its platform. Named...