
Pentagone to another level!

In the last week the spat between the Pentagon and a former AI supplier has gotten a lot more complicated, and dangerous.




(first published on my Substack, where you can get #NerdNews, marvellous maths and general geekery)


Last week NerdNews reported on a massive story marrying AI and the ethics of modern warfare.

 

The story has only gotten bigger.

 

If you thought the US$200 million divorce between Anthropic and the Pentagon was messy, grab your popcorn.

 

A history-making corporate blacklisting has now landed alongside internal revolt at OpenAI and a half-trillion-dollar legal grenade from Anthropic.

 

Yep. We're gonna need more popcorn.

 

Crossing the red line.

 

The drama kicked off when Anthropic drew two “bright red lines” regarding its work with the Department of War.


They refused to allow mass domestic surveillance of Americans or the deployment of fully autonomous weapons that fire without a human in the loop.

 

They specifically rejected a demand to allow the analysis of bulk-acquired data, including GPS records, browsing history, and personal financial information bought from data brokers.

 

Secretary of War Pete Hegseth was not a fan. He mocked these safety commitments as “woke AI” before shredding the $200 million contract.

 

The government then designated Anthropic a “supply chain risk,” a blacklisting usually reserved for foreign adversaries like Huawei.

 

In fact, it’s the first time a domestic US company has been branded a national-security threat this way.

 

Within hours of the split, Sam Altman and OpenAI swooped in to sign their own classified military deal, the details of which remain largely undisclosed.

 

Digital riot.

 

Inside OpenAI, the vibe is less victory lap and more internal meltdown.

 

Senior voices are already walking.

 

OpenAI Robotics Director Caitlin Kalinowski just logged out, resigning on principle over the deal.

 

Her parting shot on X was a cracker:

 

“Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” Caitlin Kalinowski, ex-OpenAI.

 

Nearly 900 employees across OpenAI and Google have since signed an open letter refusing to build tools for autonomous killing.

 

The strain appears to be showing.

 

Theatre of the absurd.

 

Anthropic CEO Dario Amodei went one (perhaps two) better than OpenAI’s dissenters.

 

In a leaked 1,600-word memo, he called Altman “mendacious” and dismissed OpenAI’s safeguards as “80% safety theater” designed to keep employees quiet.

 

He even suggested the deal was greased by US$25 million in political donations from OpenAI president Greg Brockman and his wife.

 

Altman admitted to staff that the timing of OpenAI's deal looked “opportunistic and sloppy.”

 

“To try so hard to do the right thing and get so absolutely like, personally crushed for it … is really painful.” Sam Altman, Internal All-Hands

 

The great uninstall.

 

The public is hitting the nuclear delete button.

 

ChatGPT uninstalls surged by 295% over the weekend while the #QuitGPT campaign exploded on Reddit and Instagram.

 

Anthropic’s Claude, the one that said no to the nukes, is now the number one free app on the Apple App Store.


At least for now, their position as the world's ethical AI company seems to be paying off.

 

All’s fair in law and war.

 

Things went to another level on Monday just gone.


Anthropic sued the Department of War for a tidy $537 billion (AUD), their entire company valuation, labelling the government’s blacklisting “unprecedented and unlawful.”

 

Their filing argues that constitutionally, the government cannot punish a company for protected speech.

 

Not content with just one court case, they have also hit the appellate courts in Washington DC, claiming the Pentagon’s moves were arbitrary, capricious and an abuse of discretion.

 

And just when the plot could not get more complicated, 30 OpenAI and Google staff have filed a legal brief SUPPORTING Anthropic in this fight.

 

The war rolls on.

 

While Anthropic and the Pentagon might soon muscle up in the courts, the conflict in Iran has become what Matt Honan in MIT Technology Review calls the US’s first war largely mediated by AI.

 

The Maven Smart System identifies and prioritizes targets at speeds humans cannot match.

 

Reportedly it allows 20 people to do the work of 2,000.


Chillingly, the military is still using Claude for these strikes via Palantir, even as they attempt to blacklist the company.

 

They are basically trying to kill the company that provides the backbone of their current intelligence while using its tools to pick targets.

 

Stephen B goes deeper.

 

SMH's Stephen Bartholomeusz, always sharp on this stuff, casts Trump as the wildcard who could blow up AI.

 

Looking ahead, he notes the administration wants to micromanage development.

 

If you want access to top-tier chips, you might have to pay for the privilege through matching investments in American infrastructure.

 

It is a transactional approach that injects massive unpredictability into a sector that usually runs on precision.

 

Any major disruption to this capital flow or execution would be traumatic for the US and global economy.

 

Well, that was a week!

 

In this space a lot can happen in seven days.

 

The AI sector is already frantic even when left alone to evolve at pace. Add a live military conflict and an administration that seems comfortable with chaos and volatility rises quickly.

When global stock market highs are also riding on a handful of tech companies, plenty of people think we may be entering a worrying phase.

 

I've got no witty ending here.

 

But I hope this primer helped you get your head around a massive story.

 

 

Hey, I'm now also on Substack.

 

