Two weeks ago, the Pentagon blacklisted Anthropic (the company behind Claude) after the company refused to let the military use its AI for mass surveillance or autonomous weapons. Hours later, OpenAI signed the deal instead.

The consumer response was immediate. ChatGPT uninstalls spiked 295% in a single day. Claude hit #1 on the App Store. Over 1.5 million users (some estimates run as high as 2.5 million) reportedly cancelled their ChatGPT subscriptions. Turns out people actually care who’s handling their data, and what the fine print says.

Not here to referee that one. But if you’re running a company, sitting on a board, or advising on a deal, there’s a question hiding in plain sight that nobody’s asking yet:
What are your AI vendors actually doing with your data? And what happens when someone decides the guardrails are inconvenient?

One company drew a hard line; the other took the deal. Completely different risk profiles.

People are plugging AI into everything: legal review, HR, financials. Almost nobody’s reading the fine print. If asked whether your AI is secure, you should be able to say, “We use an enterprise AI platform with data isolation and zero training on client inputs.”
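For what it’s worth, that fine-print review reduces to a handful of yes/no questions you can actually write down. Here’s a minimal sketch in Python; the field names, numbers, and pass/fail bar are illustrative assumptions, not any real vendor’s terms:

```python
from dataclasses import dataclass

@dataclass
class VendorDataTerms:
    """Answers pulled straight from a vendor's terms / data processing agreement."""
    trains_on_inputs: bool         # does the vendor train models on your prompts and outputs?
    retention_days: int            # how long inputs are retained (0 = zero retention)
    tenant_isolated: bool          # is your data kept separate from other customers'?
    subprocessors_disclosed: bool  # is the full subprocessor list published?

def passes_minimum_bar(terms: VendorDataTerms) -> bool:
    """The bar implied above: data isolation plus zero training on client inputs."""
    return terms.tenant_isolated and not terms.trains_on_inputs

# Illustrative comparison: a hypothetical enterprise contract vs. a consumer-tier default.
enterprise = VendorDataTerms(trains_on_inputs=False, retention_days=30,
                             tenant_isolated=True, subprocessors_disclosed=True)
consumer_default = VendorDataTerms(trains_on_inputs=True, retention_days=365,
                                   tenant_isolated=False, subprocessors_disclosed=False)

print(passes_minimum_bar(enterprise))        # True
print(passes_minimum_bar(consumer_default))  # False
```

The point of writing it down this way: every one of those fields has a definite answer sitting in your vendor’s contract right now, whether or not anyone has looked it up.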

That’s a due diligence gap we keep seeing at RCS.