New Yorker Investigation Raises Hard Questions About Sam Altman's Trustworthiness as Trump White House Reverses Course on AI Oversight

The Altman Investigation Everyone Is Talking About
The New Yorker published an investigative piece on Sam Altman this week. Writers Ronan Farrow and Andrew Marantz interviewed more than 100 people and spoke with Altman himself multiple times.
The comparison they landed on: Altman is this generation's J. Robert Oppenheimer. The man who built the bomb. The man who couldn't stop it.
The specific incident that triggered the investigation centers on a contradiction. At the end of February, Anthropic refused a Trump administration bid to use its AI for autonomous weapons systems and mass surveillance. Altman publicly supported Anthropic's position. Days later, he quietly cut a deal with the Pentagon to have OpenAI fill that exact role instead.
He said one thing. He did the opposite. For a weapons contract.
According to the New Yorker, there's no single smoking gun — but a pattern of conduct that keeps surfacing, reported by former colleagues across different organizations and time periods. The magazine's framing is careful. But the accumulation of accounts points to a man who says whatever the room needs to hear.
Altman isn't just a tech CEO. He's arguably the most consequential private citizen on the planet right now.
The Washington Post Piles On
According to the Washington Post, former colleagues are raising similar concerns about Altman's trustworthiness — independently corroborating the New Yorker's findings. The Post framed it around the AI boom's acceleration and whether the person steering its most powerful company can be held accountable.
One gap in the mainstream coverage: neither outlet directly connects this to the Pentagon weapons deal in concrete terms. They're treating the trustworthiness question as a character study rather than a policy problem. But if Altman will flip on autonomous weapons development in days, after publicly opposing it, what else will he flip on?
Trump's AI Regulation Reversal Is Bigger News Than It's Getting
The Trump administration came into office torching everything Biden did on AI policy. Former "AI and crypto czar" David Sacks made a career out of calling Biden's AI safety framework overreach. The administration's entire brand on tech was: get out of the way, let America innovate, no red tape.
That lasted about 16 months.
According to a May 6, 2026, Fortune report by Sharon Goldman, the administration is now actively considering an executive order that would require government review of advanced AI models before public release: essentially an FDA-style approval process for frontier AI.
What forced the reversal? Anthropic's new "Mythos" model, a release from the same company OpenAI just tried to replace at the Pentagon. Mythos can apparently identify and exploit cybersecurity vulnerabilities at a level that spooked national security officials.
White House National Economic Council Director Kevin Hassett said it plainly on Fox Business: "We're studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AIs that also could potentially create vulnerabilities should go through a process so that they're released to the wild after they've been proven safe — just like an FDA drug."
Meanwhile, the Center for AI Standards and Innovation (CAISI) — which is just the Biden-era AI Safety Institute with a new name — announced evaluation partnerships with Google, Microsoft, and xAI. According to the agency's own press release, CAISI has already completed more than 40 evaluations, including on unreleased state-of-the-art models.
So the Trump administration renamed Biden's AI safety office, kept doing Biden's AI safety work, and is now considering Biden's AI safety executive order framework. The reversal reflects a shift in how national security officials view the risks.
The China Problem Nobody Wants to Say Out Loud
A Washington Post opinion piece by Russell Wald, executive director of the Stanford Institute for Human-Centered AI, and Sha Sajadieh, the AI Index lead at the same institution, lays out a striking contrast.
Americans distrust AI. Chinese citizens embrace it.
This isn't a culture war point. It's a strategic vulnerability. If the American public is skeptical of AI deployment while China's population actively supports it, China builds faster, deploys faster, and adopts faster. That gap compounds.
American concern about AI safety — legitimate in many respects — may be giving China a structural advantage in the technology that will define the next century.
This is why the Trump reversal on oversight carries real weight. Blanket deregulation makes American citizens distrust AI more. Blanket regulation slows deployment. The needle is genuinely hard to thread. The administration is finally acknowledging that.
What Regular People Should Know
Three things are converging right now.
First, the man running the world's most powerful AI company has a pattern of saying one thing and doing another, documented by more than 100 sources and two of the best investigative journalists in the business.
Second, the U.S. government is finally — quietly, without admitting it — implementing the AI safety framework it spent a year and a half mocking.
Third, Americans are losing the public trust battle on AI to a country that runs concentration camps and censors its internet.
The technology isn't waiting for any of this to get sorted out. It doesn't care about the politics. And right now, the people in charge of it — in both the private sector and the government — are making it up as they go.