AI Can Process Information. It Cannot Think. Here's Why That Distinction Actually Matters.

The Question Tech Billionaires Won't Touch
OpenAI, Google, Anthropic — they'll flood you with benchmarks, demo videos, and breathless press releases.
What they won't do is answer Alan Turing's original question from 1950: Can a machine actually think?
Not simulate thinking. Not produce outputs that look like thinking. Actually think.
This distinction shapes everything that follows.
What the Philosophers Actually Say
The Internet Encyclopedia of Philosophy draws a sharp line between two camps.
Weak AI: machines can act intelligently. Nobody seriously disputes this anymore — your phone can beat a grandmaster at chess.
Strong AI: machines can be intelligent, meaning they have genuine thought, not just the appearance of it.
Nearly every AI company implicitly sells you strong AI while only delivering weak AI.
Tobias Rees, founder of the AI studio Limn and a senior fellow at Schmidt Sciences, told Noema Magazine that AI "defies many of the most fundamental, most taken-for-granted concepts" humans use to define themselves. Agency. Creativity. Consciousness. Understanding.
Those words weren't invented arbitrarily. According to Rees, they were constructed specifically to describe what separates humans from machines — starting in the 1630s with Descartes.
Now a machine produces a sonnet, and suddenly everyone's rewriting the definition of creativity.
The NYT's English Major Fantasy
The New York Times ran a piece asking whether humanities graduates — English majors, philosophy nerds — might actually have a future in an AI world.
The framing is classic left-media comfort food. "Don't worry, the machines can't do your thing."
AI is genuinely bad at certain things: abstract reasoning, novel ethical judgment, understanding context in the way a person who has lived understands context.
But this isn't because English majors are special. No machine yet possesses consciousness, embodied experience, or genuine understanding — per the Wikipedia overview of AI philosophy, which traces this debate through Turing's "polite convention" all the way to the Chinese Room argument.
John Searle's Chinese Room thought experiment, covered in the Wikipedia entry on AI philosophy, remains relevant: a system can process symbols and produce correct outputs without understanding a single thing. A calculator doesn't know what numbers are. Current LLMs may operate similarly at massive scale.
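The point of the thought experiment can be made concrete. Here is a minimal sketch, not Searle's actual setup, of a "room" that maps input symbols to plausible replies using nothing but a lookup table; the rule book and phrases are invented for illustration:

```python
# Hypothetical rule book: maps Chinese input strings to Chinese replies.
# The person (or program) applying it needs no idea what any string means.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",  # "How are you?" -> "I'm fine, thanks."
    "几点了?": "我不知道.",      # "What time is it?" -> "I don't know."
}

def room_reply(symbols: str) -> str:
    """Return the rule book's output for the input symbols.
    No step here involves understanding the symbols themselves."""
    return RULE_BOOK.get(symbols, "请再说一遍.")  # "Please say that again."

print(room_reply("你好吗?"))  # a fluent-looking reply, zero comprehension
```

From the outside, the room "speaks Chinese." Inside, there is only symbol lookup, which is exactly the gap Searle's argument points at.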
This raises a question for everyone involved: what exactly is AI doing, and what does that mean for how we deploy it?
What Mainstream Coverage Gets Wrong
Left-leaning outlets frame this as a labor story — "will AI take your job?" They focus on economic disruption and occasionally humanize the machines to push emotional angles about AI "rights."
Right-leaning outlets often dismiss the philosophical questions entirely, treating skeptics as Luddites while cheerleading for tech deregulation.
Both miss the underlying problem.
The core issue is epistemological: we are deploying systems we do not fully understand, making decisions we cannot fully explain, at scale. According to the Internet Encyclopedia of Philosophy, fundamental questions about AI — whether machines can truly think, whether they have mental states, whether they merit moral consideration — remain scientifically unresolved.
This isn't because scientists are lazy. The concepts of "intelligence" and "consciousness" don't have clean scientific definitions yet.
We're building skyscrapers before we've agreed on what concrete is.
The Immanuel Kant Problem
The NYT headline references Kant — specifically what AI can't do.
Kant's philosophy centered on the idea that human understanding actively structures reality. We don't just receive information — we organize it through categories like causation, time, and space that are built into human cognition.
Current AI does not do this. It finds statistical patterns in training data. That's genuinely impressive. It is not the same as Kantian understanding.
Tobias Rees told Noema this represents a "deep caesura" — a genuine break in how humanity understands itself. But that cut goes both ways.
If a machine can produce philosophy papers, write legal briefs, and compose music — without any of the consciousness Kant thought was necessary for real knowledge — then either Kant was missing something, or the machine is doing something far shallower than it appears.
What This Means For You
If you're trying to navigate the AI moment, consider this:
AI tools are genuinely useful. Use them. They save time on real tasks.
AI systems are not thinking. They're pattern-matching at industrial scale. Treating their outputs as authoritative is a mistake.
AI companies are selling you strong AI while delivering weak AI. Read the philosophy. Ask harder questions. Don't let billion-dollar marketing campaigns redefine what intelligence means.
The philosophers have been asking the right questions since 1950. Silicon Valley just finally has enough money to drown them out.