Harvard Cuts A Grades 7% — Then AI Hands Them Right Back
Harvard tried to crack down on grade inflation and saw A grades drop nearly 7% last fall. But with no unified AI policy, students are already finding the workaround. The fix broke before it was finished.

The Reform That's Already Being Gamed

Harvard made a real move against grade inflation. According to The Harvard Crimson, Dean of Undergraduate Education Amanda Claybaugh reported that over 60 percent of undergraduate grades were A's, and the university pushed back, producing a drop of nearly seven percent in A grades last fall.

It's a genuine attempt to restore meaning to its transcript. Yet Harvard has no unified AI policy. Zero. Individual professors decide. Some ban AI outright. Some allow limited use. Some admit they can't detect it at all. The Crimson's Catherine Previn put it plainly — Harvard students are optimizers, and a patchwork system is a system built to be optimized.

The university tightened one valve and left another wide open.

AI Is Inflating Grades Whether Schools Admit It or Not

This isn't unique to Harvard. According to Think Academy's education research, AI writing assistants are producing polished, well-structured essays that students could not write on their own. Teachers then grade the output, not the student. Grades rise. The student learned nothing. Grade inflation through automation scales across institutions.

The core problem is that traditional grading systems were built around teacher-observed performance. Hand in a paper, teacher reads it, teacher grades it. That loop assumed the work came from the student. AI broke that assumption completely — and most schools haven't updated their systems to account for it, according to Think Academy Canada's analysis.

Gamified Homework Isn't Helping Either

While the AI debate rages at the college level, something equally corrosive is happening in K-12 classrooms. The Atlantic reported on the explosion of gamified "educational" platforms like Prodigy, Blooket, and Gimkit — tools that have replaced actual homework with what are functionally video games.

One Atlantic writer caught his 11-year-old son playing what looked like a Pokémon battle. It was his assigned math homework.

The numbers are damning: in roughly 10 minutes of Prodigy gameplay, the student spent less than 30 seconds answering math questions. The game doesn't explain wrong answers. It just moves on. The only time it slows down is to show ads for paid membership upgrades.

These platforms spread fast after COVID pushed schools onto Chromebooks. Teachers love them because students aren't complaining. Administrators love them because they generate data dashboards. Parents think their kids are doing homework. Nobody is asking whether any learning is happening.

What Mainstream Coverage Is Getting Wrong

Most coverage of AI in education frames this as a fairness debate — rich kids have better AI tools, that's inequitable, we need to close the gap. That's the center-left spin.

That framing completely misses the actual catastrophe. The problem isn't who has access to AI. The problem is that grades are becoming meaningless — at every level, from fifth-grade Prodigy assignments to Harvard seminars. When a Harvard A and a high school A both reflect AI output rather than student mastery, the credential tells you nothing about the person holding it.

Employers know this. Graduate programs know this. And slowly, the market will price it in — which means the students who actually learned nothing but got good grades are going to hit a very hard wall.

The Solutions Nobody Wants to Actually Implement

Both Think Academy sources and The Harvard Crimson point toward the same fixes: oral exams, in-class assessments, version history requirements, direct observation of student reasoning. These are approaches teachers relied on before tech companies convinced everyone that automation was progress.

Harvard students are reportedly already shaping their course schedules around which professors enforce AI rules and which don't. That's arbitrage, not education.

The Crimson's Previn makes the sharpest point: grading reform and AI policy are not separate issues. You cannot fix one without fixing the other. Harvard treating them as parallel conversations means the reform is cosmetic.

What This Means for Regular People

If you're a parent, your kid's A may mean less than it did five years ago — and you'd have no way of knowing it.

If you're hiring, a 3.8 GPA from 2025 tells you almost nothing about whether that candidate can actually think.

If you're a student who did the work without AI, you are being graded on the same curve as people who didn't. That's unfair — and calling it an equity problem inverts the reality entirely.

Harvard at least tried. The problem is that trying on grading while ignoring AI is like bailing out a boat without plugging the hole. The water doesn't care about your effort.

Sources

Axios (center-left): "AI Sends A Grades Into Overdrive"
The Atlantic (left): "Homework Is Starting to Look a Lot Like Candy Crush"
Think Academy US: "AI in Education: How It Fuels Grade Inflation"
Think Academy Canada: "AI in Education: Unraveling the Impact on Grading and the Ba"
The Harvard Crimson (opinion): "How AI Undermines Harvard's Grading Reform"