Updated: October 2025 — Original post promised AI would personalize learning. This update shows what actually works, what doesn’t, and why the gap exists.
Introduction
Every edtech startup has an AI angle now. Personalized learning. AI tutors. Adaptive curriculum. Smart grading. The vision is compelling: AI learns how each student thinks, adjusts in real-time, unlocks potential.
The research shows something different.
I’m not saying AI can’t help with education. I’m saying the gap between what gets promised and what actually improves learning is enormous. And most edtech companies are betting on the promise, not the evidence.
The problem isn’t that AI is useless in education. The problem is that education isn’t primarily a computational problem. And treating it like one wastes money that could actually help students.

What the Research Actually Shows
Personalized learning systems show minimal improvement over traditional teaching.
The theory behind personalization is appealing: AI learns your learning style and adapts content accordingly. The problem is that the foundation is broken.
Learning styles are pseudoscience. We don’t actually learn better when content matches our so-called visual, auditory, or kinesthetic preference. Cognitive science has debunked this repeatedly. Yet AI tutoring systems are still being built on this false assumption, personalizing based on something that doesn’t matter.
When researchers compared students using AI tutors that personalized to learning style with control groups that received no personalization, they found no significant difference in outcomes. The AI was doing extra work that didn’t help.
This isn’t a small problem. It’s the foundation that most personalized learning systems are built on.
Automated grading fails on anything requiring judgment.
AI can grade multiple choice questions and straightforward short answers where there’s one right answer. Essays, code, nuanced arguments, open-ended problems? The failure rate is high.
Studies on automated essay scoring show these systems consistently struggle with non-standard phrasing, context, sarcasm, and arguments using unconventional structure. Those are exactly the kinds of writing that indicate critical thinking.
Now if teachers have to review AI grades anyway, they’re doing double work. The AI isn’t helping. It’s adding a layer of make-work on top of actual teaching.
Some districts implemented automated grading systems and found teachers spent more time reviewing AI mistakes than they would have spent grading in the first place. That’s a net loss dressed up as efficiency.
AI tutors can’t replicate human teaching.
An AI tutor can explain concepts step-by-step. What it can’t do is read the room, notice confusion before it’s stated, build confidence, make things relevant to a student’s life, or adjust based on emotional state.
The cognitive science literature is clear on this: learning involves relationships and presence, not just information transfer. An AI that optimizes for information transfer is solving the wrong problem.
When students say “the AI explained it but the teacher made it click,” that’s not a flaw in the AI. That’s evidence that learning is a social process, not a computational one.
Adaptive learning systems have mixed results at best.
Some adaptive systems do show improvement in narrow contexts. But the improvements are often small, the implementation is complex, and they don’t scale across different student populations.
One systematic review found adaptive systems improved test scores by an average of 0.16 standard deviations. That’s statistically significant but educationally small. And that’s in controlled studies. Real-world implementation is messier.
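To put that effect size in perspective, here is a quick back-of-the-envelope calculation, assuming roughly normally distributed test scores (a standard simplification):

```python
from statistics import NormalDist

# A 0.16 standard-deviation improvement: where does a formerly
# average (50th-percentile) student land on the new distribution?
effect_size = 0.16
new_percentile = NormalDist().cdf(effect_size) * 100
print(f"{new_percentile:.1f}")  # roughly the 56th percentile
```

Moving the average student from the 50th to roughly the 56th percentile is real, but it is a long way from “transforming education.”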
Add implementation costs, teacher training time, integration with existing systems, and the actual return on investment becomes questionable.
Why EdTech Companies Oversell
They’re selling to school administrators and parents, not students.
Administrators want scale: “One AI tutor can replace five teachers.” Not really, but it’s cheaper to try, so they buy it.
Parents want shortcuts: “Your kid’s learning will be personalized by AI.” Sounds better than admitting schools are underfunded and understaffed.
Investors want returns: “This AI system will transform education.” Hype drives funding. Evidence doesn’t.
Nobody’s incentivized to tell the truth: AI can automate grading. AI can supplement teaching. But AI doesn’t replace the hard work of actually teaching.
What Actually Works in EdTech AI
AI as administrative tool, not instructional replacement.
A teacher uses AI to generate unlimited practice problems so students have varied examples to work through. The teacher still explains. AI handles the busy work of problem generation.
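For drill-style material, the generation step doesn’t even require a large model; a template sketch (topic and number ranges invented for illustration) shows the shape of the busy work being automated:

```python
import random

def make_practice_problems(n, seed=None):
    """Generate varied two-digit multiplication drills.

    Illustrative only: an AI tool would produce richer,
    word-problem-style variants, but the workflow is the same.
    """
    rng = random.Random(seed)  # seeded for reproducible worksheets
    return [
        (f"{a} x {b} = ?", a * b)
        for a, b in ((rng.randint(10, 99), rng.randint(10, 99)) for _ in range(n))
    ]

for question, answer in make_practice_problems(3, seed=42):
    print(question)
```

The teacher still chooses the topic and explains the method; the tool just produces endless variations.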
AI grades multiple-choice assessments instantly so teachers focus their time on student discussions instead of paperwork.
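This kind of grading is genuinely trivial to automate, precisely because no judgment is involved; a minimal sketch (answer key and question IDs are made up for illustration) needs no AI at all:

```python
# Grade multiple-choice answers against a fixed key. There is exactly
# one right answer per question, which is why automation works here
# and fails on essays.
ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a", "q4": "c"}  # hypothetical key

def grade(submission: dict) -> float:
    """Return the fraction of questions answered correctly."""
    correct = sum(
        1 for question, answer in ANSWER_KEY.items()
        if submission.get(question) == answer
    )
    return correct / len(ANSWER_KEY)

print(grade({"q1": "b", "q2": "d", "q3": "c", "q4": "c"}))  # 0.75
```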
Students use an AI tool to get additional explanations after the teacher taught the concept, not instead of the teacher.
The pattern is consistent: AI reduces administrative burden. It doesn’t improve learning outcomes by itself; it frees up the people who do.
AI as a supplement for people who already know what they’re doing.
If you have a background in something, AI can speed up your workflow. You already know the fundamentals, you understand what you’re asking for, you can catch when the AI is wrong.
Example: Learning game development with AI if you already code. You know the basics, you understand the output, you can iterate quickly because you’re directing the AI rather than learning from it.
But here’s the critical part: vibe coding with AI when you’re already experienced is very different from leaning on AI while you’re still learning to code. A beginner using AI for coding gets the output fast, but they also miss the struggle. They don’t debug the AI’s mistakes, they don’t understand why certain approaches work, they don’t build the problem-solving skills that come from wrestling with real problems.
The output looks good. The learning gets weaker. You can get code without understanding code.
AI for accessibility.
Text-to-speech for dyslexic students. Transcription for deaf students. Real-time translation for ESL students.
This works because AI solves a specific accessibility problem, not because it “teaches better.” The AI isn’t replacing anything. It’s removing a barrier.
AI for teacher support, not replacement.
Flagging students falling behind based on engagement patterns and submission history. Suggesting common misconceptions to address based on what typically trips students up. Generating study materials from lesson content.
Again: AI as a tool. Not a decision-maker. Teachers still make the real calls about how to help each student.
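A sketch of what “flagging, not deciding” might look like in practice (the thresholds and field names are invented for illustration, and the output is a review list for a teacher, never an automatic action):

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    missed_submissions: int   # assignments not turned in this term
    avg_engagement: float     # fraction of sessions active, 0.0-1.0

def flag_for_review(students, max_missed=3, min_engagement=0.5):
    """Return names a teacher may want to check in with.

    The thresholds are illustrative; the teacher decides what,
    if anything, to do with the list.
    """
    return [
        s.name for s in students
        if s.missed_submissions > max_missed or s.avg_engagement < min_engagement
    ]

roster = [
    Student("Ada", missed_submissions=0, avg_engagement=0.9),
    Student("Ben", missed_submissions=5, avg_engagement=0.7),
    Student("Cal", missed_submissions=1, avg_engagement=0.3),
]
print(flag_for_review(roster))  # ['Ben', 'Cal']
```

The system surfaces a pattern; a human interprets it. That division of labor is the whole point.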
The Uncomfortable Truth About Learning
Struggle is where learning happens. When AI removes friction, it can remove learning.
If an AI tutor explains everything perfectly every time, the student never has to wrestle with confusion. Never tries a different approach. Never builds problem-solving resilience.
“Efficient” does not mean “effective” in education. Sometimes the messy, frustrating process is the actual learning.
What Schools Actually Need (And AI Can’t Provide)
More funding so teachers aren’t managing 35 to 40 kids per class. Better teacher training and support. Smaller class sizes. Mental health resources and counseling. Time for teachers to actually teach instead of constant grading and compliance work.
Can AI help with the administrative load? Maybe. But the core problems are structural and financial, not computational.
You can’t solve a budget problem with a software solution.
The Reality Check
If an edtech company is selling “AI that will unlock every student’s potential” or “AI that will replace teachers,” they’re selling hype unsupported by evidence.
If they’re selling “AI that reduces grading burden so teachers have more time with students,” that’s honest. That might actually work.
The best AI in education isn’t doing something new. It’s automating what teachers already do so they can focus on what only humans can do: understand, guide, adapt, and believe in their students.
The Bottom Line
EdTech AI fails because the problem isn’t technological. Good teaching isn’t faster computation or better pattern matching. It’s relationships, judgment, and presence.
AI can assist with that. It can’t replace it. Companies selling replacement are lying. Schools buying it are throwing money at a problem that money alone can’t solve, and that AI definitely can’t solve.
The evidence is there. Most edtech companies just ignore it and keep overselling anyway.


