AI goes back to school, and the stakes are higher than grades
- Adam Spencer
AI could help teachers and students, but rushed adoption risks weaker learning, deeper educational inequality and tech companies shaping classrooms for profit.
Back to school, back to big questions
As students head back to school after the annual and, from what I remember, completely AWESOME Christmas break, it’s a fitting time to consider two articles that recently dropped into my feed via MIT Technology Review’s The Download.
One from NPR analysing a report by the Brookings Institution’s Center for Universal Education, and another from senior AI reporter James O’Donnell warning that “AI’s giants want to take over the classroom.”
In the three years since ChatGPT turned the world on its ear, there is no doubt that education has emerged as one of the industries most ripe for disruption.
But how do we balance the clear benefits of teachers saving hours on lesson plans and admin with the risk that students are letting Grok just do the thinking for them?
I’d love to know your thoughts, especially if you have school-aged kids in 2026.
Do you know what AI platforms your kids use at school? What work they use AI for? What the rules are for use and non-use? Share your experiences here.
Here’s a brief summary of what NPR and MIT had to say.
Plenty of promise, but potential peril
The NPR piece centres on a major new report from the Brookings Institution’s Center for Universal Education, drawing on global surveys and research into how AI is actually being used in schools.
The headline finding is that, “under current patterns of use and conditions, the risks of AI in education outweigh the benefits.”
Brookings acknowledges genuine upsides. AI can support personalised learning, assist students with disabilities, and reduce teacher workload by automating planning, feedback and administrative tasks. In under-resourced systems, those efficiencies could matter a great deal.
But the report is blunt about the downsides. Heavy AI use risks weakening foundational skills, reducing independent problem-solving, and encouraging shortcut behaviour, especially in assessment regimes already struggling before ChatGPT.
It also flags privacy concerns, data extraction from children, and the danger that low-quality AI tools could deepen learning gaps in disadvantaged schools.
Brookings’ position is cautious rather than anti-technology. AI can play a role, but only as a tightly controlled supplement to human teaching, and only within systems that prioritise cognitive development over productivity metrics.
Teaching … or ka-ching
James O’Donnell’s article is less about classroom outcomes and more about who is shaping the future of education, and why. It reports on a $23 million partnership, announced in July 2025, between OpenAI, Microsoft, Anthropic and major US teachers’ unions to train educators in AI use.
The companies frame this as helping teachers with lesson planning, grading and reporting. But O’Donnell argues that this is also about creating long-term users.
OpenAI already offers free courses for teachers, Anthropic pitches to universities, and Microsoft integrates AI into school software. Are we turning classrooms into future customer pipelines?
To be fair, O’Donnell notes there is evidence that AI can help students brainstorm, ask questions they might avoid in class, and stay engaged.
From Nigerian maths classrooms to physics at Harvard, studies suggest AI tutors can lift short-term engagement and performance.
But the same research shows more cheating, weaker critical thinking, and inevitable AI hallucinations. More troubling, the companies funding the new training academy would not share details about what would actually be taught, raising concerns that tool adoption may be prioritised over genuine AI literacy or critical evaluation of when not to use these systems.
From the front line
I find this a really tough one.
I’ve spoken to teachers who love the time savings created by platforms like the NSW Education Department’s EduChat. I’ve seen school assignments where kids are creating music and matching videos as part of authoring stories.
At the same time, my daughter continues to churn out HD-level (85%+) essays at one of Australia’s leading universities (major dad humble brag!), and she knows for a fact that the person next to her is often using ChatGPT the night before to chalk up an effortless 65.
Can our educators stop a generation of students from producing work with which they have no intellectual connection?
So where does that leave us?
Taken together, these two articles point to the same uncomfortable conclusion.
AI absolutely can help teachers and students, but the current push is being driven at least as much by commercial strategy as by educational evidence.
If schools simply bolt AI onto existing assessment models, we should not be surprised if learning degrades.
If education systems rethink what counts as understanding, creativity and original work in an AI-rich world, these tools might actually strengthen human capability rather than hollow it out.
It behoves us to get this right, for the sake of our kids and all of us.