The question a parent asked at a school board meeting in Virginia in the fall of 2024 has become one of the defining questions of education in this decade: "If my daughter uses an AI to write her essay, and the AI is writing in her style, and she edits it to make it more her, and it passes every detection tool the school has — is she learning to write, or is she learning to manage AI?"
The answer is that educators genuinely do not know. And the speed at which AI has entered classrooms has outpaced the institutional capacity to find out.
In 2024, 46.9 percent of students reported using large language models in coursework. By 2025, that figure had risen to 88 percent, nearly doubling in a single year. The share of teachers reporting that students at their school were disciplined for AI-related academic misconduct climbed from 48 percent in 2022-23 to 64 percent in 2024-25. Ninety-six percent of college instructors believe at least some of their students cheated in the past year. The cognitive dissonance runs deep: nearly 80 percent of students say using an AI to complete work is "somewhat" or "definitely" cheating, and most of them do it anyway.
Something is being disrupted in how children learn. The question worth sitting with is what, exactly, is being lost.
The Real Promise First
It would be dishonest to tell only the cautionary side of this story. AI tutoring tools have demonstrated genuine benefits, documented in peer-reviewed, multi-study evidence of the kind that deserves to be taken seriously.
Khan Academy's Khanmigo, which operates by asking guiding questions rather than providing answers, grew from 68,000 users in 2023 to more than 700,000 in 2024-25 — a trajectory driven in part by teacher endorsement of its ability to support students who would otherwise have no access to individualized attention. A systematic review of 45 studies on AI-driven adaptive learning confirmed that personalized educational systems produce meaningful improvements in academic performance, with one study reporting a 23.5 percent improvement when multi-dimensional learner data was incorporated. For English Language Learners, for students in under-resourced schools, for children with learning differences, the potential for AI tutoring to provide what human instruction cannot always deliver is real.
The problem is that the benefits and the risks are not evenly distributed across the ways students actually use these tools. An AI tutor that prompts a student to think harder is a different technology, in every meaningful sense, from an AI that writes the student's essay for them. And the market pressure — the convenience of the second, the effort required for the first — is not neutral.
The Cognitive Offloading Problem
A January 2025 study, reported by Phys.org, found a significant negative correlation between AI usage frequency and critical thinking scores. The correlation was particularly pronounced among younger users: participants aged seventeen to twenty-five showed a stronger negative association between AI use and critical thinking than older users did. The mechanism identified was "cognitive offloading": the tendency to outsource mental effort to AI systems, reducing the load on one's own thinking and, over time, the capacity to perform that thinking independently.
This is not a new concern. Every generation of educational technology has been accompanied by warnings about cognitive dependence. The calculator, the spell-checker, the internet itself were each predicted to damage some essential cognitive capacity. What makes AI different is the scope: where a calculator outsources arithmetic and a spell-checker outsources orthography, a language model can outsource the entire sequence of reading, analyzing, synthesizing, and articulating an argument.
The U.S. Department of Education's AI policy report frames this with the clarity of an institution that has thought hard about it: "Students must recognize that the value of learning includes the intellectual and emotional growth that comes with grappling with challenges, which cannot be replicated by any AI tool." The grappling is not incidental to learning. For many subjects, the grappling is the learning.
Research confirms that AI tutoring tools optimize effectively for domain knowledge acquisition on measurable assessments — they can help a student learn to solve a specific type of math problem. What they do poorly is build what might be called metacognitive capacity: the ability to recognize when you don't understand something, to develop productive strategies for working through difficulty, to persist through confusion. These capacities are developed precisely through the grappling that AI tools, by design, reduce.
The EdTech Surveillance Layer
Beneath the educational question sits the data question we examined in Part 2 of this series — but in the context of school-mandated software, it takes a distinctive form.
When a parent sends their child to school with an app they have chosen and can choose to remove, the data dynamics are uncomfortable but navigable. When school districts mandate the use of specific platforms — and most do, at significant scale — the family's ability to opt out is effectively eliminated. The child must use the platform to participate in school.
Analyzing over 200 apps and platforms, Internet Safety Labs found that 73 percent of EdTech applications monetize children's and families' personal information in some form. Human Rights Watch, studying 164 EdTech products across 49 countries, found that 89 percent risked children's privacy by embedding commercial trackers. The average app with trackers transmitted children's data to 6.7 separate data broker companies per login.
These are companies that process a child's reading speed, error patterns, learning pace, and inferential markers about cognitive development — not to improve teaching, but to build commercial profiles. The child learns. The data broker learns too, and profits.
What Teachers Bring That Machines Cannot
When Duolingo cut roughly 10 percent of its contractor workforce in 2024, primarily the human linguists who had been writing its content, and then declared itself "AI-first," veteran users and language educators pushed back with an argument that transcends the specific case. "Language isn't only structure," one educator wrote. "It's emotion, culture, and tone." The complaint was not nostalgia. It was a description of something real.
Human teachers bring to classrooms not just information and feedback but a form of attunement — the ability to read the particular child in front of them, to sense when confusion is becoming discouragement, when confidence is masking misunderstanding, when a student needs to be challenged and when they need to be steadied. This attunement is not a feature that can be added to a language model. It requires the full presence of another person who cares.
The American Psychological Association, in its June 2025 advisory, identified this risk directly: if teacher-student relationships diminish, students' social skills and interpersonal development will suffer in ways that extend far beyond academic achievement. Children learn to be in the world, in part, through sustained relationships with adults outside their families who hold expectations for them. AI tutors do not hold expectations. They adapt.
The most productive question for parents is not whether AI should be in education — it is there, and it will remain — but how to ensure that the human relationships their children need are not hollowed out to make room for it. Schools that supplement human teaching with AI tools are not the same as schools that replace human teaching with AI tools, and the difference matters enormously in ways that assessment scores will not immediately reveal.
In the next piece, we step back from individual harms to examine the architecture that produces them: the business models that make children's data, attention, and wellbeing commercially profitable — and why the industry has resisted change.
This is Part 7 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.