Is the AI-Generation AI-Damaged?

In our work across higher education, one of us as a learning scientist and the other as an educational developer on the Instructional Design, Engagement, and Support (IDEAS) team at UMass Amherst, we spend our days inside classrooms, conversations, and the quiet spaces where students grapple with ideas. We come from different corners of the university, but we meet in a shared commitment: to help learners think, question, and form themselves in ways that transcend tools and trends.

Over the past year, however, we have noticed a new kind of stillness settling across our learning environments. It is not the reflective pause that precedes insight, nor the focused hush of collaborative work. It is a silence that feels suspended, as if thought has been interrupted before it begins. Faculty describe it. Students embody it. And as practitioners, we sense it in our daily work: a subtle shift in how learners initiate, persist, or even approach the act of making meaning.

This article emerges from that shared observation. It is an attempt to name the unease we see but rarely articulate. It is also an invitation to educators, designers, and scholars to sit with a difficult question about what it means to learn in an age when generative systems can imitate the products of thinking more quickly than human minds can produce them.

What follows is not a warning, nor a lament. It is a reflective inquiry from two people who care deeply about the purpose of education. It is our effort to trace the contours of a new cognitive landscape and to ask what happens when the tools we design begin to reshape the very conditions of thought itself.

The Silence We Can No Longer Ignore

A distinct silence has fallen over the modern classroom, a quietude unlike the studious hush of the past. It is the silence of the friction of thought being stripped away from the educational process. As educators, we find ourselves standing at a precipice, looking out at a generation for whom the blinking cursor is no longer a daunting provocateur of creativity, but a waiting command line for automation.

We pose a question that is as uncomfortable as it is necessary: Is the AI-generation “AI-damaged”? We do not use the term “damaged” to imply a medical pathology or an irreversible cognitive defect. Rather, we use it philosophically to describe an atrophy of the will, a thinning of the existential struggle that is required to turn information into knowledge, and knowledge into wisdom.

In this collaborative inquiry, we argue that the integration of generative AI into the academic sphere requires a total re-evaluation of the purpose of education. If we continue to assess students on the production of answers rather than on the process of inquiry, we risk cultivating a generation proficient in prompting but deficient in thinking.

The Outsourcing of Cognition

To understand the potential “damage”, we must first understand the nature of the tool itself. Unlike the calculator, which offloaded the drudgery of arithmetic to allow for higher-order mathematical reasoning, Large Language Models (LLMs) threaten to offload the reasoning itself. Writing is not merely the transcription of thoughts; it is the mechanism by which thoughts are formed. As Neil Postman warned in Technopoly, technological change is never merely additive; it is ecological. It changes everything. Postman argued that “new technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop” (Postman, 2011, p. 20).

When a student outsources the drafting of an essay to an algorithm, they are not merely cheating the system; they are bypassing the building of neural architecture that occurs during the struggle of articulation. The “damage” manifests as a loss of cognitive stamina. We are witnessing a decline in the ability to sit with ambiguity, to wrestle with a complex text, and to synthesize disparate ideas without a digital intermediary (see, e.g., Kosmyna et al., 2025). If the struggle is removed, the growth is removed. The outcome is a polished product masking a hollowed-out process.

The Ontological Crisis of Education

If the machine can produce the output, what is the value of the human element? This brings us to the definition of education itself. For the better part of the last century, education has been viewed through a transactional lens: the teacher deposits information, and the student withdraws it upon request for an exam. This “banking concept of education”, as critiqued by Paulo Freire (2018), is rendered obsolete by AI. The bank is now open to everyone, everywhere, instantly.

Therefore, we must redefine the purpose of education from acquisition to formation. The goal can no longer be competency in the retrieval of facts or the mimicry of standard prose. The goal must be the cultivation of the self. We must shift our pedagogical focus toward Gert Biesta’s concept of “subjectification”, the process by which students become autonomous, thinking subjects rather than objects to be filled. Biesta argues that education must always involve a “beautiful risk”, noting that “if we take the risk out of education, there is a real chance that we end up with something that is no longer education” (Biesta, 2015).

AI removes the risk. It provides a safe, average, and sometimes hallucinated answer. To repair the “AI-damage”, we must reintroduce the risk. We must create environments where failure is not a metric of incompetence but a necessary step in the dialectic of learning. Education must become a sanctuary for human connection, ethical debate, and the messy, inefficient process of original thought.

Redefining Assessment

If we accept that the purpose of education is the development of the human intellect and character, our assessment standards must change radically. The traditional five-paragraph essay, the take-home summary, and the multiple-choice test are artifacts of a pre-AI world. Continuing to use them is akin to measuring the speed of a car with a ruler; the tool is no longer fit for reality.

We propose a shift from output-oriented assessment to process-oriented assessment. The current model asks, “What did you produce?” The new model must instead ask, “How did you arrive here?” and “What did you learn?”

This requires a return to older, more human-centric forms of evaluation that AI cannot mimic. We advocate for the return of the viva voce, the oral defense. In a conversation, a student cannot hide behind a screen or a generated script. Their understanding, their hesitation, and their ability to connect concepts in real time become the measure of success. We must value the “un-hackable” elements of the student experience: their ability to debate, to empathize, and to apply ethics to complex scenarios.

Furthermore, assessments should move toward what John Dewey described as “continuity of experience” (Dewey, 1940) and what Grant Wiggins identified as “authentic” assessment (Wiggins, 1990). Instead of isolated assignments that can be easily generated by a bot, assessments should be longitudinal projects requiring iteration, physical engagement with the community, and personal reflection that links specific, lived experiences to course material. They should reflect real-world tasks that embody deep understanding, higher-order thinking, and complex problem solving. An AI can analyze a text about poverty; it cannot interview a local shelter director and reflect on how that conversation challenged the student’s specific worldview.

The Ethical Imperative: Agency over Algorithms

The most profound aspect of the “AI-damage” is the potential erosion of ethical agency. When we allow an algorithm to curate our arguments and select our evidence, we tacitly allow it to shape our moral compass. Education must become, in essence, a training ground for cognitive sovereignty.

We must teach students that using AI to do their thinking is an act of self-erasure. The ethical violation is not just against the academic institution (plagiarism); it is against the self. It is a voluntary surrender of one’s voice to a statistical average of the internet.

To counter this, we propose a curriculum that integrates AI literacy not as a “how-to” for prompting, but as a philosophy of technology. Students must understand the bias (Noble, 2018), the hallucinations, and the environmental costs of these models. They must learn to treat AI as a sparring partner, not a ghostwriter. The assessment then becomes: “Here is what the AI produced on this topic; now, critique it. Where does it lack humanity? Where does it lack nuance? How can you, as a human, improve upon this machine output?”

Final Thoughts: The Renaissance of the Human

Is the AI-generation AI-damaged? Potentially, yes. If we leave the current structures of education in place, we doom this generation to a future of cognitive dependency. We risk creating a class of humans who are merely managers of machines, unable to distinguish between a calculated probability and a genuine truth.

However, this crisis is also our greatest opportunity. The presence of AI forces us to distill what is truly human about learning. It forces us to strip away the busy work, the rote memorization, and the performative writing that have cluttered our syllabi for decades.

We are called to usher in a new educational renaissance. This renaissance prioritizes the development of wisdom over the accumulation of data. It values the stammering, imperfect, original voice of a student over the flawless, sterile syntax of a bot. By redefining our purpose and our assessments, we can ensure that the tools of the future serve the development of the human spirit, rather than replacing it. We must teach the next generation that the difficulty of learning is not a bug to be bypassed, but the very feature that makes us human.


References

  • Biesta, G. J. J. (2015). The beautiful risk of education. Routledge.
  • Dewey, J. (1940). Nature in experience. The Philosophical Review, 49(2), 244. https://doi.org/10.2307/2180802
  • Freire, P. (2018). The banking concept of education. In Thinking about schools (pp. 117–127). Routledge. https://doi.org/10.4324/9780429495670-11
  • Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://arxiv.org/abs/2506.08872
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.18574/nyu/9781479833641.001.0001
  • Postman, N. (2011). Technopoly: The surrender of culture to technology. Vintage.
  • Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(1), 2. https://doi.org/10.7275/ffb1-mm19

Cite this paper: Gattupalli, S., & Giovannini, J. (2025). Is the AI-Generation AI-Damaged? Society and AI Perspectives. https://societyandai.org/perspectives/is-the-ai-generation-ai-damaged/