Each year, educators, education technology specialists, and learning scientists from across New England gather at Gillette Stadium for the MassCUE Fall Conference—a vibrant exchange of ideas on teaching, learning, and innovation. This year was no different. Once again, the stadium filled with the collective energy of teachers eager to reimagine classrooms and technologists eager to contribute tools that make that vision possible.
As one of the speakers at MassCUE 2025, I shared research and experiences from my work on AI-driven learning environments—focusing on how we can help educators create more engaging, equitable, and effective experiences for students. Across sessions and hallway conversations, one theme kept resurfacing: the challenge of preserving student curiosity within increasingly AI-mediated learning systems.
While many presenters celebrated efficiency gains and personalized pathways, far fewer addressed a deeper tension—how do we design AI systems that kindle rather than extinguish the human capacity for wonder?
The Phenomenology of Wonder
Wonder, as philosophers from Aristotle to Heidegger have long argued, marks the beginning of genuine inquiry. It is the felt sense of encountering something that exceeds our current understanding—a productive disorientation that compels movement toward new knowledge.
Developmental psychologist Alison Gopnik (2010) describes this state as “the best kind of thinking”—open, exploratory, and unconstrained by predetermined outcomes. Neuroscientific research confirms that curiosity activates the brain’s reward centers, releasing dopamine, which both motivates exploration and enhances memory retention.
Yet wonder is more than a neural event. As philosopher Karen Barad (2007) notes, it involves “intra-action”—a dynamic exchange in which the learner and the object of curiosity co-constitute each other through engagement. The learner is changed by the act of wondering; the world, in turn, reveals itself differently to the curious mind.
Educational research consistently affirms this: meaningful learning rarely arises from the quick completion of tasks but rather from sustained engagement with uncertainty, tangents, and self-directed questions.
How AI Systems Can Suppress Curiosity
As algorithmic systems increasingly mediate learning, well-intentioned designs often suppress curiosity. Analyses of adaptive platforms and AI writing assistants reveal several recurring patterns.
1. Premature Closure of Inquiry
Adaptive learning tools often respond to uncertainty with immediate intervention—offering hints, simplifying problems, or triggering remediation. While efficient, this approach can truncate the productive struggle essential to deep learning.
Lev Vygotsky (1978) described this space as the “zone of proximal development”—the region where students stretch their understanding just beyond current capability. AI systems optimized for speed often treat confusion as an error to eliminate rather than a moment to explore and resolve. In doing so, they may deprive students of the deeper insights that emerge from grappling with difficulty.
2. Gamification and the Atrophy of Intrinsic Motivation
Gamification strategies—points, badges, and streaks—can keep students “engaged,” but they frequently shift motivation from intrinsic to extrinsic.
Alfie Kohn (1993) argued, synthesizing decades of motivation research, that external rewards tend to diminish authentic interest. Students begin learning not for discovery but for the accumulation of points. Over time, the reward response attaches to achievement markers rather than to understanding itself, eroding the natural joy of comprehension.
A student solving math problems for a badge experiences learning differently from one who persists because the problem itself feels fascinating. Gamification risks training students to chase rewards rather than insight.
3. Algorithmic Narrowing of Exploration
Recommendation systems predict what a student might like next based on prior behavior, narrowing exposure to the unfamiliar. But curiosity thrives on serendipity—encountering the unexpected.
A student studying photosynthesis might stumble upon a poem about trees; a young coder might discover algorithmic music composition. These lateral, seemingly unrelated encounters often spark lifelong interests.
By contrast, algorithmic curation filters out the unexpected, gradually shrinking the learner’s intellectual world. Yong Zhao (2021) warns that such narrowing produces learning disabilities that are environmental rather than cognitive in origin: a loss of curiosity born of over-optimization.
Designing AI Systems That Support Wonder
If efficiency-driven systems risk suppressing curiosity, how might we design AI that preserves wonder?
Principle 1: Center Questions Over Answers
Instead of measuring success by accuracy or completion, we can measure it by the depth of questions students generate. Systems might visualize inquiry through question maps or allow “wondering time,” encouraging open-ended exploration.
In this model, the AI acts not as an answer machine but as a conversational partner—posing prompts that deepen thinking rather than close it.
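To make this concrete, here is a minimal sketch, in Python, of the kind of question map such a system could keep for each learner. The names (`QuestionNode`, `add_follow_up`) are hypothetical rather than drawn from any existing platform: each student-generated question becomes a node, follow-ups branch from the question that prompted them, and progress is summarized as the breadth and depth of the resulting tree rather than as a completion percentage.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class QuestionNode:
    """One student-generated question in an inquiry session."""
    text: str
    children: list[QuestionNode] = field(default_factory=list)

    def add_follow_up(self, text: str) -> QuestionNode:
        """Record a follow-up question that grew out of this one."""
        child = QuestionNode(text)
        self.children.append(child)
        return child

    def depth(self) -> int:
        """Longest chain of follow-up questions rooted at this node."""
        return 1 + max((child.depth() for child in self.children), default=0)

    def count(self) -> int:
        """Total number of questions in this branch of the map."""
        return 1 + sum(child.count() for child in self.children)


# A short inquiry thread captured during "wondering time".
root = QuestionNode("Why do leaves change color in autumn?")
follow_up = root.add_follow_up("What happens to the chlorophyll?")
follow_up.add_follow_up("Do evergreens break down chlorophyll too?")
print(root.count(), "questions, inquiry depth", root.depth())
```

A dashboard built on a structure like this would report “three questions, depth three” instead of “lesson 80% complete,” shifting what both students and teachers see as success.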
Principle 2: Preserve Productive Struggle
Psychologists Sidney D’Mello and Art Graesser (2012) describe cognitive disequilibrium—a blend of confusion and engagement—as the emotional signature of deep learning.
AI systems should recognize and preserve this state, intervening only when confusion turns to frustration. Metrics should shift from time-to-mastery to depth of engagement—tracking persistence, experimentation, and conceptual understanding.
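One way to operationalize that distinction, sketched below in Python with placeholder signals and thresholds rather than validated ones, is a hint policy that reads varied attempts as productive struggle and steps in only when a learner has stalled or keeps repeating the same failed move. This is an illustration of the principle, not a reconstruction of D’Mello and Graesser’s model.

```python
from dataclasses import dataclass

@dataclass
class AttemptLog:
    """Signals a tutoring system might track for the current problem."""
    attempts: int                   # total tries so far
    distinct_strategies: int        # visibly different approaches attempted
    seconds_since_progress: float   # time since the last partial success

def should_offer_hint(log: AttemptLog,
                      frustration_attempts: int = 6,
                      stall_seconds: float = 300.0) -> bool:
    """Hold back hints during productive confusion; intervene only when
    the signals suggest it has tipped into frustration."""
    still_exploring = log.distinct_strategies >= 2        # varied attempts: let the struggle continue
    stalled = log.seconds_since_progress > stall_seconds  # long stretch with no progress
    grinding = log.attempts >= frustration_attempts       # many tries without a breakthrough
    return (stalled or grinding) and not still_exploring

# A learner trying new approaches is left to wrestle with the problem...
print(should_offer_hint(AttemptLog(attempts=4, distinct_strategies=3,
                                   seconds_since_progress=120)))   # False
# ...while one repeating the same failed move for several minutes gets support.
print(should_offer_hint(AttemptLog(attempts=7, distinct_strategies=1,
                                   seconds_since_progress=400)))   # True
```

The same logs also support depth-of-engagement metrics: counting distinct strategies tried and time spent before the first hint, rather than time-to-mastery alone.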
Principle 3: Enable Serendipitous Discovery
Learning environments should intentionally promote discovery beyond prediction. Borrowing from the design logic of libraries, we can foster “calculated randomness”—introducing content that connects indirectly yet meaningfully to a learner’s interests.
A student studying circuits might see parallels in poetry or logic; one learning about ecosystems might encounter indigenous ecological perspectives. These cross-disciplinary sparks sustain curiosity and intellectual openness.
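As a sketch of what “calculated randomness” could look like inside a recommendation pipeline, the Python below (function name and example content are illustrative assumptions, not an existing API) reserves a couple of slots in each feed for items drawn from adjacent domains instead of filling every slot from the relevance ranking.

```python
import random

def recommend_with_serendipity(ranked_items, adjacent_pool,
                               k=5, serendipity_slots=2, seed=None):
    """Blend top predicted-relevance items with a few picks from
    neighboring topics so the feed never collapses into pure prediction.

    ranked_items: content already sorted by predicted relevance.
    adjacent_pool: content from adjacent domains the ranker would not
        normally surface.
    """
    rng = random.Random(seed)
    core = ranked_items[: k - serendipity_slots]
    surprises = rng.sample(adjacent_pool, min(serendipity_slots, len(adjacent_pool)))
    feed = core + surprises
    rng.shuffle(feed)  # interleave so the unexpected items are not pushed to the end
    return feed

# A photosynthesis unit seeded with two deliberately lateral picks.
print(recommend_with_serendipity(
    ["leaf anatomy", "chlorophyll lab", "light spectra", "carbon cycle"],
    ["a poem about trees", "a forest soundscape project", "indigenous fire ecology"],
    seed=42))
```

How wide the “adjacent” pool should reach, and how many slots it deserves, are pedagogical choices rather than technical ones; the point is simply that serendipity has to be budgeted for, or optimization will quietly remove it.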
Principle 4: Make Learning Meaningful, Not Merely Engaging
True engagement arises from meaning, not mechanics. Mihaly Csikszentmihalyi (1990) showed that sustained attention comes when challenge and ability align in intrinsically rewarding ways—a state he called “flow.”
AI can support this by helping students connect learning to their own questions, communities, and aspirations—revealing education not as a checklist but as an evolving dialogue with the world.
Why Wonder Matters: Philosophical Commitments
Wonder as Ethical Stance
Martha Nussbaum (2016) describes wonder as an ethical stance—a willingness to be transformed by encounter. It cultivates humility, empathy, and respect for difference.
AI systems that emphasize rapid certainty risk training learners toward mastery without receptivity. In an age that demands nuance and compassion, the loss of wonder can erode not only curiosity but also civic empathy.
Moving Forward: A Call to Action
As AI becomes more sophisticated, we must decide what kind of learners—and what kind of people—we wish to cultivate. Systems optimized solely for efficiency may produce competent problem-solvers who no longer wonder.
To avoid that fate, we must design AI intentionally for curiosity:
- Value ambiguity over efficiency
- Protect productive confusion
- Encourage exploration beyond the predictable
- Treat time spent wondering as time well spent
Efficiency is not the highest educational virtue. The capacity to wonder—to approach the world with awe, humility, and imagination—remains the foundation of human flourishing.
If we can build AI that amplifies rather than erases that capacity, we will have done more than improve learning systems. We will have preserved what makes learning—and living—most fully human.
References
- Gopnik, A. (2010). How Babies Think: The Science of Childhood. Scientific American. https://alisongopnik.com/Papers_Alison/sciam-Gopnik.pdf
- Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press.
- Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.
- Kohn, A. (1993). Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes. Houghton Mifflin. https://www.mv.helsinki.fi/home/hotulain/Punished.pdf
- Zhao, Y. (2021). Learners Without Borders: New Learning Pathways for All Students. Corwin Press.
- D’Mello, S. K., & Graesser, A. (2012). Dynamics of affective states during complex learning. Learning and Instruction, 22(2), 145–157. https://doi.org/10.1016/j.learninstruc.2011.10.001
- Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row.
- Nussbaum, M. C. (2016). Not for Profit: Why Democracy Needs the Humanities. Princeton University Press. http://assets.press.princeton.edu/catalogs/S12Paper.pdf
