Who Is Adapting Whom?
The pace of artificial intelligence development has become a key driver of institutional, cognitive, and social change in the twenty-first century. The speed at which AI systems are deployed plays a pivotal role in determining whether human communities can meaningfully participate in shaping the technologies that govern their lives (Taeihagh, 2025; Marko et al., 2025). A growing body of literature recognizes this pace as an ethical and political question, not merely a technical one. Yet a foundational question at the heart of the transformation remains conspicuously underexplored: not how humans adapt to AI, but who, in this asymmetrical exchange, holds the power to determine the terms of that adaptation, and who does not.
This essay is offered as part of my ongoing commentary on artificial intelligence and society, written alongside colleagues and students at the University of Massachusetts Amherst, where I previously taught. I invite you to read it, alongside the broader body of work this journal is building on AI, learning, and human flourishing, as a philosophical provocation: an argument about power, pace, and the conditions under which genuine adaptation is possible.
I noticed it first with a writing tool I had used for months. One morning it was different: not dramatically, not broken, but subtly reorganized. New suggestions appeared where old ones had been. A familiar shortcut did something else. The interface had learned, presumably, from millions of interactions that were not mine. I had not been consulted. The tool I had come to trust had adapted, and I was expected to follow.
This is a small thing. I know it is a small thing. But it struck me as a precise miniature of something much larger, a pattern playing out across every domain of contemporary life. We are told, constantly, that we are adapting to AI. That we must adapt to AI. That the pace of AI development demands adaptation as a matter of survival: professional, institutional, civilizational. The language of adaptation is everywhere: reskilling, upskilling, future-proofing, staying relevant. The implicit image is of a vast and accelerating current, and of humans as swimmers who must either learn to move with it or be swept aside.
But here is the question I cannot stop asking: who, exactly, is adapting to whom?
There is a foundational assumption buried in the phrase “adapting to AI” that deserves to be surfaced. Adaptation, in its biological sense, is what organisms do in response to an environment. The environment is given; it does not care about the organism, does not respond to it, does not change in order to accommodate it. When we borrow this language for our relationship with AI, we import the same image: AI as environment, humans as organisms navigating something larger and more indifferent than themselves. But AI is not a mountain. AI systems (every large language model, every recommendation engine, every generative tool) are built from human traces — books we wrote, arguments we had, questions we asked, mistakes we made and documented. Bernard Stiegler (1998), the French philosopher of technology, called this tertiary retention: the externalization of memory into objects and systems that outlast the individual and then reshape those who encounter them. When I ask a language model a question, I am not querying a neutral database; I am querying a compression of the human record, a system that has learned to speak by ingesting what human beings have said. The adaptation, then, is not between humans and an alien intelligence. It is between humans and a reflection: compressed, distorted, accelerated, shaped by choices made by corporations and engineers, but fundamentally composed of us. When we adapt to AI, we are, in part, adapting to ourselves.
This would be a liberating insight if we controlled the reflection. We do not. Marshall McLuhan (1964), the Canadian media theorist, famously insisted that “the medium is the message,” meaning that what matters about a communications technology is not the content it carries but the perceptual habits it installs. The AI tools we use daily are not neutral delivery mechanisms. They reward certain kinds of queries and penalize others. They return confident answers, which trains us to prefer confidence to uncertainty. They summarize, which trains us to prefer conclusions to arguments. They are available instantly, which trains us toward impatience with anything that requires waiting, including thought.
The cognitive philosopher Andy Clark and the philosopher of mind David Chalmers (1998) showed in “The Extended Mind” that humans have always offloaded cognition to external scaffolding, and that this is not a corruption of human nature but an expression of it. But the notebook never had a business model. AI tools are rented, update without consent, and embed their makers’ priorities into the very scaffolding we are learning to depend on. As Donna Haraway (1985), the feminist philosopher of science, warned in A Cyborg Manifesto, whoever controls the terms of a human-machine merger determines whether it liberates or exploits.
The Problem Is Pace
Hartmut Rosa (2013), the German sociologist, in Social Acceleration, offers what I find to be the most precise diagnosis of our current condition. Rosa argues that modernity has been defined by a relentless acceleration across three registers: technological acceleration (things change faster), acceleration of social change (institutions and norms evolve faster), and acceleration of the pace of life (we attempt more in the same time). The pathology of late modernity, for Rosa, is not simply that we move fast; it is that we move so fast that we can no longer form resonant relationships with the world. Resonance is not about slowing down. It is about the capacity for genuine encounter with people, ideas, places, and practices, in which we are changed by what we meet, and we change it in return. Acceleration, at sufficient speed, destroys resonance. The world becomes a surface across which we skim rather than a depth into which we sink.
This is the phenomenology of AI adaptation as I have witnessed it, in myself and in the people I work with. We are not adapting, in any meaningful sense, to ChatGPT or Gemini or Claude. We are performing the behaviors required to use them effectively. We are learning the prompts. We are adjusting our expectations. But we are doing all of this faster than we can integrate it, faster than we can ask whether these new habits align with our values, faster than we can notice what we are losing as we gain new capabilities, faster than we can construct the shared cultural frameworks that would allow us to govern these tools rather than merely consume them. What passes for adaptation is often something closer to accommodation without integration: a layering of new behaviors on top of old selves that never had time to fully reconstitute. The person underneath these professional roles (the one who needs to ask whether lesson planning should be accelerated, whether drafting should be outsourced, whether any of this serves the kind of thinker they choose to be) is perpetually deferred. There is always another tool to learn before the philosophical reckoning can happen.
Here is the asymmetry that matters most. When AI companies train a model, they use reinforcement learning from human feedback: actual human responses, preferences, and judgments that shape the system’s behavior. In a very literal sense, AI adapts to us during training. Our collective choices about what responses are helpful and appropriate are baked into the model weights that subsequent versions of the system will carry. But we never see what this aggregate training has produced. The feedback loop is real, but it is asymmetric. The company receives the intelligence distilled from our adaptations. We receive a product update, a release note, a blog post. The power to set the pace, to determine the direction, to decide what the next version of this shared cognitive scaffolding will look like: that power is not distributed. It is concentrated.
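To make the asymmetry concrete, here is a deliberately toy sketch in Python of the loop just described. Every name in it is hypothetical, and it is drastically simplified: real RLHF pipelines train a reward model on ranked outputs and optimize a policy against it. The sketch preserves only the structure that matters for this argument: individual human judgments flow in and are aggregated privately, while only a finished, opaque release flows back out.

```python
# Toy illustration of the asymmetric feedback loop (not a real RLHF pipeline).
# All names here are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Provider:
    """Holds the model weights and the aggregated feedback. Both are private."""
    weights: float = 0.0                       # stand-in for billions of parameters
    feedback_log: list = field(default_factory=list)

    def collect(self, preference: float) -> None:
        # Every thumbs-up or thumbs-down a user gives is retained by the provider.
        self.feedback_log.append(preference)

    def retrain(self) -> None:
        # The provider sees the full distribution of human judgments...
        if self.feedback_log:
            self.weights += sum(self.feedback_log) / len(self.feedback_log)
            self.feedback_log.clear()

    def release(self) -> str:
        # ...while users receive only an opaque version string.
        return f"model-v{self.weights:.2f} (see release notes)"

provider = Provider()
for judgment in [1.0, -1.0, 1.0, 1.0]:         # users adapt the model at training time
    provider.collect(judgment)
provider.retrain()
print(provider.release())                      # users adapt to the model at use time
```

Note what the sketch makes visible: `collect` runs millions of times with our input; `release` is the only method whose output we ever see.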
The rest of us (educators, clinicians, journalists, citizens, learners) adapt as best we can to the releases we are given, on the timescales we are assigned. This is not adaptation in any sense that respects human agency. It is what I would call compelled accommodation: the performance of adjustment under conditions that were not designed with adjusters in mind.
Toward Resonant Adaptation
I am not arguing for the rejection of AI, or for a return to some imagined state of cognitive purity that never existed. The question has never been whether to adapt but how: on whose terms, at what pace, toward what ends.
Resonant adaptation, to borrow Rosa’s frame, would require several things we do not currently have. It would require pace-setting agency: not just for technologists, but for the communities most affected by AI deployment. In education, this means teachers and learners participating in decisions about which AI systems enter classrooms and how quickly, not merely receiving training on tools selected by administrators and vendors. It would require transparency about the feedback loop: the right to understand how your interactions with AI systems are being used to train future versions, and what those future versions are being trained toward. If AI adapts to us during training, we should be able to see the adaptation. Currently we cannot. Underlying both is a simpler principle: those most affected by a deployment should have a genuine voice in its terms. On this view, the refusal to adapt on demand is not a failure of competence but an exercise of judgment, the considered recognition that a given pace, or a given design, does not serve human purposes.
And it would require, perhaps most fundamentally, time to ask the philosophical question before the technological fact is accomplished. We are consistently presented with AI capabilities as faits accomplis, deployed at scale before the ethics, the governance, the pedagogy, and the cultural reckoning have had time to catch up. This is not inevitable. It is a choice, made by those who control the pace, in favor of moving fast. We can insist on a different choice. But insisting requires understanding that pace-setting is itself a form of power, and that surrendering it without question is itself a political act.
So: who is adapting whom? The honest answer is both, simultaneously, unevenly, and without adequate transparency about what the adaptation is producing. AI adapts to us at the moment of training, absorbing and reconstituting the patterns of human thought and expression. We adapt to AI at the moment of use, reshaping our habits of attention, our tolerances for uncertainty, our expectations of knowledge and speed. The loop is real. The question is who controls it, who profits from it, who bears the costs when the adaptation goes wrong, and who gets to decide when it has gone wrong. These are not technical questions. They are political, philosophical, and deeply human questions: the kinds of questions that require not faster adaptation but slower, more deliberate reflection. The kind of reflection that acceleration, by design, makes difficult.
The tool I use to write is different from the one I used six months ago. It will be different again in six months. I will adjust, as I always do. But I am trying to hold onto something that the pace of change is trying to dissolve: the question of whether this adjustment is mine, whether it reflects my values, whether it is moving me toward the kind of thinker and writer and person I have chosen to become, or merely toward the kind of user the next model update requires me to be.
That question (who is adapting whom?) is not a complaint. It is a demand. A demand that the power to shape our cognitive futures be held accountable to those whose cognitive futures are being shaped.
All of us. Which is everyone.
References
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7
Haraway, D. (1985). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. Socialist Review, 80, 65–108.
Marko, J. G. O., Neagu, C. D., & Anand, P. B. (2025). Examining inclusivity: The use of AI and diverse populations in health and social care: A systematic review. BMC Medical Informatics and Decision Making, 25(1), 57. https://doi.org/10.1186/s12911-025-02884-1
McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.
Rosa, H. (2013). Social acceleration: A new theory of modernity. Columbia University Press.
Stiegler, B. (1998). Technics and time, 1: The fault of Epimetheus. Stanford University Press.
Taeihagh, A. (2025). Governance of generative AI. Policy and Society, 44(1), 1–22. https://doi.org/10.1093/polsoc/puaf001
Cite this article
Gattupalli, S. (2026). Who Is Adapting Whom? Society and AI. https://societyandai.org/perspectives/who-is-adapting-whom/