
Cloud Capital and the Extraction of Mind

I find it rather amusing that we have spent decades debating whether machines will eventually think like humans, while quietly overlooking the more immediate question of whether humans are being trained to think like machines, or worse, to stop thinking altogether so that machines can do it for us, for a fee, indefinitely.

Your data rises. Their capital accumulates.

When I first encountered Paulo Freire’s Pedagogy of the Oppressed (Freire, 1970) as a doctoral student in 2019, I remember being struck by his description of the banking model of education, where teachers deposit information into students as though they were empty vessels waiting to be filled. Freire argued that this model dehumanizes learners by stripping them of agency, turning education into an act of domestication rather than liberation. The oppressed, he observed, often internalize the consciousness of the oppressor, accepting their own subjugation as natural and inevitable.

I find myself returning to Freire now, more than half a century after he wrote, because I notice unsettling resemblances between the dynamics he described and something emerging in our relationship with artificial intelligence. The structures have changed, the technologies are unrecognizable, but the underlying pattern of extraction and internalized dependence feels hauntingly familiar. What Freire identified in the classroom between teacher and student, I now see playing out between AI systems and their users, between cloud platforms and the billions of people whose cognitive labor feeds them.

The Birth of Cloud Capital

Economist Yanis Varoufakis (2024) offers a framework that helps clarify what many of us have sensed but struggled to articulate. In his book Technofeudalism: What Killed Capitalism, he introduces the concept of cloud capital, which he describes as a produced means, not of production in the traditional sense, but of behavioral modification. Unlike the factory machinery of industrial capitalism that manufactured physical goods, cloud capital manufactures something else entirely. It manufactures desire. It shapes preference. It extracts value from the very act of human cognition.

Consider what happens when I speak to a voice assistant, scroll through a recommendation feed, or ask an AI tutor for help with a problem. I am not merely using a tool in any conventional sense. I am training an algorithm that learns to know me, and as Varoufakis explains, I am simultaneously being trained by it to help it know me better. The process is circular and self-reinforcing. The more I interact, the more the system learns. The more it learns, the better it predicts. The better it predicts, the more effectively it can modify my behavior by recommending products I did not know I wanted, surfacing content designed to keep me engaged, and gradually shaping the very contours of my attention and desire.

Freire would recognize this pattern immediately. In the banking model, the teacher knows and the student receives. In the cloud capital model, the algorithm knows and the user provides. Both relationships are presented as benevolent, even empowering. Both, upon closer examination, reveal asymmetries that systematically benefit one party at the expense of the other’s autonomy.

Cognitive Labor Without Wages

What troubles me most as I reflect on this arrangement is recognizing that what I do when I interact with AI systems constitutes labor. When I upload photographs, compose messages, correct autocomplete suggestions, or engage with AI-generated content, I am performing cognitive work that adds directly to the cloud capital of platform owners. Yet unlike, say, a factory worker who at least receives a wage in exchange for labor, I receive only the illusion of free services. The transaction is obscured by convenience.

The asymmetry becomes stark when you examine the economics. Varoufakis points out that in traditional capitalist enterprises, roughly 80% of revenue historically flowed to wages across the organization, from janitors to executives. In companies built on cloud capital, that figure can drop to as low as 1%. The remainder is extracted, routed through elaborate international tax structures, and accumulated by a vanishingly small ownership class. This is not a “bug” in the system or an unfortunate side effect. This is the system functioning exactly as designed.
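
To make that arithmetic concrete, here is a minimal sketch in Python. The $100 billion revenue figure and the firm labels are invented for illustration; only the wage shares, roughly 80 percent versus as low as 1 percent, come from Varoufakis's comparison above.

    # A minimal sketch of the wage-share arithmetic. The revenue figure and
    # firm labels are hypothetical; only the shares (roughly 80% for a
    # traditional firm, as low as 1% for a cloud-capital firm) come from
    # the essay's summary of Varoufakis.

    def split_revenue(revenue: float, wage_share: float) -> tuple[float, float]:
        """Return (wages paid, revenue retained by owners)."""
        wages = revenue * wage_share
        return wages, revenue - wages

    revenue = 100e9  # $100 billion in annual revenue (illustrative)

    for label, share in [("traditional firm", 0.80), ("cloud-capital firm", 0.01)]:
        wages, retained = split_revenue(revenue, share)
        print(f"{label}: ${wages / 1e9:.0f}B to wages, ${retained / 1e9:.0f}B retained")

On those illustrative numbers, the traditional firm returns $80 billion to its workforce while the cloud-capital firm returns $1 billion, leaving $99 billion to be routed onward and accumulated.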

When educators celebrate AI tools that personalize learning or adapt to student needs, we rarely pause to ask who benefits from the data students generate in those interactions. Every engagement a child has with an AI tutor, every mistake and hesitation and breakthrough, becomes training data that increases the value of cloud capital owned by corporations headquartered thousands of miles away. The student receives feedback. The corporation receives capital. Freire’s banking model has found a new institutional form, one that deposits not just curriculum but consciousness itself into systems that students do not own and cannot govern.

From Markets to Fiefdoms

Varoufakis argues that what we are witnessing is not merely a new phase of capitalism but its replacement by something older and in some ways more troubling: feudalism, digitized and scaled for the twenty-first century. Traditional markets, which at least in theory functioned as spaces where buyers and sellers meet as nominal equals, are being supplanted by what he describes as cloud fiefdoms. Amazon, in this analysis, is not really a marketplace at all. It is a digital estate where producers and consumers operate within walls controlled by a single lord who extracts rent from every transaction that occurs within his domain.

The implications for education are profound and deserve far more attention than they currently receive. As AI systems increasingly mediate learning, we must ask ourselves some uncomfortable questions. Are we building educational marketplaces where learners can freely choose among options and maintain agency over their own development? Or are we constructing digital fiefdoms where a handful of platform owners determine what knowledge is surfaced, what skills are valued, what questions are worth asking, and ultimately whose interests are served?

The consolidation of AI capabilities in a small number of corporations, each accumulating cloud capital through billions of daily interactions, represents a concentration of cognitive infrastructure that has no real precedent in human history. We have seen concentrations of economic power before. We have seen concentrations of political power. But a concentration of the very infrastructure through which cognition itself is increasingly mediated raises questions that our existing frameworks struggle to address.

The Cognitive Dimension

There is another extraction happening alongside the economic one, and it receives far less attention in public discourse. When I outsource cognitive tasks to AI, when I let the machine remember things I used to remember, calculate things I used to calculate, compose text I used to compose, and make decisions I used to make myself, I may be surrendering not just data but capacity.

Cognitive scientists have long understood that mental capabilities, like muscles, require exercise to maintain their strength. The research on desirable difficulties by Bjork and Bjork (2011) demonstrates that struggle is not an obstacle to learning but rather its essential mechanism. When AI removes friction, answering questions before I have fully formulated them and completing thoughts before I have thought them through, the convenience may come at a cost: it may remove the very conditions under which cognitive development actually occurs.

This creates a troubling possibility that I keep returning to in my own thinking: cloud capital grows through my cognitive labor while my cognitive capacity may simultaneously diminish through disuse. I train the machine, and the machine trains me to need it more. The asymmetry compounds over time.
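
One way to see how the asymmetry compounds is a deliberately stylized toy model, sketched below. Every parameter here is invented for illustration; the point is only the shape of the two trajectories, where the platform's predictive capital compounds with each interaction while unexercised capacity slowly erodes.

    # A stylized toy model of the compounding asymmetry: each daily
    # interaction adds to the platform's predictive "cloud capital" while
    # the user's unexercised capacity decays slightly. All rates are
    # invented for illustration; this is not an empirical claim.

    cloud_capital = 1.0   # platform's accumulated predictive power (arbitrary units)
    capacity = 1.0        # user's independent cognitive capacity (arbitrary units)

    LEARN_RATE = 0.01     # platform gain per interaction
    DECAY_RATE = 0.002    # capacity lost to disuse per interaction

    for day in range(1, 361):
        cloud_capital *= 1 + LEARN_RATE   # compounds: each interaction builds on the last
        capacity *= 1 - DECAY_RATE        # atrophies: unused capability slowly erodes
        if day % 90 == 0:
            print(f"day {day:3d}: cloud capital {cloud_capital:5.2f}, capacity {capacity:.2f}")

Nothing about the real dynamics is this tidy, of course, but the qualitative pattern matches the argument: one curve compounds upward while the other drifts downward, and the widening gap between them is the asymmetry.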

Freire wrote about how the oppressed internalize the consciousness of the oppressor, coming to see themselves through the oppressor’s eyes and accepting their diminished status as natural. I wonder whether something analogous happens when we become dependent on AI systems. Do we begin to see our own cognition as inadequate, our own thinking as too slow, our own memory as unreliable? Do we internalize a view of ourselves as cognitive inferiors who naturally require algorithmic assistance? And if so, who benefits from that internalization?

Who Benefits?

Zooming out to see the full picture requires asking a question that technology discourse often obscures beneath layers of enthusiasm about innovation and disruption. That question is simply: who benefits from this arrangement?

The owners of cloud capital benefit in ways that are measurable and enormous. Their wealth compounds with every interaction, every data point, every refinement of algorithmic prediction. The numbers are public and staggering.

Users receive convenience, efficiency, and the seductive comfort of systems that anticipate their needs. These are real benefits that I experience myself daily. But convenience has costs that rarely appear on balance sheets, and efficiency must always prompt the question: efficient for whom, and toward what ends?

What of the broader society? Here the accounting becomes more complex. When somewhere between 20 and 30 percent of economic value, by Varoufakis’s estimation, is siphoned from the circular flow of income into the accounts of cloud capital owners, aggregate demand suffers. The quality of available jobs degrades as traditional employment is replaced by platform-mediated gig work. Economic inequality widens into a chasm. And the political power that inevitably accompanies such concentrated wealth begins to reshape governance itself in ways we are only beginning to understand.

For education specifically, the stakes feel existential to me. If AI systems continue to centralize cognitive infrastructure in the hands of a few corporations, we risk creating a world where the very capacity to think independently becomes something like a luxury good. It would be available to those who can afford to cultivate it deliberately, and atrophied in those who cannot escape the frictionless embrace of algorithmic mediation. This is not a distant dystopia. It is a trajectory we are already on.

The Question Before Us

I want to be clear that I do not write this as someone who opposes or resists new and emerging technologies. I use AI tools daily in my research and teaching. I see their genuine potential to democratize access to knowledge, to personalize learning in ways that honor individual difference, and to augment human capability rather than simply replace it. The technology itself is not the problem.

But potential is not destiny, and the same technologies that could liberate can also extract. The same systems that could empower learners can also enclose them within digital fiefdoms. The question before us is not whether AI will transform education, because it already is transforming education whether we attend to it or not. The question is whether that transformation will serve the flourishing of all learners or the accumulation of capital by the few.

Varoufakis makes a point that I think is essential for educators to understand. This is not fundamentally a personal matter, not a question of individual choices about which apps to use or which platforms to avoid. Even if you do not have a smartphone, even if you try to disconnect from the digital infrastructure entirely, you cannot really escape because you are living in an environment in which the quality of jobs has been depleted, in which power has become immensely concentrated in the hands of the very few. The effects are structural and systemic, not individual and personal.

The cognitive gym I will propose in subsequent essays, the deliberate cultivation of mental capacity in an age of outsourcing, is necessary but insufficient on its own. Individual practice cannot substitute for collective action. Personal cognitive fitness cannot compensate for structural extraction. We need both.

What we need is not merely to exercise our minds but to reclaim the infrastructure through which cognition itself is increasingly mediated. That is a political project as much as a pedagogical one. It requires the kind of critical consciousness that Freire called conscientização, an awakening to the structures of oppression and an understanding that those structures can be changed through collective action.

The cloud is not neutral territory any more than the classroom was neutral in Freire’s analysis. It is owned. It is governed. And what is owned and governed can be owned and governed differently, if we have the clarity to see what is happening and the collective will to demand alternatives.

References

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher & J. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). Worth Publishers.

Ferenczy, B. (2024). Cloud generator [Interactive visualization]. CodePen. https://codepen.io/BalintFerenczy/pen/qENdpoL

Freire, P. (1970). Pedagogy of the oppressed. Continuum.

Varoufakis, Y. (2024). Technofeudalism: What killed capitalism. Melville House.


Cite this article: Gattupalli, S. (2026). Cloud Capital and the Extraction of Mind. Society and AI. https://societyandai.org/perspectives/cloud-capital-extraction-of-mind/
