
Is AI Disciplining Society?

Discipline has always been society’s quiet architect. Who gets watched? Who gets judged? The image above confronts us with discipline’s oldest pattern: anonymous authority directing its gaze at those with the least power to refuse it.

Long before algorithms, institutions shaped behavior through observation, normalization, and correction. Schools taught children when to speak and when to remain silent. Factories trained workers to synchronize their bodies with machines. Prisons isolated those who failed to conform. Michel Foucault (1975/1977) called this the disciplinary society—a world where power operates not through visible violence but through subtle mechanisms that train individuals to regulate themselves. The genius of modern discipline, Foucault argued, is that it eventually becomes invisible. We internalize the rules until they feel like common sense, until obedience feels like choice.

I have come to believe that artificial intelligence represents the most sophisticated disciplinary apparatus humanity has ever constructed. Not because it wields force, but precisely because it does not need to. The algorithmic gaze is constant, distributed, and—most critically—welcomed into our lives as convenience.

Consider how algorithmic systems now mediate nearly every consequential decision in contemporary life. Credit scores determine who can access housing. Hiring algorithms filter who receives employment. Content moderation systems decide whose speech circulates and whose disappears. Predictive policing tools direct where law enforcement concentrates its gaze. In each case, the algorithm encodes rules—often opaque, frequently contested, rarely democratically determined—and enforces compliance through consequences that feel automatic, inevitable, even natural. This is the rise of algorithmic societies made manifest: governance by code rather than consent.

What makes AI discipline distinctive is its scale, speed, and invisibility.

Jeremy Bentham’s panopticon required physical architecture—a tower from which guards might observe prisoners at any moment. The uncertainty of being watched, Bentham theorized, would compel self-regulation. Today’s algorithmic panopticon requires no tower. It is distributed across data centers and embedded in devices we carry willingly. Shoshana Zuboff (2019) describes this as surveillance capitalism: an economic logic that extracts behavioral data not merely to observe but to predict and modify human action. We are not simply watched; we are shaped. The system learns what prompts us to click, to buy, to comply—and it optimizes relentlessly for those outcomes.

The punishment for non-compliance is often invisible until it arrives. There is no trial, no appeal, no moment of confrontation with authority. The loan is simply denied. The resume never reaches human eyes. The post silently vanishes. The algorithm renders its judgment, and life proceeds as if nothing happened—except for those who bear the weight of exclusion.

Virginia Eubanks (2018) documented how automated systems in welfare, housing, and child protective services create digital poorhouses—algorithmic enclosures that trap vulnerable populations in cycles of denial and surveillance. Safiya Umoja Noble (2018) named this technological redlining—the way search engines perpetuate racist and sexist representations, disciplining users into accepting distorted visions of the world as objective truth. Cathy O’Neil (2016) named these systems weapons of math destruction: opaque models that punish the poor, reinforce inequality, and resist accountability precisely because they are assumed to be neutral. The question my colleague Joan Giovannini and I raised in Is the AI-Generation AI-Damaged? extends here: what happens to a society whose members are perpetually shaped by systems they cannot see, cannot question, cannot refuse?

This is not a problem that technical solutions alone can address.

The disciplinary function of AI is not a bug to be patched but a feature that serves particular interests. Platforms profit from engagement algorithms that reward outrage and conformity. Employers benefit from hiring tools that filter for compliance. States embrace predictive systems that promise order without the friction of democratic deliberation. Frank Pasquale (2015) called this the black box society—a world where consequential decisions are made by systems whose logic remains hidden from those they affect. To ask whether AI is disciplining society is to ask who benefits from that discipline—and who bears its costs. As I have argued in exploring cloud capital and the extraction of mind, the asymmetry is not accidental. It is structural.

What would it mean to resist algorithmic discipline? And is resistance even possible when the disciplinary apparatus knows us better than we know ourselves?

Foucault reminded us that where there is power, there is resistance. But resistance requires awareness, and awareness requires making the invisible visible. This is why I advocate for society-centered AI—frameworks that place affected communities at the center of design, deployment, and governance. Resistance means building institutions that can interrogate algorithmic power: auditing systems for bias, demanding transparency, creating spaces where human judgment is not subordinated to machinic efficiency.

The discipline imposed by AI is not inevitable. It is a choice—made by designers, investors, policymakers, and ultimately by societies willing or unwilling to accept algorithmic governance as the price of convenience. Aldous Huxley (1932) warned that the most effective tyrannies are those we welcome, those that arrive dressed as comfort and efficiency. We are not yet in Huxley’s world. But we are closer than most recognize.

The question is not whether AI can discipline society. It already does. The question is whether we will remain its subjects—or become its authors.


References

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. https://virginia-eubanks.com/automating-inequality/

Foucault, M. (1977). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). Vintage Books. (Original work published 1975) https://www.penguinrandomhouse.com/books/55026/discipline-and-punish-by-michel-foucault-and-alan-sheridan/

Huxley, A. (1932). Brave new world. Chatto & Windus. https://www.huxley.net/bnw/

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. https://nyupress.org/9781479837243/algorithms-of-oppression/

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown. https://www.penguinrandomhouse.com/books/241363/weapons-of-math-destruction-by-cathy-oneil/

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://www.hup.harvard.edu/books/9780674970847

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. https://www.hbs.edu/faculty/Pages/item.aspx?num=56791


Cite this article

Gattupalli, S. (2026). Is AI Disciplining Society? Society and AI. https://societyandai.org/perspectives/is-ai-disciplining-society/

