Each year in Las Vegas, the Consumer Electronics Show (CES) transforms the desert into a cathedral of innovation. Technology companies from around the world converge to unveil their latest creations, and this year’s exhibition was saturated with one theme above all others: artificial intelligence. From refrigerators that suggest recipes to automobiles that anticipate driving conditions, from headphones that adapt to ambient noise to mirrors that analyze your skin, the message was unmistakable. We are living in an AI-saturated world, where intelligence has become ambient, embedded in the objects that surround us.
But here is what the gleaming displays and polished presentations obscure: the vast majority of these devices are not self-contained. They are portals. The intelligence does not reside in the refrigerator or the headphones or the mirror. It resides in data centers scattered across continents, humming with servers that consume electricity measured in terawatt-hours and water measured in billions of gallons. Every question we ask our smart speakers, every image we generate, every document we summarize travels through fiber optic cables to these facilities, where massive language models process our requests and return responses. The magic happens elsewhere—out of sight and, increasingly, out of mind. We interact with the interface, never seeing the infrastructure: the cooling towers, backup generators, and reservoirs drained to keep silicon from overheating.
The Disclaimer You See
We use AI tools in our own work and research every day. If you use ChatGPT, Claude, Gemini, or any of the major AI chatbots now embedded in daily life, you have likely noticed a small line of text beneath the input field. In ChatGPT, it reads:
“ChatGPT can make mistakes. Check important info.”
ChatGPT’s accuracy disclaimer appears beneath every query.
Claude displays a similar warning about potential errors.
This acknowledgment represents a form of corporate humility. Companies are admitting, in plain language, that the outputs may not be accurate. They ask users to exercise judgment, verify claims, and remain skeptical. This is commendable. Accuracy matters, and misinformation causes real harm.
But we think accuracy is not the only thing that matters.
The Disclaimer You Do Not See
What if, directly beneath that accuracy warning, another line appeared? Something like:
“This response consumed approximately 0.3 watt-hours of electricity and contributed to water usage in data center cooling systems. Learn more about AI’s environmental impact.”
You will not find such a statement on any major AI platform today. And yet the environmental costs are neither speculative nor trivial.
OpenAI CEO Sam Altman has publicly disclosed that the average ChatGPT query consumes approximately 0.34 watt-hours of electricity—roughly what a high-efficiency lightbulb uses in a couple of minutes—and about 0.000085 gallons of water, roughly one-fifteenth of a teaspoon (Altman, 2025). While these figures appear modest in isolation, scale transforms individual efficiency into collective consequence.
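A quick back-of-envelope calculation makes the scale argument concrete. The sketch below reuses Altman's published per-query averages; the assumed volume of one billion queries per day is a hypothetical round number chosen purely for illustration, not a disclosed statistic:

```python
# Back-of-envelope scaling of the per-query figures cited above.
# Per-query constants come from Altman (2025); the query volume is a
# hypothetical round number used purely for illustration.

WH_PER_QUERY = 0.34               # watt-hours per average ChatGPT query
GAL_PER_QUERY = 0.000085          # gallons of cooling water per query
QUERIES_PER_DAY = 1_000_000_000   # assumed volume, illustration only

daily_energy_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6    # Wh -> MWh
yearly_energy_gwh = daily_energy_mwh * 365 / 1e3           # MWh -> GWh
daily_water_gal = GAL_PER_QUERY * QUERIES_PER_DAY
yearly_water_mgal = daily_water_gal * 365 / 1e6            # gal -> millions of gal

print(f"{daily_energy_mwh:,.0f} MWh/day -> {yearly_energy_gwh:,.0f} GWh/year")
print(f"{daily_water_gal:,.0f} gal/day -> {yearly_water_mgal:,.0f} million gal/year")
# 340 MWh/day -> 124 GWh/year
# 85,000 gal/day -> 31 million gal/year
```

Even under that single assumption, figures measured in fractions of a watt-hour and fifteenths of a teaspoon compound into hundreds of megawatt-hours and tens of thousands of gallons every day.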
Independent research points the same way. Although the state of the art has since advanced to GPT-5, Li et al. (2023) reported that GPT-3 consumed approximately 500 milliliters of water for every 10–50 responses, with usage varying by deployment location and cooling efficiency. And as CES showcases an ever-expanding ecosystem of AI-enabled devices, this dependence on remote infrastructure will only intensify: as AI underpins more everyday appliances and services, seemingly negligible per-query costs scale into a staggering cumulative impact.
According to Food & Water Watch (2025), by 2028 AI-related data centers in the United States could require as much as 720 billion gallons of water annually just for cooling—enough to meet the indoor water needs of 18.5 million households.
The carbon footprint is similarly alarming. de Vries-Gao (2025) estimates that AI systems alone could be responsible for between 32.6 and 79.7 million metric tons of CO₂ emissions in 2025—comparable to the annual emissions of a major city like New York.
The Asymmetry of Transparency
The contrast is stark. AI companies have decided that users need to know about potential inaccuracies, and they have embedded disclaimers directly into interfaces—small but visible reminders that the technology is fallible. This represents a choice: a decision that certain information is important enough to surface.
Why, then, does environmental impact not meet the same threshold?
If companies believe users deserve to know when AI might produce an error in a letter or calculation, surely users also deserve to know when their interaction contributes to resource depletion and carbon emissions. The logic is identical: transparency enables informed decision-making. Silence on environmental costs suggests that accuracy is a consumer concern worth addressing, while ecological impact is not.
Toward Meaningful Disclosure
We are not arguing that AI should be abandoned. We use these tools ourselves in daily work. They offer genuine value in education, research, accessibility, and countless other domains. But value does not exempt technology from accountability.
What we propose is deceptively simple: environmental transparency at the point of interaction. Every chat interface should display—alongside accuracy disclaimers—an estimate of the energy consumed, water used, and carbon emitted per session or query. This could appear as a per-session summary, aggregate dashboard, or real-time indicator. The data already exists within company infrastructure.
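To show how little machinery such a disclosure would require, here is a minimal sketch. It reuses the published per-query averages cited above; the function name and the per-session framing are our own illustrative assumptions, not any vendor's actual interface:

```python
# Illustrative sketch of a per-session environmental disclosure.
# Constants reuse the published per-query averages cited above; the
# session_footprint function shown here is hypothetical.

WH_PER_QUERY = 0.34       # watt-hours per query (Altman, 2025)
GAL_PER_QUERY = 0.000085  # gallons of cooling water per query (Altman, 2025)

def session_footprint(num_queries: int) -> str:
    """Return a one-line disclosure string for a chat session."""
    wh = num_queries * WH_PER_QUERY
    gal = num_queries * GAL_PER_QUERY
    tsp = gal * 768  # 1 US gallon = 768 teaspoons
    return (f"This session used an estimated {wh:.2f} Wh of electricity "
            f"and about {tsp:.2f} teaspoons of cooling water.")

# Example: a 20-query session
print(session_footprint(20))
# -> This session used an estimated 6.80 Wh of electricity
#    and about 1.31 teaspoons of cooling water.
```

A production version would draw on measured, model-specific telemetry and regional grid carbon intensity rather than fleet-wide averages, but the arithmetic itself is trivial once the underlying data is surfaced.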
Such disclosure would not deter legitimate use. But it would invite reflection. It would encourage efficiency—prompting users to consolidate queries rather than treating AI as an infinite resource. And it would create market pressure for companies to reduce environmental impact, knowing that users could see the cost.
In an era of climate crisis, we cannot afford to treat AI’s footprint as an externality. The companies building these systems have a responsibility to make the invisible visible.
The next time you see “ChatGPT can make mistakes,” ask yourself:
What else are they not telling you?
References
Altman, S. (2025, June 10). The gentle singularity. Sam Altman’s Blog. https://blog.samaltman.com/the-gentle-singularity
de Vries-Gao, A. (2025). The carbon and water footprints of data centers and what this could mean for artificial intelligence. Patterns. https://doi.org/10.1016/j.patter.2025.101430
Food & Water Watch. (2025). Artificial intelligence: Big Tech’s big threat to our water and climate. https://www.foodandwaterwatch.org/2025/04/09/artificial-intelligence-water-climate/
Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv preprint arXiv:2304.03271. https://arxiv.org/abs/2304.03271