A new study in Nature Sustainability asks what happens if the current generative-AI boom continues and the USA fills up with AI servers by 2030. Spoiler: the environmental tab is huge, and “AI for the environment” will only make sense if we clean up how we run AI itself.
The authors estimate that, between 2024 and 2030, AI servers in the US alone could use 731–1,125 million cubic meters of water every year and emit an extra 24–44 million tonnes of CO₂-equivalent.
To picture that: the water is enough to fill roughly 290,000 to 450,000 Olympic-size swimming pools each year (at ~2,500 m³ per pool). The CO₂ is comparable to around 14–26 million round-trip flights from Paris to New York per year (about 1.7 tCO₂e per passenger per round trip).
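For readers who want to check the arithmetic, here is a minimal Python sketch of the conversion, using the same assumptions as above (~2,500 m³ per Olympic pool, ~1.7 tCO₂e per Paris–New York round trip):

```python
# Back-of-envelope conversion of the study's projected annual US AI-server
# footprint into the everyday equivalents quoted above.

WATER_M3_PER_YEAR = (731e6, 1_125e6)   # 731-1,125 million m^3 of water per year
CO2_TONNES_PER_YEAR = (24e6, 44e6)     # 24-44 Mt CO2-equivalent per year

OLYMPIC_POOL_M3 = 2_500                # ~2,500 m^3 per Olympic-size pool
FLIGHT_TCO2E = 1.7                     # ~1.7 tCO2e per Paris-NY round trip

pools = [w / OLYMPIC_POOL_M3 for w in WATER_M3_PER_YEAR]
flights = [c / FLIGHT_TCO2E for c in CO2_TONNES_PER_YEAR]

print(f"Pools per year:   {pools[0]:,.0f} to {pools[1]:,.0f}")      # ~292,400 to 450,000
print(f"Flights per year: {flights[0]:,.0f} to {flights[1]:,.0f}")  # ~14.1M to 25.9M
```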
And this is just for AI servers in one country.
Why does this happen? Because training and running large machine-learning models take a lot of electricity, and that electricity still mostly comes from a grid that burns fossil fuels and withdraws large volumes of water to cool power plants. This is the energy–water–climate nexus of AI: every new GPU is indirectly plugged into rivers, aquifers and the atmosphere.
The good news: the study shows there is big room for improvement. Adopting best practices in data-centre efficiency (better cooling, smarter server utilization), improving “power usage effectiveness” (the ratio of total facility electricity to the electricity that actually reaches the computing hardware) and lowering “water usage effectiveness” (litres of water consumed per kilowatt-hour of computing) could cut the footprint of AI servers significantly. In some scenarios, operational emissions and water footprints can be reduced by up to 73% and 86% compared with business as usual, though real-world constraints in today’s energy infrastructure limit how far we can go.
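To make those two metrics concrete, here is a purely illustrative Python sketch. The cluster size, grid carbon intensity and the PUE/WUE values below are hypothetical assumptions chosen for the example, not figures from the study:

```python
# Illustrative example of how PUE and WUE scale a data centre's operational
# footprint. PUE = total facility energy / IT equipment energy (lower is
# better, 1.0 is ideal); WUE = on-site water use in litres / IT energy in kWh.

IT_ENERGY_KWH = 100e6        # hypothetical cluster drawing 100 GWh/year of IT load
GRID_KGCO2_PER_KWH = 0.4     # hypothetical grid carbon intensity (kg CO2e/kWh)

def footprint(pue: float, wue_l_per_kwh: float) -> tuple[float, float]:
    """Return (tonnes CO2e/year, m^3 water/year) for a given PUE and WUE."""
    total_kwh = IT_ENERGY_KWH * pue
    co2_tonnes = total_kwh * GRID_KGCO2_PER_KWH / 1_000
    water_m3 = IT_ENERGY_KWH * wue_l_per_kwh / 1_000
    return co2_tonnes, water_m3

bau = footprint(pue=1.5, wue_l_per_kwh=1.8)    # typical evaporative/air cooling
best = footprint(pue=1.1, wue_l_per_kwh=0.2)   # efficient, low-water cooling

print(f"Business as usual: {bau[0]:,.0f} tCO2e, {bau[1]:,.0f} m^3 water")
print(f"Best practice:     {best[0]:,.0f} tCO2e, {best[1]:,.0f} m^3 water")
print(f"Reductions: {1 - best[0]/bau[0]:.0%} CO2, {1 - best[1]/bau[1]:.0%} water")
```

Even in this toy example, the water cut runs much deeper than the carbon cut: cooling choices (WUE) can slash on-site water use, but emissions stay bounded by how clean the grid itself is, which echoes the study’s point about infrastructure constraints.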
Location also matters a lot. Putting the same AI cluster in a different US state can mean very different water and carbon footprints, depending on the local grid mix (coal vs. wind/solar vs. hydropower) and local water scarcity. The authors find that states such as Texas, Montana, Nebraska and South Dakota look particularly promising for more sustainable AI data centres, because they combine strong renewable energy potential with less water stress than some western states.
But even with better cooling, smarter siting and more renewables, the paper concludes that AI is unlikely to reach genuine net-zero water and carbon by 2030 unless companies rely heavily on compensation mechanisms like carbon offsets and water restoration projects. These tools are useful, but also uncertain and sometimes opaque.
This is a sobering reminder: responsible AI is not just about fairness or safety; it is also about the physical infrastructure underneath our models. If we want AI for the environment, we need energy systems that are actually green, transparent reporting from AI companies, and regulation that nudges data-centre growth toward low-carbon and low-water locations.
In other words: each new model deployment is not just a line of code; it’s a potential swimming pool and a plane ticket. As AI engineers, policymakers and users, we all have a role in shrinking both.
Oscar Rodríguez, PhD is currently an AI Engineer at LumApps (France). He holds a Ph.D. in Computer and Control Engineering from Politecnico di Torino, with a focus on AI. His expertise includes Knowledge Representation and Reasoning, Natural Language Processing (LLMs) and Machine/Deep Learning. As president of Greenminds, he applies AI to drive sustainability initiatives.
