AI’s promise for climate action is tempered by its environmental and ethical costs. In their 29 February 2024 Nature commentary “How to make AI sustainable”, Indervir Singh Banipal and Sourav Mazumder detail AI’s hidden carbon and water footprints, along with its gaps in transparency, and offer targeted strategies to ensure that AI serves sustainability rather than undermining it.
The Environmental Toll of Scale
Banipal & Mazumder open by quantifying AI’s staggering resource demands. A world using generative AI daily could emit up to 47 million tonnes of CO₂ per year—a 0.12 percent bump in global emissions. Even a modest chatbot deployed in one call centre (50 agents, four users per hour) emits about 2,000 tonnes of CO₂ annually, while water needed for cooling could match the annual fluid intake of 328 million adults. These figures spotlight a paradox: digital solutions touted for environmental progress can generate significant ecological burdens.
The Ethics of the Black Box
Beyond energy and water, AI’s opacity poses ethical risks. Modern AI models are often inscrutable, making it difficult to detect biases or errors. Banipal & Mazumder note that this lack of transparency not only raises fairness concerns—illustrated by the New York Times lawsuit against OpenAI and Microsoft over alleged unattributed training on copyrighted text—but also hampers trust and adoption across industry and academia.
Charting a Sustainable Path
Recognizing these challenges, the authors showcase a suite of mitigation strategies:
- Domain-Specific Models: Instead of ever-larger foundation models, focusing on lean, specialized AI reduces compute and data needs, curbing environmental impact while still addressing real-world problems.
- Prompt Engineering & Fine-Tuning: Refining inputs and adapting pre-trained models to niche tasks minimizes compute cycles, translating directly into carbon savings.
- Efficient Deployment Techniques: Quantization, knowledge distillation and client-side caching enable sophisticated AI on resource-limited devices, slashing energy use at the edge (see the quantization sketch after this list).
- Green Data Centres: Migrating workloads to cloud facilities powered by renewable energy and advanced cooling systems lowers the carbon footprint of AI operations.
- Transparency Metrics: The Foundation Model Transparency Index, devised by researchers from Stanford, MIT and Princeton, scores models on 100 indicators covering how they are built, how they function and how they are used downstream, laying groundwork for accountability (a toy scoring sketch follows below).
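As a taste of how one of these deployment techniques works in practice, here is a minimal sketch of post-training dynamic quantization using PyTorch. The toy model is our own stand-in; the commentary names quantization as a technique but prescribes no particular library or workflow.

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# The toy model is a stand-in for illustration; the commentary names
# quantization as a technique but prescribes no specific tooling.
import io

import torch
import torch.nn as nn

# Hypothetical stand-in network; replace with a real model.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
)

# Convert Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_bytes(m: nn.Module) -> int:
    """Serialized size of a model's parameters, in bytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32 model: {size_bytes(model):,} bytes")
print(f"int8 model: {size_bytes(quantized):,} bytes")  # roughly 4x smaller
```

Dynamic quantization converts weights to int8 ahead of time and quantizes activations on the fly, trading a small accuracy loss for roughly a fourfold reduction in weight memory and cheaper integer arithmetic at inference.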
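And to show how an indicator-based index produces a score, here is a toy sketch; the three indicators are paraphrased stand-ins of ours, though the real index does aggregate 100 binary indicators into a 0–100 score.

```python
# Toy sketch of indicator-based transparency scoring. The indicator
# names are paraphrased stand-ins, not the index's actual wording;
# the real index aggregates 100 binary indicators.
def transparency_score(indicators: dict[str, bool]) -> float:
    """Percentage of satisfied indicators, on a 0-100 scale."""
    return 100 * sum(indicators.values()) / len(indicators)

# Hypothetical assessment of one model release.
assessment = {
    "training data sources disclosed": False,
    "compute and energy use reported": True,
    "downstream usage policy published": True,
}
print(f"score: {transparency_score(assessment):.0f}/100")  # score: 67/100
```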
Why This Matters
Banipal & Mazumder’s article arrives at a pivotal moment. As governments adopt AI for climate monitoring and corporations rush to integrate generative tools, unchecked growth threatens to offset sustainability gains from AI-enabled innovations. By spotlighting both environmental metrics and ethical frameworks, the authors bridge a critical gap between AI research and environmental engineering practice.
Strengths, Gaps and Future Directions
The commentary excels at marrying quantitative estimates with actionable guidelines, resonating with GreenMinds’ mission to demystify AI’s environmental trade-offs. Yet open questions remain: How can policymakers enforce transparency standards? What incentives will drive companies to adopt green-AI practices at scale? And can emerging hardware paradigms, such as analog or in-memory computing, deliver on both performance and sustainability promises? These questions chart the future of Green AI research and policy.
Oscar Rodríguez, PhD, is an AI Engineer at LumApps (France). He holds a PhD in Computer and Control Engineering from Politecnico di Torino, with a focus on AI. His expertise spans Knowledge Representation and Reasoning, Natural Language Processing (LLMs) and Machine/Deep Learning. As president of GreenMinds, he applies AI to drive sustainability initiatives.
