Thirsty AI: The Secret Water Footprint of AI Models
AI's unhealthy appetite for water is slightly concerning
Welcome to another edition of the Sustainable Intelligence newsletter.
This is where I try to explore the intersection of artificial intelligence and environmental sustainability. My mission is to share my observations across these core themes: AI for Good, Ethical AI, Sustainable AI and AI in Sustainability. I hope to inform, inspire and engage readers interested in the potential of AI to drive positive change for both people and the planet.
Here’s what’s in store for you today:
News Bite: Microsoft is transforming data centre water efficiency
Deep Dive: The secret water footprint of AI models
Tech Trend: The AI models’ exploding context length/window
News Bite: Microsoft is transforming data centre water efficiency
Microsoft is transforming its data centre operations to improve water efficiency, aiming to be water positive by 2030. The company has reduced water intensity by over 80% and is implementing innovative cooling technologies, such as direct-to-chip cooling and the use of reclaimed water, to minimise water use. These efforts are part of a broader strategy to conserve freshwater, replenish water resources and support global water sustainability initiatives. Microsoft pledges to double down on partnerships with customers, local communities and municipalities to advance water infrastructure and policy globally.
Deep Dive: The secret water footprint of AI models
Many sustainability advocates, myself included, have focused on the energy consumption of large language models (LLMs), especially in relation to the energy footprints of data centres as demand for AI soars. We’ve scrutinised data centre energy use extensively, but we’ve paid little attention to the substantial water footprint of these AI models, in terms of both withdrawal and consumption.
Research by Ren and colleagues reveals a startling fact: training GPT-3 in Microsoft’s cutting-edge U.S. data centres can evaporate around 700,000 litres of clean freshwater, a fact that has largely been kept hidden. More alarmingly, global AI demand could require water withdrawal of 4.2 to 6.6 billion cubic metres by 2027. That is roughly 4 to 6 times the total annual water withdrawal of Denmark, or half that of the United Kingdom.
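To put those projections in perspective, here is a rough back-of-envelope check in Python. The figure used for Denmark’s total annual water withdrawal (about 1 billion cubic metres) is an illustrative assumption on my part, not a number quoted in the paper.

```python
# Back-of-envelope scale check for the projected AI water withdrawal figure.
# Assumption: Denmark's total annual water withdrawal is roughly 1 billion m^3;
# this is an illustrative figure, not a value taken from the paper.

ai_withdrawal_2027_m3 = (4.2e9, 6.6e9)   # projected global AI withdrawal by 2027 (m^3)
denmark_annual_withdrawal_m3 = 1.0e9     # assumed annual withdrawal for Denmark (m^3)

low = ai_withdrawal_2027_m3[0] / denmark_annual_withdrawal_m3
high = ai_withdrawal_2027_m3[1] / denmark_annual_withdrawal_m3

print(f"Projected AI withdrawal is roughly {low:.1f}x to {high:.1f}x "
      f"Denmark's assumed annual withdrawal.")
# -> roughly 4.2x to 6.6x under the assumed Denmark figure
```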
This is deeply troubling. Freshwater scarcity is one of the most urgent challenges we face today, exacerbated by rapidly changing weather patterns, a growing population, diminishing water resources and ageing water infrastructure. Countries are issuing new rules in the face of increasingly frequent droughts. The UK has repeatedly imposed hosepipe bans, Spain has recently introduced restrictions on swimming pools because of shortages, and Scotland is at risk of water shortages, according to a recent study by the University of Dundee.
As we move forward, reliance on generative AI applications such as ChatGPT, Gemini and Claude is only going to increase, which means the energy and water footprint of data centres will keep escalating. It is concerning, to say the least, that most of us are not paying attention; we may not realise the full impact until it is too late.
There are more than 8,000 data centres globally, with the highest concentration in the U.S., and the demand for data centres is expected to rise by 15% to 20% annually through 2030, according to the Boston Consulting Group. As the world engages in an AI arms race, more data centres will be built, putting additional pressure on freshwater supplies across various regions.
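To illustrate what that growth rate compounds to, here is a small Python sketch. The starting year and the assumption that growth compounds annually from today’s level are my illustrative choices, not figures stated by BCG.

```python
# What sustained 15-20% annual growth in data centre demand compounds to.
# Assumption: growth compounds each year from 2024 through 2030 (six annual steps);
# the starting year is an illustrative choice, not stated by BCG.

YEARS = 2030 - 2024  # six annual growth steps

for annual_growth in (0.15, 0.20):
    multiple = (1 + annual_growth) ** YEARS
    print(f"{annual_growth:.0%} per year for {YEARS} years -> ~{multiple:.1f}x today's demand")
# 15% per year -> ~2.3x today's demand; 20% per year -> ~3.0x
```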
While the benefits of AI are undeniable, we must balance these advancements with a clear-eyed understanding of their environmental costs. Policymakers and governments must not be asleep at the wheel at this critical juncture of human and technological evolution. The sustainability of our planet depends on our attention to these potential adverse effects of innovation.
Tech Trend: The AI models’ exploding context length/window
The trend of expanding context windows in AI models is rapidly gaining momentum. OpenAI's ChatGPT Enterprise supports a 128k context length, while ChatGPT and ChatGPT for Teams offer 32k. Claude Pro and API currently boast 200k+ tokens, approximately 500 pages of text. Remarkably, Gemini 1.5 Flash features a 1-million-token context window, and Gemini 1.5 Pro doubles that with 2 million tokens.
These substantial increases unlock previously unimaginable use cases, driving innovation in text generation and multimodal inputs. Considering these advancements are happening within months, the potential growth over the next year is immense.
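As a rough sense check of what these token counts mean on the page, here is a small Python sketch converting context-window sizes into approximate page counts. The ratios of words per token and words per page are illustrative assumptions, not vendor figures.

```python
# Rough conversion from context-window size (tokens) to approximate pages of text.
# Assumptions (illustrative, not vendor figures): ~0.75 words per token and
# ~300 words per printed page.

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_pages(tokens: int) -> float:
    """Approximate number of pages a given token budget can hold."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

context_windows = {
    "ChatGPT / ChatGPT for Teams (32k)": 32_000,
    "ChatGPT Enterprise (128k)": 128_000,
    "Claude Pro / API (200k)": 200_000,
    "Gemini 1.5 Flash (1M)": 1_000_000,
    "Gemini 1.5 Pro (2M)": 2_000_000,
}

for name, tokens in context_windows.items():
    print(f"{name}: ~{tokens_to_pages(tokens):,.0f} pages")
# Claude's 200k tokens works out to ~500 pages under these assumptions,
# matching the ballpark figure quoted above.
```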
That’s all I have for you in this edition of Sustainable Intelligence.
If you have any questions or feedback, please use this form. I will try my best to respond to all your questions and feedback.
If you want me to feature an article in the News Bite or Tech Trend sections, please get in touch and I will see what I can do to feature it.
If you are not yet subscribed to this newsletter, please subscribe here and forward it on to all your friends who you think will benefit from it.
Thank you!
Emeka Ogbonnaya