In Part 2 of our two-part series on the environmental impact of generative artificial intelligence, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.
The energy demands of generative AI are expected to grow dramatically over the next decade.
For example, an April 2025 report from the International Energy Agency predicts that global electricity demand from data centers, which house the computing infrastructure needed to train and deploy AI models, will more than double by 2030, reaching about 945 terawatt-hours. While not all data center operations involve AI, that total is slightly more than the annual energy consumption of Japan.
In addition, an August 2025 analysis from Goldman Sachs Research estimates that about 60 percent of the increasing electricity demand from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. For comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.
Those figures are staggering, but at the same time, scientists at MIT and around the world are studying innovations and interventions to rein in AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.
Considering carbon emissions
Talk of reducing generative AI’s carbon footprint typically centers on “operational carbon,” the emissions produced by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects at the Lincoln Laboratory Supercomputing Center.
Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)
In addition, data centers are enormous buildings. The world’s largest, the China Telecom-Inner Mongolia Information Park, covers roughly 10 million square feet, and a data center typically has 10 to 50 times the energy density of a normal office building, Gadepally adds.
“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon as well, but we need to do more on that front in the future,” he says.
Reducing operational carbon emissions
When it comes to reducing the operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.
“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.
In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impact on the performance of AI models, while also making the hardware easier to cool.
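The article doesn’t detail how such power caps are applied, but on NVIDIA hardware they can typically be set in software. Below is a minimal sketch, assuming the nvidia-ml-py bindings (imported as pynvml) and administrator privileges; the 250-watt target is an arbitrary example, not a value from the Lincoln Laboratory study.

```python
# A minimal sketch of software power-capping for NVIDIA GPUs.
# Assumes the nvidia-ml-py package is installed and the process has
# admin rights; the 250 W target is an illustrative value only.
import pynvml

TARGET_WATTS = 250  # hypothetical cap; tune per workload and hardware

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
        # NVML expresses power limits in milliwatts.
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, TARGET_WATTS * 1000)
        print(f"GPU {i}: default {default_mw // 1000} W, capped at {TARGET_WATTS} W")
finally:
    pynvml.nvmlShutdown()
```

Because training throughput usually degrades far more slowly than power draw, a modest cap like this can trade a small slowdown for a large energy saving.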
Another strategy is to use less energy-intensive computing hardware.
Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually require many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.
But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.
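Reduced precision can also be applied in software rather than by swapping hardware. The sketch below is a generic illustration, not a method attributed to anyone quoted here: it uses PyTorch’s automatic mixed precision so that much of the training arithmetic runs in 16-bit rather than 32-bit floating point, and assumes a CUDA GPU; the model and batch are placeholders.

```python
# A minimal sketch of mixed-precision training in PyTorch (assumes CUDA).
# The model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.amp.GradScaler("cuda")

inputs = torch.randn(64, 512, device="cuda")         # stand-in batch
targets = torch.randint(0, 10, (64,), device="cuda")

optimizer.zero_grad()
# Run the forward pass in 16-bit precision where it is numerically safe.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = loss_fn(model(inputs), targets)
# Scale the loss so small gradients don't underflow in 16-bit.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```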
There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.
Gadepally’s group found that about half the electricity used for training an AI model is spent getting the last 2 or 3 percentage points of accuracy. Stopping the training process early can save a lot of that energy.
“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.
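A minimal sketch of that idea, assuming hypothetical train_one_epoch and evaluate_accuracy helpers rather than any specific codebase, is an early-stopping rule that halts training once validation accuracy clears the application’s “good enough” threshold:

```python
# A minimal sketch of accuracy-threshold early stopping.
# train_one_epoch and evaluate_accuracy are hypothetical helpers standing
# in for a real training loop and validation pass.
TARGET_ACCURACY = 0.70  # "good enough" for, e.g., an e-commerce recommender
MAX_EPOCHS = 100

for epoch in range(MAX_EPOCHS):
    train_one_epoch(model, train_loader, optimizer)
    accuracy = evaluate_accuracy(model, val_loader)
    print(f"epoch {epoch}: validation accuracy {accuracy:.3f}")
    if accuracy >= TARGET_ACCURACY:
        # The last few points of accuracy consume a disproportionate share
        # of the energy budget, so stop as soon as the target is met.
        break
```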
Researchers can also take advantage of efficiency-boosting measures.
For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process but use only the two or three best-performing AI models for their project.
By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no loss in model accuracy, Gadepally says.
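The article doesn’t describe how the tool works, but one standard way to eliminate wasted cycles like these is successive halving: train all candidates briefly, discard the weaker half, and repeat. The sketch below is a generic illustration with hypothetical make_candidates, train_for, and score helpers, not the Lincoln Laboratory tool itself.

```python
# A generic sketch of successive halving for model selection.
# make_candidates, train_for, and score are hypothetical helpers.
import math

candidates = make_candidates(n=1000)  # e.g., 1,000 candidate configurations
budget = 1                            # epochs per round, doubled each round

while len(candidates) > 3:
    scored = []
    for model in candidates:
        train_for(model, epochs=budget)       # short training burst
        scored.append((score(model), model))  # validation metric
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep the top half; the rest never consume another compute cycle.
    candidates = [m for _, m in scored[: math.ceil(len(scored) / 2)]]
    budget *= 2

# Only the two or three survivors are trained to completion.
```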
Improving computing efficiency
Continuous innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.
Even though energy-efficiency improvements have been slowing for most chips since about 2005, the amount of computation GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at the MIT Initiative on the Digital Economy.
“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on a chip remains very important for these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.
Even more significant, his group’s research indicates that efficiency gains from new model architectures, which can solve complex problems faster while consuming less energy to achieve the same or better results, are doubling every eight or nine months.
Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved by energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed because of algorithmic improvements.
These could involve “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
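As a small, generic illustration of pruning (not tied to any specific result mentioned here), the sketch below uses PyTorch’s built-in utilities to zero out the 30 percent of weights with the smallest magnitudes in each linear layer; the model and sparsity level are arbitrary examples.

```python
# A minimal sketch of magnitude-based weight pruning in PyTorch.
# The model and the 30% sparsity level are illustrative placeholders.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 30% of weights with the smallest absolute values.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        # Make the pruning permanent by folding the mask into the weights.
        prune.remove(module, "weight")

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{zeros / total:.1%} of parameters are now zero")
```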
“If you need to use a really powerful model today to complete your task, in just a few years you might be able to use a significantly smaller model to do the same thing, carrying much less environmental burden. Making these models more efficient is the single most important thing you can do to reduce the environmental costs of AI,” Thompson says.
Maximizing energy savings
While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.
“The amount of carbon emissions in 1 kilowatt-hour varies quite significantly, even just during the day, as well as over the month and year,” he says.
Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.
Splitting computing operations so some are performed later, when more of the electricity fed into the grid comes from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist at the MIT Energy Initiative.
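A toy version of this carbon-aware scheduling idea is sketched below. It is a generic illustration, not MITEI’s model: it assumes an hourly forecast of grid carbon intensity (grams of CO2 per kilowatt-hour, here made-up numbers) and defers a deferrable job to the cleanest window of the day.

```python
# A toy sketch of carbon-aware job scheduling.
# The hourly forecast of grid carbon intensity (gCO2 per kWh) is made-up
# example data; a real system would pull it from a grid data service.
FORECAST_G_PER_KWH = [
    420, 415, 410, 400, 390, 370, 340, 290,   # overnight and early morning
    240, 210, 190, 180, 175, 180, 200, 230,   # midday solar lowers intensity
    280, 330, 380, 410, 430, 440, 435, 425,   # evening peak
]

def cleanest_start_hour(forecast: list[int], job_hours: int) -> int:
    """Return the start hour minimizing total intensity over the job."""
    return min(
        range(len(forecast) - job_hours + 1),
        key=lambda h: sum(forecast[h : h + job_hours]),
    )

start = cleanest_start_hour(FORECAST_G_PER_KWH, job_hours=4)
print(f"Defer the 4-hour job to start at hour {start}:00")  # midday window
```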
Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.
“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.
He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. They hope it will reveal the best strategies for scheduling computing operations to improve energy efficiency.
Researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.
With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.
“Long-duration energy storage could be a game-changer here because we can design operations that really change the emissions mix of the system to rely more on renewable energy,” Deka says.
In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which can be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.
Location can have a major impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Luleå, a city on the coast of northern Sweden, where cooler temperatures reduce the amount of electricity needed to cool computing hardware.
Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon, where they could potentially be operated with nearly all renewable energy.
AI-based solutions
Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.
The local, state, and federal review processes required for a new renewable energy project can take years.
Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.
For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.
And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.
“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk says.
For example, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.
It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.
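As a hedged illustration of the forecasting use case, and not a system described in this article, the toy sketch below fits a gradient-boosted model from scikit-learn that maps weather features to solar output; the synthetic data and feature choices are placeholders.

```python
# A toy sketch of solar-output forecasting from weather features.
# The synthetic data and feature choices are illustrative placeholders;
# a real forecaster would use historical weather and production records.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),    # forecast cloud-cover fraction
    rng.uniform(0, 35, n),   # air temperature (deg C)
    rng.uniform(0, 90, n),   # solar elevation angle (degrees)
])
# Synthetic "true" output: higher sun elevation and less cloud -> more power.
y = np.sin(np.radians(X[:, 2])) * (1 - 0.8 * X[:, 0]) + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```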
By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas like renewable energy, Turliuk says.
To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.
The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential benefits in the future.
At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.
“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about them. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intensive,” she says.