China Carbon Credit Platform

A multi-pronged approach to make AI more energy-efficient

Source: cenews.com.cn
Release time: 4 months ago

The World Economic Forum's official website recently reported that for artificial intelligence (AI) to realize its transformative potential, raise productivity, and improve social well-being, humanity must ensure that it develops sustainably. The core challenge to this vision is that energy consumption is growing rapidly as computing power and performance continue to increase.

The AI ecosystem, from hardware and training protocols to operational technology, consumes a lot of energy. With this in mind, scientists are looking for ways to make it more energy-efficient. Measures include changing the operation strategy of AI and developing more energy-efficient algorithms and chips.

Energy "Gold Eater"

AI is an energy-intensive technology. How energy-intensive is it?

According to the website of the French newspaper Les Echos, Sasha Luccioni, a researcher and head of environmental issues at the AI platform Hugging Face, said that image-generation models such as Midjourney or DALL-E consume as much energy producing a single image as fully charging a smartphone. An Nvidia H100 graphics processing unit, meanwhile, consumes more electricity in a year than a medium-sized American household does.

The Harvard Magazine website notes that large language models have become better at generating human-like, coherent, and contextually appropriate text. But this improvement comes at a cost: the energy consumed to train GPT-3 was equivalent to what 120 American households consume in a year. According to The New York Times, ChatGPT responds to about 200 million requests a day, consuming more than 500,000 kilowatt-hours of electricity in the process.

According to the World Economic Forum, energy consumption for running AI tasks is growing at 26% to 36% per year. At that rate, by 2027 the AI industry will consume as much energy annually as a country like Iceland or the Netherlands.
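As a back-of-the-envelope check of that projection, the growth rates above can be compounded over the roughly four years to 2027 (the baseline of 1.0 and the four-year horizon are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope projection of AI energy-consumption growth.
# Baseline and horizon are illustrative assumptions, not article figures.
def project_consumption(baseline_twh: float, annual_growth: float, years: int) -> float:
    """Compound a baseline annual consumption by a fixed yearly growth rate."""
    return baseline_twh * (1 + annual_growth) ** years

# Compounding 26%-36% annual growth over 4 years:
low = project_consumption(1.0, 0.26, 4)   # multiplier at 26%/yr
high = project_consumption(1.0, 0.36, 4)  # multiplier at 36%/yr
print(f"4-year multiplier: {low:.2f}x to {high:.2f}x")
```

So even at the lower bound, consumption would more than double in four years, which is why a match with a small country's annual usage is plausible.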

Change the operating strategy

It's imperative to make AI more energy-efficient.

The first is to adjust the AI operation strategy. According to the World Economic Forum's official website, AI operation generally divides into two main stages: training and inference. During the training phase, the model learns by digesting large amounts of data; after training, it moves on to the inference phase, in which it solves the problems users pose. Limiting energy consumption during these two phases can reduce the total energy consumption of AI operations by 12 to 15 percent.

Professor Thomas Dietterich of the School of Electrical Engineering and Computer Science at Oregon State University in the United States pointed out that another effective strategy is to optimize scheduling. Running lightweight tasks at night, or running larger projects during the colder months, can save a lot of energy. In addition, moving AI processing to data centers can help reduce carbon emissions, as data centers operate very efficiently and some run on green energy.
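The scheduling idea above can be sketched as a simple greedy assignment: give each job the cheapest (e.g. off-peak, cooler) hours still available. The job names and hourly prices below are hypothetical, purely for illustration:

```python
# Minimal sketch of energy-aware scheduling: assign jobs to the
# cheapest remaining hour slots. Jobs and prices are hypothetical.

def schedule(jobs, hourly_price):
    """Greedily assign each (name, hours_needed) job to the cheapest hours.

    Returns {name: [hour indices]}, with each hour used at most once.
    """
    # Hour indices sorted from cheapest to most expensive slot.
    free = sorted(range(len(hourly_price)), key=lambda h: hourly_price[h])
    plan = {}
    for name, hours in jobs:
        plan[name] = free[:hours]   # take the cheapest remaining slots
        free = free[hours:]
    return plan

# Two hypothetical jobs; night hours (0-5) priced lower than daytime.
prices = [0.10] * 6 + [0.30] * 18   # 24 hourly electricity prices
plan = schedule([("inference-batch", 4), ("report-job", 2)], prices)
print(plan)
```

Because Python's sort is stable, tied-price night hours are taken in order, so both jobs land entirely in the cheap 0-5 window here.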

In the long run, promoting synergy between AI and emerging quantum technologies is also an important strategy for guiding AI toward sustainable development. The energy consumption of traditional computing grows exponentially with computing demand, while that of quantum computing grows linearly.
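The scaling contrast the article claims can be illustrated with toy cost functions (the base, constant, and demand values below are arbitrary assumptions, not measured figures):

```python
# Toy illustration of the claimed scaling: exponential cost growth for
# classical computing vs. linear growth for quantum computing.
# All constants are arbitrary assumptions chosen for illustration.

def classical_cost(demand: int) -> float:
    return 2 ** demand        # grows exponentially with demand

def quantum_cost(demand: int) -> float:
    return 100 * demand       # grows linearly with demand

for d in (5, 10, 20):
    print(d, classical_cost(d), quantum_cost(d))
```

Under these toy constants the exponential curve overtakes the linear one around a demand of 10 and then diverges rapidly, which is the qualitative point of the claim.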

In addition, quantum technology can make AI models more compact, make learning more efficient, and improve their overall functionality.

New models and new devices

One of the drivers of competition among AI companies is the belief that more parameters are better, which also means that more parameters consume more energy. For example, GPT-4 has 1.8 trillion parameters, while its "predecessor" GPT-3 has 175 billion. Therefore, to make AI more energy-efficient, many scientists are trying to find algorithms that do not require so many parameters.

The company HawAI.tech saves energy by pairing new electronic components with AI techniques based on probability theory. At the same energy consumption, its new device is 6.4 times faster than NVIDIA's Jetson chip. Raphael Frisch, the company's co-founder and CEO, says that by combining probability theory with more optimized electronic components, their solutions will use less data and less energy.

In addition, neuromorphic chips that mimic the workings of the human brain are also expected to improve the efficiency of AI. Intel recently released a large neuromorphic system called Hala Point. It packs 1,152 Loihi 2 processors built on the Intel 4 process, supports up to 1.15 billion neurons and 128 billion synapses, and can perform more than 380 trillion 8-bit synaptic operations and more than 240 trillion neuron operations per second. Its unique capabilities enable real-time continuous learning for future AI applications such as solving scientific and engineering problems, logistics, smart city infrastructure management, large language models, and AI agents.
