AI’s Energy Usage Is A Problem That’s Being Solved As Fast As It’s Being Created…Maybe

Learning about the energy consumption of LLMs like Claude 2, GPT-4, Bard and so many others... is like discovering a garden hose leak, slowly transforming a tiny trickle into a torrent that threatens to flood the whole yard (sorry for the hyperbole). Our AI obsession, coupled with AI’s growing appetite for energy, is a recipe for environmental disaster. But it doesn’t have to be that way.

First, A Quick Lesson on AI Mechanics

(Skip ahead if you already know about tokens, weights, model architecture, and the difference between training and inference):

Before an AI like ChatGPT can process a message, it breaks down the text into ‘tokens,’ much like a reader parsing words and sentences. These tokens are the basic units the model uses to understand and generate language.
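
To make the idea concrete, here is a tiny, illustrative sketch using tiktoken, the open-source tokenizer library that OpenAI publishes. The exact token boundaries and counts vary from model to model, so treat the output as an example rather than a universal rule.

```python
# Tokenization sketch using OpenAI's open-source "tiktoken" library.
# Token counts are illustrative; each model family tokenizes differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent OpenAI chat models

text = "AI's energy usage is a problem."
token_ids = enc.encode(text)

print(token_ids)                              # a short list of integer IDs
print(len(token_ids), "tokens")               # the model "sees" this many pieces
print([enc.decode([t]) for t in token_ids])   # the text fragment behind each token
```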

Each model has its own architecture: a neural network shaped, much like a human brain, through learning (training). It consists of layers of neurons (mathematical functions), with each layer learning different aspects of language. More complex architectures can grasp subtleties in language but require more energy to run. The challenge for developers is designing architectures that strike a balance between sophistication and energy efficiency, much like a gardener deciding which plants to grow and where to place them.
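
For the curious, here is a toy sketch in PyTorch of what ‘layers of neurons’ means in code. Real LLMs use transformer layers and billions of weights, so this is only meant to show how stacking more layers adds parameters, and with them compute and energy.

```python
# A toy "architecture": stacked layers of neurons (PyTorch).
# More layers -> more weights -> more math (and energy) per token.
import torch.nn as nn

def build_model(num_layers: int, width: int = 512) -> nn.Sequential:
    layers = []
    for _ in range(num_layers):
        layers += [nn.Linear(width, width), nn.ReLU()]
    return nn.Sequential(*layers)

def num_weights(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

small = build_model(num_layers=2)
large = build_model(num_layers=12)

print(f"small model: {num_weights(small):,} weights")
print(f"large model: {num_weights(large):,} weights")
```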

Before this garden can thrive, it needs to be nurtured, and this is where ‘training’ the AI model comes in. Think of it as the intensive phase of tending the farm: preparing the soil (the neural network), planting seeds (feeding in data), and making sure they receive the right amount of water and nutrients (computational resources) to grow.

Once the AI model, or crop, is mature, we enter the ‘inference’ phase, the harvest time. Here, the AI uses its training to make predictions or decisions, like reaping the produce from a mature, well-tended farm. This phase is akin to the routine maintenance of a flourishing garden: less intensive than the initial growth period, but it still needs water and tending.
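
Here is a minimal sketch of the difference between the two phases, in PyTorch with made-up data: training loops over batches and updates weights (the expensive part), while inference is a single forward pass, cheaper per request but paid on every request, forever.

```python
# Training vs. inference, in miniature (PyTorch, random toy data).
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# --- "Tending the farm": training ---
for step in range(1000):                    # many passes over the data
    x = torch.randn(32, 10)                 # a batch of inputs
    y = torch.randn(32, 1)                  # targets
    loss = loss_fn(model(x), y)             # forward pass
    optimizer.zero_grad()
    loss.backward()                         # backward pass (extra compute)
    optimizer.step()                        # update the weights

# --- "Harvest": inference ---
with torch.no_grad():                       # no gradients needed, far less work
    prediction = model(torch.randn(1, 10))  # one forward pass per request
```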

The size of an LLM is usually described by its number of weights, the learned parameters of the network. Both the number of weights and the precision they are stored at (roughly, how many decimal places each one keeps) affect energy consumption. Larger models (bigger farms) with more weights can capture more nuanced patterns in data, but they also require significantly more computational resources to train and run. Using lower precision, or, to stick with the garden analogy, “less resource-intensive farming methods,” reduces the amount of memory and computation needed.
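
Some back-of-the-envelope arithmetic makes the precision point concrete. The 7-billion-weight model below is a hypothetical example chosen for round numbers, not any specific product.

```python
# Rough memory math for a hypothetical 7-billion-weight model.
# Each weight is one number; its precision decides how many bytes it needs.
num_weights = 7_000_000_000

bytes_per_weight = {
    "float32 (full precision)": 4,
    "float16 (half precision)": 2,
    "int8 (quantized)": 1,
}

for precision, nbytes in bytes_per_weight.items():
    gigabytes = num_weights * nbytes / 1e9
    print(f"{precision:25s} ~{gigabytes:.0f} GB just to hold the weights")
# Lower precision -> less memory to move around -> less energy per token.
```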

The Energy Debate: Not Just a Crypto Problem

Remember the uproar over Bitcoin’s energy consumption? A similar storyline is emerging for AI, an industry now facing the same kind of scrutiny for its environmental impact. Both demand a high energy input for complex calculations: crypto for securing transactions, AI for processing vast amounts of data.

Understanding the Scale

To put it in perspective, training a single AI model can consume as much electricity as several hundred homes do in a year.

If every search on Google used AI similar to ChatGPT, it might burn through as much electricity annually as the country of Ireland. Why? Adding generative AI to Google Search increases its energy use more than tenfold. (The Verge)

The reason? AI, especially deep learning language models, requires a vast amount of computational power. These models learn from huge datasets, and crunching those numbers is like trying to solve a jigsaw puzzle the size of a football field, where each piece constantly changes shape.

Consider self-driving cars, a pinnacle of AI application. These vehicles don’t just ‘see’ the road; they process a constant stream of data from sensors, cameras, and GPS systems. This processing happens almost in real time and demands a continuous, high-powered computational effort. Multiply this by thousands of AI applications, from voice assistants to automated trading systems, and you start to grasp the scale of the energy challenge, or as this April 2023 research paper on AI trends points out:

As more and more devices use AI (locally or remotely) the energy consumption can escalate just by means of increased penetration, in the same way that cars have become more efficient in the past two decades but there are many more cars in the world today.

Three Pioneering Solutions: Gensyn, Box, and Sakana

Innovative solutions are emerging, addressing the energy challenges of AI with creativity and technological ingenuity.

Box AI: On the business model front, Box AI is redefining how AI services are priced with a unique credit system. In an industry where running foundational models like OpenAI’s can be prohibitively expensive, Box AI’s approach is refreshingly practical. Users receive a set number of credits for AI interactions, putting a cap on usage. This model indirectly limits the energy consumed by each user and makes them more conscious of their AI usage. For power users, additional credits are available, ensuring scalability while still promoting mindful consumption. This pricing strategy not only addresses cost concerns but also subtly encourages users to consider the energy impact of their AI usage. It reminds me of water rationing in farming: an essential practice for the common good!
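
Purely as an illustration of the idea, and not Box’s actual API (every name and number below is made up), a credit-capped client might look something like this:

```python
# Hypothetical sketch of a credit-capped AI client, in the spirit of usage-based
# credit pricing. Names, costs, and behavior are invented for illustration.
class CreditCappedClient:
    def __init__(self, monthly_credits: int, cost_per_call: int = 1):
        self.credits = monthly_credits
        self.cost_per_call = cost_per_call

    def ask(self, prompt: str) -> str:
        if self.credits < self.cost_per_call:
            raise RuntimeError("Out of credits: top up, or wait for the next cycle.")
        self.credits -= self.cost_per_call     # every call has a visible cost
        return call_underlying_model(prompt)   # placeholder for the real model call

def call_underlying_model(prompt: str) -> str:
    return f"(model response to: {prompt})"    # stand-in for an actual LLM request

client = CreditCappedClient(monthly_credits=100)
print(client.ask("Summarize this contract."))
print(client.credits, "credits left")          # usage stays visible, and capped
```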

Sakana AI: Sakana AI is exploring an entirely different yet “natural” frontier. Based in Tokyo and led by former Google AI luminaries, this startup is drawing lessons from nature to build the next generation of AI. Inspired by the fluidity and adaptability of natural systems, like schools of fish (the name comes from さかな, the Japanese word for fish), they are working on AI models that are more flexible and responsive to their environment. This nature-inspired approach could lead to AI systems that are not only more efficient in terms of energy consumption but also more robust and adaptable in their functionality.

Gensyn: By creating a blockchain-based marketplace, Gensyn democratizes access to computational power. This platform taps into a global network of underutilized resources, from smaller data centers to personal gaming computers, harnessing their latent power for AI development. It’s a game-changer, potentially increasing available compute power for machine learning by an order of magnitude. The integration of blockchain in AI goes beyond supporting infrastructure: it can actively offer new ways to handle data, ensure security, and create economic incentives for data sharing and model training. I like to think of this approach as permaculture!

These diverse approaches by Gensyn, Box AI, and Sakana AI reflect a growing awareness in the tech community of the need to balance AI’s benefits with its environmental costs. Each solution brings a unique perspective to the table, collectively pushing the boundaries of what’s possible in sustainable AI development and business.

The Blockchain-AI Intersection

A note on blockchain:

It’s no secret that so many crypto developers flocked to “AI” this year. Yes, it could be about dwindling investments in web3. But I’d like to think that it’s because of a shared ethos and bright-eyed idealism of all open source movements, dating back to the early days of the INTERNET itself.

Transparent information sharing, creating viable alternatives to traditional ways of doing business, individual transactions, owning one’s assets, data, and identity: all of these ideas are central to the ethos of the blockchain movement, and they apply just as well in the AI space. The convergence of blockchain and AI is to technology what organic practices are to farming. But blockchain doesn’t stop at decentralizing the “farming industry”; it revolutionizes the very soil and tools used to cultivate AI. The dream of a blockchain-powered AI “farm” is one where every tool is designed for optimal energy efficiency, every seed is traceable, and every yield contributes back to the health of the digital ecosystem. It’s about cultivating AI in a way that’s not just technologically advanced but also environmentally conscious and sustainable for future generations. More on this in a future article, by popular demand!


Beyond Blockchain: Technological Advances and Strategies

As we all grapple with the energy demands of AI, several energy-efficient strategies are emerging, each playing a crucial role in reducing the computational footprint of AI systems. Here’s a quick breakdown of some key strategies that are making AI more sustainable:

Model Quantization: Think of quantization as a way of making AI models ‘lighter’ in terms of data: just as a plant doesn’t need everything in the soil to thrive, a model doesn’t need full numerical precision for every calculation. In technical terms, quantization reduces the precision of the numbers used in the model’s calculations. Instead of storing numbers with a long decimal tail, it rounds them off. This rounding significantly reduces the computational load and memory usage, leading to less energy consumption. It’s like compressing a large photo file into a smaller size so it’s easier to send via email, but still clear enough to view.
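
Here is a minimal sketch of one common 8-bit scheme, in NumPy: weights are stored as small integers plus a single scale factor, then approximately reconstructed when the model runs. The weights are random and purely illustrative.

```python
# Minimal 8-bit quantization sketch with NumPy.
import numpy as np

weights = np.random.randn(5).astype(np.float32)          # full-precision weights

scale = np.abs(weights).max() / 127.0                     # map the largest weight to 127
quantized = np.round(weights / scale).astype(np.int8)     # 1 byte each instead of 4
dequantized = quantized.astype(np.float32) * scale        # approximation used at run time

print(weights)        # original values with a "long decimal tail"
print(quantized)      # small integers, 4x less memory
print(dequantized)    # close enough for most of the model's work
```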

Distillation: Model distillation is like extracting the essence of a plant, similar to creating a potent herbal extract from a larger mass of raw material. A large ‘teacher’ model’s knowledge is transferred into a much smaller ‘student’ model, which keeps most of the larger model’s effectiveness at a fraction of the energy cost. It’s similar to distilling the key insights from a thick textbook into a concise study guide that’s quicker and easier to use.
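
Here is a sketch of the core training trick, in PyTorch with stand-in models and random data: the small student is trained to match the teacher’s softened output probabilities instead of learning everything from scratch.

```python
# Knowledge distillation sketch (PyTorch): a small "student" learns to imitate
# the output distribution of a larger "teacher". Models and data are toys.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(10, 5)   # stand-in for a big, expensive model
student = nn.Linear(10, 5)   # the small model we actually want to deploy
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0            # softens the teacher's probabilities

for step in range(100):
    x = torch.randn(32, 10)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# After training, only the small student is served, at a fraction of the energy cost.
```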

Sparsification: Sparsification involves removing redundant or non-essential parts of an AI model (for example, weights that contribute almost nothing to its answers), similar to pruning unnecessary branches from a tree to promote healthier growth. This ‘thinning’ helps the model process information more quickly and efficiently.
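
A small illustrative sketch of magnitude pruning, one common form of sparsification: zero out the weights with the smallest magnitudes so that sparse-aware libraries and hardware can skip them. The layer size and pruning ratio here are arbitrary.

```python
# Magnitude pruning sketch (PyTorch): remove the "thin branches".
import torch
import torch.nn as nn

layer = nn.Linear(512, 512)
sparsity = 0.5                                         # prune half the weights

with torch.no_grad():
    w = layer.weight
    k = int(sparsity * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values   # k-th smallest magnitude
    mask = (w.abs() > threshold).to(w.dtype)           # keep only the strong branches
    w.mul_(mask)                                       # set the rest to exactly zero

kept = mask.mean().item()
print(f"{kept:.0%} of weights kept; the zeros cost (almost) nothing to compute")
```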

Custom Hardware for Energy-Efficient Inference and Training: Developing custom hardware for AI is like crafting specialized farming tools designed for specific tasks. These tools — tailored for AI’s unique requirements — handle computational processes more efficiently than general-purpose tools.

More Good News

A recent MIT study highlights how simple hardware interventions, like GPU power capping, are being deployed with a big impact on energy costs. So it really is about refining our techniques, and marginally increasing our patience.

Capping GPU power at 150 watts added about two hours of training time (from 80 to 82 hours) but saved the equivalent of a week of a U.S. household’s energy use. (MIT News)
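
For readers who want to experiment, a power cap can be set from Python through NVIDIA’s NVML bindings (the pynvml package), roughly equivalent to running nvidia-smi -pl 150. This sketch assumes an NVIDIA GPU and administrator privileges, and the right wattage depends on your hardware; treat it as a best-effort illustration rather than a recipe.

```python
# GPU power capping sketch via NVIDIA's NVML Python bindings (pynvml).
# Usually requires admin/root; limits are specified in milliwatts.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)              # first GPU

current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
print(f"current limit: {current_mw / 1000:.0f} W")

pynvml.nvmlDeviceSetPowerManagementLimit(handle, 150_000)  # cap at 150 W
print("capped at 150 W: slightly slower training, noticeably less energy")

pynvml.nvmlShutdown()
```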

The invisible energy cost of AI is a critical issue we cannot afford to ignore in the name of progress. By supporting sustainable AI research, adopting energy-efficient technologies, and educating ourselves on the environmental impact, we can make more conscious choices in our AI usage. Just as buying organic supports agricultural practices that are better for the environment, opting for sustainable AI promotes technology that is environmentally responsible. Our choices significantly influence the sustainability and ethical implications of consumption, whether it’s the food we eat or the technology we use.

🤖Disclaimer: This article was generated with the assistance of ChatGPT4 and Perplexity to enhance research and content creation, reflecting a collaborative effort between human input and artificial intelligence.
