In the Age of AI, Sustainability is Even More Urgent and Existential

Patricia Halfen Wexler
6 min read · Apr 10, 2023


If AI is in fact the next big thing and comes at us faster than anything before, a new economy built on limitless clean energy is the only way to unlock its value, avoid an even earlier climate crisis, and address some of the societal challenges AI spawns.

The careful development of AI and the protection of our planet are inextricably linked

tl;dr (Key Points):

· Great, benevolent AI or limitless clean energy could each unlock human progress unimaginable today. Conversely, a worst-case scenario in either AI or climate would be devastating for humankind

· AI success without clean energy accelerates planetary distress: data centers alone already account for >2% of electricity demand; imagine those needs turbocharged by AI. Expert predictions vary widely, but digital communications will likely require >20% of electricity consumption by 2030, and perhaps as much as 50% (of note, these predictions were made pre-ChatGPT). Conversely, AI can be hampered or even fail if there is insufficient energy to meet combined AI and human demands

· If we are serious about building a better future, we cannot ignore the perils of living on an increasingly unwelcoming planet

· In a world where jobs are changing and potentially disappearing faster than ever, what could be a more ideal solution than the high-quality, physical-world jobs required to rebuild our economy sustainably?

· To develop AI with the highest societal benefit and minimum existential risk, the best and safest approach is to channel initial use cases towards inherently clear and positive outcomes based on high-quality datasets

· Pursuing these two grand initiatives in parallel is fundamental: rather than investing in AI at the expense of sustainability, the bigger AI turns out to be, the MORE unlimited clean energy is required

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

The rapid adoption and impressive outputs of ChatGPT have woken us up to the enormous opportunities and perils of AI and, ultimately, AGI. While its fullest expression may be far in the future, the pace of improvement means change will likely come at us faster than any other technological innovation that has transformed humanity’s way of life. The fact that AI is currently underpinned by large language models (LLMs) creates a whole set of issues to consider, from the quality of source data to its ownership, and from acceptable use cases to alignment with overall and specific human values.

Investors, especially tech investors and venture capitalists in particular, are embracing AI wholeheartedly: they see the utopian opportunity we all aspire to, follow the “move fast and break things” philosophy of the last decade, and hold on to this lifeline as a distraction from the recent losses in their portfolios. At Avila we are also incredibly excited about the upside, but cautious about making irreversible mistakes. We recognize we cannot and should not simply stop progress, but rather need a national and global agenda that balances risk and reward and increases the odds of a bright future for humanity.

Even with great AI, the planetary health crisis in the making will make life on Earth painful if we don’t address it, and it will only be exacerbated if AI takes off. The energy consumption of powerful AI will substantially accelerate carbon emissions if we continue to get our energy from fossil fuels. And if we must meter expensive energy, we will cap AI’s upside. The only solution is to transition to a world of abundant clean energy as quickly as we can.

The crazy thing is that even if we succeed beyond our wildest dreams in developing good AI or in building the most sustainable future, the outcome will be disastrous if the other goes massively wrong. And if we develop great AI but do not power it with clean energy, we will actually accelerate the destruction of the planet in ways that will make human existence on Earth truly unpleasant.

World leaders, policymakers, and technologists must come together now and develop frameworks and solutions faster than ever. Data ownership, misinformation, distribution of productivity gains, and cross-border transactions all require incredibly complex and thoughtful approaches to avoid stifling innovation while protecting our future. Consider these examples:

a) The disruption caused by offshoring and globalization took decades to unfold, and we are still dealing with its negative consequences because the benefits were not evenly distributed (think of heavy industry moving from middle America to Asia and the impact on those communities, still felt today). How quickly must we have a well-thought-out jobs policy in place if comparable effects of AI on jobs happen 10x faster?

b) The unintended consequences of an unregulated internet gave rise to disinformation, mental health impacts on teens, and polarization, to name a few. In hindsight, early guardrails would have curbed much of the bad behavior in a way that is hard to implement after the fact in a democratic and capitalist society. What lessons can we draw from this as we roll out a potentially much more powerful technology?

c) Climate experts keep warning us that the transition curve (and therefore its costs and pain) gets steeper and steeper the longer we delay the shift to a lower-emissions economy. If AI explodes and its energy demands skyrocket while nothing else changes, our energy costs will soar and we will breach acceptable atmospheric CO₂ levels even sooner than projected today.

Working towards both goals at the same time increases the odds of a good outcome in each. AI can be a real tool for advancing the green transition, and deploying abundant clean energy will solve the energy demands and (at least partially) the employment challenge of an AI-driven economy.

AI can help solve the climate challenge, which in turn unlocks the AI opportunity and minimizes downside risk

The safest way to move AI forward is to begin by developing applications based on high-quality datasets to tackle well-defined BHAGs (big, hairy, audacious goals). Climate is an obvious one, with many possible applications of AI to advance solutions such as synthetic biology and grid neutrality (human health is another big one, but a subject for another post). With this approach we minimize the risks of disinformation and of black-box outcomes from a tool trained on a generalized corpus of text (which can itself be further seeded with disinformation), and we maximize the benefit for humanity.

In addition, unlocking unlimited clean energy may prevent misalignment between humans and machines triggered by competition for scarce energy resources. Explosive AI growth would pressure energy demand and accelerate carbon emissions even further than currently projected.

Finally, the rapid loss of millions of jobs due to AI could cause unhappiness and societal unrest. Rebuilding our economy sustainably will require millions of new high-quality jobs.

Tackling the climate challenge without destroying standards of living is a clear imperative of our time. Adding AI to the mix only increases the importance and urgency of the issue. We cannot let the sexy new object run amok and distract us from continuing to work on what matters in the long run. In fact, it now matters more. The stakes have only gotten higher.

If this resonates, reach us at www.avila.vc

Note: this post does not mean to imply in any way that climate and AI outcomes are subject to the same probabilities or probability distribution. In fact, there is a significant chance that the promise of AI fizzles out for any number of reasons, for example if LLMs do not prove to be a useful route to knowledge enhancement or AGI. By contrast, climate science has been well studied, and likely outcomes, whether on our current trajectory or with expected and hoped-for improvements, are much more predictable. Our perspective is that under virtually every scenario, investing thoughtfully in a sustainable future provides the best risk-adjusted returns today, with the significant added benefit of moral clarity as capital allocators.


Written by Patricia Halfen Wexler

Founder & General Partner @ Avila.VC. Mother of 3 (+dog). Miami-based and proud Hispanic American
