1 Jul 2025

What Kind of Intelligence Does Earth Need?


By Martín E. Wainstein, Founder & Executive Director, Open Earth Foundation

Originally published on Medium →

As artificial intelligence accelerates into every layer of society, we face a pivotal question:

What kind of intelligence are we building—and what is it aligned to protect?

While today’s headlines focus on the race to develop powerful general-purpose AI systems, far less attention is given to their alignment with planetary systems. At OpenEarth, we’ve spent years building digital infrastructure for climate transparency and global cooperation. But as AI begins to shape how knowledge, decisions, and power flow across institutions, we believe it’s time to raise a new, deeper question:

What would it mean to build AI that aligns not just with user intent, but with the long-term livability of Earth?

"We can build AI to compete, consume, and control — or to understand, protect, and sustain. This is the choice before us. And the window is closing."

Introducing Gaia AI: Intelligence in Service of Life

In a new thought piece, I introduce the concept of Gaia AI—a vision for foundational AI models designed not to dominate or extract, but to steward. Inspired by living systems, Gaia AI imagines nested agents operating at multiple levels—buildings, cities, nations, ecosystems—all interconnected through a planetary framework that prioritizes balance, resilience, and the preservation of life.

This isn’t about replacing human agency. It’s about designing intelligence that can augment our collective capacity to care for the planet—helping us coordinate, adapt, and respond across climate, water, energy, and ecological systems.

"A Gaia-level AGI wouldn’t be a tool, but a planetary-scale intelligence — rooted in context, interdependence, and the principles of living systems. Its safety lies not in constraint, but in aliveness."

Beyond the Hype: A Research Roadmap

The article outlines a research direction that goes beyond current applications of AI in climate and sustainability (which are already valuable). It suggests new pathways at the foundational model level—drawing from active inference, ethical preference states, and distributed system design.

It also introduces the need for a new kind of alignment: one where AI doesn’t simply learn from internet-scale text, but from planetary boundaries, ecological feedback, and diverse human values.

"Nesting AI agents within spatial and functional layers is key to building safe AGI. Like living organisms, each layer must have skin-like boundaries—defining where one system ends and another begins, while allowing adaptive coordination across scales."

Humility First: Stewardship Carries Risk

This vision is not without its challenges. Even AI trained to protect Earth could, if misaligned, act in ways that are harmful—treating humans as a threat to the biosphere, or prioritizing ecological goals without consent or care.

That’s why the article also calls for humility in design: AI systems must remain adaptive, pluralistic in their ethics, and answerable to human and planetary feedback. The point is not to create a sovereign machine—but a steward, deeply embedded in the web of life.

Why This Matters for OpenEarth

At OpenEarth, we build open-source tools to help humanity collaborate at scale on climate and sustainability. From global climate accounting to city platforms like CityCatalyst, our mission is rooted in transparency, coordination, and planetary boundaries.

This exploration into Earth-centered AI is a natural extension of that mission. As foundational AI systems emerge, we believe now is the time to guide them—so they support not just efficiency or engagement, but equity, ecological health, and intergenerational survival.

Read the Full Article

>> AI May Destroy Us—Unless We Teach It to Care for Earth (Full article on Medium)
