The United Nations Sustainable Development Goals (SDGs) were established in 2015 to address some of the world’s most pressing problems. The 17 goals comprise 169 targets, spanning challenges such as eradicating poverty, reducing inequality and acting on climate, in service of peace, prosperity and environmental sustainability.
Since 2015, the use of AI has grown exponentially, while progress toward most of the targets has stalled or even reversed, particularly on poverty, hunger and climate resilience. AI offers new possibilities for addressing some of the SDGs, even as it amplifies the very problems the SDGs were created to address, such as the mounting energy needs of large data centers. The question is no longer whether AI matters for sustainable development, but how to apply it in ways that decrease costs, expand access, improve decision-making and do not deepen inequality.
Examining the intersection of AI and the SDGs is therefore essential.
One promising frontier lies in agricultural livelihoods. AI assistants deployed in multiple local languages now answer millions of farmers’ questions each year. Three dynamics drive their popularity. First, cost compression: model capabilities are widening, infrastructure is maturing, and deployment is becoming cheaper. Second, usability: voice and image inputs allow people to participate without typing or high literacy, removing a historic barrier to digital participation. Third, contextual grounding: systems can merge a farmer’s question with real-time weather, market prices and local knowledge to provide dynamic, context-specific guidance rather than static advisories. Together, these features tackle the SDGs on food security, decent work and climate resilience head-on.
Information evolves rapidly under environmental change, so conventional datasets become outdated quickly. Most of the languages and contexts central to the SDGs remain largely underserved in training data. Closing that gap will require multilingual, community-driven “gold” datasets that anchor models in on-the-ground needs and reduce systematic error for marginalized communities. No less important is deployment design: most benefits materialize where tools meet actual users, not in idealized model training. The practical path is responsible deployment, not indefinite delay in pursuit of perfection.
Governance and infrastructure will determine whether AI narrows or widens socio-economic gaps, and they can help establish new ethical paradigms that learn from spiritual leaders and Indigenous peoples. Ethical principles and voluntary codes help, but regulatory clarity and dependable funding are what turn principles into practice. Treating core digital access (devices, connectivity and computational power) as a civil right would acknowledge that bandwidth scarcity and hardware costs systematically exclude many communities from AI’s potential benefits while leaving them exposed to its harms. Public investment in socially disadvantaged groups is not charity but a necessary correction, one that enables participation in data co-creation, service co-design and governance. Education is essential: as climate distress and AI destabilize existing social structures, the absence of inclusive digital literacy will widen educational and socio-economic divides.
Community-led AI is both a necessity and a trust-building strategy. Models built in one city or risk regime rarely transfer to another; effective flood alerts, heat-risk maps and service targeting require hyperlocal information co-produced with local residents, as does protecting communities onto whom large corporations externalize health, environmental and social costs. Low-code geospatial building blocks can enable non-experts to combine satellite imagery, sensor feeds and scenario tools, turning passive recipients into co-analysts. Trust grows organically when communities shape the questions, own parts of the pipeline, and see outputs tied to tangible improvements rather than extractive data practices. Co-creation and empathy are necessary ingredients for the change we need. This approach aligns with the SDGs on sustainable cities, health and reduced inequalities, while building the civic capacity needed for long-term adaptation.
And, of course, no assessment is complete without confronting AI’s energy appetite. Training and running large models consumes substantial power, and the climate co-benefits can be offset by added emissions and grid strain. If computational power is the new chokepoint, digital equity collides with energy justice: socially vulnerable communities often lack affordable, reliable power and high-speed internet, and so cannot build or even operate models. Efficiency gains, fit-for-purpose model choices and smart scheduling will help, but the development agenda must go further.
And, to repeat a question I was recently asked: Is there such a thing as ethical AI? I am not sure there is an answer. What is required are local microgrids, clean-power procurement for data centers, and public policies that prevent computational capacity from concentrating in ways that recreate past resource inequities. What is certain is that the second round of the SDGs must ensure the AI dividend is not paid for with a climate deficit.
Global institutions are effective at norm-setting but generally lack binding power. Cities, regional associations and public-private partnerships can act more quickly, provided that procurement requires openness, that data-sharing arrangements protect rights while enabling research, and that evaluation methods are portable across borders yet responsive to local language, law and culture.
Looking toward 2030 and beyond, the choice is not between AI-as-solution and AI-as-threat. AI will evolve whether we want it to or not. We can, however, make choices today that will shape AI infrastructure for decades. AI can refresh stale targets with more current signals, surface neglected goals and unexpected trade-offs, and enable retrospective analyses that reveal which interventions actually have impact. Techniques that expose why an outcome occurred and which parameters drove it are now accountability tools, not just technical novelties. But let me be clear: the human component must remain the most important aspect of AI. No online search can replace human judgment, political will and social trust. The most important work remains ethical: to embed justice in the funding, governance, powering and measurement of AI.
Applied in this spirit, AI can expand what is knowable and actionable for sustainable development, accelerating progress where it has stalled and illuminating routes previously unexplored. But it will only serve the SDGs if it is designed with, for and by the people whose lives it will transform, and powered in a way the world can afford.
This article stems from a United Nations General Assembly side event, “Honest Discussions at the Intersection of AI and the SDGs,” co-hosted by Humane Intelligence, Technology Salon and Compiler, and held at the Doris Duke Foundation on September 16, 2025.
Views and opinions expressed here are those of the authors, and do not necessarily reflect the official position of the Columbia Climate School, Earth Institute or Columbia University.