New Publication - Artificial intelligence, systemic risks, and sustainability.

The newly published article ‘Artificial intelligence, systemic risks, and sustainability’, by researchers connected to the AI, People & Planet Initiative, appears in the November issue of Technology in Society. It came about following two workshops on “AI, People and Planet” held in New York in January and October 2019.

The article explores the systemic risks and opportunities of AI technologies in stabilising the biosphere and contributing to the Sustainable Development Goals. It is a call to academics, as well as decision makers in the public and private sectors, to adopt AI technologies in ways that centre people and planet.

The article finds that AI technologies with direct relevance to the biosphere are concentrated in large-scale agriculture, with adoption spreading rapidly into forestry, aquaculture and beyond. Today, uptake is heavily concentrated in the US agricultural sector. However, with great investments comes great responsibility, and over the last decade Chinese investments were greater than those of the rest of the world combined.

In the coming decade, AI technologies can be expected to proliferate across industries of great relevance to planetary stability. Accordingly, proactive governance of AI technologies is an imperative for governments, investors, and businesses in the pursuit of sustainability.

Social-technological systems such as modern agriculture could be optimised for diverse crop yields, carbon storage, energy production, and resilience to extreme weather.

The article prompts us to think critically about the adoption of AI technologies. Contemporary challenges of algorithmic bias and unequal access, inherent to many of today’s AI applications, risk becoming entrenched. Meanwhile, the drive for digitalisation and the integration of social-technological systems generate vulnerabilities of their own. Proactive engagement can be facilitated by adopting a systems perspective on governing the trade-offs between efficiency and resilience.

The emergence of AI technologies calls for innovative forms of governance. Principles of responsible AI could serve as a foundation, and public and private actors need to come together. Government regulation and self-regulation are likely to co-exist. Polycentric approaches to AI governance are likely to emerge, as they allow for flexible responses across jurisdictions.

For these to be effective, they require active engagement from investors, governments, and the private sector to safeguard biosphere integrity and prosperity for all.

Andrew Merrie