Who is writing the future? Designing infrastructure for ethical AI
By: Benjamin Kumpf, Innovation Policy Specialist, UNDP
“The future is unwritten,” Joe Strummer declared decades ago. The line carried a message of hope for humanity’s future and a call to action. Today, algorithms are being written that could pave the way either for transformative progress or for the end of humanity.
Artificial intelligence (AI) holds great potential, and the time to manage its progress is now. AI strategies need to foster innovation while adequately addressing ethics, transparency, inclusion and bias. This was one of the main messages of last week’s ‘AI for Good’ Global Summit, convened by ITU in partnership with XPRIZE, ACM and more than 20 UN agencies.
The term artificial intelligence, coined by John McCarthy in 1956, describes the general concept of machines performing tasks characteristic of human intelligence. AI is the umbrella category and encompasses concepts such as machine learning and deep learning; it is inextricably connected to robotics and the Internet of Things.
AI is already creating efficiencies for human development and humanitarian assistance across a number of sectors. For example, UNICEF’s Venture Fund is supporting a range of AI startups, including Dymaxion Labs. The Argentina-based social enterprise is building AP-Latam to improve decision-making and rapid disaster responses. AP-Latam applies machine learning techniques to satellite imagery to monitor the growth of informal settlements across Latin America, providing real-time information about the communities’ locations and changes in movement. The World Food Programme’s Innovation Accelerator is testing whether chatbots can improve WFP’s assistance to people affected by crisis. It is using an AI-based survey chatbot, available in 20 languages, to communicate with more people, at a fraction of the cost, in almost real time.
UNDP has a portfolio of drone experiments to improve data collection and analysis for decision-making, which can be augmented with machine learning. The portfolio includes remote sensing for improved decision-making on environmental protection in Mongolia and using drones to facilitate disaster preparedness in the Maldives. In Uganda, UNDP and UNHCR are using drones to map the Oruchinga Refugee Settlement. The data is used to engage refugee and host communities in jointly developing camp and host community infrastructures.
Together with UN Environment (UNEP), UNDP is gearing up for the launch of the UN Biodiversity Lab, powered by MapX. MapX was initiated by UNEP, the World Bank and the Global Resource Information Database to capitalize on new digital technologies and cloud computing for natural resource management. The UN Biodiversity Lab will help countries accelerate delivery of the Convention on Biological Diversity’s (CBD) Aichi Biodiversity Targets (ABTs) and the Sustainable Development Goals (SDGs) by developing a customized spatial analysis platform to support conservation and development decision-making across the globe. In Sudan, UNDP is working with the Central Bureau of Statistics, Sudan Telecom, Zain Telecom, the University of Berlin and other partners to research the potential of call detail records in predicting proxy poverty levels. Recent results show high correlations between the multidimensional poverty index and covariates derived from call detail records.
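The core of the Sudan study is a simple statistical idea: if a usage pattern extracted from call detail records correlates strongly with a district’s multidimensional poverty index (MPI), that pattern can serve as a poverty proxy where survey data is scarce. A minimal sketch of such a correlation check, using entirely illustrative figures (the district values below are not UNDP data, and the real study uses many covariates, not one):

```python
# Hypothetical sketch: correlating a single call-detail-record (CDR)
# covariate with the Multidimensional Poverty Index (MPI) across
# districts. All figures are illustrative, not UNDP research data.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative district-level data: average outgoing calls per
# subscriber per day (a CDR covariate) and MPI (higher = poorer).
calls_per_day = [4.1, 2.3, 5.6, 1.8, 3.0]
mpi           = [0.21, 0.48, 0.12, 0.55, 0.34]

r = pearson(calls_per_day, mpi)
print(f"correlation: {r:.2f}")  # strongly negative here: more call activity, lower poverty
```

In practice the study would fit a model over many such covariates and validate it against held-out survey data; a single strong correlation only suggests a candidate proxy.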
These initiatives, leveraging spatial and telco data, provide ideal entry points for machine learning, and we are actively scouting for partners on AI.
To improve the efficiency of policy advice to government partners on the Sustainable Development Goals, UNDP is partnering with IBM to automate UNDP’s Rapid Integrated Assessment, a tool that helps governments assess the alignment of national development plans and sectoral strategies with the 169 SDG targets. First trials showed significant efficiency gains, substantially cutting the time analysts need.
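At its simplest, automating such an assessment means scoring the textual similarity between sentences of a national plan and the SDG target descriptions. The sketch below illustrates that idea with bag-of-words cosine similarity; the target texts are paraphrased, the plan sentence is invented, and the actual UNDP/IBM tool’s method is not public:

```python
# Illustrative sketch of the kind of text matching an automated Rapid
# Integrated Assessment might perform: scoring each plan sentence
# against SDG target descriptions. Targets paraphrased; plan text invented.

import re
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words term-frequency vector over lowercased tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sdg_targets = {
    "1.2": "reduce at least by half the proportion of people living in poverty",
    "4.1": "ensure all girls and boys complete free equitable quality primary and secondary education",
    "6.1": "achieve universal and equitable access to safe and affordable drinking water",
}

plan_sentence = "The national plan commits to equitable access to safe drinking water for all."
vec = vectorize(plan_sentence)
best = max(sdg_targets, key=lambda t: cosine(vec, vectorize(sdg_targets[t])))
print(best)  # → 6.1
```

A production system would use richer representations (TF-IDF weighting, embeddings) and human review of the matches, but the alignment logic reduces to this kind of scored comparison across all 169 targets.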
The AI for Good Summit featured three tracks focusing on operational effectiveness for development cooperation: one on satellite images (see this recent brief from the UN Innovation Network), one on smart cities, and one on AI and health.
UNDP engaged in the fourth track, focusing on inclusive, ethical AI, led by the Centre for the Future of Intelligence. The UN Secretary-General recently emphasized the role of the UN system in fostering inclusion and transparency, along with upholding global values. He also underlined the importance of humility regarding frontier technologies and of constantly learning from partners.
Particularly for UNDP, it will be essential to work with Member States in designing conducive AI ecosystems that foster innovation while ensuring inclusivity and transparency. Key challenges ahead include:
Leave no one behind: How can we strengthen the analogue foundation of digital economies and bridge the digital divide through investments in infrastructure and digital skills? What is the potential of AI to design education, including digital-skills education, for all? How can we support citizens in developing an emancipatory relationship with data, privacy and AI systems? Open data and algorithms need to be usable and relevant for ordinary citizens to overcome the dichotomy between data user and data producer. The city of Barcelona is experimenting with a hybrid model of online and offline participatory democracy, designing digital engagement tools directly with users, with a focus on privacy and security.
Create data commons: Algorithms are only as good, and as biased, as the data sets they are fed. How can we support the creation of cross-border data commons that share open data? Data commons refer to interoperable infrastructure that co-locates data, storage and compute with common analysis tools. The UAE Government announced the launch of a blockchain-based data commons pilot at the AI Summit in Geneva. Meanwhile, UNDP is supporting the ‘Ministry of Data’, a regional initiative to use open data for public good in Armenia, Belarus, Georgia, Moldova and Ukraine.
Facilitate innovation: The velocity of technological progress outstrips the ability of governments to regulate it. Countries such as the UK, Canada, France and the UAE have endorsed AI strategies that aim to facilitate economic growth while putting in place regulations to protect citizens and consumers from harm and discrimination and to mitigate the risk of new monopolies. Elements of these strategies can be classified as anticipatory regulation, a new class of instruments that relies on iterative rather than fixed rules, as well as testbeds and sandboxes. Such sandboxes allow live testing of innovations such as new products and services in a controlled environment; for example, Rwanda’s Ministry of Health is testing the comparative advantage of UAVs for delivering life-saving drugs to remote areas, cutting delivery times from four hours to an average of half an hour.
Set up ecosystems: A key component of successful anticipatory regulation is joined-up, multidisciplinary regulation through collaboration platforms on AI that include startups, think-tanks and academia, large and medium-sized companies, governments and their ethics commissions, civil society and activists. Germany, for example, developed regulation on self-driving cars with ethicists from academia and involved private sector partners and civil society. This includes investing in capacities within government, building required skills in ethics commissions as well as in human rights and development organizations. Industry is also responding with new platforms, such as OpenAI, co-founded by Elon Musk and others to promote the safe use of AI. The Partnership on AI, founded by Google, Facebook, Amazon, Microsoft and IBM, was established to advance the public’s understanding of AI and to serve as an open platform for discussion and engagement about AI.
Establish ethical frameworks and accountability systems: Government-led AI strategies to date have put a significant emphasis on ethics, transparency and inclusion. For example, the recent UK House of Lords AI Committee report proposed five principles to develop a national and international code of conduct. In Australia, the Government’s Chief Scientist just proposed a system similar to the Fairtrade coffee certification for responsible AI developers. We also see the emergence of actors offering services in algorithmic transparency, measuring algorithms for fairness, legal discrimination, and meaning. We can expect the proliferation of many more outfits such as ORCAA, and the expansion of services that critically examine data sets and their inherent biases, as training data for AI is often insufficiently diverse, producing biased computations and forecasts. But ethical guidelines, standards and voluntary reviews will not be sufficient.
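To make the idea of an algorithmic audit concrete, one of the simplest checks such a service can run is demographic parity: does the system produce positive outcomes at markedly different rates for different groups? A minimal sketch, with invented decision data and an illustrative threshold (real audits use multiple fairness metrics and contextual judgment):

```python
# Minimal sketch of one check an algorithmic audit might run:
# demographic parity, i.e. whether a model's positive-outcome rate
# differs across groups. Data and threshold are illustrative only.

def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Invented loan-approval decisions (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}

gap = parity_gap(decisions)
print(f"parity gap: {gap:.3f}")  # 0.375, well above a common 0.1 audit threshold
```

A large gap does not by itself prove unlawful discrimination, which is precisely why the accountability mechanisms discussed below matter: someone with power must be obliged to act on what the audit finds.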
“What is most urgently needed now is that these ethical guidelines are accompanied by very strong accountability mechanisms. We can say we want AI systems to be guided by the highest ethical principles, but we have to make sure that there is something at stake. Often when we talk about ethics, we forget to talk about power”, underlined Kate Crawford, co-author of the must-read ‘AI Now 2017 Report’.
Combine collective with artificial intelligence: Collective intelligence describes the outcome of collaborative processes that amounts to more than the sum of its parts. While the concept is not new, digital technologies now enable much more profound forms of collaboration, problem-solving and crowdsourcing. Increasingly, governments, along with the development sector, are designing and leveraging collective intelligence systems. In Armenia, UNDP engages citizens in designing solutions to wicked problems, such as the future of education, and crowdsources foresight through our Kolba Lab. As UNDP gears up to deliver its Signature Solutions through Country Platforms, collective intelligence strategies will be key to convening diverse actors, identifying acupuncture points in complex systems that can accelerate SDG achievement, and facilitating integrated policy support. Practices that combine collective and artificial intelligence are emerging: our partners at Nesta just announced the launch of a Centre for Collective Intelligence Design to combine human and machine intelligence at scale.
“We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress,” notes Ray Kurzweil, co-founder of Singularity University. Governments and development organizations, including the UN system, have the responsibility to facilitate dialogues on preventative yet agile sectoral frameworks and to invest in their capabilities to understand frontier technologies, their potential and their risks. This requires going beyond attempts to regulate emerging technologies. Advances in nanotechnology, biohacking, synthetic biology, robotics and AI compel us to revolutionize our institutions and redefine the paradigms guiding development work, including value creation, social contracts on value redistribution, and human freedom. The path to such redefinitions entails concrete AI experiments to increase the effectiveness of our work and to facilitate platforms for ethical and inclusive AI.
If you are interested in co-writing the future, please get in touch.
Benjamin Kumpf is an innovation policy specialist with UNDP. Follow Benjamin on Twitter: @bkumpf