
How do we Start Thinking About AI and Development?

May 19, 2023

By Duncan Green

Spent a mind-bending day this week discussing AI and development with some NGO and legal folk (Chatham House Rule, so that’s all I can say, sorry). Everyone in the room knew at least ten times more than me on the subject. Perfect. Some impressions/ideas.

The catalyst for the discussion was the UK Government’s new White Paper on AI and Innovation, which is open for comments until 21st June. Reading the Exec Sum, I was struck by its narrow framing of AI as a component of Industrial Policy: trying to establish the UK as a centre for AI by promising light-touch regulation, investment etc etc. It doesn’t look like the Foreign, Commonwealth and Development Office has been anywhere near it, because there is nothing on AI as a global public good/bad. This is all about short-term British National Interest.

So if we want to go broader, how should we think about AI as a huge opportunity/threat for global development (which it is)?

Firstly, the whole issue of how you establish some kind of governance for emerging technologies is fascinating. Think geo-engineering, or nano-tech. You don’t know how they’re going to end up being used, who’s going to miss out on the benefits, where/how they are going to be abused/cause harm. On AI, decision-makers don’t really know what they’re talking about (according to one of the AI geek lawyers in the room) and yet have to make decisions now. So they default to priors (‘light touch’) or fight the last war (‘this is what happened with X’).

But at the same time, for anyone trying to influence the emerging governance regime for AI, now is the time: ideas and rules are more malleable at the start, then rapidly become set in stone and harder to shift. Waiting until everything becomes clear means you miss your best chance to shape what’s coming.

Spooky and horrible: Asked this AI image generator to illustrate ‘A blog post about the possible benefits and threats of AI to people living in poverty around the world’. WTF?

If you can’t sensibly set detailed regulation for applications that no-one has thought of yet, there are still things you can do:

Get the process right: who is going to monitor AI’s evolution, identify barriers to good stuff or emerging harms, and then tackle them, whether through discussion, regulation or litigation? There was nothing inherently evil about the technology behind ride-sharing apps, until some of the companies involved started doing away with labour rights for their workers. What will be the AI equivalent?

Focus on rights: Not just individual rights, which often come down to who is being harmed by the new tech, but also collective rights. What impact is the new tech having on digital exclusion and inequality? Lots of double-edged swords here: should data be decolonized, to make sure AI is fed enough info on women, people of colour etc to avoid any white male bias, e.g. in facial recognition, or is that an invasion of privacy?

Open Access: already there is a big drive to make use of AI open to all. Great. But Open Access also means you can look inside the ‘black box’ and see what is going on. That could hold real benefits for developers and users in the Global South, but open access also makes it even harder to do anything about potential harms (e.g. when deepfake videos are used to stoke conflict in fragile states).

Who’s at the Table? How do we get meaningful (rather than tokenistic) participation of diverse voices from the beginning? A lit review of non-Northern voices (what’s Nanjala Nyabola writing about AI?), perhaps? A global Citizens’ Assembly to deliberate on risks and opportunities and then come up with some recommendations for inclusive governance?

Second, Due Diligence: If there is foreseeable harm, then companies and others can be held to account for anticipating and preventing it, but how do you do that when you don’t know what future AI applications will look like? Traceability: if something goes wrong, can it be traced back to establish the cause and the culprit? Testing, to see if you’ve missed anything. Then monitoring, so you pick up and remedy any impact asap.

Third, Systems: This is where it all gets a bit sci-fi (think Skynet in Terminator). An AI system is complex, not like a bridge, which is complicated but can be broken down into its constituent parts. AI engineers are already admitting they don’t understand how their creations work. If the system is a neural network, should you attribute any harm it does to its original creators? That would be like trying to prosecute the parents of a serial killer. A recent Economist piece even suggested we would need to treat malfunctioning AIs like humans and take some kind of psychotherapeutic approach. Wild. Perhaps better to regulate at the point of use, e.g. requiring a licence to use it, like a driving licence, or employers’ liability insurance.

Final Thoughts: maybe this reflected the mindset of litigators and tech-sceptic NGOs in the room, but the conversation was overwhelmingly about threats rather than opportunities. We need to think more about the latter and how to ensure they materialise.

There are lots of really good analogies and precedents from the introduction of other technologies, from driverless cars to HIV/AIDS drugs, but we need some convincing hypotheticals (or actuals) about where AI could do harm or deliver benefits.

Transparency: one of the problems with AI is that it’s really hard to distinguish it from human activity (as I know from marking my students’ essays!). But the best way to identify AI is probably… asking AI. Can someone come up with an AI blocker, similar to an ad blocker, that we can install on our laptops?

Barely scraping the surface here, and lots of the legal/technical stuff was completely over my head, so open invitation to other participants to chip in!


Comments

  1. Please post a piece one day that was written by AI – to see if we can spot it.

    And since we who comment below the line have to tick a box and ID pics to ‘prove’ we are not bots, I think in future it would be somehow appropriate to find a way to prove that you have not become one either.

  2. All very good, important, salient points, of course. What keeps me up at night is the lack of incentives for the private sector to put social good ahead of potential profit. I think we have seen that ‘altruism’ itself is not enough of a motivator, because when it comes down to it, shareholders want ROI, and any initiatives designed to ‘do good’ or ‘give back’ will be de-funded and scrapped in the interest of short-term profits.

    I believe one of the most cogent arguments about this was recently published in The New Yorker by Ted Chiang, “Will A.I. Become the New McKinsey?” in which he writes, “Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.”

    So it really will come down to governments and multi-lateral international institutions to establish laws and regulations that have teeth and hold creators and implementers of A.I. accountable. But, these days, who effectively owns the governments? Certainly not the people most likely to be harmed by A.I.

  3. Yuval Noah Harari’s recent Economist article on this is well worth a read, and concludes with a pretty basic (yet potentially transformative) recommendation on regulation:

    “And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.”

    https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation

    1. Also great to see the Gates Foundation engaging in this debate and sharing how they are approaching it.

      https://www.gatesfoundation.org/ideas/articles/artificial-intelligence-ai-development-principles

      “As the foundation engages in work that leverages the power of AI, we will be guided by a set of first principles that shape our initial approach and aligned with our core mission—to help create a world where every person has the opportunity to live a healthy, productive life. These principles will be refined and adapted as we engage with partners and other outside experts, as we learn based on experience, and as future developments in AI technology evolve. 

      First principles 
      Adhere to our core values
      The foundation is guided by a belief that all people, no matter the circumstance into which they are born, should be able to live a healthy life and reach their full potential. Our approach to the use of AI technology is therefore grounded in the need to promote greater equity and opportunity for resource-poor communities. AI can be a useful tool in advancing these goals—if its downsides are properly managed.  

      Promote co-design and inclusivity 
      Low-income countries must not just be seen as beneficiaries or end-users of AI but as essential collaborators and partners in program design and uses. This means sharing insights, concerns, and information across organizations and geographies to drive AI use that is fit for purpose and contextually sensitive. We will approach the use of AI collaboratively and recognize that effective partnership must be intentional and inclusive. Acknowledging the limited availability of digital infrastructure in LMICs, maximizing the benefits of AI in these regions may present additional challenges. In keeping with our overall approach to innovation, we will invest in developing an evidence base for the responsible use of AI with, and for, communities and populations that stand to benefit from them. 

      Proceed responsibly 
      We view our role to help ensure equitable use and access to AI tools. We acknowledge that proceeding responsibly requires an approach centered on compliance, inclusivity, and continuous improvement. To achieve this, we will leverage established legal regulations, industry standards, and ethical guidelines to navigate the complex landscape of AI applications for health and development uses. We will proceed in a step-wise fashion, starting with a confined set of use cases and gradually scaling up as the evidence base is built out. 

      Address privacy and security 
      Privacy and security are essential when it comes to the use of AI, particularly given that it will likely increasingly be used in situations that involve sensitive personal information. It will be important to regularly conduct privacy and security assessments, ensure compliance with relevant regulations including data protection laws, implement transparency and accountability measures, and continually engage with stakeholders to improve systems. It will also be vital to ensure such practical measures have been taken in advance of collecting sensitive data based on informed consent, opt-out measures are provided, and materials are shared in appropriate local languages.  

      Build for equitable access 
      Amid a rapid AI transformation, there are important questions on its equitable access, use, and addressing systemic inequity. A commitment to equitable access is not just about distribution but also about ownership, maintenance, and support for AI uses within the development context.  

      Ensure transparency  
      Given the potential for companies to commercialize the use of AI tools, we understand the importance of approaching this work with transparency. All of our grants are a matter of public record. And we adhere to a conflict-of-interest policy that guides all our work. Data should be shared to the greatest extent possible as a public good to allow for continual testing, improvement, and innovation. ” 

  4. Hi Duncan.

    I’m Oxfam’s new Global Programs Director, back in the organization after three years.

    Excellent reflection on AI. I agree 100% with you that we need to have conversations on both the risks and the opportunities. But very often in our sector, we start by analysing the risks and arming ourselves against them before exploring the opportunities and equipping ourselves to exploit them.

    I’ve just written an article on AI and NGOs (link: https://www.linkedin.com/pulse/wave-artificial-intelligence-coming-ingos-must-ride-adama-coulibaly). I wrote it before reading your reflections, but I can see that we agree on many things. To be continued.

    Coul

