
How Do We Start Thinking About AI and Development?

May 19, 2023

By Duncan Green

Spent a mind-bending day this week discussing AI and development with some NGO and legal folk (Chatham House Rule, so that’s all I can say, sorry). Everyone in the room knew at least ten times more than me on the subject. Perfect. Some impressions/ideas.

The catalyst for the discussion was the UK Government’s new White Paper on AI and Innovation, which is open for comments until 21st June. Reading the Exec Sum, I was struck by how narrowly it frames AI as a component of Industrial Policy, trying to establish the UK as a centre for AI by promising light-touch regulation, investment etc etc. Doesn’t look like the Foreign, Commonwealth and Development Office has been anywhere near it, because there is nothing on AI as a global public good/bad. This is all about short-term British National Interest.

So if we want to go broader, how should we think about AI as a huge opportunity/threat for global development (which it is)?

Firstly, the whole issue of how you establish some kind of governance for emerging technologies is fascinating. Think geo-engineering, or nano-tech. You don’t know how they’re going to end up being used, who’s going to miss out on the benefits, where/how they are going to be abused/cause harm. On AI, decision-makers don’t really know what they’re talking about (according to one of the AI geek lawyers in the room) and yet have to make decisions now. So they default to priors (‘light touch’) or fight the last war (‘this is what happened with X’).

But at the same time, for anyone trying to influence the emerging governance regime for AI, now is the time – ideas and rules are most malleable at the start, then rapidly harden and become much harder to shift. Waiting until everything becomes clear means you miss your best chance to shape what’s coming.

Spooky and horrible: Asked this AI image generator to illustrate ‘A blog post about the possible benefits and threats of AI to people living in poverty around the world’. WTF?

If you can’t sensibly set detailed regulation for applications that no-one has thought of yet, there are still things you can do:

Get the process right: Who is going to monitor AI’s evolution, identify barriers to good stuff or emerging harms, and then tackle them, whether through discussion, regulation or litigation? There was nothing inherently evil about the technology behind ride-sharing apps, until some of the companies involved started doing away with labour rights for their workers. What will be the AI equivalent?

Focus on rights: Not just individual rights, which often come down to who is being harmed by the new tech, but also collective rights. What impact is the new tech having on digital exclusion and inequality? Lots of double-edged swords here: should data be decolonized, to make sure AI is fed enough info on women, people of colour etc to avoid any white male bias, e.g. in facial recognition, or is that an invasion of privacy?

Open Access: Already there is a big drive to make use of AI open to all. Great. Open Access also means you can look inside the ‘black box’ and see what is going on. That could hold real benefits for developers and users in the Global South, but it also makes it even harder to do anything about potential harms (e.g. when deep fake videos are used to stoke conflict in fragile states).

Who’s at the Table? How do we get meaningful (rather than tokenistic) participation of diverse voices from the beginning? A lit review of non-Northern voices (what’s Nanjala Nyabola writing about AI, for instance)? A global Citizens’ Assembly to do some deliberating on risks and opportunities, then come up with some recommendations for inclusive governance?

Second, Due Diligence: If there is foreseeable harm, then companies and others can be held to account for anticipating and preventing it, but how do you do that when you don’t know what future AI applications will look like? Traceability – if something goes wrong, can it be traced back to establish the cause and the culprit? Testing to see if you’ve missed anything. Then monitoring so you pick up and remedy any harms asap.

Third, Systems: This is where it all gets a bit sci-fi (think Skynet in Terminator). An AI system is complex, not like a bridge, which is merely complicated and can be broken down into its constituent parts. AI engineers are already admitting they don’t understand how their creation works. If the system is a neural network, should you attribute any harm it does to its original creators? That would be like trying to prosecute the parents of a serial killer. A recent Economist piece even suggested we would need to treat malfunctioning AIs like humans and take some kind of psychotherapeutic approach. Wild. Perhaps better to regulate at the point of use, e.g. requiring a licence to use it, like a driving licence, or employers’ liability insurance.

Final Thoughts: Maybe this reflected the mindset of the litigators and tech-sceptic NGOs in the room, but the conversation was overwhelmingly about threats rather than opportunities. Need to think more about the latter and how to ensure they materialize.

There are lots of really good analogies and precedents from the introduction of other technologies, from driverless cars to HIV/AIDS drugs, but we need some convincing hypotheticals (or actuals) about where AI can cause harm/benefits.

Transparency: One of the problems with AI is that it’s really hard to distinguish its output from human work (as I know from marking my students’ essays!). But the best way to identify AI is probably… asking AI. Can someone come up with an AI blocker, similar to an ad blocker, that we can install on our laptops?

Barely scraping the surface here, and lots of the legal/technical stuff was completely over my head, so open invitation to other participants to chip in!
