Vulnerable By Design

1. The Papers: Algorithmic Accountability

In this episode

How are public sector bodies held accountable for their use of algorithmic systems? We look at a recent report.

Transcript

Hello, and welcome to The Papers on Vulnerable By Design, the series in which we cover some of the latest and most interesting vulnerability research. I am Chris Onrust.

Amsterdam Algorithm Register

Like many cities across the globe—think Barcelona, San Francisco—the city of Amsterdam faces pressure on its housing stock. Homes, instead of being used by locals to live in, get offered to tourists for short-term holiday lets, on platforms such as Airbnb.

On the first of July 2020, the city of Amsterdam started trialing the use of an algorithm that assists in identifying homes rented out illegally.

It starts with a report of illegal rentals that the city receives, either from neighbors or from the platforms themselves. And then, based on analysis of patterns of confirmed illegal rentals over the past years, the algorithm calculates the probability that illegal letting is indeed taking place at the reported address. The probability score delivered by the algorithm is used by the city’s Department of Surveillance and Enforcement to prioritize cases, so that, as the city itself states, “the limited enforcement capacity can be used efficiently and effectively”. So basically, instead of dealing with all reports in the order in which they come in, the city concentrates its efforts on reports that resemble cases confirmed in previous years.
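
To make that prioritization step concrete, here is a minimal sketch in Python. To be clear, this is my own illustration, not the city’s code: the feature names, the weights, and the scoring rule are all invented assumptions. Only the overall shape comes from the register’s description: score each report, then work through the queue from most to least probable.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A reported address. The features are hypothetical:
    the city has not published its model's actual inputs."""
    address: str
    confirmed_cases_nearby: int  # prior confirmed illegal lets in the area
    listed_on_platform: bool     # address appears on a holiday-let platform
    complaints_last_year: int    # neighbor complaints in the past year

def probability_score(report: Report) -> float:
    """Toy stand-in for the city's model: combine a few signals into a
    score between 0 and 1. The weights are made up for illustration."""
    score = 0.1
    score += 0.2 * min(report.confirmed_cases_nearby, 3)
    score += 0.15 if report.listed_on_platform else 0.0
    score += 0.02 * min(report.complaints_last_year, 5)
    return min(score, 1.0)

def prioritize(reports: list[Report]) -> list[Report]:
    """The key move: instead of first-come-first-served, enforcement
    works through reports from most to least probable."""
    return sorted(reports, key=probability_score, reverse=True)

queue = prioritize([
    Report("Herengracht 1", 2, True, 4),
    Report("Prinsengracht 9", 0, False, 1),
])
print([r.address for r in queue])  # most probable case first
```

Whatever the real model looks like, the accountability questions the report raises apply to exactly these choices: which signals go in, how they are weighted, and who gets to inspect that.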

Well, why do I mention all this? The reason is that the algorithm trialed by the city of Amsterdam is one example of an algorithm included in the city’s Algorithm Register. In 2020, Amsterdam and Helsinki became the first municipalities to launch such a register.

Report on Algorithmic Accountability

Moreover, the register is also one of the main policy instruments or tools discussed in a report that came out just a few weeks ago, titled Algorithmic Accountability for the Public Sector. This report looks into how law and policy can be used to control or guide the use of algorithms in different sectors of government. And this is a practice—the use of such algorithms—that has been increasing massively over the past few years.

Okay, well, let’s turn to the report. Who wrote it? The report is produced by three different parties. First of all, the Ada Lovelace Institute, which is a research institute funded by the Nuffield Foundation in the UK. Then the AI Now Institute, which is part of New York University (NYU). And the Open Government Partnership, which is a partnership based in the US, with participants from over 70 countries and local governments worldwide. This partnership receives money from, amongst others, its participating members, the US government, the European Union, the Ford Foundation, and the Luminate Group, which was set up by the founder of eBay.

Okay, so what does the report say? Well, the focus of this report is on what they call algorithmic accountability. Let’s start by dissecting what that actually means. And I will start with accountability.

What is accountability?

So if we just look at basic definitions, then ‘accountability’ is the fact or condition of being accountable. And being accountable is being required or expected to justify your actions or decisions: being responsible, being answerable for what you’ve done or decided. That’s just a basic definition of accountability, or of being accountable.

Within this report, accountability is understood, and I’ll give you a quote there, as “a relationship between the actors who use or design algorithmic systems, and forums that can enforce standards of conduct”. So in the report, they’re not really giving an overall definition of accountability generally, but they’re just specifying that accountability falls into the category of relations. So it’s a relation between two parties: on the one hand, people or bodies that use or design algorithmic systems—and let’s just note that those who use a system and those who design it can be very different people—and on the other hand, those who can enforce standards with respect to the use or design of algorithmic systems.

What is an algorithm?

Okay, well, we’ve mentioned the term ‘algorithm’ already a couple of times here, even though we were looking at accountability. Let’s turn to this idea of algorithmic. So the basic thing it tells us is that it’s something to do with algorithms. Well, what’s an algorithm, you might ask?

Well, let’s start just with basic definitions again. So an algorithm can be understood as a process, or a set of rules to be followed in calculations or other problem-solving operations. Good. Let’s start from that. So I’ll give you a couple of examples. Baking a cake. Putting on your coat. Most likely, when you are baking a cake or when you put on your coat, you go through a basic set of steps, a procedure that you follow, in order to solve the problem of the cake that needs to be baked, or the coat that needs to be put on. So if you follow a series of steps to solve those problems, there’s very good ground for classifying cake baking and coat-putting-on as examples of algorithms.

That’s not usually how ‘algorithm’ is understood nowadays, though. Nowadays, we really associate it with problem solving or calculations carried out by computers. And that’s actually quite similar to how it’s used in this report. So here, they characterize ‘algorithm’ as a series of steps through which particular inputs can be turned into outputs. And an algorithmic system they define as a system that uses one or more algorithms, usually as part of computer software—so there we’ve got ‘computer’—to produce outputs that can be used for decision making. So that’s how they understand algorithms here: inputs turned into outputs, as done by computers.
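
If it helps to see that inputs-to-outputs picture in miniature, here is a trivial example. This is mine, not the report’s: just the definition made literal, a fixed series of steps that turns a list of numbers into a single output.

```python
def mean(values: list[float]) -> float:
    # Step 1: accumulate the inputs.
    total = 0.0
    for v in values:
        total += v
    # Step 2: divide by the count to produce the output.
    return total / len(values)

print(mean([2.0, 4.0, 9.0]))  # -> 5.0
```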

And their specific focus is on algorithms used for decision making. Okay, well, as the title of the report gives away, its focus is on the public sector. So if we put all of this together, then for algorithmic accountability, the authors of this report have in mind: mechanisms or tools, within law or policy, through which those who design or use algorithms in public sector decision making can be held to account; so, can be asked to justify, or be held responsible for, their decisions or actions. That’s the overall theme and focus of the report.

Public sector decision making

Now, what sort of public sector decision making might we be thinking of here? And I should say that the report here is not super clear on delineating exactly or neatly what counts as the type of decision making that they’re concerned with. So technically, what we could think of is any domain in the public sector where algorithmic systems are used. But yeah, let’s just note that algorithmic systems, and the use of algorithms for decision making, are pretty much everywhere nowadays. Take, for example, a word processing program, email clients, digital calendars, software for hosting online meetings; even the coffee maker is very likely to use some sort of algorithm to decide when to put in sugar, if that’s what you use. So the use of algorithms in systems for decision making in the public sector is quite likely to be just omnipresent.

Now, again, that’s not exactly what they have in mind with this report, even though the broad definition that they give seems to fit it. What they have in mind is this, and I’ll give you a longer quote for that. They say: “Governments around the world are increasingly turning to algorithms to automate or support decision making in public services. Algorithms might be used to assist in urban planning, prioritize social care cases, make decisions about welfare entitlements, detect unemployment fraud, or surveil people in criminal justice and law enforcement settings.”

Okay, these are quite varied examples. Some of these are more to do with detection or surveillance. For example, is it okay to use computer algorithms to monitor people’s faces en masse in certain public spaces, even when (I’ll just add) there’s no suspicion that anyone has done anything wrong? How about using automation to detect money laundering? So that’s really about detecting certain patterns in public life.

But others of the examples I mentioned are more about choices. Should we automate decision making on whether a child, this child, gets social care support? How about decisions on whether and where to build a new road? These are more to do with choices or prioritization. And we can think of the illegal rental detection algorithm, or probability calculating algorithm, of the city of Amsterdam as an example of this too. So that’s the domain of the sort of uses of algorithms in public life, or in the public sector, that the authors of this report are thinking of.

Now, I’ll emphasize that the focus of this report is not on the use of these algorithms in the public sector itself, but, a step back, on the legal and policy mechanisms that can be used to guide or control how algorithms are used in the public sector. So really a step more removed from the use of algorithms themselves.

Accountability mechanisms

Okay, so what sort of mechanisms in law and policy are at issue here then? What sort of mechanisms or tools are in place? The authors of this report looked at quite a bunch of them: forty different ones, in different geographies, or geographical locations. And you can see the full list of tools that they looked at in the report itself. But just to give you an indication: it includes the EU’s General Data Protection Regulation, or GDPR, which was introduced in 2016. And the Moratorium on Facial Recognition, which was adopted in Morocco in 2019. The Ethical AI Toolkit in the United Arab Emirates, introduced in the same year. Or, for example, the UK’s Review into Bias in Automated Decision Making, which was introduced last year.

Now, the authors are very clear that they don’t pretend that the list of policy mechanisms that they’re looking at is anything close to complete or comprehensive. So what they’re talking about is what they call the first wave of policy mechanisms: basically, those introduced since 2016, technically across the globe, but with a bit of a skew towards the UK, Europe and North America, because that’s where these organizations are based. But yeah, basically, they just want to get people started in thinking about this.

So how can the law and policy be used, broadly, to respond to automation, automated decision making, in the public sector? Okay, well, let’s turn to the findings. So what did they find?

The main point that stands out for me is that they highlight that there’s quite some variety and spread in how the issue of the use of algorithms is approached in policy, and in what instruments are used. So the report differentiates between eight different types of approaches; they classify the mechanisms into eight different kinds. And they range from the very basic, requiring transparency, so giving people information about how and where algorithmic systems are used;

to issuing guidelines with respect to best practices; to appointing an independent oversight body, which could monitor and, if necessary, apply sanctions regarding how such algorithmic systems are used; to, on the stricter end, flat-out bans on the use of certain systems. So here you can think of Morocco’s moratorium on the use of facial recognition.

Based on their survey, the authors also issue a set of six recommendations, or what they call ‘key lessons’, that can be relevant for anyone in the public sector engaged with the use of algorithms. So, let me run you through those.

Lesson 1: Clear and consistent

Lesson one: Clear institutional incentives and binding legal frameworks can support consistent and effective implementation of accountability mechanisms, supported by reputational pressure from media coverage and civil society activism.

Translation: If, as a government or a government body, you want to make sure your accountability mechanisms work well, then make sure that being accountable is appealing, make sure that it is required by law, and make sure that the media and activists are on your back. Now, this latter point I find interesting, because it seems that the authors of this report think that internal mechanisms will not be enough, and that pressure from the media and activists will be needed to ensure that public bodies are being held accountable for the use of algorithmic systems. Interesting.

Lesson 2: Definitions

Lesson two: Algorithmic accountability policies need to clearly define the objects of governance as well as establish shared terminologies across government departments.

Translation: Make sure you know what everyone’s talking about and what your policies actually apply to.

Lesson 3: Scope

Lesson three: Setting the appropriate scope of policy application supports their adoption. Existing approaches for determining scope, such as risk based tiering, will need to evolve to prevent under- and over-inclusive application.

Translation: If what the policy does and doesn’t apply to is clear and suitable, not too broad, not too narrow, then people will be more likely to implement the policy.
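
To give a feel for what risk-based tiering means, here is a hedged sketch. The tiers, the questions, and the cut-offs are all invented for illustration; the report does not prescribe any particular scheme. The point is only the mechanism: answers to a few questions about a system place it in a tier, and the tier determines how heavy the accountability obligations are.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"        # no special obligations
    LIMITED = "limited"        # e.g. transparency notices required
    HIGH = "high"              # e.g. impact assessment and human oversight
    PROHIBITED = "prohibited"  # flat-out ban, like a facial-recognition moratorium

def classify(affects_rights_or_entitlements: bool,
             biometric_surveillance_in_public: bool,
             fully_automated: bool) -> RiskTier:
    """Toy tiering rule. The questions, their order, and the thresholds
    are illustrative assumptions, not any regulator's actual criteria."""
    if biometric_surveillance_in_public:
        return RiskTier.PROHIBITED
    if affects_rights_or_entitlements and fully_automated:
        return RiskTier.HIGH
    if affects_rights_or_entitlements:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Lesson three’s worry about under- and over-inclusive application is exactly the worry that cut-offs like these get drawn in the wrong place.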

Lesson 4: Targeting

Lesson four: Policy mechanisms that focus on transparency must be detailed and audience appropriate to underpin accountability.

Translation: If you give people information about where and how you’re planning to use automation in the public sector, make sure you actually tell people what you’re going to do. Make sure it’s actually formulated in a way that’s relevant to people and that they can understand.

Lesson 5: Participation

Lesson five: Public participation supports policies that meet the needs of affected communities. Policies should prioritize public participation as a core policy goal, supported by appropriate resources and formal public engagement strategies.

Translation: It helps if people are informed and can give their input about the use of automation or algorithmic systems that actually affect them.

Lesson 6: Coordinate

Lesson six: Policies benefit from institutional coordination across sectors and levels of government to create consistency in application and leverage diverse expertise.

Translation: Please talk to your colleagues across the corridor or in other sectors to make sure that everyone applies policies in a similar way. You might even learn something from one another.

Okay, I’m presenting this in a bit of a theatrical way, but I hope you get the point of these six key lessons from this report on algorithmic accountability in the public sector.

Broader framework

Now, I’d like to take a step back and point out what this all means. I think the main takeaway is that laws and public policy mechanisms around the use of algorithmic systems and automation … it’s all still quite haphazard at the moment. Everyone is still finding their feet. And to those people working with these systems, the report basically says: Please, when you’re planning law or policy around these algorithmic systems, make sure you talk to people. Inform the parties affected. Learn from others who are doing the same. Collaborate.

And I think it might be useful to put this in a wider framework. Because the report comes in the context of automation and algorithmic systems being used more and more in all kinds of different places. Often without proper scrutiny. And often on the model of: move fast and break things, and apologize later. Or not at all.

Now, opinion: I’m not sure that that’s generally okay. But when it comes to the state, and to local government, within the public sector, I think it’s right that there’s extra attention to how these systems are being adopted, because of the impact they have on people’s lives.

And I suspect that this is also the spirit in which this report was written: really, to take stock of how, across the globe, players in the public sector are taking action to establish law and policy to control and guide the use of automation and algorithms. So, along those lines: lots of work to do, and we will see what the future holds.

Thanks to the authors of Algorithmic Accountability for the Public Sector for this week’s report. For more on vulnerability research, talks and essays, stay tuned for fresh episodes from Vulnerable By Design, our parent programme. You can also sign up to our email newsletter, The Vulnerability Letter. Head to vulnerablebydesign.net for more information. I am Chris Onrust. Thanks for listening and bye for now.
