[Image: AI workshop. Photo: Waag, BY-NC-SA]

Algorithm says no: ethical guidelines for AI systems

Tammy Dobbs, a resident of Arkansas in the United States, signed up for a state disability program to help with her cerebral palsy. Eight years after her registration, the state assessor started using an automated decision system (ADS) to calculate the number of caregiver hours she was entitled to. Without explanation or discussion, the ADS allotted her 24 fewer hours per week than she had received before.

The example of Tammy is just one of many. Reports by organisations such as the AI Now Institute and books such as Automating Inequality by Virginia Eubanks point to the consequences of eligibility systems and predictive risk models that categorise groups of people based on data. Case-by-case human judgements can also be flawed or biased, but automated decision-making affects entire populations at once, so its impact is far larger.

Governing automated decision systems

To govern these systems, a number of organisations concerned with the social impact of autonomous systems have published ethical guidelines for artificial intelligence (AI). Of course, each AI system is different: an algorithm that filters spam emails has significantly less social impact than one that makes healthcare decisions. But all raise a range of questions. Who determines which AI systems pose greater risks to individuals than others? Who is responsible for the impact of an AI system, and to what extent?

To map the ethical discourse around these issues, we studied a range of existing ethical guidelines. They focus mostly on a European context and come from various fields: civic organisations, research groups, international organisations and governments. Are these ethical tools helpful for the governance of an AI project? Who is their target audience, and which part of the process do they address? This is what we learned.

Studying guidelines: some considerations

Governments and municipalities are already employing decision-making algorithms. In the Netherlands, the System Risk Indication (SyRI) system combines citizen data from multiple sources and uses risk indicators to assess who is likely to be breaking the law by cheating the benefit system. Citizens do not know if or how their data is used, or when they have been classified as a risk.
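To make the mechanism concrete, the sketch below is a purely hypothetical illustration, in Python, of how a system might combine boolean risk indicators drawn from linked databases into a single score. The indicator names, weights and threshold are invented for this example; SyRI's actual model is not public.

    # Hypothetical illustration of risk-indicator scoring.
    # Indicators, weights and threshold are invented; they do not
    # reflect SyRI's actual, undisclosed model.

    RISK_INDICATORS = {
        "address_mismatch": 0.4,   # registered address differs across databases
        "unreported_income": 0.5,  # income records inconsistent with the claim
        "utility_anomaly": 0.2,    # water/energy use atypical for household size
    }

    FLAG_THRESHOLD = 0.5

    def risk_score(record: dict) -> float:
        """Weighted sum of the boolean indicators present in a citizen record."""
        return sum(
            weight
            for indicator, weight in RISK_INDICATORS.items()
            if record.get(indicator, False)
        )

    def is_flagged(record: dict) -> bool:
        """Flag a citizen for investigation once the score passes the threshold."""
        return risk_score(record) >= FLAG_THRESHOLD

    # Two triggered indicators are enough to be flagged, without the citizen
    # ever learning which data produced the classification.
    print(is_flagged({"address_mismatch": True, "unreported_income": True}))  # True

Even in this toy form, the problem is visible: the person being scored has no way to see which indicators fired, or to contest the weights and the threshold.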

Cases with great social impact such as these highlight the necessity of considering and assessing how, and to what extent, automated decisions affect society. Will they discriminate against certain groups? Will workers be replaced as a result of using these systems? In other contexts, such as research, especially when personal data is involved, we need to pay special attention to technical security, robustness and transparency.

Almost all the guidelines we studied agree on a number of principles, such as explainability, fairness, and accountability, but the way in which these principles are ensured differs from case to case. AI4People focuses on more abstract principles, while the Center for Democracy and Technology (CDT) offers concrete technical considerations. Only a few organisations (mostly research and civic ones) explicitly mention the importance of labelling. They urge that citizens must be made aware when they are interacting with a machine, and that it must be clear who is accountable for the consequences of the use of AI.

Overall, most of these guidelines address a wide audience: they concern the design, building, implementation and governance of the whole AI system lifecycle. From the perspective of usability in the public sector and governance of AI systems, they all contain valuable information to initiate reflection. Some guidelines, however, are better suited to the public sector and provide tools that civil servants can utilise. For example, the guidelines of the European Union's High-Level Expert Group on Artificial Intelligence include an assessment list with relevant questions that civil servants and other involved parties can apply to their own projects (pp. 26-31).

Most guidelines acknowledge one issue: each project is particular to its own social context, and we cannot draw on a single set of unified principles. These guidelines are therefore useful for starting a discussion about the ethical considerations of an AI system, so that principles become ingrained in the decisions and actions of all actors involved, at every stage of the AI system lifecycle.

These guidelines are a step forward and can help us protect people such as Tammy from flawed systems. This requires a constant effort to involve ethicists and social scientists in the process, to render algorithms understandable and to develop proper auditing mechanisms.

Read the full overview of the studied guidelines.