Image: protest in the USA. Photo: Victoria Pickering (CC BY-NC-SA)

Biased technology: bad neighbourhood or bad data?

Technology is not neutral. It is, instead, a reflection of our cultural and political beliefs, which include prejudices, inequalities and racism. In this series on institutional racism, we’ll try to get to the root of the matter. As we search for solutions, we’ll use concrete examples to demonstrate that technology is not neutral.

When planning to buy a house, you would naturally be curious about what your potential new neighbourhood might be like. In the Netherlands, you can look up the liveability of a neighbourhood with the help of a tool called the Liveability Meter. Based on data from the Central Bureau of Statistics (CBS), the Liveability Meter uses a green and red colour-coding system to indicate whether a neighbourhood is pleasant (green) or unpleasant (red) to live in. At first glance, this seems like a neutral application of data. But how is the liveability score of a neighbourhood actually determined? And how neutral are these classifications?

Gerwin van Schie, a PhD candidate at Utrecht University, has been researching prejudices in technology and data applications for years. He investigates how the government collects data about citizens’ ethnicity and how that data is applied in projects like the Liveability Meter.

Data collection

Every data application project starts with data collection. In the Netherlands, the government collects information about, among other things, the background of its citizens. Storing information specifically about ethnicity is prohibited in the Netherlands, but information regarding citizens’ nationality, place of birth and parents’ place of birth is stored in the Municipal Personal Records Database (the 'Gemeentelijke Basis Administratie' in Dutch, or GBA). “With the help of this data, the Dutch government can determine which group you likely belong to,” explains Van Schie. “In America, people fill in details about their identity themselves. This can sometimes lead to problems when people misrepresent themselves. However, American citizens do have more say in how they identify. We don't have that in the Netherlands.”

The data collected by the government is then analysed by the CBS and used in the creation of societal statistics. For example, until 2016, people were characterised as either 'native Dutch' or 'immigrant'. Recently, the terminology was changed to 'Dutch with an immigrant background'. Yet, the function of this categorisation remains the same: a distinction is made if you or one of your parents was born outside of the Netherlands.

The CBS also differentiates between a Western and a non-Western background. While this classification is common in the Netherlands, it does not exist in other countries. “The term ‘Western’ refers to people from Europe (excluding Turkey), North America, Oceania, Japan and Indonesia,” Van Schie explains. “People from Turkey, Africa, Latin America, Asia (excluding Japan), Suriname, the Netherlands Antilles and Aruba are considered to be ‘non-Western’. So this categorisation has very little to do with geography, which indicates that other factors, like race and ethnicity, play a role.” As the Scientific Council for Government Policy wrote in 2016, it is “not the geographical location of the country of origin, but the predominantly ‘white’ characteristics of the migrants that is ultimately the deciding factor.”
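That classification rule is easy to make concrete. The sketch below follows the definitions as described above; the function, the region labels and the simplified handling of multiple foreign-born parents are our own illustration, not the CBS’s actual implementation.

```python
# Sketch of the classification logic described above. The rules follow the
# definitions quoted in the text; names and region labels are illustrative.
WESTERN = {"Europe (excl. Turkey)", "North America", "Oceania", "Japan", "Indonesia"}

def background(own_birth_region: str, parent_birth_regions: list[str]) -> str:
    """Classify a person the way the statistics do: purely on the basis of
    where they and their parents were born."""
    foreign = [r for r in [own_birth_region, *parent_birth_regions]
               if r != "Netherlands"]
    if not foreign:
        return "Dutch background"
    # One foreign-born parent is enough to count as having an immigrant
    # background. The real definition has precedence rules when several
    # foreign regions apply; here we simply take the first.
    return ("Western immigrant background" if foreign[0] in WESTERN
            else "non-Western immigrant background")

print(background("Netherlands", ["Netherlands", "Suriname"]))
# -> non-Western immigrant background
```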

'The categorisation of Western versus non-Western has little to do with geography.'

Yet, even in 2020, the CBS is still using the artificial classifications of ‘Western’ and ‘non-Western’ in the analysis and application of data. Data from the CBS on societal demographics is public and available via an application programming interface (API). Because the data is about groups and is sorted by postal code, privacy laws protecting personal data do not apply. As a result, both the government and external bodies (like the police and tax authorities) can use the data for their own purposes.
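For readers who want to inspect this data themselves, the sketch below shows roughly how such neighbourhood-level statistics can be retrieved from the public CBS StatLine OData interface. The table id, paging parameters and field handling are assumptions for illustration and should be checked against the StatLine documentation.

```python
# Minimal sketch: fetching neighbourhood-level CBS statistics via the public
# StatLine OData API. The table id below is an example (a "key figures per
# district/neighbourhood" table); verify it in the StatLine catalogue.
import requests

TABLE_ID = "83765NED"  # assumption: an example StatLine table id
BASE_URL = f"https://opendata.cbs.nl/ODataApi/odata/{TABLE_ID}/TypedDataSet"

def fetch_rows(skip: int = 0, top: int = 100) -> list[dict]:
    """Fetch one page of rows from the typed data set as JSON."""
    params = {"$skip": skip, "$top": top, "$format": "json"}
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["value"]

rows = fetch_rows()
# Each row is keyed by a region code; demographic columns (including the
# 'Western'/'non-Western' breakdowns discussed above) sit alongside it.
print(rows[0] if rows else "no data returned")
```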


The Liveability Meter

An example of an application of this data is the Liveability Meter, an interactive, online tool from the Ministry of the Interior and Kingdom Relations. According to the ministry, the Liveability Meter can be used to keep track of the quality of life in all residential areas in the Netherlands. “Based on one hundred (mostly) objective indicators (characteristics of the residential environment), [the Liveability Meter] provides an estimate of the quality of life,” the website explains.

This purported objectivity of both the data and the application of that data is problematic.

“The Liveability Meter draws on all kinds of data when determining the liveability of a district, including nationality and immigrant background. Every nationality other than Dutch, according to this system, has a negative impact on the quality of life,” Van Schie explains. “The system implies, therefore, that the presence of people with an immigrant background reduces the quality of life in a neighbourhood. That might be a particular individual’s opinion, but it’s strange that the government would present it as factual information.”
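A toy calculation shows how such a value judgement disappears into the arithmetic. The indicator names and weights below are invented for illustration; the real Liveability Meter uses around one hundred indicators. The point is only that the moment a demographic share receives a negative weight, the 'objective' score is lower by construction.

```python
# Toy illustration (not the actual Liveability Meter model): a weighted sum of
# neighbourhood indicators. Giving a demographic indicator a negative weight
# encodes the judgement that residents with an immigrant background lower
# liveability. All names and weights here are hypothetical.
INDICATOR_WEIGHTS = {
    "share_owner_occupied": +0.4,
    "distance_to_green_space": -0.2,
    "reported_nuisance": -0.3,
    "share_non_dutch_background": -0.5,   # <- the contested design choice
}

def liveability_score(neighbourhood: dict) -> float:
    """Weighted sum of indicator values (each normalised to 0..1)."""
    return sum(INDICATOR_WEIGHTS[name] * value
               for name, value in neighbourhood.items()
               if name in INDICATOR_WEIGHTS)

# Two neighbourhoods identical on every indicator except demographic make-up:
a = {"share_owner_occupied": 0.6, "distance_to_green_space": 0.2,
     "reported_nuisance": 0.1, "share_non_dutch_background": 0.1}
b = dict(a, share_non_dutch_background=0.6)
print(liveability_score(a), liveability_score(b))  # b scores lower by construction
```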

Screenshot of the Liveability Meter in Rotterdam


“When you design systems—systems like the Liveability Meter—they will always contain certain cultural and political values. You have to communicate those values to the end user, but we don’t see that in this case. The addition of information regarding nationality or place of birth is a political choice. There is no questionnaire beforehand asking what the user wants from a neighbourhood. If you were to publicly ask people whether they’d prefer more or fewer people with an immigrant background in their neighbourhood, most would realise that the question is racist. However, if we build it in as standard, as in the case of the Liveability Meter, then nobody notices and it seems neutral.”

‘When you design systems—systems like the Liveability Meter—they will always contain certain cultural and political values.’


Predictive policing

This use of data and technology in systems not only generates tendentious and racist results, but is also productive. That is, using systems like these actively produces racist views and contributes to discrimination.

An obvious example of this phenomenon is the Crime Anticipation System (CAS), a predictive policing system used by the national police since 2017. Predictive policing uses data and statistical probability calculations to predict future crime. CAS combines CBS data about the socio-economic and demographic profile of an area with the police’s own crime figures. From these results, police determine which areas have the highest risk of crime and where they should send extra surveillance.
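In outline, such a system can be thought of as a risk score per map cell, as in the sketch below. This is not the actual CAS model; the features, weights and grid layout are assumptions, meant only to show how demographic data and crime records merge into one ranking that steers patrols.

```python
# Sketch of the general predictive-policing pattern described above, not the
# actual CAS implementation. Feature names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    past_crime_count: int      # from police registrations
    share_non_western: float   # from CBS demographic tables (0..1)

def risk_score(cell: Cell) -> float:
    # A simple linear combination; real systems fit such weights statistically.
    return 0.7 * cell.past_crime_count + 30.0 * cell.share_non_western

def patrol_targets(cells: list[Cell], k: int = 3) -> list[str]:
    """Return the k highest-scoring cells for extra surveillance."""
    return [c.cell_id for c in sorted(cells, key=risk_score, reverse=True)[:k]]

cells = [Cell("north", 12, 0.10), Cell("west", 12, 0.45), Cell("south", 9, 0.60)]
print(patrol_targets(cells, k=2))  # demographics alone reorder otherwise similar cells
```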

The ‘non-Western background’ category was part of CAS until 2016. Today, the data on which CAS is based still contains nationality and place of birth. “People often say that ethnic profiling is accidental,” says Van Schie. “But this system is designed to discriminate. The moment you include data such as nationality and place of birth in a design like this, you consciously agree to discriminate on the basis of ethnicity.”

Consequently, police surveillance is more frequent in neighbourhoods with a higher population of immigrants. In this way, CAS produces its own reality—the reality in which people with an immigrant background are arrested more often. That arrest data is then fed back into the system, further strengthening the bias. Of course, more police surveillance at certain locations also means less surveillance at other locations. “Wherever one group is discriminated against, there is another large group benefiting because they aren’t caught as quickly.”
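The feedback loop is simple enough to simulate. In the deterministic toy model below, two cells have exactly the same underlying offence rate; one merely starts with a slightly higher recorded count, for instance because a demographic weight nudged its risk score up. All numbers are made up, but the mechanism matches the description above: patrols follow the records, and the records follow the patrols.

```python
# Deterministic toy model of the feedback loop described above. Both cells
# offend at the same rate; cell_A only starts with a small head start in its
# *recorded* count. All numbers are hypothetical.
recorded = {"cell_A": 12.0, "cell_B": 10.0}
DETECTIONS_PER_PATROL = 0.05      # same true detection rate everywhere

def allocate_patrols(records: dict, total: int = 100) -> dict:
    """Send patrols in proportion to recorded crime."""
    share_a = records["cell_A"] / sum(records.values())
    return {"cell_A": total * share_a, "cell_B": total * (1 - share_a)}

for _ in range(20):
    for cell, patrols in allocate_patrols(recorded).items():
        recorded[cell] += patrols * DETECTIONS_PER_PATROL

print(recorded)
# The gap in recorded crime grows from 2 to roughly 11 offences, even though
# both cells offend at exactly the same rate: the system produces its own data.
```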

Screenshot of the CAS in Amsterdam


How do we solve this?

Van Schie points out that the biggest problem is that the government facilitates this data infrastructure through the CBS. Additionally, there is a widely held, unrealistic belief in the power of technology to solve our social problems. “Social and societal problems are being transformed into technical problems. But the people building these applications don’t realise they’re actually engaging in politics.” In this way, the political views of a certain era can become entangled in the technological applications we still use years later.

The commonly held ideas that data is neutral and that we can talk about ethnicity in a neutral way must change. After all, the categories the CBS uses to classify citizens are far from neutral. Gloria Wekker uses the term “white innocence” to refer to the idea that we, as Dutch people, are colourblind and impartial. We convince ourselves that we can approach and process ideas around ethnicity in a neutral way. But that’s simply not true. “With these applications, the goal is always to draw connections on the basis of ethnic categories,” Van Schie explains. “Why do you think this type of categorisation is relevant? We’re talking about Dutch people who may have immigrated three generations ago. The question is whether ethnicity is relevant, or whether you make it relevant by recognising it, collecting data about it and then using it in these types of applications.”

Symbolic moves, like changing the terminology from 'immigrant' to 'Dutch with an immigrant background', are simply not enough. “The word might have changed, but its function has not. This way of thinking is still part of the government infrastructure, which constitutes institutional racism,” says Van Schie. “Depending on the goals of your project, you have to make decisions about what types of data and categories are needed. As far as I am concerned, it’s only legitimate to identify a group as a group when that group benefits, for example when support can be targeted at that group.”