In this blog series 'DO worry, be happy!', we talk to experts in the fields of technology, innovation and ethics about new developments. Prompted by current events, we ask them to explain why we should worry about how technology will shape the future. But don't be afraid: we also look for ways to take back control of the direction technology is taking us. So DO worry, but be happy!
The reason
An article in The Guardian states that there is artificial intelligence that can estimate a person's sexual orientation from a photo with an accuracy of 81 percent. The conclusion the Guardian draws is that this so-called 'gaydar' can be used to detect homosexuality. That sounds terrifying. But the problem is not that AI is really able to do that. The problem is that journalists do not critically question the underlying assumptions and thereby contribute to the mystification of artificial intelligence.
To kick off this blog series, we put our questions to Marleen Stikker, director of Waag. Stikker argues that "technology is not neutral," and in her daily work she looks for ways to influence the design of technology so that it becomes open, fair and inclusive.
What is the problem here?
The Guardian has accepted the scientists' claims without any critical analysis and thereby contributes to the view that artificial intelligence has superpowers. It is typical of how the media report on claims made about technology. The claim that software can detect homosexuality is not correct, and further analysis of the research paper shows that the dataset used was already 'pre-cooked'. Self-learning machines reproduce the stereotypes and prejudices of their makers. Or, as Greggor Mattson puts it in his extremely intelligent criticism of the Guardian article: "Machine learning is like money laundering for bias." Learning machines are fed with data sets: in this case, the profile pictures of white men who openly show their sexuality, are looking for a partner and are registered on an online dating site. Neither objective nor representative. The article completely ignores this in its 'fear of AI'.
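To make the "money laundering for bias" point concrete, here is a small illustrative sketch in Python. Everything in it is invented for illustration (the features, the numbers, the scikit-learn setup); it is not the study itself. It shows how a classifier trained on a skewed, 'pre-cooked' sample can score impressively on photos drawn from the same skewed source, while the "signal" it found is largely an artefact of how the data were collected.

```python
# A minimal sketch (not the actual study): a classifier trained on a skewed
# dataset looks impressive, but it has mostly learned the sampling bias.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_data(n, presentation_correlation):
    """Two features: a 'facial geometry' feature that carries no real signal,
    and a 'self-presentation' feature (grooming, photo style) whose link to
    the label depends on how the photos were collected."""
    label = rng.integers(0, 2, n)                      # the two categories
    geometry = rng.normal(0.0, 1.0, n)                 # pure noise
    presentation = rng.normal(label * presentation_correlation, 1.0, n)
    X = np.column_stack([geometry, presentation])
    return X, label

# Dating-site sample: self-presentation is strongly tied to the label,
# because people are deliberately signalling to a specific audience.
X_biased, y_biased = make_data(5000, presentation_correlation=2.0)
# General population: that shortcut barely exists.
X_world, y_world = make_data(5000, presentation_correlation=0.2)

X_train, X_test, y_train, y_test = train_test_split(
    X_biased, y_biased, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

print("Accuracy on held-out dating-site photos:", model.score(X_test, y_test))
print("Accuracy on the general population:     ", model.score(X_world, y_world))
# The first number looks like a 'superpower'; the second shows the model has
# laundered the sampling bias into an apparently objective score.
```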
Why do we need to worry?
It is worrying that the claim about this smart machine is not put to the test by either science or journalism. Critical insights about bias in machine learning are not applied to this publication, and the Guardian copies it without scrutiny. This makes it seem as if AI is smarter than people, while it is only better at reproducing stereotypes and prejudices. Machines are simply better at recognizing those same stereotypes; we humans see more nuance. That should have been the headline. Not to mention the fact that the concept of sexuality is reduced to two categories, and to 19th-century thinking in which your jawline determines your sexual preference. And this at a moment when we are experiencing a wave of emancipation that makes room for more forms of sexuality. These systems push us culturally back to the 19th century, just as we are finally able to think outside these limited categories.
Do you see a trend in this area?
We will see more of these problems: forms of artificial intelligence that are biased by the training and the data they are programmed with. The bias and assumptions are those of the researchers and of those who fund the research. We have to ask ourselves: what are the intentions of the research, and which outcomes would satisfy the client? Those remain crucial questions. You see the world being reduced to categories that can be taught to the machine. The machine is then portrayed as "better" or "smarter" than humans, while it misses the nuances. All gray zones disappear. The machine is limited to whatever its programmer puts into it; defining the category defines the outcome. The issue surrounding AI and face recognition is also becoming increasingly urgent. James Vincent goes deeper into this in his article on The Verge.
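A toy sketch in Python (entirely our own illustration, with made-up numbers) of what "defining the category defines the outcome" means in practice: once a nuanced spectrum has been forced into two bins, the gray zones are gone before any training even begins.

```python
# A continuous spectrum is forced into two categories before the machine
# ever sees it; the model can only ever reproduce the binary its makers chose.
import numpy as np

rng = np.random.default_rng(1)
spectrum = rng.uniform(0.0, 1.0, 10)   # a nuanced, continuous reality
labels = (spectrum > 0.5).astype(int)  # the two categories taught to the machine

for value, label in zip(spectrum, labels):
    print(f"reality {value:.2f} -> category {label}")
```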
How can we actually exert influence?
Perhaps trivial, but I always get a lot of energy from fooling the machine, in the tradition of 'sugar in the tank' (sabotaging an engine by pouring sugar into the fuel tank). Let's create an application that lets you generate five variations of your own face, each with a slightly different ratio of nose, eyes and jawline. When you upload a profile picture to social media, the app then automatically rotates those five variants in a continuous carousel. By filling the databases with fake information, you prevent yourself from being placed in a stereotyped category and being judged accordingly.
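A rough, hypothetical sketch in Python of what such a tool could do, assuming Pillow is installed and a file called 'profile.jpg' exists (the file names and the approach are our own illustration, not an existing app). Real face morphing would need landmark detection; here, small random geometric distortions stand in for slightly different nose, eye and jawline ratios.

```python
# Generate a handful of slightly distorted variants of a profile photo.
import random
from PIL import Image

def make_variants(path, n=5, max_stretch=0.04):
    original = Image.open(path)
    w, h = original.size
    variants = []
    for i in range(n):
        # Slightly different horizontal/vertical stretch per variant.
        sx = 1.0 + random.uniform(-max_stretch, max_stretch)
        sy = 1.0 + random.uniform(-max_stretch, max_stretch)
        # Affine coefficients (a, b, c, d, e, f) map output pixels to input pixels.
        coeffs = (1 / sx, 0, 0, 0, 1 / sy, 0)
        variant = original.transform((w, h), Image.Transform.AFFINE, coeffs,
                                     resample=Image.Resampling.BICUBIC)
        variants.append(variant)
        variant.save(f"profile_variant_{i}.jpg")
    return variants

if __name__ == "__main__":
    make_variants("profile.jpg")
```

A real app would, as Stikker suggests, cycle these variants automatically whenever a profile picture is requested, so that no single stable face ends up in the databases.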
A more structural approach is to increase knowledge about the bias of self-learning systems. I also advocate an algorithmic authority, so that claims and bias can be assessed properly. Artificial intelligence must explicitly show how it is trained, by whom, and on the basis of which values and financial interests. Only then can we really control technology.
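To make that concrete, here is a small, hypothetical sketch in Python of what such a disclosure could look like as a machine-readable record. The field names and example values are invented for illustration; this is not an existing standard, only one possible shape for the transparency Stikker argues for.

```python
# A hypothetical machine-readable training disclosure: who trained the model,
# on what data, funded by whom, and for which intended use.
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDisclosure:
    model_name: str
    developed_by: str
    funded_by: list[str]
    training_data_sources: list[str]
    data_collection_method: str          # e.g. scraped, purchased, consented
    known_sampling_biases: list[str]
    categories_used: list[str]           # which boxes the world is put into
    intended_use: str
    values_statement: str

disclosure = TrainingDisclosure(
    model_name="example-face-classifier",
    developed_by="Example university lab",
    funded_by=["Example funding agency"],
    training_data_sources=["public dating-site profile photos"],
    data_collection_method="scraped without explicit consent",
    known_sampling_biases=["white, self-selected users of one website"],
    categories_used=["two self-reported orientation labels"],
    intended_use="research demonstration only",
    values_statement="no deployment for profiling individuals",
)

# Published alongside the model, so an algorithmic authority (or anyone else)
# can assess the claims and the bias.
print(json.dumps(asdict(disclosure), indent=2))
```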