Algo-Lit
In the EU, increasing efforts are being made to make the use of algorithms more transparent to citizens. For example, governments are publishing information about their algorithms in public registries, and laws such as the AI Act give citizens the right to an explanation of so-called ‘high-risk’ algorithms, such as systems whose use may discriminate.
However, research has shown that civil servants and other professionals who work with algorithmic systems often do not fully understand how an algorithm works or grasp the consequences of its use. Yet civil servants are responsible for explaining algorithms to citizens: a well-informed official can better help citizens understand their rights, so that citizens can, if necessary, respond to algorithmic decision-making themselves.
Within the Algo-Lit project, Waag Futurelab is co-creating, together with digital inclusion specialists, methods that professionals and citizens can use to understand algorithmic decision-making processes. Waag is also developing a transparency toolkit that brings together existing methods for explaining algorithms and provides insight into citizens' rights and the actions they can take regarding algorithmic systems.
Why is Waag involved?
Transparency in the use of algorithms fits into Waag's mission to create a fair internet and an inclusive society, where people are empowered to take action against unfair treatment by algorithmic decision-making.
Waag strengthens professionals' skills by sharing knowledge, developing tools based on best practices from several EU countries, and connecting professionals in the Netherlands, France and Belgium.
This project was made possible by funding from the European Commission's Erasmus+ programme under Grant Agreement number 2024-1-FR01-KA220-ADU-000250548.