Biased-by-default: position paper for the AI Culture Lab

Author
Chris Julien

In the publication 'Biased-by-default', I argue by means of cultural analysis that a radical shift is necessary in our understanding of bias if we are to develop positive futures with Artificial Intelligence (AI). Cultural analysis entails the comparative and interdisciplinary study of technology as a cultural and material practice.

The position paper describes points of departure for Waag’s AI Culture Lab, marking territories for further research by raising questions, issues and hypotheses. By exploring the historical context and cultural assumptions of AI, this paper proposes a non-modern framework through which to understand and develop this 'general purpose' technology.


Artificial intelligence

Over the past two years, the phenomenon of Artificial Intelligence has claimed pole position in the rush to be the next technological disruption shaping our near future. With advances in hardware and data gathering fuelling the ascent of machine learning and associated techniques, the promise of Artificial Intelligence has filled our collective consciousness with a strange mix of hope and dread. It calls to mind dire scenarios in which humanity is annihilated by a robot apocalypse, or simply rendered redundant by the automation of everyday life. Equally, AI promises a frictionless society of leisure and automated labour, culminating in our assimilation into a 'singularity' of digitised, transhuman consciousness.

Beyond these spectacular scenarios, technologies associated with artificial intelligence are rapidly being integrated into diverse realms of human activity. The roll-out of ubiquitous computing creates a universal pathway for AI into our lives, from autonomous vehicles to predictive policing and from micro-targeted feeds to urban surveillance networks, raising complex questions of ethics and control along the way.