What do machine learning algorithms see when they look at us? Some concerns about the transparency and discriminatory effects of profiling based on machine learning.

In this presentation I will show how machine learning, when it is used to make sense of human behavior and characteristics (‘profiling’), can lead to infringements of privacy, data protection and antidiscrimination law. One major concern from the perspective of data protection law is how to create useful transparency about the functioning of machine learning algorithms. I illustrate some of the issues related to transparency with recent work I have done in the USEMP project (http://www.usemp-project.eu/). Another important concern is how to distinguish which machine learning categorizations should be considered ‘good’ and legitimate differentiations, and which ‘bad’ discriminations (in the sense that they are either illegitimate, or at least undesirable from an ethical perspective). Looking at current privacy and antidiscrimination law, I argue that the existing legal framework might need to be extended. In discussing the conundrums of transparency and differentiation/discrimination in relation to machine learning algorithms, I will pay specific attention to the implications of the new General Data Protection Regulation.

The slides from the presentation can be downloaded here: Gradual equality – itu 26 MAY 2016_v1.3

Signing up for the talk is not required.


Katja de Vries is a legal researcher and philosopher of technology affiliated with the Institute for Computing and Information Sciences (iCIS) at the Radboud Universiteit Nijmegen (the Netherlands) and the Centre for Law, Science, Technology, and Society (LSTS, Vrije Universiteit Brussel, Belgium). She is currently working on the USEMP (http://www.usemp-project.eu/) project, which will result in a transparency tool that shows users of social networks which (commercially interesting) information can be derived from their data (http://databait.eu). In a few months Katja de Vries will defend her PhD thesis (‘Machine learning/Informational fundamental rights. Reconciling two Baroque practices of making sameness with a governmentality of proportionality’). Her PhD research looks at how machine learning, when it is used to make sense of human behavior and characteristics, can lead to infringements of privacy, data protection and antidiscrimination law. De Vries has been a member of the European “Living in Surveillance Societies” network, and has worked on the FIDIS (Future of Identity in the Information Society) and SIAM (Security Impact Assessment Measure – A decision support system for security technology investments) projects. She publishes on a wide range of legal and philosophical topics and has co-edited ‘Privacy, Due Process and the Computational Turn’ (Routledge, 2013). De Vries studied at Sciences Po in Paris, obtained three master’s degrees with distinction at Leiden University (Civil Law, Cognitive Psychology and Philosophy) and graduated from Oxford University (Magister Juris).

Time: May 26, 2016, 12:00–14:00
Auditorium 3
Join the Facebook event here.
The event will be held in English.

ITU address:
IT-University of Copenhagen
Rued Langgaardsvej 7
DK-2300 Copenhagen S