Understanding the mismatch between ethics regulations and the epistemologies of machine learning
Jacob Metcalf, PhD, Data & Society Research Institute and Ethical Resolve
Centre for Medical Science and Technology Studies, University of Copenhagen announces the following seminar:
There is a mismatch between the moral ontologies of established research ethics practices and the machine learning methods that undergird artificial intelligence. What research ethics (particularly in U.S. regulatory contexts) tells us to care about—harm done to individual research subjects at the site of data collection—is often not applicable to the methods of machine learning, resulting in blindness to the potential harms of data science research and development. I argue that rather than being a historical happenstance of a rapidly developing technology, this mismatch results from a fundamental mathematical feature of machine learning. The underlying linear algebra produces a flattened ontology that is not tractable for many of the familiar regulatory practices of research ethics. Ultimately, this points to the need for democratic control over the uses of data, rather than the current focus on the origins of data.
Jacob (Jake) Metcalf is a data ethics researcher and consultant, focused on adapting the conceptual frameworks, policies, and institutional practices of research ethics to the methods of data science and machine learning. He is a Researcher at Data & Society, where he is a co-PI on the PERVADE research team, and the co-founder of the technology ethics consulting firm Ethical Resolve.
May 18, 2018, from 10:00 to 11:30, in room 5.0.22 at CSS, Øster Farimagsgade 5A, building 5
Everybody is welcome!