Camille Hazzard explores 'trust' in human interactions with machine learning systems as part of SRI working group

June 25, 2024 by Patricia Doherty

This post includes excerpts from "SRI working group investigating the concept of trust from across disciplinary perspectives" by Jovana Jankovic, Schwartz Reisman Institute for Technology and Society.

[Photo: head shot of Camille Hazzard]

CrimSL PhD student Camille Hazzard (supervised by Professor Kelly Hannah-Moffatt) is part of an interdisciplinary working group at the University of Toronto's Schwartz Reisman Institute for Technology and Society (SRI) investigating the role of trust in human interactions with machine learning (ML) systems.

The working group is led by Beth Coleman, associate professor at UTM’s Institute of Communication, Culture, Information and Technology and U of T’s Faculty of Information.

Coleman says the impetus for forming the working group was “to develop a deeper understanding of the role of trust in our interactions with ML systems, and to collaboratively identify new approaches to understanding trust.” 

“The significance of trust in this domain cannot be overstated right now,” says Coleman. “Trust influences everything from user adoption of these tools, to ethical considerations, to the societal impact of emerging technologies, and so much more.”

Hazzard says her research with the working group has revealed important insights into how trust—and distrust—affect the regulation of human behaviour in the criminal justice system and society at large.

“Our research and forthcoming report are especially needed at a time when it is increasingly difficult to discern the veracity of the data that shape the lenses through which we see the world,” says Hazzard.

“It is by becoming acquainted with the multifaceted constitution of the word ‘trust,’” says Hazzard, “that we, as humans, can learn to adjust our expectations of digital infrastructures as needed and determine the conditions under which it is appropriate to invest our trust in said infrastructures.”

The working group on trust in human-ML interaction expects to publish a paper presenting its research, findings, and suggestions for further inquiry by the end of summer 2024.
