Human Learning about AI

Speaker
Bnaya Dreyfuss
Date
25/11/2024, 11:15 - 12:30
Place
Seminar room 011, building 504
Affiliation
Harvard
Abstract

Joint with: Raphael Raux.

Speaker's homepage: https://scholar.harvard.edu/dreyfuss/home

Abstract: We study how people form expectations about the performance of artificial intelligence (AI) and the consequences for AI adoption. Our main hypothesis is that people rely on human-relevant task features when evaluating AI, treating AI failures on human-easy tasks, and successes on human-difficult tasks, as highly informative of its overall performance. In the lab, we show that projecting human difficulty onto AI predictably distorts subjects' beliefs and can lead to suboptimal adoption, since failing human-easy tasks need not imply poor overall performance for an AI. We find evidence for projection in a field experiment with an AI that gives parenting advice. Potential users draw strong negative inferences from answers that are equally uninformative but less similar to expected human answers, significantly reducing trust and engagement. Our results suggest that AI "anthropomorphism" can backfire by increasing projection and misaligning people's expectations with AI performance.

Last Updated: 20/11/2024