My latest research applies insights from business ethics and organizational behavior to examine how organizations can leverage AI responsibly without compromising fairness.
For a more detailed discussion of my research on AI, please see my DU Faculty Spotlight interview below.
Research
Blinded by the bot: Why AI chatbots cue human unethical behavior in negotiations. Journal of Business Ethics, 2025.
Abstract: Artificial intelligence (AI) is increasingly used in negotiation, with chatbots serving as autonomous bargaining agents. Yet it remains unclear whether people uphold ethical standards when negotiating with these systems. If they do not, understanding the psychological processes that weaken moral judgment and identifying effective mitigation strategies is essential. The current research explores these questions by investigating how ethical fading (i.e., a psychological process in which individuals fail to recognize the moral aspects of a situation) shapes (un)ethical behavior in negotiations with AI agents. Across five pre-registered experiments, we find that ethical fading is more pronounced when individuals negotiate with AI chatbots than with human counterparts. The first four studies show that this diminished moral awareness increases unethical behavior, specifically selfishness and misrepresentation. The fifth study demonstrates that informing people about the cooperative (vs. competitive) design of the AI chatbot significantly reduces these behaviors. Together, the findings reveal a novel ethical concern in human-AI interaction, showing that the presence of AI chatbots can alter human cognition in ways that make moral lapses more likely. We encourage organizations deploying AI bargaining agents, along with developers, to recognize the risk of ethical fading and adopt human-centered safeguards that promote ethical conduct.
Dangers of speech technology for workplace diversity. Nature Machine Intelligence, 2024.
Abstract: Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.
Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms. Journal of Business Ethics, 2022.
Abstract: Recent advances in machine learning methods have created opportunities to eliminate unfairness from algorithmic decision making. Multiple computational techniques (i.e., algorithmic fairness criteria) have arisen out of this work. Yet, urgent questions remain about the perceived fairness of these criteria and in which situations organizations should use them. In this paper, we seek to gain insight into these questions by exploring fairness perceptions of five algorithmic criteria. We focus on two key dimensions of fairness evaluations: distributive fairness and procedural fairness. We shed light on variation in the potential for different algorithmic criteria to facilitate distributive fairness. Subsequently, we discuss procedural fairness and provide a framework for understanding how algorithmic criteria relate to essential aspects of this construct, which helps to identify when a specific criterion is suitable. From a practical standpoint, we encourage organizations to recognize that managing fairness in machine learning systems is complex, and that adopting a blind or one-size-fits-all mentality toward algorithmic criteria will surely damage people’s attitudes and trust in automated technology. Instead, firms should carefully consider the subtle yet significant differences between these technical solutions.
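The paper is conceptual rather than computational, but the idea of competing algorithmic fairness criteria can be made concrete with a short sketch. The Python example below is purely illustrative and not drawn from the paper: it contrasts two well-known criteria, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups), on simulated data; all function names and values are assumptions for demonstration only.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

# Toy data engineered so both groups receive positive predictions at the same
# overall rate (demographic parity holds), even though qualified members of
# group 0 are favored over qualified members of group 1 (equal opportunity
# is violated).
rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)
y_true = (rng.random(n) < np.where(group == 0, 0.3, 0.5)).astype(int)
tpr = np.where(group == 0, 0.8, 0.5)  # selection rate for qualified candidates
fpr = np.where(group == 0, 0.3, 0.4)  # selection rate for unqualified candidates
y_pred = (rng.random(n) < np.where(y_true == 1, tpr, fpr)).astype(int)

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")        # ~0.00
print(f"Equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.3f}")  # ~0.30
```

Because the same set of predictions can satisfy one criterion while violating another, the choice among criteria is consequential, which is one reason the paper cautions against a blind or one-size-fits-all mentality.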
Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation. MIS Quarterly, 2021.
Abstract: Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human-ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of and research into human-ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. We identify significant intersections with previous IS research and distinct managerial approaches to fairness for each quadrant. Several potential research questions emerge from fundamental differences between ML tools trained on data and traditional IS built with code. IS researchers may discover that these differences undermine some of the fundamental assumptions upon which classic IS theories and concepts rest, requiring a massive rethinking of significant portions of the corpus of IS research. This represents an exciting frontier for research into human-ML augmentation that IS researchers should embrace in the years ahead.
Working Paper
Perceptions of algorithmic criteria: The role of procedural fairness [Link to paper]
This working paper was supported by the Brookings Center on Regulation and Markets.
The rise of artificial intelligence (AI) has enabled modern society to automate aspects of the organizational hiring process. Yet prospective job candidates are hesitant to engage with such technologies unless they perceive algorithms as behaving fairly. Procedural fairness is considered critical in shaping individual attitudes toward algorithms, but empirical studies examining its role in AI-enabled hiring are lacking. The present research seeks to bridge this gap by investigating how perceptions of procedural fairness and related fairness dimensions influence job applicants’ perceptions of different hiring algorithms designed to incorporate fairness ideals, as well as their attitudes toward companies using these algorithms. Our findings indicate that people perceive hiring algorithms as procedurally fairest when they adopt a “Fairness through unawareness” approach to mitigating bias. They are also likely to view companies that use this approach more positively and are more motivated to apply for open positions.
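For readers unfamiliar with the term, “Fairness through unawareness” is the criterion of simply excluding protected attributes from a model’s inputs. The Python sketch below is a minimal, hypothetical illustration of that idea; the dataset, column names, and model choice are invented for demonstration and are not taken from the paper.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant data; all values and column names are illustrative.
applicants = pd.DataFrame({
    "years_experience": [2, 7, 4, 10, 1, 6],
    "skill_score":      [55, 80, 62, 90, 40, 75],
    "gender":           ["F", "M", "F", "M", "F", "M"],  # protected attribute
    "hired":            [0, 1, 1, 1, 0, 0],
})

PROTECTED = ["gender"]

# Fairness through unawareness: the model is trained without ever seeing
# the protected attributes.
X = applicants.drop(columns=PROTECTED + ["hired"])
y = applicants["hired"]

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X))
```

Note that excluding protected attributes does not by itself guarantee fair outcomes, since the remaining features can act as proxies for them; the working paper examines how applicants perceive this criterion, not its technical adequacy.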
Podcasts featuring my research
This IS Research (Spotify)
- Can AI Be Fair? – Featuring Mike Teodorescu, Gerald C. Kane, & Lily Morse (9/13/21)
Try my GenAI Job Negotiation Preparation Simulation on iDecisions!
FREE* – Learn to prepare for negotiations by completing a simulated job negotiation exercise.
See my instructional video on preparing a ‘Planning Document’ for job negotiations.
*Must create a free iDecisions account to access.