Ethics and AI

Blinded by the bot: Why AI chatbots cue human unethical behavior in negotiations. Journal of Business Ethics, 2025.

Abstract: Artificial intelligence (AI) is increasingly used in negotiation, with chatbots serving as autonomous bargaining agents. Yet it remains unclear whether people uphold ethical standards when negotiating with these systems. If they do not, understanding the psychological processes that weaken moral judgment and identifying effective mitigation strategies is essential. The current research explores these questions by investigating how ethical fading (i.e., a psychological process in which individuals fail to recognize the moral aspects of a situation) shapes (un)ethical behavior in negotiations with AI agents. Across five pre-registered experiments, we find that ethical fading is more pronounced when individuals negotiate with AI chatbots compared to human counterparts. The first four studies show that this diminished moral awareness increases unethical behavior, specifically selfishness and misrepresentation. The fifth study demonstrates that informing people about the cooperative (vs. competitive) design of the AI chatbot significantly reduces these behaviors. Together, the findings reveal a novel ethical concern in human-AI interaction, showing that the presence of AI chatbots can alter human cognition in ways that make moral lapses more likely. We encourage organizations deploying AI bargaining agents, along with developers, to recognize the risk of ethical fading and adopt human-centered safeguards that promote ethical conduct.

Working Paper

Perceptions of algorithmic criteria: The role of procedural fairness

This working paper was supported by the Brookings Center on Regulation and Markets.

The rise of artificial intelligence (AI) has enabled modern society to automate aspects of the organizational hiring process. Yet prospective job candidates are hesitant to engage with such technologies in their everyday lives unless they perceive the algorithms as behaving fairly. Procedural fairness is considered critical in shaping individual attitudes toward algorithms, but empirical studies examining its role in AI-enabled hiring are lacking. The present research seeks to bridge this gap by investigating how perceptions of procedural fairness and related fairness dimensions influence job applicants’ views of different hiring algorithms designed to incorporate fairness ideals, as well as their attitudes toward companies using these algorithms. Our findings indicate that people perceive hiring algorithms as most procedurally fair when the algorithms adopt a “fairness through unawareness” approach to mitigating bias. Applicants are also likely to view companies that use this approach more positively and are more motivated to apply for their open positions.

Podcasts featuring my research

This IS Research (Spotify)

  • Can AI Be Fair? – Featuring Mike Teodorescu, Gerald C. Kane, & Lily Morse (9/13/21)

Try my GenAI Job Negotiation Preparation Simulation on iDecisions!

FREE* – Learn to prepare for negotiations by completing a simulated job negotiation exercise.
See my instructional video on preparing a ‘Planning Document’ for job negotiations.
*Must create a free iDecisions account to access.