Designing chatbots to use anthropomorphic language offers potential benefits but also entails risks, shaping users’ perceptions and behaviors in sometimes unexpected ways. To examine these effects, we conducted two online experiments using a 3 × 3 between-subjects design. In the first experiment (N = 530), participants read chatbot transcripts; in the second (N = 560), they interacted with the chatbot directly. We varied conversational style (machine-like, human-like formal, human-like casual) and topic (transactional, small-talk, sensitive) to assess users’ perceptions (anthropomorphism, security, competence, warmth, enjoyment, satisfaction, trust) and behaviors (self-disclosure, donation to charity). Overall, human-like styles increased perceptions of warmth, enjoyment, and trust, but primarily when the style matched the topic. Style–topic congruence enhanced satisfaction and trust, while mismatches reduced perceived competence and enjoyment. Notably, during live interaction, users disclosed more to machine-like chatbots, despite expressing greater preference for, and trust in, human-like ones, suggesting that anthropomorphic cues, while improving perceptions, may heighten concerns about social judgment or risk. Moreover, sensitive topics increased donations in the scripted experiment but not during live interaction, highlighting a gap between hypothetical and actual behavior. These findings underscore the nuanced impact of linguistic anthropomorphism and suggest that chatbot effectiveness depends less on maximizing human-likeness and more on aligning conversational style with task, topic, and mode of interaction.
Association for Computing Machinery (ACM), 2025, pp. 321–331
HAI '25: International Conference on Human-Agent Interaction, Yokohama, Japan, Nov 10–13, 2025