Ethical Aspects of Faking Emotions in Chatbots and Social Robots

Resource type
Preprint
Author/contributor
Indurkhya, B.
Title
Ethical Aspects of Faking Emotions in Chatbots and Social Robots
Abstract
Telling lies and faking emotions are quite common in human-human interactions: though there are risks, in many situations such behaviours provide social benefits. In recent years, many social robots and chatbots have been deployed that fake emotions or behave deceptively with their users. In this paper, I present a few examples of such robots and chatbots and analyze their ethical aspects. Three scenarios are presented in which some kind of lying or deceptive behaviour might be justified. Then five approaches to deceptive behaviour (no deception, blatant deception, tactful deception, nudging, and self-deception) are discussed and their implications are analyzed. I conclude by arguing that we need to develop localized and culture-specific solutions for incorporating deception in social robots and chatbots.
Repository
arXiv
Archive ID
arXiv:2310.12775
Date
2023-10-19
Accessed
23/02/2024, 14:59
Extra
arXiv:2310.12775 [cs]
Citation
Indurkhya, B. (2023). Ethical Aspects of Faking Emotions in Chatbots and Social Robots (arXiv:2310.12775). arXiv. http://arxiv.org/abs/2310.12775