Expanding the ethics of robot identity

An AI-generated image in the style of an abstract oil painting of a woman speaking with a robot.

Abstract - It is critical that we cultivate a deep understanding of the ethical and social impacts of leveraging human-like identity cues in social robots. However, relevant work which considers the ethical risk of doing so is fragmented, narrow in the stakeholders, aspects of identity, and types of risk it considers, and overwhelmingly focused on risks derived only from Western conceptions of ethics. Thus, to help guide the HRI research community toward more intentional and ethical usage of human-like identity in robots and other agents, I plan to orient the next four years of my PhD studies to developing, testing, and iterating on a set of principles for this purpose. This effort shall be grounded in philosophical pluralism, sensitive to the complex, intersectional nature of human identity, and shall contextualize the issue as it relates to social and existential risks. Through this contribution, I hope to help empower HRI researchers with a greater knowledge of how to avoid harm and bring about justice and compassion in their work.


When designing robots for society and not strictly vice versa (Šabanović 2010), it is critical that we develop a deep understanding of how our creations will impact the societies in which we introduce them. While the potential for positive social impact by intentionally leveraging human-like identity cues in robots is apparent (Winkle, Melsión, et al. 2021), the ethical waters of doing so may often run murky, especially when it comes to using identity cues of marginalized or vulnerable peoples (Winkle, Jackson, et al. 2021). Existing literature considers the ethics of many individual facets of human-like robot identity (e.g. gender (Bryant, Borenstein, and Howard 2020), race (Sparrow 2020; Bartneck et al. 2018), voice (Li and Lai 2022)). However, there remains a lack of full consideration of the intersectional nature of human identity and of the inherent complexities that arise when attempting to replicate it in robots.

Furthermore, existing works tend to be narrow in their consideration of stakeholders (e.g., considering only the user or a particular demographic group), and they are overwhelmingly focused on ethical risks using only Western conceptions of ethics. Given the global, cross-cultural reach of social robotics, it is prudent to adopt a philosophically pluralistic conception of ethical risk (Bardzell 2010; Zhi-Xuan 2020) focused on multiple intersections of robot and user identity, stakeholders (Winfield et al. 2021; Milano, Taddeo, and Floridi 2020), levels of social abstraction (Milano, Taddeo, and Floridi 2020; Morley et al. 2020), time horizons (Milano, Taddeo, and Floridi 2020), and the ethics of processes such as participatory design (Kelly 2019; Spiel et al. 2018).

I have just begun my doctoral studies, and it is towards this problem that I will direct my work throughout the next four years of my programme. To start, I have begun a review of how robot identity has been treated in existing HRI literature. The goal of this review is to understand the current state of how the community thinks about robot (and human) identity, discover gaps, and elaborate on ethical shortcomings. I then plan to use this understanding to develop a set of principles and reflexive questions to help HRI researchers be more intentional about working with identity. I will evaluate these guidelines by analyzing prior work and informing new work, culminating in a user study that puts the guidelines to the test and places theory into practice.

It is my hope that, through this contribution, HRI researchers will be able to be more conscientious in creating human-like robots, broaden their research to more diverse users, more confidently weigh social benefit against ethical risk, and more effectively preempt unethical consequences pertaining to the usage of identity.

Diverse conceptions of ethical risk

An idea central to this effort is philosophical pluralism. In outlining her agenda for feminist HCI, Bardzell (Bardzell 2010) advocates for epistemic pluralism, incorporating non-Western perspectives into design in order to avoid unintended consequences borne of Western ethnocentrism. With specific regard to creating ethical AI, Zhi-Xuan (Zhi-Xuan 2020) identifies four concrete reasons for adopting a philosophically pluralist design perspective:

  1. Using only the ethics familiar to Western-educated elites excludes vast swathes of human thought of crucial relevance to developing ethical AI

  2. Ethically plural AI are more robust to situations of moral uncertainty

  3. It is politically pragmatic, in that such AI can benefit disparate political actors

  4. It respects the equality, autonomy, and diverse moral values of different people

I also argue that it would help provide cross-cultural robustness. That is, a robot designed incorporating ethics sourced from multiple different cultures is more likely to remain ethical across different cultural contexts. All of this is especially important given the prevalence of social robotics research in non-Western nations.


The first step in this effort is to understand how identity is treated in the existing HRI literature. I have begun a review of papers in HRI concerned with both “robot” and “user” identity. That is, papers which are concerned with (a) user/participant ascription of human-like attributes to robots (e.g. gender, race) and/or (b) roboticist intent to influence such ascriptions, as well as papers concerned with user (self-)ascription of these same attributes.

After identifying the specific ways in which the community’s current understanding and treatment of identity can be guided, I shall ground this guidance through diverse philosophical perspectives on social robotics (e.g. Buddhist (Hongladarom 2020), Confucian (Kim et al. 2021), Islamic (Alemi et al. 2020), indigenous Australian (Abdilla and Fitch 2017)), in addition to ethical considerations from Western works.

Furthermore, the developed guidelines shall incorporate insights from the ethics of processes such as participatory design (Kelly 2019), which may draw inspiration from the rich body of literature in anthropology on the ethics of participant observation (Musante (DeWalt) and DeWalt 2010), a related practice.

Through the lens of these perspectives, ethical risk shall be assessed in a comprehensive manner considering the entire space of impact. Recent ethical reviews have identified multiple stakeholders (Winfield et al. 2021; Milano, Taddeo, and Floridi 2020), levels of social abstraction (Milano, Taddeo, and Floridi 2020; Morley et al. 2020), and time horizons (Milano, Taddeo, and Floridi 2020) as axes of analysis.

Comprehensive ethical risk assessment must consider not only the user, but all stakeholders in the system. For example: users, those whose identities are being robotically recreated, designers, teachers, peers, etc. Complementary to the stakeholder analysis, comprehensive assessment shall consider ethical risks at multiple levels of social abstraction for each stakeholder. Morley et al. (Morley et al. 2020) accomplish a reasonably thorough ethical review through analyzing six levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal.

Lastly, it is important to consider not only immediate but also long-term ethical risks, including existential risks (Bostrom 2017). While most extant literature identifies ethical risk across more than one time horizon, it can be beneficial to treat the time horizon as an explicit dimension of analysis (Milano, Taddeo, and Floridi 2020).
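As a concrete illustration of how these three axes combine, every (stakeholder, abstraction level, time horizon) triple can be treated as a distinct locus at which ethical risk should be assessed. The sketch below simply enumerates that space; the axis values are illustrative examples drawn from the discussion above, not an established or exhaustive taxonomy.

```python
from itertools import product

# Illustrative axis values: the stakeholders follow the examples given above,
# the abstraction levels follow Morley et al. (2020), and the time horizons
# follow Milano, Taddeo, and Floridi (2020)-style analyses.
stakeholders = ["user", "person whose identity is recreated",
                "designer", "teacher", "peer"]
abstraction_levels = ["individual", "interpersonal", "group",
                      "institutional", "sectoral", "societal"]
time_horizons = ["immediate", "long-term", "existential"]

# Each triple marks one locus where ethical risk should be assessed.
assessment_space = list(product(stakeholders, abstraction_levels, time_horizons))
print(len(assessment_space))  # prints 90 (5 * 6 * 3)
```

Even this toy enumeration makes plain why narrow analyses fall short: considering only the user at the individual level over the immediate term covers a single cell out of ninety.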

Existential risks

Perhaps the most important ethical risk to consider is existential risk. Given the looming possibility of eventual artificial general intelligence, the study of aligning AI to human values is a growing philosophical field (Bostrom 2017; Zhi-Xuan 2020). Social robotics is not exempt from the need to face these considerations (Zhi-Xuan 2020).

One way in which social agents may be misaligned, and indeed often are misaligned today, is in the reproduction and reinforcement of problematic social biases. For example, West et al.'s 2019 UNESCO report identified how some female-gendered social agents propagated harmful stereotypes about the docility of women when they were not programmed to respond appropriately to abusive treatment from users (West, Kraut, and Ei Chew 2019).

In this vein, existential risks which affect a particular identity group should not be ignored. Design decisions, particularly those deployed at scale, which cause detriment to a particular social group must be met with a gravity equal to that of a detriment to us all. Irresponsible treatment of robotic and artificial identity is undoubtedly one way in which such harm occurs (West, Kraut, and Ei Chew 2019; D'Ignazio and Klein 2020).

It is for these reasons that I shall incorporate alignment thought into this project. It is important to note that not all is dour; even here there is room to develop positive ways of addressing existential risk. Developing more complete identity ethics in the context of HRI may reinforce the idea of social robots not as panaceae but as precision tools with specific use cases, encouraging our fellow researchers to reconsider applications where robots are unnecessary. This would reduce the usage of energy and of climate-intensive materials such as lithium, and allow funds to be reallocated to more effective causes (Hambuchen, Marquez, and Fong 2021).


Abdilla, A, and R Fitch. 2017. “FCJ-209 Indigenous Knowledge Systems and Pattern Thinking: An Expanded Analysis of the First Indigenous Robotics Prototype Workshop.” Fibreculture Journal: Internet Theory Criticism Research 28: 1–14.

Alemi, Minoo, Alireza Taheri, Azadeh Shariati, and Ali Meghdari. 2020. “Social Robotics, Education, and Religion in the Islamic World: An Iranian Perspective.” Science and Engineering Ethics 26 (5): 2709–34. https://doi.org/10.1007/s11948-020-00225-1.

Bardzell, Shaowen. 2010. “Feminist HCI: Taking Stock and Outlining an Agenda for Design.” In Proceedings of the 28th International Conference on Human Factors in Computing Systems - CHI ’10, 1301. Atlanta, Georgia, USA: ACM Press. https://doi.org/10.1145/1753326.1753521.

Bartneck, Christoph, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel. 2018. “Robots and Racism.” In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 196–204. HRI ’18. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3171221.3171260.

Bostrom, Nick. 2017. Superintelligence. Dunod.

Bryant, De’Aira, Jason Borenstein, and Ayanna Howard. 2020. “Why Should We Gender?: The Effect of Robot Gendering and Occupational Stereotypes on Human Trust and Perceived Competency.” In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 13–21. Cambridge United Kingdom: ACM. https://doi.org/10.1145/3319502.3374778.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. MIT Press.

Hambuchen, Kimberly, Jessica Marquez, and Terrence Fong. 2021. “A Review of NASA Human-Robot Interaction in Space.” Current Robotics Reports 2 (3): 265–72. https://doi.org/10.1007/s43154-021-00062-5.

Hongladarom, Soraj. 2020. The Ethics of AI and Robotics: A Buddhist Viewpoint. Rowman & Littlefield.

Kelly, Janet. 2019. “Towards Ethical Principles for Participatory Design Practice.” CoDesign 15 (4): 329–44. https://doi.org/10.1080/15710882.2018.1502324.

Kim, Boyoung, Ruchen Wen, Qin Zhu, Tom Williams, and Elizabeth Phillips. 2021. “Robots as Moral Advisors: The Effects of Deontological, Virtue, and Confucian Role Ethics on Encouraging Honest Behavior.” In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 10–18. HRI ’21 Companion. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3434074.3446908.

Li, Yuanchao, and Catherine Lai. 2022. “Robotic Speech Synthesis: Perspectives on Interactions, Scenarios, and Ethics.” arXiv:2203.09599 [Cs], March. http://arxiv.org/abs/2203.09599.

Milano, Silvia, Mariarosaria Taddeo, and Luciano Floridi. 2020. “Recommender Systems and Their Ethical Challenges.” AI & SOCIETY 35 (4): 957–67. https://doi.org/10.1007/s00146-020-00950-y.

Morley, Jessica, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo, and Luciano Floridi. 2020. “The Ethics of AI in Health Care: A Mapping Review.” Social Science & Medicine 260 (September): 113172. https://doi.org/10.1016/j.socscimed.2020.113172.

Musante (DeWalt), Kathleen, and Billie R. DeWalt. 2010. Participant Observation: A Guide for Fieldworkers. Rowman Altamira.

Šabanović, Selma. 2010. “Robots in Society, Society in Robots: Mutual Shaping of Society and Technology as a Framework for Social Robot Design.” International Journal of Social Robotics 2 (4): 439–50. https://doi.org/10.1007/s12369-010-0066-7.

Sparrow, Robert. 2020. “Robotics Has a Race Problem.” Science, Technology, & Human Values 45 (3): 538–60. https://doi.org/10.1177/0162243919862862.

Spiel, Katta, Emeline Brulé, Christopher Frauenberger, Gilles Bailly, and Geraldine Fitzpatrick. 2018. “Micro-Ethics for Participatory Design with Marginalised Children.” In Proceedings of the 15th Participatory Design Conference: Full Papers - Volume 1, 1–12. Hasselt; Genk Belgium: ACM. https://doi.org/10.1145/3210586.3210603.

West, Mark, Rebecca Kraut, and Han Ei Chew. 2019. “I’d Blush If I Could: Closing Gender Divides in Digital Skills Through Education.” https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1.

Winfield, Alan F. T., Serena Booth, Louise A. Dennis, Takashi Egawa, Helen Hastie, Naomi Jacobs, Roderick I. Muttram, et al. 2021. “IEEE P7001: A Proposed Standard on Transparency.” Frontiers in Robotics and AI 8. https://www.frontiersin.org/article/10.3389/frobt.2021.665729.

Winkle, Katie, Ryan Jackson, Alexandra Bejarano, and Tom Williams. 2021. “On the Flexibility of Robot Social Identity Performance: Benefits, Ethical Risks and Open Research Questions for HRI.”

Winkle, Katie, Gaspar Isaac Melsión, Donald McMillan, and Iolanda Leite. 2021. “Boosting Robot Credibility and Challenging Gender Norms in Responding to Abusive Behaviour: A Case for Feminist Robots.” In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 29–37. Boulder CO USA: ACM. https://doi.org/10.1145/3434074.3446910.

Zhi-Xuan, Tan. 2020. “AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy.” Conference presentation. Effective Altruism Global x Asia-Pacific 2020. https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of.