Sacrificing the One for the Many: Ethical Decisions About Robots, Dogs, and People
Additional Funding Sources
This research was supported by Boise State University.
Abstract
Non-human agents (e.g., robots, dogs) may be used by rescue workers, military personnel, and police. This study examines how people's views about such agents affect their decision making, particularly when difficult moral choices are involved. In an online experiment, we explored whether humanizing an agent (a human, a dog, or a human-looking robot) affected participants' willingness to sacrifice that agent for other humans, and which agents were more likely to be sacrificed. Participants viewed an image of an agent, received either a neutral or a humanizing description and set of questions, and then faced two ethical dilemmas asking whether they would sacrifice the agent to save other humans' lives. Results showed a significant interaction effect. Contrary to expectations, there was no significant difference between the humanizing and neutral priming groups. However, participants were more willing to sacrifice robots than humans, and dogs were sacrificed less often than robots, at a rate similar to humans. Rather than dividing between humans and non-humans, rates of sacrifice divided between living and non-living agents.
Sacrificing the One for the Many: Ethical Decisions About Robots, Dogs, and People