Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may underestimate the extent to which their own moral judgments can be influenced by the chatbot.
Sebastian Krügel and colleagues asked ChatGPT multiple times whether it is right to sacrifice the life of one person in order to save the lives of five others. They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance.
The authors then presented 767 U.S. participants, who were on average 39 years old, with a moral dilemma: whether to sacrifice one person's life to save five others. Before answering, participants read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. The statements were attributed to either a moral advisor or ChatGPT. After answering, participants were asked whether the statement they had read influenced their answers.
Eighty percent of participants reported that their answers were not influenced by the statements they read. However, the authors found that the answers participants believed they would have given without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT's statements on their own moral judgments.
The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research should design chatbots that either decline to answer questions requiring a moral judgment or answer such questions by providing multiple arguments and warnings.