ChatGPT appears to have a better "bedside manner" than some doctors, at least when its written advice is rated for quality and empathy, a study has shown.
The study, published in the journal JAMA Internal Medicine, used data from the social news website Reddit's AskDocs forum, where members can post medical questions that are answered by verified healthcare professionals. The team randomly sampled 195 exchanges from AskDocs in which a verified doctor had responded to a public question. The original questions were then posed to the AI language model ChatGPT, which was asked to respond. Finally, a panel of three licensed healthcare professionals, who did not know whether each response came from a human physician or from ChatGPT, rated the answers for quality and empathy.
Overall, the panel preferred ChatGPT's answers to those given by a human 79 percent of the time. ChatGPT's responses were also rated good or very good in quality 79 percent of the time, compared with 22 percent of doctors' responses. And 45 percent of the ChatGPT answers were rated empathic or very empathic, compared with just 5 percent of doctors' replies.
Christopher Longhurst, of UC San Diego Health, said: "These results suggest that tools like ChatGPT can efficiently draft high-quality, personalized medical advice for review by clinicians, and we are beginning that process at UCSD Health."
Professor James Davenport, of the University of Bath, who was not involved in the research, said: "The paper does not say that ChatGPT can replace doctors, but does, quite reasonably, call for further research into whether and how ChatGPT can assist physicians in response generation."
Some commentators noted that, given ChatGPT was specifically designed to be likable, it was not surprising that its responses came across as empathic. It also tended to give longer, chattier answers than the human doctors, which could have contributed to its higher ratings. Others cautioned against relying on language models for factual information, given their tendency to generate made-up "facts".