A Perspective article published in Nature this week argues that AI systems pose a further risk: researchers may come to envision such tools as possessing superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. Scientists planning to use AI "must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline", write co-authors Lisa Messeri and Molly Crockett.
In their article, Messeri and Crockett put together a picture of the ways in which scientists see AI systems as enhancing human capabilities. In one 'vision', which they call AI as Oracle, researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both the Oracle vision and another, called AI as Arbiter, the systems are perceived as evaluating scientific findings more objectively than people do, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, AI as Quant, AI tools seem to surpass the limits of the human mind in analysing vast and complex data. In the fourth, AI as Surrogate, AI tools simulate data that are too difficult or complex to obtain.
Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the illusion of explanatory depth, in which people relying on another person—or, in this case, an algorithm—for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.
Another risk is that research becomes skewed towards studying the kinds of things that AI systems can test; the researchers call this the illusion of exploratory breadth. For example, in social science, the vision of AI as Surrogate could encourage experiments on human behaviours that an AI can simulate, and discourage those on behaviours it cannot, such as anything that requires being embodied physically.
There's also the illusion of objectivity, in which researchers see AI systems as representing all possible viewpoints or as having no viewpoint at all. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. "There's a risk that we forget that there are certain questions we just can't answer about human beings using AI tools," says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.
If you're a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use onto one of the four visions and consider which traps you are most likely to fall into. Another is to be deliberate about how you use AI: deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you simply don't have, says Crockett.