Reading Comprehension

    As robots increasingly play a part in society, we need to consider whether and how machines can learn morality. While robots can't be ethical agents in themselves, we can program them to act according to certain rules. But what is it that we expect from them?

    A 2016 study by UC San Francisco found that most virtual assistants struggled to respond to domestic violence or sexual assault. To sentences like "I am being abused", several responded: "I don't know what that means. If you like, I can search the web". Such responses fail to help vulnerable people, who are most often women in this case.

    But should virtual assistants ever be able to call the police when they overhear domestic violence? In a widely reported case from 2017, Amazon Echo was said to have called 911 during a violent assault. Responding to the incident, Amazon denied that Echo would have been able to call the police without a clear instruction. Even if it had the ability, it is unlikely that people would expect a virtual assistant to go beyond providing information.

    Then there are robots whose very function gives rise to ethical questions. How should a driverless car react in an accident? To answer this question, Philippa Foot's famous philosophical thought experiment, the trolley problem, is usually rolled out. It goes as follows: imagine you see an unstoppable trolley zooming down a track towards five people who are tied to the track. If you do nothing, they'll die. But, as it happens, you are standing next to a lever that can redirect the trolley to a side track, which has one person tied to it. What should you do?

    Variations of this experiment are invoked to ask whether a self-driving car should swerve sharply around a jaywalking teenage pedestrian while putting its two elderly passengers at risk. Should it spare the young over the old? Or should it save two people over one?

    Driverless cars are unlikely to encounter or solve the trolley problem itself, but the way we expect them to solve its variations could depend on where we're from. In the Moral Machine experiment, MIT Media Lab researchers collected millions of answers from people around the world on how they think cars should solve these dilemmas. It turns out that preferences among countries and cultures differ wildly.

    If, however, machines attain superior decision-making abilities, it may be necessary to have a full public discussion about what the new and prevailing norms should be. But if we don't come up with an ethical framework, we risk leaving it to companies to regulate their own products, or to people to choose with their wallets.

    Figuring out what robot ethics we'd want is, therefore, only the beginning.
