Reading Comprehension

    Microsoft announced this week that its facial-recognition system is now more accurate in identifying people of color, touting (吹嘘) its progress at tackling one of the technology's biggest biases (偏见).

    But critics, citing Microsoft's work with Immigration and Customs Enforcement, quickly seized on how that improved technology might be used. The agency contracts with Microsoft for cloud-computing tools that the tech giant says are largely limited to office work but can also include face recognition.

    Columbia University professor Alondra Nelson tweeted, "We must stop confusing 'inclusion' in more 'diverse' surveillance (监管) systems with justice and equality."

    Facial-recognition systems more often misidentify people of color because of a long-running data problem: The massive sets of facial images they train on skew heavily toward white men. A Massachusetts Institute of Technology study this year of the face-recognition systems designed by Microsoft, IBM and the China-based Face++ found that the systems consistently gave the wrong gender for famous women of color, including Oprah Winfrey, Serena Williams, Michelle Obama and Shirley Chisholm, the first black female member of Congress.

    The companies have responded in recent months by pouring many more photos into the mix, hoping to train the systems to better distinguish among more than just white faces. IBM said Wednesday it used 1 million facial images, taken from the photo-sharing site Flickr, to build the "world's largest facial data-set," which it will release publicly for other companies to use.

    IBM and Microsoft say that allowed their systems to recognize gender and skin tone with much more precision. Microsoft said its improved system reduced the error rates for darker-skinned men and women by "up to 20 times," and reduced error rates for all women by nine times.

    Those improvements were heralded (宣布) by some for taking aim at the prejudices in a rapidly spreading technology, including potentially reducing the kinds of false positives that could lead police officers to misidentify a criminal suspect.

    But others suggested that the technology's increasing accuracy could also make it more marketable. The system should be accurate, "but that's just the beginning, not the end, of their ethical obligation," said David Robinson, managing director of the think tank Upturn.

    At the center of that debate is Microsoft, whose multimillion-dollar contracts with ICE came under fire amid the agency's separation of migrant parents and children at the Mexican border.

    In an open letter to Microsoft chief executive Satya Nadella urging the company to cancel that contract, Microsoft workers pointed to a company blog post in January that said Azure Government would help ICE "accelerate recognition and identification." "We believe that Microsoft must take an ethical stand, and put children and families above profits," the letter said.

    A Microsoft spokesman, pointing to a statement last week from Nadella, said the company's "current cloud engagement" with ICE supports relatively anodyne (温和的) office work such as "mail, calendar, messaging and document management workloads." The company said in a statement that its facial-recognition improvements are "part of our ongoing work to address the industry-wide and societal issues on bias."

    Criticism of face recognition will probably expand as the technology finds its way into more arenas, including airports, stores and schools. The Orlando police department said this week that it would not renew its use of Amazon.com's Rekognition system.

    Companies "have to acknowledge their moral involvement in the downstream use of their technology,"

    Robinson said. "The impulse is that they're going to put a product out there and wash their hands of the consequences. That's unacceptable."
