
Multiple-Choice Question

     Professor Stephen Hawking has warned that the creation of powerful artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilization and our species”. Hawking was speaking at the opening of the Leverhulme Center for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. “We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

    While the world-renowned physicist has often been cautious about AI, raising concerns that humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own, he was also quick to highlight the positives that AI research can bring. “The potential benefits of creating intelligence are huge,” he said. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one—industrialization. And surely we will aim to finally eradicate disease and poverty. And every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.”

    Huw Price, the center’s academic director and the Bertrand Russell professor of philosophy at Cambridge University, where Hawking is also an academic, said that the center came about partially as a result of the university’s Center for Existential Risk. That institute examined a wider range of potential problems for humanity, while the LCFI has a narrow focus.

    AI pioneer Margaret Boden, professor of cognitive science at the University of Sussex, praised the progress of such discussions. As recently as 2009, she said, the topic wasn’t taken seriously, even among AI researchers. “AI is hugely exciting,” she said, “but it has limitations, which present grave dangers given uncritical use.”

    The academic community is not alone in warning about the potential dangers of AI as well as the potential benefits. A number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed their concerns about the damage that a super-intelligent AI could do to humanity.

50. What do we learn about some entrepreneurs from the technology industry?

A. They are much influenced by the academic community.
B. They are most likely to benefit from AI development.
C. They share the same concerns about AI as academics.
D. They believe they can keep AI under human control.

Answer:

C

Explanation:

C. The key phrase “entrepreneurs from the technology industry” points to the last sentence of the final paragraph. That sentence states that a number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed their concerns about the damage that a super-intelligent AI could do to humanity. Since the passage says the academic community has warned about the potential dangers of AI, option C matches the text and is correct. The passage says only that these entrepreneurs share the academics’ concerns; it does not say they were influenced by the academic community, so A is eliminated. B and D are not mentioned in the passage, so they are also eliminated.


Copyright notice: Unless otherwise stated, all articles on this site are licensed under the CC BY-NC-SA 4.0 license. Please credit the original source when reposting.

