
Multiple-Choice Question

    This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.

    Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans human?”

    What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans”.

    Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”

    But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI “vision” today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.

    Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.

    On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.

    While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair. To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.

32. In David Eagleman’s opinion, our current knowledge of consciousness ________.

A. helps explain artificial intelligence
B. can be misleading to robot making
C. inspires popular sci-fi TV series
D. is too limited for us to reproduce it

Answer:

D

Explanation:

Answer analysis: The mention of David Eagleman in the question stem points to the first sentence of paragraph four. Eagleman says that just how people think is still far too complex to be understood, let alone reproduced. He then adds that we are in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there. It follows that our knowledge of consciousness is very limited and that we cannot reproduce consciousness. In option D, “reproduce” repeats a word used in the passage, and “limited” corresponds to “no good theories explaining what consciousness actually is,” so D is the correct answer.

Eliminating the distractors: Option A has no basis in the passage, which never says that our understanding of consciousness helps explain artificial intelligence, so A is wrong. Option B distorts the original idea: the passage says there are no good theories for how to build such a machine, not that our knowledge of consciousness misleads robot making, so B is wrong. Option C uses the phrase “popular sci-fi TV series” from the end of paragraph three as a distractor, but the passage draws no connection between our knowledge of consciousness and sci-fi TV series, nor is this David Eagleman’s view, so C is wrong.


Copyright notice: Unless otherwise stated, all articles on this site are licensed under CC BY-NC-SA 4.0. Please credit the source when reposting.
