
Multiple-choice question

    At the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams “Save her! Save her!” the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah’s 11 percent. The robot’s decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

    Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. robots cannot harm humans or allow humans to come to harm; 2. robots must obey humans, unless doing so conflicts with law 1; 3. robots must protect their own existence, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov’s robots—they don’t have to think, judge, or value. They don’t have to like humans or believe that hurting them is wrong or bad. They simply don’t do it.

    The robot that rescues Spooner in I, Robot follows Asimov’s zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what’s in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

    Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

    Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that a computer program can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (替身) called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality. How can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?

50. What has the roboticist at the Bristol robotics laboratory found in his experiment?

A) Robots can be made as intelligent as human beings someday.
B) Robots can have moral issues encoded into their programs.
C) Robots can have trouble making decisions in complex scenarios.
D) Robots can be programmed to perceive potential perils.

Answer:

C

Explanation:

50. C) Robots can have trouble making decisions in complex scenarios.

Explanation: First find the locating keywords in the question—“Bristol robotics laboratory” and “in his experiment”—then trace them back to the fourth sentence of the last paragraph. The passage there states that the experiment highlights the importance of morality: how can a robot decide whom to save, or what is best for humanity, especially when it cannot calculate survival odds? Now consider the options. A) Robots can be made as intelligent as human beings someday: this is not supported anywhere in the passage, so eliminate it. B) Robots can have moral issues encoded into their programs: the passage stresses the importance of morality but never proposes encoding moral issues into programs, nor is this what the roboticist found, so eliminate it. C) Robots can have trouble making decisions in complex scenarios: this matches the passage, so it is correct. D) Robots can be programmed to perceive potential perils: this is not mentioned in the passage, so eliminate it.


