CET-6 · Reading Comprehension · Past Exam Questions · Level 25
Reading Comprehension
Professor Stephen Hawking has warned that the creation of powerful artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilisation and our species”.
Hawking was speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. “We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”
While the world-renowned physicist has often been cautious about AI, raising concerns that humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own, he was also quick to highlight the positives that AI research can bring. “The potential benefits of creating intelligence are huge,” he said. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one—industrialisation. And surely we will aim to finally eradicate disease and poverty. And every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.”
Huw Price, the centre’s academic director and the Bertrand Russell professor of philosophy at Cambridge University, where Hawking is also an academic, said that the centre came about partially as a result of the university’s Centre for Existential Risk. That institute examined a wider range of potential problems for humanity, while the LCFI has a narrow focus.
AI pioneer Margaret Boden, professor of cognitive science at the University of Sussex, praised the progress of such discussions. As recently as 2009, she said, the topic wasn’t taken seriously, even among AI researchers. “AI is hugely exciting,” she said, “but it has limitations, which present grave dangers given uncritical use.”
The academic community is not alone in warning about the potential dangers of AI as well as the potential benefits. A number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed their concerns about the damage that a super-intelligent AI could do to humanity.
1. What did Stephen Hawking think of artificial intelligence?
A. It would be vital to the progress of human civilisation.
B. It might be a blessing or a disaster in the making.
C. It might present challenges as well as opportunities.
D. It would be a significant expansion of human intelligence.
Answer: B
Explanation: This question asks about Hawking’s view of artificial intelligence. The keywords Hawking and artificial intelligence locate the answer in paragraph 1 (Stephen Hawking … AI will be …). Option A swaps Hawking’s actual point in paragraph 1, that creating an institute to research the future of intelligence is crucial to human civilisation, for the claim that AI itself is crucial to human civilisation. Option C replaces the either/or framing in paragraph 1 (either the best, or the worst thing, ever to happen to humanity) with a both/and framing (opportunities alongside challenges), and “challenges as well as opportunities” fails to convey the gravity Hawking attaches to the matter, namely the survival of the species. Option D distorts the uncertainty Hawking expresses in sentence 3 of paragraph 3 (“We cannot predict what we might achieve when our own minds are amplified by AI”) into a certainty (AI will significantly expand human intelligence). The opening paragraph reports Hawking’s warning that AI “will be either the best, or the worst thing, ever to happen to humanity”; B matches this: “in the making” corresponds to “will be” (not yet realised), and “a blessing or a disaster” corresponds to “either the best, or the worst thing”.
2. What did Hawking say about the creation of the LCFI?
A. It would accelerate the progress of AI research.
B. It would mark a step forward in the AI industry.
C. It was extremely important to the destiny of humankind.
D. It was an achievement of multi-disciplinary collaboration.
Answer: C
Explanation: This question asks about Hawking’s view of the founding of the LCFI. The keywords Hawking and LCFI locate the answer in paragraphs 1 and 2 (Hawking … the creation of an academic institute … LCFI). Option A distorts the LCFI’s stated purpose in paragraph 2, tackling the open-ended questions raised by the rapid pace of AI research (which could include reining that development in), into “accelerating AI research”. Option B is about the “AI industry”, which the passage never mentions. Option D turns the fact stated in paragraph 2 (the LCFI is a multi-disciplinary institute conducting cross-disciplinary research) into an opinion attributed to Hawking (that the LCFI is an achievement of multi-disciplinary collaboration). In paragraph 1, Hawking praises the creation of an academic institute dedicated to researching the future of intelligence (i.e. the LCFI) as crucial to the future of our civilisation and our species, and paragraph 2 adds that the LCFI’s focus on the future of intelligence is a welcome change. C correctly reflects his view.
3. What did Hawking say was a welcome change in AI research?
A. The shift of research focus from the past to the future.
B. The shift of research from theory to implementation.
C. The greater emphasis on the negative impact of AI.
D. The increasing awareness of mankind’s past stupidity.
Answer: A
Explanation: This question asks what Hawking considered a welcome change in AI research; the phrase a welcome change in AI research locates the answer at the end of paragraph 2. Option B fabricates a claim the passage never makes (“AI research should move from theory to implementation”) out of the common-sense notion that strong AI is still at a theoretical stage. Option C infers, from the fact in paragraph 1 that Hawking warned people about AI’s risks, that he considers greater attention to AI’s downsides a welcome change; but that idea is implied by Boden’s remarks in paragraph 5, not by anything Hawking says. Option D distorts the logic of the last two sentences of paragraph 2, where Hawking reasons that since human history is mostly a history of stupidity, turning to study the future is a welcome change, into “AI researchers have recognised mankind’s past stupidity (and so turned to the future), which Hawking welcomes”. In those sentences Hawking says we spend a great deal of time studying history, most of which is the history of stupidity, so it is a welcome change that people are now studying the future of intelligence instead. A correctly summarises his view.
4. What concerns did Hawking raise about AI?
A. It may exceed human intelligence sooner or later.
B. It may ultimately over-amplify the human mind.
C. Super-intelligence may cause its own destruction.
D. Super-intelligence may eventually ruin mankind.
Answer: D
Explanation: This question asks about Hawking’s concerns about AI. The keywords raise … concerns, Hawking, and AI locate the answer in sentence 1 of paragraph 3. Option A infers from that sentence (“humanity may create a super-intelligence”) that AI may surpass human intelligence, but Hawking’s worry is not how powerful AI might become; it is AI’s effect on humanity (whether it escapes human control and destroys mankind). Option B draws on sentence 3 of paragraph 3, but “We cannot predict what we might achieve when our own minds are amplified by AI” describes a benefit of AI amplifying the human mind, not a concern. Option C misreads “the architect of its own destruction”: it is humanity that, by creating a super-intelligence, may bring about its own destruction, not the super-intelligence destroying itself. Sentence 1 of paragraph 3 states Hawking’s concern that humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own; D matches, with “ruin mankind” corresponding to “humanity … its own destruction”.
5. What do we learn about some entrepreneurs from the technology industry?
A. They are much influenced by the academic community.
B. They are most likely to benefit from AI development.
C. They share the same concerns about AI as academics.
D. They believe they can keep AI under human control.
Answer: C
Explanation: This question asks about entrepreneurs from the technology industry. The keywords entrepreneurs and technology industry locate the answer in the final paragraph. Option A jumps from the statement that some technology entrepreneurs share the academic community’s concerns to the unsupported conclusion that these entrepreneurs are heavily influenced by academia; the passage says nothing about influence between the two. Option B relies on a common-sense distractor, that AI development brings technology companies huge commercial gains, which the passage never mentions. Option D extrapolates from the entrepreneurs’ worry that out-of-control AI could harm humanity to the unstated claim that they can keep AI under human control. The final paragraph states that the academic community is not alone in warning about AI’s potential dangers as well as its potential benefits: a number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed concerns about the damage a super-intelligent AI could do to humanity. C matches this.