Cambridge IELTS 16 Test 4 Reading 3
2023-06-18 13:09:21 Source: China Education Online
Cambridge IELTS 16 Test 4 Reading 3. The third passage in the fourth test of Cambridge IELTS 16 concerns attitudes towards artificial intelligence. The author first raises the problem that people lack trust in AI, then uses examples to explain the reasons behind this phenomenon, and closes the article with possible solutions. China Education Online walks candidates through the passage below.
Cambridge IELTS 16 Test 4 Passage 3: Reading Passage
Part A
Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
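The medical risk prediction described above can be illustrated with a toy classifier. The weights, bias, and feature names below are invented for illustration; this is a minimal sketch of how a logistic risk model might score a patient, not any real clinical system:

```python
import math

# Hypothetical weights for a toy cardiac-risk model (illustration only,
# not derived from real clinical data).
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.9}
BIAS = -6.0

def risk_score(patient):
    """Return a probability-like risk estimate via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

high = risk_score({"age": 70, "systolic_bp": 160, "smoker": 1})
low = risk_score({"age": 30, "systolic_bp": 110, "smoker": 0})
print(round(high, 2), round(low, 2))  # → 0.71 0.07
```

The model simply combines risk factors into a single score; real systems of the kind the passage mentions are trained on large datasets, but the scoring step has this general shape.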
Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.
If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.
Part B
Take the case of Watson for Oncology, one of technology giant IBM’s supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
Part C
This is just one example of people’s lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to comprehend. And interacting with something we don’t understand can cause anxiety and give us a sense that we’re losing control.
Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.
Part D
Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.
This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as “confirmation bias”. As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.
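The polarisation the experiment found can be sketched as a toy model of confirmation bias, in which the same mixed evidence is weighted more heavily when it agrees with a person's existing attitude. The update rule and numbers here are assumptions for illustration, not taken from the study:

```python
def update_attitude(attitude, evidence, bias=0.3):
    """Biased belief update: evidence agreeing with the current attitude
    is weighted more heavily than contradicting evidence (a toy model of
    confirmation bias, not the study's actual methodology)."""
    agrees = (evidence > 0) == (attitude > 0)
    weight = 1 + bias if agrees else 1 - bias
    return attitude + weight * evidence * 0.1

# One optimist (+0.5) and one sceptic (-0.5) see the same mixed evidence.
optimist, sceptic = 0.5, -0.5
for e in [0.4, -0.4, 0.4, -0.4]:  # balanced positive/negative evidence
    optimist = update_attitude(optimist, e)
    sceptic = update_attitude(sceptic, e)
print(optimist, sceptic)
```

Even though the evidence nets to zero, the biased weighting pushes the optimist above +0.5 and the sceptic below -0.5, mirroring the polarisation the passage describes.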
Part E
Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.
Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.
Part F
Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
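The idea of letting users slightly modify an algorithm's output can be sketched as follows. The `cap` parameter (how far a human may nudge the forecast) is a hypothetical design choice, not a detail from the study:

```python
def final_forecast(model_output, human_adjustment, cap=0.1):
    """Blend a model forecast with a small human adjustment.

    The adjustment is clamped to [-cap, +cap], so the user keeps a sense
    of control while the algorithm still does most of the work. The cap
    value is an illustrative assumption.
    """
    adjustment = max(-cap, min(cap, human_adjustment))
    return model_output + adjustment

print(round(final_forecast(0.80, 0.25), 2))   # → 0.9  (clamped to +0.1)
print(round(final_forecast(0.80, -0.05), 2))  # → 0.75 (within the band)
```

The clamping keeps human input bounded, which matches the passage's point that only a *slight* modification was enough to increase satisfaction and future use.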
We don’t need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.
That concludes Cambridge IELTS 16 Test 4 Reading 3. We wish every candidate an ideal score and admission to the university of their dreams. For more IELTS material, follow the foreign-language channel of China Education Online.