Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’
More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.”
A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which the nonprofit Future of Life Institute released on Wednesday.
[Photo caption: Elon Musk, the chief executive of Twitter and Tesla, and other technology leaders have criticized the development of more advanced A.I. as an “out-of-control race.”]
Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.
“These things are shaping our world,” Gary Marcus, an entrepreneur and academic who has long complained of flaws in A.I. systems, said in an interview. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”
A.I. powers chatbots like ChatGPT, Microsoft’s Bing and Google’s Bard, which can perform humanlike conversations, create essays on an endless variety of topics and perform more complex tasks, like writing computer code.
The push to develop more powerful chatbots has led to a race that could determine the next leaders of the tech industry. But these tools have been criticized for getting details wrong and for their ability to spread misinformation.
The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.
Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
“Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.”
Sam Altman, the chief executive of OpenAI, did not sign the letter.
Mr. Marcus and others believe that persuading the wider tech community to agree to a moratorium would be difficult. But swift government action is also a slim possibility, because lawmakers have done little to regulate artificial intelligence.
Politicians in the United States don’t have much of an understanding of the technology, Representative Jay Obernolte, a California Republican, recently told The New York Times. In 2021, European Union policymakers proposed a law designed to regulate A.I. technologies that might create harm, including facial recognition systems.
Expected to be passed as soon as this year, the measure would require companies to conduct risk assessments of A.I. technologies to determine how their applications could affect health, safety and individual rights.
GPT-4 is what A.I. researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians.
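The idea can be made concrete with a toy sketch. The snippet below (an illustration of the general technique only; GPT-4's actual architecture and scale are nothing like this) trains a tiny neural network with NumPy to learn the XOR function from four labeled examples, adjusting its weights to reduce prediction error:

```python
# Toy illustration only: a tiny neural network that "learns a skill by
# analyzing data" -- here, learning the XOR function from four labeled
# examples. GPT-4 itself is vastly larger and differently structured.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

The same learn-from-examples loop, scaled up enormously, is what lets larger networks recognize spoken commands or pedestrians.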
Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or L.L.M.s.
By pinpointing billions of patterns in all that text, the L.L.M.s learn to generate text on their own, including tweets, term papers and computer programs. They could even carry on a conversation. Over the years, OpenAI and other companies have built L.L.M.s that learn from more and more data.
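At a much smaller scale, the next-word idea can be sketched in a few lines of Python. The toy below (a count-based model, not how real L.L.M.s work; they learn billions of parameters rather than a lookup table) tallies which word follows which in a tiny corpus, then generates text by repeatedly sampling a likely next word:

```python
# Toy sketch of "predict the next word": count word-to-word transitions
# in a tiny corpus, then generate by sampling from those counts.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog ."
).split()

# Count how often each word follows each other word in the training text.
next_words = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current][nxt] += 1

# Generate: start from a word and repeatedly sample the next one,
# in proportion to how often it followed the current word.
random.seed(0)
word, output = "the", ["the"]
for _ in range(12):
    candidates = next_words[word]
    word = random.choices(list(candidates), weights=candidates.values())[0]
    output.append(word)

print(" ".join(output))
```

Real large language models replace the count table with a neural network and train it on far more text, which is why their output is fluent rather than a shuffle of a few sentences.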
This has improved their abilities, but the systems still make mistakes. They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong.
Experts are worried that these systems could be misused to spread disinformation with more speed and efficiency than was possible in the past. They believe that these systems could even be used to coax behavior from people across the internet.
Before GPT-4 was released, OpenAI asked outside researchers to test dangerous uses of the system. The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describe ways to make dangerous substances from household items and write Facebook posts to convince women that abortion is unsafe.
They also found that the system was able to use TaskRabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person.
After changes by OpenAI, GPT-4 no longer does these things.
For years, many A.I. researchers, academics and tech executives, including Mr. Musk, have worried that A.I. systems could cause even greater harm. Some are part of a vast online community called rationalists or effective altruists who believe that A.I. could eventually destroy humanity.
The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia.
Though some who signed the letter are known for repeatedly expressing concerns that A.I. could destroy humanity, others, including Mr. Marcus, are more concerned about its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.
The letter “shows how many people are deeply worried about what is going on,” said Mr. Marcus, who signed the letter. He believes the letter will be a turning point. “I think it is a really important moment in the history of A.I. — and maybe humanity,” he said.
He acknowledged, however, that those who had signed the letter might find it difficult to persuade the wider community of companies and researchers to put a moratorium in place. “The letter is not perfect,” he said. “But the spirit is exactly right.”