AI experts disown Musk-backed campaign citing their research
Corrects paragraph 6 in April 1 story to show the Musk Foundation is a major (not the primary) donor to FLI
By Martin Coulter
LONDON, April 1 (Reuters) - Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.
The letter, dated March 22 and with more than 1,800 signatures by Friday, called for a six-month circuit-breaker in the development of systems "more powerful" than Microsoft-backed MSFT.O OpenAI's new GPT-4, which can hold human-like conversation, compose songs and summarise lengthy documents.
Since GPT-4's predecessor ChatGPT was released last year, rival companies have rushed to launch similar products.
The open letter says AI systems with "human-competitive intelligence" pose profound risks to humanity, citing 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google GOOGL.O and its subsidiary DeepMind.
Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI's research. OpenAI did not immediately respond to requests for comment.
Critics have accused the Future of Life Institute (FLI), the organisation behind the letter, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases. The Musk Foundation is a major donor to FLI.
Among the research cited was "On the Dangers of Stochastic Parrots", a paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.
Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as "more powerful than GPT-4".
"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Ignoring active harms right now is a privilege that some of us don't have."
Mitchell and her co-authors – Timnit Gebru, Emily M. Bender, and Angelina McMillan-Major – subsequently published a response to the letter, accusing its authors of "fearmongering and AI hype".
"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a 'flourishing' or 'potentially catastrophic' future," they wrote.
"Accountability properly lies not with the artefacts but with their builders."
FLI president Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI’s corporate advantage.
"It's quite hilarious. I've seen people say, 'Elon Musk is trying to slow down the competition,'" he said, adding that Musk had no role in drafting the letter. "This is not about one company."
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, told Reuters she agreed with some points in the letter, but took issue with the way in which her work was cited.
She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.
Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.
She said: "AI does not need to reach human-level intelligence to exacerbate those risks.
"There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."
Asked to comment on the criticism, FLI's Tegmark said both short-term and long-term risks of AI should be taken seriously.
"If we cite someone, it just means we claim they're endorsing that sentence. It doesn't mean they're endorsing the letter, or we endorse everything they think," he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, stood by its contents, telling Reuters it was sensible to consider black swan events – those which appear unlikely, but would have devastating consequences.
The open letter also warned that generative AI tools could be used to flood the internet with "propaganda and untruth".
Dori-Hacohen said it was "pretty rich" for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others.
Musk and Twitter did not immediately respond to requests for comment.
Reporting by Martin Coulter; editing by Philippa Fletcher and Giles Elgood