In late March, a different public letter gathered more than 1,000 signatures from members of the academic, business and technology worlds who called for an outright pause on the development of new high-powered AI models until regulation could be put into place. Most of the field’s most influential leaders didn’t sign that one, but they have signed the new statement, including Altman and two of Google’s most senior AI executives: Demis Hassabis and James Manyika. Microsoft chief technology officer Kevin Scott and Microsoft chief scientific officer Eric Horvitz both signed it as well.
Notably absent from the letter are Google CEO Sundar Pichai and Microsoft CEO Satya Nadella, the field’s two most powerful corporate leaders.
Pichai said in April that the pace of technological change may be too fast for society to adapt, but he was optimistic because the conversation around AI risks was already happening. Nadella has said that AI will be hugely beneficial by helping humans work more efficiently and allowing people to do more technical tasks with less training.
Industry leaders are also stepping up their engagement with Washington power brokers. Earlier this month, Altman met with President Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that AI could cause significant harm to the world. Altman drew attention to specific “risky” applications, including using AI to spread disinformation and potentially aid in more targeted drone strikes.
Sen. Richard Blumenthal (D-Conn.) said Tuesday: “These technologies are no longer fantasies of science fiction. From the displacement of millions of workers to the spread of misinformation, AI poses widespread threats and risks to our society.” He is pushing for AI regulation from Congress.
Hendrycks added that “ambitious global coordination” might be required to deal with the problem, possibly drawing lessons from both nuclear nonproliferation and pandemic prevention. Though a number of ideas for AI governance have been proposed, no sweeping solutions have been adopted.
Altman, the OpenAI CEO, suggested in a recent blog post that there probably will be a need for an international organisation that can inspect systems, test their compliance with safety standards, and place restrictions on their use ― similar to how the International Atomic Energy Agency governs nuclear technology.
Addressing the apparent hypocrisy of sounding the alarm over AI while rapidly working to advance it, Altman told Congress that it was better to get the tech out to many people now while it is still early so that society can understand and evaluate its risks, rather than waiting until it is already too powerful to control.
Others have implied that the comparison to nuclear technology may be alarmist. Former White House tech adviser Tim Wu said likening the threat posed by AI to nuclear fallout misses the mark and clouds the debate around reining in the tools by shifting the focus away from the harms it may already be causing.
“There are clear harms from AI, misuse of AI already that we’re seeing, and I think we should do something about those, but I don’t think they’re . . . yet shown to be like nuclear technology,” he told The Washington Post in an interview last week.
The Washington Post