When Emmett Shear, former CEO of live streaming site Twitch, was named interim CEO of OpenAI on Sunday night, it might have seemed like a curious choice.
After graduating from college in 2005, he spent nearly his entire career at Twitch, the Amazon-owned platform popular with video gamers, as it grew from a fledgling site called Justin.tv into a behemoth with more than 30 million daily viewers, before leaving this year.
Shear, 40, an avid video gamer, was seen as a competent leader who steered Twitch through several transitions. But he faced criticism, including for his handling of claims in 2020 that Twitch’s work culture was hostile toward women and for the site’s slowness to respond to harmful content. Some employees and live streamers also complained that his focus on maneuvering Twitch toward profitability by cutting costs was eroding the quality of the platform.
He also knows Sam Altman, who was ousted from OpenAI by its board of directors on Friday. The two were in the same group at Y Combinator, the start-up accelerator that invested in Shear’s first two companies.
But in interviews and on social media, Shear has expressed opinions about the risks of artificial intelligence that could appeal to OpenAI’s board members, who ousted Altman at least in part out of fear that he was not paying enough attention to the potential threat posed by the company’s technology.
On a technology podcast in June, Shear expressed concern about what could happen if AI reached artificial general intelligence, or AGI, a term for human-level intelligence. He worried that, at that point, an AI system could become so powerful that it could continue to improve itself without human intervention and would have the ability to destroy humanity.
Shear could not immediately be reached for comment Monday. Early Monday on X, the platform formerly known as Twitter, he wrote that he would spend the first month of his tenure investigating how Altman had been ousted and overhauling the company’s management team.
Depending on the results of “whatever we learn from this, I will push for changes in the organization, including pushing hard for significant governance changes if necessary,” he said.
On the podcast, Shear discussed a thought experiment, often cited in AI circles, that centers on paper clips: In short, the idea is that even giving an all-powerful AI a goal as mundane as manufacturing as many paper clips as possible could lead it to determine that eradicating humans would be the most efficient way to achieve that goal.
“Step 1 is ‘Take control of the planet,’ right? So I have control over everything. Step 2 is ‘I solve my goal,’” he said.
If AI reaches that point, Shear said, the potential catastrophe would be like a “universe-destroying bomb.”
“It’s not just extinction at the human level; human extinction is bad enough,” he said. “It’s like a potential destruction of all value in the light cone. Not just for us, but for any alien species caught in the blast.”
Shear said he was not as worried as some AI theorists about this type of world-ending event, partly because he did not believe current AI technology was anywhere close to such a breakthrough, and partly because he thought it would be possible to guarantee that the goals of AI systems were aligned with those of humans. But he still embraced industry safeguards.
“I’m in favor of creating some kind of fire alarm, like maybe, ‘No AIs larger than x,’” he said. “I think there are good options for international collaboration on some kind of AI test ban treaty.”
In posts on X, Shear has reinforced those points, referring to himself as a “doomer” and suggesting that companies should slow their technological advances.
“I am in favor of a slowdown,” he replied to another user in September. “We can’t learn to build safe AI without experimenting, and we can’t experiment without progressing, but we probably shouldn’t be moving at full speed either.”