After OpenAI’s Blowup, It Seems Pretty Clear That “AI Safety” Isn’t a Real Thing


Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.

Well, holy shit. As far as the tech industry goes, it’s hard to say whether there’s ever been a more shocking series of events than the ones that took place over the last several days. The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley. That said, the long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it.

The “coup,” as many have called it, has largely been attributed to an ideological rift between Altman and the OpenAI board over the pace of technological development at the company. As this narrative goes, the board, which is supposed to have ultimate say over the direction of the organization, grew concerned about the rate at which Altman was pushing to commercialize the technology and decided to eject him with extreme prejudice. Altman, backed by OpenAI’s powerful partner and funder, Microsoft, as well as a majority of the startup’s staff, then led a counter-coup, pushing out the traitors and reinstating himself as the leader of the company.

So much of the drama seems to revolve around this argument between Altman and the board over “AI safety.” Indeed, this fraught chapter in the company’s history looks like a flare-up between OpenAI’s two opposing personalities: one based around research and responsible technological development, the other based around making shitloads of money. One side decidedly overpowered the other (hint: it was the money side).

Other writers have already offered breakdowns of how OpenAI’s unique organizational structure seems to have set it on a collision course with itself. Maybe you’ve seen the startup’s org chart floating around the web but, in case you haven’t, here’s a quick recap: unlike pretty much every other technology business that exists, OpenAI is actually a non-profit, governed wholly by its board, that operates and controls a for-profit company. This design is supposed to prioritize the organization’s mission of pursuing the public good over money. OpenAI’s own self-description promotes this idealistic notion that its main aim is to make the world a better place, not make money:

We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.

Indeed, the board’s charter pledges its allegiance to “humanity,” not to shareholders. So, despite the fact that Microsoft has poured a megaton of money and resources into OpenAI, the startup’s board is still (hypothetically) supposed to have final say over what happens with its products and technology. That said, the for-profit arm of the organization is reportedly worth tens of billions of dollars. As many have already noted, the organization’s ethical mission came directly into conflict with the economic interests of those who had invested in it. As per usual, the money won.

All of this said, you could make the case that we shouldn’t fully endorse this interpretation of the weekend’s events yet, since the actual reasons for Altman’s ousting still haven’t been made public. For the most part, members of the company either aren’t talking about why Sam was pushed out or have flatly denied that his ousting had anything to do with AI safety. Alternate theories have swirled in the meantime, with some suggesting that the real reasons for Altman’s abrupt exit were decidedly more colorful, like accusations that he pursued additional funding from autocratic Mideast regimes.

But to get too bogged down in speculating about the specific catalysts for OpenAI’s drama is to ignore what the whole episode has revealed: as far as the real world is concerned, “AI safety” in Silicon Valley is pretty much null and void. Indeed, we now know that despite its supposedly bullet-proof organizational structure and its stated mission of responsible AI development, OpenAI was never going to be allowed to actually put ethics before money.

To be clear, AI safety is a really important field, and were it actually practiced by corporate America, that would be one thing. But the version of it that existed at OpenAI, arguably the company that has done the most to pursue a “safety”-oriented model, doesn’t seem to have been much of a match for the realpolitik machinations of the tech industry. In even franker terms, the folks who were supposed to be defending us from runaway AI (i.e., the board members), the ones entrusted with responsible stewardship over this powerful technology, don’t seem to have known what they were doing. They don’t seem to have understood that Sam had the industry connections and friends in high places, that he was well liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide. If you come at the king, you best not miss.

In short: if the point of corporate AI safety is to protect humanity from runaway AI, then, as a strategy for doing that, it has just flunked its first big test. It’s hard to put your faith in a group of people who weren’t even capable of predicting the very predictable outcome of firing their boss. How, exactly, can such a group be trusted to oversee a supposedly “super-intelligent,” world-shattering technology? If you can’t outfox a gaggle of outraged investors, you probably can’t outfox the Skynet-type entity you claim to be building. That said, I would argue we also can’t trust the craven, money-obsessed C-suite that has now reasserted its dominance. Imo, they’re obviously not going to do the right thing. So, effectively, humanity is stuck between a rock and a hard place.

As the dust from the OpenAI blowup settles, the company seems well positioned to get back to business as usual. After jettisoning the only two women on its board, the company added fiscal goon Larry Summers. Altman is back at the company (as is former company president Greg Brockman, who stepped down in solidarity with him), and Microsoft’s top executive, Satya Nadella, said he is “encouraged by the changes to OpenAI board,” calling them a “first essential step on a path to more stable, well-informed, and effective governance.”

With the board’s failure, it seems clear that OpenAI’s do-gooders may have not only set back their own “safety” mission but also kicked off a backlash against the AI ethics movement writ large. Case in point: this weekend’s drama seems to have further radicalized an already pretty radical anti-safety ideology that had been circulating in the industry. The “effective accelerationists” (abbreviated “e/acc”) believe that things like government regulation, “tech ethics,” and “AI safety” are cumbersome obstacles to true technological development and exponential profit. Over the weekend, as the narrative about “AI safety” emerged, some of the more fervent adherents of this belief system took to X to decry what they perceived to be an attack on the true victim of the episode (capitalism, of course).

To some degree, the whole point of the tech industry’s embrace of “ethics” and “safety” is about reassurance. Companies realize that the technologies they are selling can be disconcerting and disruptive; they want to reassure the public that they’re doing their best to protect consumers and society. At the end of the day, though, we now know there’s no reason to believe that those efforts will ever make a difference if the company’s “ethics” end up conflicting with its money. And when have those two things ever not conflicted?

Question of the day: What was the best meme to emerge from the OpenAI drama?


This week’s unprecedented imbroglio inspired so many memes and snarky takes that choosing a favorite seems nearly impossible. In fact, the scandal spawned several different genres of meme altogether. In the immediate aftermath of Altman’s ouster, plenty of Rust Cohle conspiracy memes circulated as the tech world scrambled to understand just what, exactly, it was witnessing. There were also jokes about who should replace Altman and what may have caused the power struggle in the first place. Then, as it became clear that Microsoft would be standing behind the ousted CEO, the narrative (and the memes) shifted. The triumphant-Sam-returning-to-OpenAI-after-ousting-the-board genre became popular, as did tons of Satya Nadella memes. There were, of course, Succession memes. And, finally, an inevitable genre emerged in which X users openly mocked the OpenAI board for having so totally blown the coup against Altman. I personally found the deepfake video that swaps Altman’s face onto Jordan Belfort in The Wolf of Wall Street to be a good one. That said, sound off in the comments with your favorite.

More headlines from this week

  • The other AI company that had a really bad week. OpenAI isn’t the only tech firm that went through the wringer this week. Cruise, the robotaxi company owned by General Motors, is also having a pretty tough go of it. The company’s founder and CEO, Kyle Vogt, resigned on Monday after the state of California accused the company of failing to disclose key details of a violent incident involving a pedestrian. Vogt founded the company in 2013 and shepherded it to a prominent place in the autonomous vehicle industry. However, the company’s bungled rollout in San Francisco in August led to widespread consternation and heaps of complaints from city residents and public safety officials. The scandals led Cruise to pull all of its vehicles off California roads in October and, eventually, to halt operations across the country.
  • MC Hammer is apparently a huge OpenAI fan. To add to the weirdness of this week, we also found out that “U Can’t Touch This” rapper MC Hammer is a confirmed OpenAI stan. On Wednesday, as the chaos of this week’s power struggle came to an end, the rapper tweeted: “Salute and congratulations to the 710 plus @OpenAI team members who gave an unparalleled demonstration of loyalty, love and commitment to @sama and @gdb in these perilous times it was a thing of beauty to witness.”
  • Creatives are losing the AI copyright war. Sarah Silverman’s lawsuit against OpenAI and Meta isn’t going so well. This week, it was revealed that the comedian’s suit against the tech giants (which she has accused of copyright violations) has floundered. Silverman isn’t alone. A lawsuit filed by a number of visual artists against Midjourney and Stability AI was all but thrown out by a judge last month. Still, though these lawsuits appear to be failing, it may just be a matter of finding the proper legal argument for them to succeed. Though the current claims may not be strong enough, the cases could be revised and refiled.


