AI’s “Oppenheimer Moment” Is Bullshit.

Well that was something. Yesterday the Center for AI Safety, which didn’t exist last year, released a powerful 22-word statement that sent the world’s journalists into a predictable paroxysm of hand-wringing:

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Oh my. I mean, NUCLEAR WAR, folks. I mean, the END OF THE WORLD! And thank God, I mean, really, thank the ever-loving LORD that tech’s new crop of genius Great Men – leaders from Microsoft, Google, OpenAI and the like – have all come together to proclaim that indeed, this is a VERY BIG PROBLEM, and not to worry, they all very much WANT TO BE REGULATED, as soon as humanly possible, please.

The image at top is how CNN responded in its widely read “Reliable Sources” media industry newsletter, which is as good a barometer of media groupthink as the front page of The New York Times, which also prominently featured the story (along with a requisite “the cool kids are now doing AI in a rented mansion” fluff piece; same ice cream, different flavor).

But as is often the case, the press is once again failing to see the bigger story here. The easy win of a form-fitting narrative is just too damn tasty – confirmation bias be damned, full steam ahead!

So I want to call a little bullshit on this whole enterprise, if I may.

First, a caveat. Of course we want to mitigate the risk of AI. I mean, duh. My goal in writing this post is not to join the ranks of those who believe AI will never pose a dire threat to humanity, or of those waiting by their keyboards to join the singularity. My point is simply this: When a group of industry folks drop what looks like an opportunistic A-bomb on the willing press, it kind of makes sense to think through the why of it all.

Let’s review a few facts. First and foremost, the statement served as a coming-out party for the Center for AI Safety, a five-month-old organization that lists no funders, no phone number, and just a smattering of staff members (none of whom are well known beyond its academic director, a Berkeley PhD who also joined five months ago). Its mission is “to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.” Well, OK, that’s nice, but… who exactly is doing all that equipping? And where might their loyalties and incentives lie? And do they have any experience working with real-life governments or policy?

Hmm. Did The New York Times, CNN, or The Verge ask about any of this in their coverage yesterday? Nope. Strange, given that the last time we saw a similar effort, the organization behind it turned out to be funded in part by Elon Musk. The golden rule of journalism is Follow The Damn Money.

OK, next. Look at the signatories. A ton of well-meaning academics and scientists, and plenty of previously vocal critics of AI (Geoffrey Hinton being the most notable among them). OpenAI’s network is all over the list – there are nearly 40 signatories from that company alone. OpenAI partner Microsoft only mustered two, but they were the two that mattered – the company’s CSO and CTO. Google clocked in with nearly 20. But not a one from Meta, nor Amazon, Apple, IBM, Nvidia, or Snowflake.

Hmmm. Did any of the mainstream media pieces note those prominent non-signatories, or opine on what they might imply? Only in the case of Meta’s Yann LeCun, who is already on record stating that AI doomsday scenarios are “completely ridiculous.”

So what’s this really all about? Well, in a well-timed blog post just last week about how best to regulate AI, OpenAI’s CEO Sam Altman called for “an International Atomic Energy Agency for superintelligence efforts.” There’s that nuclear angle, once again – this AI stuff is not only supremely complicated and above the pay grade of mere mortals, it’s also as dangerous as nuclear fissile material, and needs to be managed as such!

Altman’s testimony before Congress two weeks ago, his blog post equating AI with nukes the week after, and then, this week, the newly minted Center for AI Safety’s explosive statement – come on, journalists: Can you not see a high-level communications operation playing out directly in front of your credulous eyes?

Before I rant any further, let me acknowledge that two apparently contrary ideas can in fact both be true. I am told by folks who know Altman that he truly believes “super-intelligent” AI poses an existential risk to humanity, and that his efforts to slap Congress, the press, and the public awake are in fact deeply earnest.

But it can also be true that companies in the Valley have a deep history of using calls for regulatory oversight as a strategy to pull the ladder up behind themselves, ensuring they alone have the right to exploit technologies and business models that otherwise might encourage robust innovation and, by extension, competition. (Cough cough privacy and GDPR, cough cough.) Were I in charge of comms and policy at OpenAI, Google, or Microsoft, the three current leaders in consumer and enterprise AI, I’d be nothing short of tickled pink with Altman’s heartfelt call to arms. Power Rangers, Unite!

I’ve written before, and certainly will write again, that thanks to AI, we stand on the precipice of unf*cking the Internet from its backasswards, centralized business and data models. But if we declare that only a few government-licensed AI companies can pursue the market – well, all we’ve done is extend tech’s current oligarchy, crushing what could be the most transformative and innovative economy in humankind’s history. I’ll save more on that for the next post, but for now, I respectfully call bullshit on AI’s “Oppenheimer Moment.” It’s nothing of the sort.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.


One thought on “AI’s ‘Oppenheimer Moment’ Is Bullshit.”

  1. Hi,
    Elon Musk said he wants a regulatory authority on AI when he was on Joe Rogan, I believe. And I agree with him on that, because I, too, am concerned about the global effects of AI.
