On AI: What Should We Regulate?

[Image: EU classification of AI risk.]

I’ve been following the story of generative AI a bit too obsessively over the past nine months, and while the story’s cooled a bit, I don’t think it’s any less important. If you’re like me, you’ll want to check out MIT Tech Review’s interview with Mustafa Suleyman, founder and CEO of Inflection AI (makers of the Pi chatbot). (Suleyman previously co-founded DeepMind, which Google purchased for life-changing money back in 2014.)

Inflection is among a platoon of companies chasing the consumer AI pot of gold known as conversational agents – services like ChatGPT, Google’s Bard, Microsoft’s Bing Chat, Anthropic’s Claude, and so on. Tens of billions have been poured into these upstarts in the past 18 months, and while it’s been less than a year since ChatGPT launched, the mania over genAI’s potential impact has yet to abate. The conversation seems to have moved from “this is going to change everything” to “how should we regulate it?” in record time, but what I’ve found frustrating is how little attention has been paid to the fundamental, if perhaps a bit less exciting, question of what form these generative AI agents might take in our lives. Who will they work for – their corporate owners, or …us? Who controls the data they interact with – the consumer, or, as has been the case over the past 20 years, the corporate entity?

That’s why I think Tech Review’s interview with Suleyman is required reading. Suleyman is a new author – his book The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma came out earlier this month (I’ve ordered it, but not yet read it). In the interview, Suleyman is asked why he’s excited about the Large Language Model (LLM) technologies driving companies like OpenAI and Inflection. His response bears quoting at length:

The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we’re in the generative wave, where you take that input data and produce new data.

The third wave will be the interactive phase. That’s why I’ve bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you’re going to talk to your AI.

And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They’ll talk to other people, talk to other AIs. This is what we’re going to do with Pi.

That’s a huge shift in what technology can do. It’s a very, very profound moment in the history of technology that I think many people underestimate. Technology today is static. It does, roughly speaking, what you tell it to do.

But now technology is going to be animated. It’s going to have the potential freedom, if you give it, to take actions. It’s truly a step change in the history of our species that we’re creating tools that have this kind of, you know, agency.

Finally, someone is talking out loud about the same things I’ve been on about for … too long: a massive shift in how humanity interacts with computing, from keyboards and poking at tiny mobile screens to the one thing we’re naturally quite good at – dialog. I’ve called this the “conversational interface” for more years than I care to document, and I share Suleyman’s excitement about what this might mean for society. But alas, the interview never gets to the heart of the matter: the data rights model underpinning this shift to a conversational economy.

Instead, the interviewer rightly presses Suleyman on the potential downsides of an AI having “agency” – shouldn’t we regulate it? It’s here that I find Suleyman’s answers get a bit … optimistic. “I think everybody is having a complete panic that we’re not going to be able to regulate this,” he says. “It’s just nonsense. We’re totally going to be able to regulate it. We’ll apply the same frameworks that have been successful previously.”

Er…the history of regulating digital platforms, particularly here in the US, is notoriously ineffectual. Suleyman is pressed (and fact-checked) on his assertion that “We’ve done a pretty good job with spam. You know, in general, [the problem of] revenge porn has got better, even though that was in a bad place three to five years ago. It’s pretty difficult to find radicalization content or terrorist material online. It’s pretty difficult to buy weapons and drugs online.”

Damn, I bet Suleyman will wish he hadn’t uttered those words. In any case, he and most other leading AI executives are begging national and international regulatory bodies to quickly pass frameworks for AI regulation. And for his part, Suleyman seems to think they’ll be up to the task.

I tend to disagree. Not because I think regulators are evil or stupid or misinformed – but rather because a top-down approach to something as slippery and fast-moving as generative AI (or the internet itself) is brittle and unresponsive to facts on the ground. That top-down approach will, of course, focus on the companies involved. But instead of attempting to control AI through reams of impossible-to-interpret regulation directed at particular companies, I humbly suggest we focus on regulating the core resource all AI companies need to function: our personal data. This is a thesis I’m currently working up – and one I’ve written about extensively over the past two decades – and it may well prove the most flexible and effective approach. It’s one thing to try to regulate what platforms like Pi or ChatGPT can do, and quite another to regulate how those platforms interact with our personal data. The former approach stifles innovation, dictates product decisions, and leads to regulatory capture by large organizations. The latter levels the playing field and puts the consumer in charge.

More on that in future posts.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.
