Vint Cerf: Maybe We Need an Internet Driver’s License

Vint Cerf is one of the most recognizable figures in the pantheon of Internet stardom – and as he enters his ninth decade of a remarkable life, one of its most accomplished. I had the honor of interviewing Dr. Cerf last month as part of the “Rebooting Democracy in the Age of AI” lecture series hosted by the Burnes Center for Social Change at Northeastern University. The conversation also served as the kick-off to my own Burnes Center lecture series, “The Internet We Deserve,” where I’ll talk with notable business, policy, technology, and academic leaders central to the creation of the Internet as we know it today (last week I spoke with Larry Lessig).

Universally recognized as one of “the fathers of the Internet,” Cerf’s many awards include the National Medal of Technology, the Turing Award, the Presidential Medal of Freedom, the Marconi Prize, and membership in the National Academy of Engineering. Dr. Cerf received his PhD from UCLA, where he worked in the famous lab that built the first nodes of what later became known as the Internet. He has worked at IBM, DARPA, MCI, JPL, and is now Chief Internet Evangelist at Google. Cerf has chaired, formed, and participated in countless working groups, governing bodies, and scientific, technological, and academic organizations. 

Below is a transcript of our conversation, which, caveat emptor, is an edited version of AI-assisted output. In our wide-ranging conversation, Cerf floats a number of ideas that left me pondering the future of the Internet, notably the concept of an “Internet Driver’s License,” the possibility that the singularity is closer than we might think, as well as a proposal to rethink anonymity on the Web. Come for the brainpower, but stay for Cerf’s impression of Sigmund Freud. For that, however, you just might want to view the video, embedded above.

—-

John Battelle  Vint, thank you so much for being here. It certainly feels like we are in another significant inflection point for the Internet as a platform – broadly under the header of “artificial intelligence.” It feels that it might be similar to other milestones we’ve seen in tech over the years – graphical user interfaces, the World Wide Web. Others have said it’s as significant as the iPhone. But how big a deal is AI? Is there a moment in the history of computing that is as significant as the one we’re in now?

Vint Cerf   I absolutely believe so, for a lot of different reasons. And let me put this into two pieces of context. One of them is the Internet itself, and the applications that are running on it right now. For many years, John Perry Barlow had, you know, a beautiful vision of an open Internet where everybody shares information with each other, and we all sing Kumbaya. We clearly know in 2024 that the situation is a little more complex than that – that there is harmful behavior on the network. It’s amplified by various and sundry applications, including social media, among others, and we have to do something about that. Nation states are starting to recognize this; they’re trying to pass laws or international treaties. I just got off the phone after an hour of discussion about a cybercrime treaty. So it’s well recognized that this system is – I won’t say out of control – but I would say that there’s increasing concern about the potential hazards and harms that this online environment creates.

So that’s one kind of state of affairs. The second thing concerns the state of computing right now. For many, many years, two of my colleagues, Dave Patterson and John Hennessy – who’s the Chairman of the Board of Google – in their independent roles at Berkeley and Stanford, came up with what’s called the reduced instruction set computer, the RISC design. And that has been the workhorse of the design of computer chips for years and years. We have reached the point now where specialization and heterogeneity are our friends, and we are deeply dependent on specialization in order to achieve increased gains in computing capability. What does that translate into? Well, in Google, it translates into having a variety of computers in our data centers: conventional central processing units, specialized graphical processing units, Tensor Processing Units for specialized inference in machine learning. Someday, maybe, quantum processing units – we’re still trying to make those function in a useful way. So we’re seeing a transformation in computing capability and in the types of problems these machines are well adapted to solving. That’s super exciting.

The large language model aspect of machine learning has triggered yet another highly speculative view of what machine learning and artificial intelligence can do. The problem that we have right now is that the way it does it is unpredictable. And in fact, you hear terms like hallucination, where factual material is conflated into counterfactual output by these extraordinary programs – machine learning mechanisms that are not yet reliable for all possible applications. So it is a super exciting time for almost anyone who’s interested in computing and communications, to say nothing of the fact that we have more access to the Internet than ever, thanks to Starlink and other low Earth orbiting satellites, in addition to the mobiles that we carry. This is a time of enormous potential, and I’m quite excited about the rest of this decade.

And could you compare it to a time in history? Would you say it’s comparable to the dawn of the web, or more comparable to the dawn of programmable computers themselves?

I would say the dawn of programmable computers – that’s where we are. That was the beginning of an enormous period of elaborate invention and development. Software is sort of the ultimate clay; there’s no limit to it. It’s only limited by your imagination and ability to program. We have more capacity than we ever had before – in terms of scale, transmission data rates, computing power, memory, and everything else. So from my point of view, the sky’s the limit for new applications. So I’m as excited in this period as I would have been, I think, when computing first rolled out in commercial form in, say, the 1950s.

That’s a date that hasn’t come up much – the 1950s – when people think about the history of the Internet! I want to ask you to reflect on what you think the Internet’s greatest impact has been on democracy and how we govern ourselves?

Many of us had hoped that democracy would be enhanced as a consequence of the Internet’s existence, all the devices it serves, and our ability to reach information at our fingertips, which I still think is astonishing. I was sitting at a dinner party last night in a fancy restaurant; a question came up, somebody pulled out their mobile and did a little Google search, and got the answer. And there we are – you know, 50 years ago, you would never have imagined it was possible to do that. However, we’re also faced with all of the potential hazards of misinformation and disinformation, and the fragmentation of discussion into these – what do you call them – confirmation bias vacuoles. That is, groups of people that believe in certain things, and then only look at the evidence that reinforces that belief. It creates polarization, to say nothing of all of the various real harms that take place, whether it’s ransomware, or malware, or denial of service attacks, or other kinds of fraud and abuse and everything else. So we are in a fairly contentious environment right now.

The natural reaction is to try to get control of that somehow. So that’s why you see a lot of legislation being passed and debated and discussed. I worry that we don’t have a depth of understanding of the dynamics of all this technology that would inform our choices of legislation and law. Even if you pass a law, can you actually enforce it? Back in the early part of the 20th century, there was a little French village that had a lot of vineyards. And the mayor decided to have a law passed that said no UFOs were allowed to land in the vineyards. So 50 years later, they had a great celebration that the law was effective and no UFOs had landed in the vineyards in the past 50 years. This is a classic category error on the part of the legislators. So we still lack a firm understanding of what kinds of laws make sense and how they’ll be enforced.

In an earlier conversation you mentioned something that I wanted you to unpack. You said you advocated for focusing regulation on use cases, instead of the technology itself. Can you say a bit more about what you mean by that?

This came up in the context of artificial intelligence and machine learning and large language models. These are startling inventions. Some of them do things that we never imagined they could do. For example, one of our colleagues at Google had just presented a prompt to a chatbot saying “Please reverse this random string of characters,” which it did. And then it said, “Oh, by the way, here is the Python program that does that.” Now, none of us were expecting it to go write a Python program. And it worked. So there are things that these complex systems can do that we didn’t anticipate. The problem here is that we don’t understand this deeply enough to write rules about how they run and how they work. The better tactic, in my opinion, is to ask: well, what are people using these for? If it’s pure entertainment, that’s fairly low risk. You know, I asked one to write a story about an alien that got into my wine cellar and was surviving on my Cabernet Sauvignon, and it wrote a little story about a Martian who’d gotten into the wine cellar. I consider that to be innocuous. Now, there are arguments about the use of these technologies for generating entertainment, which might run into some intellectual property problems. For example, if you use a famous person’s face or voice without their permission, there are some questions about whether you should be allowed to do that or not. Setting that aside, the entertainment side is relatively innocuous. But as you start working your way up the risk-factor layers, when you get towards the top, you’re talking about financial advice, medical diagnosis, and medical treatment. The last thing in the world you should do is use a chatbot to get advice about investing, unless there’s some evidence that the party who put that thing together has actually taken significant steps to prevent it from giving you bad advice.
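(The program the chatbot produced wasn’t shown in the talk; as a hypothetical sketch, the task it solved is essentially a one-liner in Python:)

```python
def reverse_string(s: str) -> str:
    """Return the characters of s in reverse order."""
    # Slicing with a step of -1 walks the string backwards
    return s[::-1]

print(reverse_string("Xq7#mZ2p"))  # prints "p2Zm#7qX"
```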

Or perhaps that organization’s willingness to take some liability? 

Exactly. And so that raises really interesting questions about who’s liable for what – just like the argument about when the self-driving car runs into something, who’s liable? 

A lot of the discussion around these topics of how and should we regulate AI tend to focus on the negative case. But I wanted to ask you how you think these new technologies might help us be better citizens, better at governing ourselves? How can AI strengthen democracy? 

Well, boy, that’s a good question. Again, the worry that I have is that if you use an artificial intelligence – a large language model – to ask questions and get answers back, the problem I foresee is that you may not be confident that the answer you got back is valid. And so we still have a lot of work to do to find a way to verify that the output from this bot has validation – what is the provenance of the information used to compose the answer? Just to give you an example of the trickiness here, I asked an AI to write an obituary for me, thinking that was a well-defined form. And it did produce an obituary, as you would expect: “We’re sorry, Dr. Cerf passed away…” It gave a date, which I found slightly unsettling.

Was it in the future? 

Yeah, well, but not far enough! So then it produced a rendering of my career. But while it was doing that, it gave me credit for stuff I didn’t do, and it gave other people credit for stuff I did, which was disappointing. And then when it got to the remaining family members, it made up family members that, as far as I know, I don’t have. So the problem we run into is that if we would like to use AI as an important source of information – to exercise our democratic right to vote, among other practices – we really need help figuring out whether or not the information we get back is valid. And of course, the big problem that humans have is that confirmation bias is quite natural – if the thing comes back packed with stuff that I believe in, even if it’s wrong, you know, I’ll accept that. So we still have work to do to figure out how to make these kinds of systems produce reliable and verifiable output.

If you asked a question, and you got back an answer, you should be able to ask more questions to verify. Like, what information was used to compose the answer? Can you show me the websites that might have been sources of content? Are there any corroborating assertions from sources that I would trust? So there are a bunch of things that I wish that we would do. Now, a lot of people don’t want to do that, because it takes work. I now believe that critical thinking may be an important skill that we should employ in order to make use of these advanced mechanisms in our pursuit of democracy. But we need to be smart about the way we use the data.

Google hangs its hat on citation, an almost academic citation framework. A problem so far is the opaqueness of these AI systems. I think that’s what you’re referring to. I’m a big supporter of figuring out how we can get more transparency into the system.

People need agency in order to exercise this critical thinking. So the designs of these systems need to take into account a desirable property, which is that they are capable of responding to you by telling you what the provenance of the content is. And you can then decide, perhaps on your own, whether or not you accept the provenance as an indication of the accuracy of what comes back.

I harp on this critical thinking thing all the time. And I finally realized you could boil this down to an Internet driver’s license. Think about what we do with teenagers: we insist that we run them through these (driving) training programs, and we show them what car crashes do, you know, to people and to property, and so on. We warn them that other drivers are less competent than they are and that they should drive defensively, and all this stuff. We even give them a test before we give them a driver’s license that lets them drive a car. Now, I’m not arguing that people should necessarily pass a test before they can use the Internet. But I do think an analog of the kind of training we do to help people be safe on the road could be helpful to people who are going to use the Internet, so they are aware of the risk factors and how to cope with them.

A sort of literacy, if you will. There’s a follow up that I noticed from one of the esteemed members of the audience, Esther Dyson…

A dear friend.

If you could go back in time and change something – a decision that was taken or a path that was embarked upon as it related to something in the Internet’s structural or technological or social history, what might it be?

Well, the first obvious one is that I would start with 128 bits of address space instead of 32 bits. But I can just imagine, you know, my 80-year-old self whispering in the ear of my 30-year-old self: “You’re going to need 128 bits of address space.” My reaction back then would have been: are you out of your cotton-picking mind? That’s more than there are electrons in the universe. And of course, at the time, we went through some reasoning and said, you know, for development purposes, the 32-bit address space should be more than enough. We said we’re not going to have more than, say, 256 of these experimental networks anyway. And if it ever did get out, you know, that’s two per country, and that ought to be enough for competition. And then we allowed for 16 million computers in each network. At the time, 1973, computers were these big, honking pieces of equipment in air-conditioned rooms, and they didn’t get up and run around, and so on. So I don’t think I would have been able to sell the idea of a 128-bit address space at the time we were doing the design.
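(The arithmetic behind that 1973 design choice is easy to check; a quick sketch – not from the talk – of the 8-bit-network / 24-bit-host split of a 32-bit address, next to IPv6’s 128 bits:)

```python
# IPv4's original 1973 design assumption: 8 bits of network, 24 bits of host
networks = 2 ** 8            # 256 experimental networks
hosts_per_network = 2 ** 24  # 16,777,216 computers per network
total_v4 = 2 ** 32           # ~4.3 billion addresses in all

# IPv6 addresses are 128 bits wide
total_v6 = 2 ** 128          # about 3.4 x 10^38 addresses

print(networks, hosts_per_network, total_v4)
print(f"IPv6 space: {total_v6:.2e}")
```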

Some people have said, why didn’t you put more security into the system? And once again, I don’t think that would have worked very well. The people who were designing and building and using the system were graduate students, for all practical purposes, and they are not the first cohort you go to to find disciplined use of cryptographic keys. They get distracted by homework and final exams and dissertations and so on. And so even though I was working with the National Security Agency on secured versions of the system, using end-to-end packet crypto – which, by the way, we had to invent – that was not a snap. That was a scenario that made sense for the Defense Department, but it didn’t make much sense at the time we were doing the development. Now, today, that’s a whole other story. And you notice that people who use the World Wide Web almost exclusively use HTTPS for end-to-end cryptography to protect the integrity and confidentiality of the exchanges that are taking place. There’s no doubt that that’s important and needs to be, in fact, emphasized increasingly, to say nothing of digital signatures to authenticate the origins and integrity of the content.

Esther’s question is specific to the Internet Corporation for Assigned Names and Numbers, where you helped get that organization spun up and funded. You were chair of ICANN for many years. What do you see as the greatest success and the greatest failure of ICANN?

Esther was the first chair, and I was present at the board meeting where she was elected chair. And she served the organization well. I succeeded her in the early 2000s. I would say that the greatest success is that it still exists; it has extracted itself from government oversight, which was a thorn in the side of most countries other than the US. And the Internet continues to work: the domain name system works, the IP address allocation system works. I will say that it is a barnacle-encrusted institution. The solution for almost everything is to create a new committee or a new support organization or some new procedure or what have you. It is a very complex and, in some cases, confusing system. But in spite of all that, it is, I think, quite astonishing that the Internet continues to function, in spite of an evolution in which it has grown by about seven orders of magnitude. This system is 10 million times bigger than it was when it was first turned on in 1983, in almost all dimensions: number of computers, number of users, the bandwidth that’s available, and so on. So it’s survived, and we need to give it credit for that. On the other hand, I’ve already alluded to some of the challenges that we see in this online environment – socio-economic challenges, and in particular political challenges. ICANN is not the place to solve any of that; it is not a content-focused organization, nor should it be, in my opinion. It’s there to make sure that the basic naming and addressing infrastructure works, to ensure that the names and the addresses are uniquely assigned. But the application space is outside of its purview, right? It should resist being drawn into that. It’s already complicated enough as it is.

We have a question from Gentry Lane. State-sponsored persistent cyber aggression on civilian infrastructure is rendering cyberspace untenable, and threatens the continuity of critical goods and services. Is cyber conflict an intrinsic component of the Internet?

So I think the answer is not that it’s an intrinsic component of the Internet; it is an intrinsic component of human society. You know, we will find new battlefields. And at this point, the Internet and the World Wide Web represent a new battlefield where conflict can occur. And it occurs between nation states; it occurs between interested parties, criminal elements, and the rest of society. It is a place in which competition takes place among various corporate entities. So it really is another space in which this kind of conflict can occur. Maybe we have air, land, sea, and now we have this virtual environment. And that, of course, is a challenge, because we would like to make it less likely that this space is harmful to be in. And in order for that to be a true statement, we need to have international agreement and cooperation about what is acceptable and what is not acceptable behavior online. And speaking even more broadly – forget about the online nature of it – the Internet, the software world, is enormous. And even without networking, we still rely unbelievably heavily on programmable things. When you think about your mobile, for instance, we rely on it for an amazing number of things. And you get cascade failures that can happen: if I’m going to log in to a service online, sometimes I need to log in to my mobile in order to get a second password or a second authenticator from the site that’s serving me. So I’ve used my username and password, and it says, well, I don’t trust that, so I’m sending you something on your mobile. If you can’t log into your mobile, or if you get logged into the mobile but it doesn’t have service, then that’s not going to work, right? And then you can’t log into your email, and if you can’t get into your email, then you can’t get the message that’s going to save your company from going bankrupt. That, you know, is sort of an extreme cascade failure. But I’m nervous that we rely so heavily on these things.
We really, frankly, wish that other devices besides the mobile could be used to satisfy these various functions – so my laptop or my desktop or my pad or something could be an alternative to the mobile for a lot of these functions. But unfortunately, that’s not the case.

If you could wave a wand and implement one global rule to make the Internet better, or perhaps to govern AI in a way that you think would be beneficial, what would you do? 

Wow, I’m not sure that I can boil it down to, you know, using Harry Potter to make the world better. Let me suggest – this is controversial, and it will be interesting to see if anyone wants to joust on the topic – that I have become increasingly persuaded that anonymity is not our friend in this online world. It is useful and important under certain conditions. But I believe that accountability is becoming an increasingly important element in this space. And the parties who are misbehaving in particular need to become accountable. And in order for that to happen, we have to be able to identify them. We may have to get help across international boundaries, because the perpetrator could be in one jurisdiction and the victim in another. None of this is easy. I mentioned the cybercrime treaty; it’s a very complex agreement, and it’s right now fraught with debate. But we do need cooperation between willing parties in order to hold parties accountable.

Are there any governance models that are emerging right now that you see, for example, there’s some pretty robust ones coming out of the EU, around artificial intelligence that concern you? And if so, why? 

You picked the EU and artificial intelligence, which is a toxic mix. First of all, my view of the EU – with no disrespect intended – is that they tend to look at new environments, new technologies, and they say something bad might happen, we have to regulate it. In America, it’s a little different. It’s sort of like: something bad has happened, we need to regulate it. And so the Europeans tend to get out in front. And in some cases, that’s arguably a useful thing, like the General Data Protection Regulation (GDPR), although it’s my view that those data protection rules have run afoul of law enforcement, because law enforcement needs to know, and the data protection laws say no, you don’t. And so you often end up with these peculiar conflicting regimes that somehow need to be resolved. So I think trying to literally regulate artificial intelligence or machine learning or large language models is premature, because I don’t think we understand well enough what the rules should be to constrain their misbehavior. That’s why, earlier, I was arguing that we should look at the applications and ask whether the parties offering applications based on those technologies should show evidence of safety for the consumers of those applications when they’re in the high-risk space. Now, that might imply, as I think you said, that the party offering those services should be ready to assume some liability in the event that the high-risk thing causes harm.

Well, as long as we’re talking about potential harm, I’ll raise another question which has been hanging out in the chat for a minute, which is related to artificial intelligence, generalized artificial intelligence. The question is, how close are we to the singularity?

Well, it depends on who you ask. And of course, my colleague at Google will tell you that the singularity is nearer. In fact, [Ray Kurzweil’s] latest book is titled The Singularity Is Nearer. He makes a very, very potent argument. I will say that the metric by which we might understand where we are in terms of singularity is not just the number of components in the system. I don’t think we are quite as close to the singularity as I would imagine it. However, what we have seen with the large language models is the capacity to do stuff that we would not have imagined the systems could do. Even if they are faulty at it, they are still doing pretty remarkable things. So the Turing test, as it was previously formulated by Turing, is no longer a very useful test. In fact, I would suggest for consideration that the fact that a large language model can pass the written bar exam or the medical exams does not make a bot a doctor or a lawyer, right? And so this verisimilitude of human discourse that comes out of the large language models should not be misunderstood as the capacity to act in these special professions. And so we now need different and better metrics for assessing the capability of these kinds of large language models, or whatever comes next. I think some people, including me, believe that the large language model formulation is inadequate for doing things that you and I would agree are comparable to human conduct. It can do things that we can’t do – for example, it can handle 100 languages; very few human beings could do that. But that does not mean that it can do some of the things like being a doctor or being a lawyer.

There’s a question from Jay Allen in the audience. You briefly touched on quantum computing. But Jay Allen asks, what will be the eventual effect of quantum computing? 

Well, first of all, I’m very excited about quantum computing. And of course, Google has made considerable investment in designing and building this particular kind of computing engine. These machines are best at certain classes of problems, like optimization. And without getting into a lot of the details, there is one well-known thing they can do, and that’s Shor’s algorithm, which lets them factor large numbers efficiently. The reason that’s important is that our current cryptographic mechanisms have historically used the difficulty of factoring the products of large primes as the work factor protecting encrypted information. Now, we can keep increasing the key sizes to defend against a quantum attack, but eventually that won’t work. And so the National Institute of Standards and Technology has already invited proposals for new encryption algorithms and digital signature algorithms that do not suffer from the possibility of Shor’s quantum attack. And those are already being adopted at Google, for example, in order to protect information now that may need to be protected for the next 25 years. So I’m very excited about quantum computing. In some ways, the analogy I would use is that if you remember VisiCalc, on the Apple II or the Apple II Plus, it meant you could do real-time exploration of a spreadsheet – plug in different values, see what happens in real time. I think quantum computing will allow us to explore solution spaces in real time, so we can crawl around looking for optimal solutions to complex problems. And that will be as exciting as it was to do real-time spreadsheets, for example.
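(A toy illustration – using deliberately tiny primes, nothing like real key sizes, and not from the talk – of why efficient factoring matters: recovering the two primes behind the public modulus is all an attacker needs to reconstruct an RSA-style private key.)

```python
# --- Key generation with tiny primes (real RSA uses ~2048-bit moduli) ---
p, q = 61, 53
n = p * q                        # public modulus: 3233
phi = (p - 1) * (q - 1)          # 3120
e = 17                           # public exponent, coprime to phi
d = pow(e, -1, phi)              # private exponent: modular inverse of e

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypting round-trips

# --- The attack: factor n (trivial by trial division at this size;
# infeasible classically at real sizes, but polynomial-time with Shor) ---
f = next(i for i in range(2, n) if n % i == 0)
p2, q2 = f, n // f
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d_recovered, n) == msg  # attacker reads the message
```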

How quickly do you think things will change on the Internet going forward? 

I wish that I had a really clear crystal ball. I don’t. I will say, though, that from what I have seen, quantum computing is already available in some measure. IBM has a commercial offering already. I don’t quite know when ours will become available. But I’ve seen scenarios that make five years very believable, and maybe even less than that.

I think that the Internet of things, where we put computing capability and communications capability into devices that surround us – I think that’s already here in many respects, and it’s coming fast. That raises all kinds of questions: how brittle is that technology? How dependent is it on the network? What if the network doesn’t work? There’s a great book that everybody should read by E.M. Forster, “The Machine Stops.” It describes a society that looks a lot like ours did during the pandemic: everybody is at home, food gets delivered, we communicate online, we never see each other face to face. In that scenario, the machine stops working, and the question is what happens to that society. So we run that risk, I think – becoming very dependent on these systems, and having them not be resilient enough.

Now we’re describing many of the plot features of the largest blockbuster science fiction movies. And we are at the top of the hour, and I want to honor the time commitment everybody has made for the hour, and especially yours. So I want to thank you very much for joining us and the Burnes Center and Northeastern. It’s been lovely.

I wonder if we can ask everybody to indulge me with one more observation. You and I talked about this, but we’re all worried about artificial intelligence and everything else. (Adopts Freud persona.) And so I’ve been pretending to be Sigmund Freud. What we have now is an artificial id and an artificial ego. And missing now is the artificial superego to control the uncontrollable impulses of the artificial id. Sometimes a cigar is just a cigar. So I think we will be coping with these phenomena over the next decade or so. I can’t imagine a more exciting time, when technology is taking us for a new ride.

(Laughing) Thank you so much for joining us. And thanks to everybody and appreciate all your questions. Sorry, we couldn’t get to every one of them. And don’t forget to sign up for all of the lecture series that the Burnes Center has available to you all. Again, thank you very much.

See you on the net, John. See you there.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.
