Former DOD Head: The US Needs a New Plan to Beat China on AI

In an interview with WIRED, former secretary of defense Ash Carter discussed how to build morality into AI—and make sure other countries do too.
"I hope we can find a way to have it both ways, where we can do business with [China] in some areas, but there are going to be other areas where there are two tech ecosystems."Photograph: Yasin Ozturk/Getty Images

On Wednesday, I hosted a discussion with former secretary of defense Ashton Carter, who is now the director of the Belfer Center for Science and International Affairs at the Harvard Kennedy School. The conversation was part of WIRED’s CES programming, which tackled the biggest trends that will shape 2021, from medicine to autonomous driving to defense. We took questions from viewers in real time. The conversation has been lightly edited for clarity.


Nicholas Thompson: You've had an incredible 35-year career in the US government and in the private sector, working for Republicans and Democrats, always trying to identify the most important issue of our time, the smartest solutions to it, and the fairest ways to think about it.

When you were secretary of defense, you had a very rational policy that in every kill decision, a human would have to be involved. So if there were an artificial intelligence weapon, it could not make a decision to fire from a drone. And the question I've been wondering about is whether that justification remains the same for defensive weapons. You can imagine a future missile defense system that can more accurately than a human identify that there is a missile incoming, more accurately than a human aim the response, and more quickly than a human make the decision. Do we need to have humans in the loop in defensive situations as well as offensive situations?

Ash Carter: Well, defense is easier in the moral sense than offense. No question about it. By the way, Nick, you used the phrase “person in the loop.” I don't think that's really practical. It's not literally possible, and hasn't been for quite some time, to have a human in the decision loop. What you're talking about instead, or what we're both talking about, is how you make sure that there is moral judgment involved in the use of AI. Or, said differently, if something goes wrong, what's your excuse? You stand before a judge, you stand before your shareholders, you stand before the press, and how do you explain that what went wrong was not a crime or a sin? And so let's be very practical about that.

If you're selling ads, of course, it doesn't matter that much. OK, I pitched an ad to somebody who didn't buy anything—type-one error. Or I failed to pitch an ad to somebody who might have bought something—type-two error. Not a big deal. But when it comes to national security, the use of force, law enforcement, or the delivery of medical care, the stakes are much too grave for that.

Now, within defense, offense is the most somber responsibility, and defense less so. For example, nuclear command and control is heavily human-loaded, starting with the president of the United States. I, as secretary of defense, had no authority, and the people below me had no authority. To the extent it's possible, we were disabled from launching nuclear weapons; we required a code from the president. I carried a code myself that would authenticate me, because that's the gravest act of all.

A simpler case is launching an interceptor missile at an incoming missile. Now, if that goes wrong or you do it by mistake, a missile goes up in the air and explodes, and you've wasted some money, and it's embarrassing, but there's no loss of life. That decision the president delegated to me, and I in turn delegated it to the commander of the United States Northern Command, General Lori Robinson. And it was OK that it went down to a lower echelon, because it's a less grave act. In between is the authority to shoot down an airliner, which is also grave, but the president did delegate that to the secretary of defense, so that was kind of an in-betweener, and I bore it on my shoulders every day. And you'd be surprised how many times that happens, where an airplane goes off course, the radio's not working, it's heading for the US Capitol, and that's a no-win situation. So it does depend.

But I think what we're talking about here in general is a problem the whole audience has when they're using big data and AI, which is: How do you behave ethically? There's no point in just preaching at people; they need a practical way. So I hope we'll break it down: How do we actually do this? Let's talk about how and not why, because we all want to be moral. We have to be, or our businesses will crash and our reputations will crash. And if we have any conscience, we ourselves will crash.

Thompson: Let's talk about that, but let's first discuss the airplane question. So, there is an airplane that is potentially headed towards the White House or Pentagon. When you were secretary of defense you would get a certain amount of information: This is the plane's speed, this is where it is headed. You would presumably get a readout of the key facts. In the future, it'll be much more efficient to have automated systems that are tracking every plane and can take signals to identify what is likely a terrorist threat. Do you think there would ever be a moment where a very fast, very trusted, explainable AI system would have the authority to shoot down a plane without the direct authorization of a senior official?

Carter: I would not like that. And the timetable is very tight. Remember, I'm up in the middle of the night. I'm on a secure phone. The FAA is on. The FBI is on. The CIA is on. My commanders are on. It's very tense. And sometimes there are only minutes to act. But there are minutes, not milliseconds.

In the other case, it's conceivable that there are things you'd have to do where the action would be faster than it is practical to have a human being intervene. So how do you get morality into that, if everything is happening too fast? The answer to that is you have to go back to the algorithms, the data sets, the design process, and your design criteria and say, are they defensible? I can't get myself in the decision at the moment, but I have to convince myself that what is going to happen is morally defensible. Not impeccable, but morally defensible.

So, how do you do that? You go to the algorithms. With some of these machine-learning algorithms, it's very obscure how they arrive at the recommendations they make. You need to do a certain amount of deconstructing of that, and you need to tell your design team to treat that as a design criterion; otherwise you won't have a product you can use for a serious application. People really need to scrutinize data sets. More is not better; you really need quality. In the end, you trace these things back, and they've been human-tagged somewhere along the line, so there's a possibility of error there. You need a testing protocol and a design protocol. Take flight software, for example. We had a bad example of that in the 737 Max—software that wasn't AI but was complicated, that was designed to do something very serious, and that screwed up. So how do we do that right? What's the right way to do it? The typical way is to have competing design teams, or an audit team that runs exhaustive testing.

All of these things don't prove that you won't do something wrong. But it sets you up to have a moral application of AI. We need to get to a point where you can apply something and credibly say that even if this makes a mistake, I can defend the morality of what was done. That's the important thing. As secretary of defense of the United States, I have to act morally. I have to act ferociously to protect our people. But they also expect moral action. And sometimes, if you can't build that into the moment, you have to build it into the design.

At any rate, you can't go out there and say to a reporter, “People were killed last night, turns out needlessly. But the machine did it. I can't really tell you why, the machine did it.” They’d crucify me, and they should. So, practically speaking, for us as technologists, it's doable. You go to the algorithm, you go to the data set, you go to the design process, and you make the combination of those something that you can take to a judge, to the public, to the press, and it sounds like a reasonable excuse. And people do that every day. After a fatal car accident, they go before a judge and say, “This is very unfortunate. But here is the circumstance. I wasn't drunk, I wasn't going too fast; it just happened.” And we all accept that bad things can happen with machinery. What we don't accept is when it happens amorally.

Thompson: So if I understand this correctly, the way to think about it is that there will be applications where AI and machine learning will be hugely important to warfare. They will be more on the defensive than the offensive side. They'll be more important in situations where we have to make very fast decisions, not slow decisions. And they will be based on algorithms that we have studied and understand and can explain in a courtroom. And they'll be based on data that we have vetted and that we trust. Is that roughly the framework?

Carter: Exactly right. Just a good engineer's way of saying, “Let's not just talk about ethics, let's figure out how to build it.” And you've just done it in a nutshell.

Thompson: So let's move to another big topic. You've written that dictators will shape the world's norms in artificial intelligence if the United States doesn't. What are the biggest and most dangerous voids in AI policy that you want to see the US fill in the coming years?

Carter: People ask me, “Are we going to make a splinternet? Are we going to break the internet?” China has already made that choice. Xi Jinping has decided, and he says it openly, that he aspires to be independent and to operate in China's own system according to its own rules, which do not reflect our values. We're not going to change his mind, and I don't think we ought to change our values. So what's up for grabs? Doing it ourselves, which we've been talking about, but also the rest of the world. Remember, China is half of Asia. And if we have the other half of Asia working within a system that we believe in—rule of law, so that we can have profitable companies that can be sure their contracts are enforced; free movement of people; free movement of ideas; the progress of tech; justice, which in recent days has had to be staunchly defended—if we have that in the rest of the world, in Europe and the other half of Asia and so forth, it's still possible for us to be an example and for them to be part of the tech ecosystem that we're in.

But let's be realistic about China. Of course, Russia's different. Russia is a downward-heading place demographically and economically, and it would be of diminishing significance were it not for its determination to be a spoiler. And weak spoilers are dangerous, because all they have is their ability to spoil. They have that with their nuclear weapons obviously, but I'm talking about cyber. But they don't constitute another pole in thinking about tech. China sadly does.

I've been working with the Chinese for 30 years, and I hoped at one time that they'd turn out differently. We all did. My hope for that evaporated a long time ago. That's not the way it's going to be. If you just read what he says, Xi Jinping wants to have his own system because it's a communist dictatorship and he wants tech to enable that. But that's not our system. So we need a competing system. That doesn't mean we're not going to trade with China. I hope we can find a way to have it both ways, where we can do business with them in some areas, but there are going to be other areas where there are two tech ecosystems. I think we're stronger now. I think we've got a better system. So I'm not worried about it. It's not ideal, but that's the way it's going to be. They've decided.

Thompson: There's an argument that many people make, and that has quite a bit of appeal, which is that the US needs to reverse course. Right now we're splitting from China, ripping Huawei equipment out of telecommunications networks. Instead, the argument goes, we should offer an olive branch and work together to develop norms for AI, share data sets, and be much more cooperative. But if I'm hearing you correctly, your counter is that it's a beautiful idea but certainly not appropriate as long as Xi Jinping runs the country. Is that right?

Carter: Yeah, that's basically right. You do cooperate where you can. And we do want to do business where it's good for us. But he's decided that he's going to do things his way. And our values when we’re at our best—and, of course, we have not shown our best in recent weeks, that's for sure—are the values of the Enlightenment. The phrase of that period was, “the dignity of man.” Now we would say, “the dignity of people.” Chinese political philosophy is really about being Chinese. Which is fine if you're Chinese, but it kind of doesn't work for the rest of us.

So, for those who say cooperate, I say: where possible. If they think that's going to lead the Chinese to change their approach, I'm afraid history is not on their side, and they're being naive in that regard. At the same time, this is not World War III; it's not even a cold war. We need a different playbook than the one we used during the Cold War. A lot of people ask if this is a cold war. I fought the Cold War, and the Cold War was cleaner. The Soviet Union was a communist dictatorship with which we were locked in a moral struggle, but we didn't trade with it. We built an impermeable membrane around it and didn't want anything to go through. It was a big deal even when we agreed to sell Pepsi-Cola in the Soviet Union.

We do want to trade with China despite this. So we need a new playbook, and that playbook needs to have a defensive side—export controls, the kind of stuff you're talking about with Huawei, limits on ownership, countering espionage, and all of that. But the biggest part is on offense. We need to be good. We need to be better. We need to lead in AI. We need to fund AI. We need to work with others toward a common tech future where technology moves fast but the arc is bent toward good, not toward bad or repression.

So we need a completely different playbook, and we haven't had that in the last four years. We've had groping around, tariffs, and stuff like that, which is generally in the direction of self-protection. But it's sloppy. We need a cleaner, more thoughtful playbook. I think we're going to get one out of the incoming administration. I've known all these people for a long time, and it's a hugely qualified group of people, including in tech. But we need to help them design that playbook, and that's doable.

What I do stress is that we need the offensive side. I don't mean the offensive side of attack—I mean the side of strengthening ourselves, not just trying to diminish them or protect ourselves from them, but be better ourselves. That's the key to the new playbook.

Thompson: My hope is that someone, maybe someone in government now, will write a new Long Telegram or X Article laying out an updated strategy of containment for today's China.

Carter: There are people who have that capability, but they need the help of the tech community. One of my big themes is to build a bridge between us in the tech community—and I consider myself part of that, I'm a physicist, I love technology—and the government. And I know that that can be hard. But that's the key to a world we all want, building those bridges.

Thompson: Let me go to an audience question that gets at that. How much recognition is there within the Department of Defense that they are a small, demanding customer that may not be seen as top tier? I know that perception is something you worked very hard to change.

Carter: When I started out, my first job was for Caspar Weinberger, and Star Wars was the big thing then. Most of the significant things happening in technology at the time happened in America, and they happened because the government made them happen—the internet, GPS, et cetera. That's no longer true, and I certainly recognize that. We are the biggest dog in technology; we spend more on R&D than all the big tech companies combined. But you're right that to a typical tech company, DOD is a relatively small and kind of problematic customer. We have, to our detriment, some onerous procedures, and a lot of companies just say, “This is too much of a pain in my neck to be worth it.”

So what that means for defense leaders is that they have to play the game that's on the field, and that means making themselves attractive to tech. One way they can do that is to fund R&D, so that cool stuff with broader applicability is sponsored by DOD. Another is to be an early adopter, buying stuff first, when it's more expensive, because they can afford it and they've got to do whatever they've got to do. And that helps lift the boat and protect the tech sector. So there are things that defense can do, but it can't do them if it's a little island. It needs to be involved. And it needs to have the humility to understand that it's not the controller it once was. I've seen this, I've lived this my whole life, and so I know it isn't like it was when I started out.

Thompson: Let me go to another excellent question. How do we minimize the risk of our AI defense systems being compromised by a foreign power, particularly given all the recent hacks of government systems? Do you have a framework for thinking about something like this?

Carter: Yes, it's the same as cyber protection in general. And here again, the defense sector isn't really all that different; we have to do the same stuff. It tends to boil down a lot to hygiene and just doing the right thing. Most of these things start with sloppiness. That was true of SolarWinds, and it's been true of most of the big tragedies—somebody clicked on an attachment, all that stuff.

Another thing we lack is technical. I'm involved with lots of companies that are trying to do their own cyber management, just as I'm involved with the Defense Department and the government trying to protect their own information systems, and the vendors tend to come with modules—module A does this, module B does that, and so forth. Then you have the problem of systems integration. For your typical government agency or nontechnical company, it's really hard to get the talent to do that right. There are companies that tell you they're integrating things, but generally people want to sell their own stuff. So what's really lacking in the overall ecosystem, I think, is good, trusted, branded systems integration. The companies I'm involved with know perfectly well that this is a huge vulnerability. Even if you're making widgets, if your company's data is compromised, your tradecraft and your customers are compromised in some way, and you're in big trouble. That's a big business risk. But unfortunately, a lot of what you have to do to protect yourself is pedestrian.

One bright spot—we are talking about AI and some of the problems and challenges of AI—but this is a place where AI can help. Because even as AI enables the exhaustive, rat-in-the-maze-type exploration of your attack surface, it also provides you—by essentially the same technology—exhaustive defense: constant perimeter surveillance, constant detection of irregular activities and intrusions. So AI can be your friend in cyber defense as well as your enemy.

Thompson: Absolutely. Let's go to another audience question. This is tying back to what you were talking about at the beginning. When something goes wrong, who has accountability? Who is responsible for the flaw in the AI? The programmer, the top of the organization, anyone else? Whoever it is, don't we need human accountability at some specific level?

Carter: Yes, we do need human accountability. And I think the person who will ultimately be held responsible, practically speaking and probably morally as well, is the person who applies it. So if you're the business owner and something goes wrong, you can try to blame the vendor. But practically speaking, it's your business and your reputation that are smoldering. As the customer, as the one running the application, you have to be demanding of the vendor and recognize that you can't pass the buck. You're going to get criticized, and rightly so, if your enterprise does something wrong. Pointing to the vendor isn't going to work, because people will say, “Wait a minute, you picked the vendor, you knew what they were doing, it's your product.” So you have a chain of accountability. If you're a customer or an applier, you need to be demanding with your vendors: “I need to be able to defend this. So tell me, algorithmically, or in terms of data set integrity, or in how you tested and evaluated it, how I can defend the rectitude of something I do with your product.” That's my ask of the tech community: break the problem down into engineering pieces and build solutions to them. But if you're a business that's applying it, it's on your back.

As secretary of defense, when we screwed up, I told the president, “I am your responsible official. I'm number two in the chain of command. In the eyes of the public, I have nowhere to hide and nowhere to run, and I shouldn't. You expect me to defend you, but you also expect me to do it morally.” And that's true for every business leader in the whole country. They can't just point to tech; they have to say to tech, “I need a product whose actions I can defend.”

Thompson: Thank you so much for joining us today, Ash. I hope that people in the government were watching. I expect that the new administration is thinking deeply about this.

