The Air Force Wants to Give You Its Credit Card

Will Roper, acquisition executive for the US Air Force, talks to WIRED's editor in chief about making the military more adaptive, the role of AI, and what he worries about every day.

Will Roper, assistant secretary of the Air Force for acquisition, technology and logistics, is something like Q for the Defense Department. He formerly ran the Strategic Capabilities Office, a secretive military skunkworks designed to figure out how to fight future wars. While there, he helped design swarms of tiny unmanned drones; he helped create Project Maven; and he tried to partner the Defense Department with the videogame industry. Now his new job may be even harder: making the Air Force acquisition process efficient.

He’s going to be leading a pitch day for the Air Force this week in New York City, and he spoke with WIRED about that and also where he sees the future of military technology going—from AI to hypersonic weapons to space.

(This interview has been condensed and edited.)

Nicholas Thompson: You're launching a new system very soon to help get startups very quickly signed up to Air Force contracts. Tell me how it works and why you are doing it.

Will Roper: We've got to be able to work with the entire industry base, and even our fastest agreements still take a couple of months to get nailed down. That’s too long for a startup that needs cash flow quickly. And so we really worked hard to hack our system and we’ve gotten down to where we can do credit-card-based awards on a single day. That's what we're going to try to demonstrate in New York. We're obviously not investing in startups—we’re going to put them on projects where they're going to deliver technology to the Air Force. But we want it to feel like they're pitching to a venture capitalist.

NT: And so, specifically, it means that I will come to you with an idea, you’ll vet the idea, and if you agree, you will pay me right there on a Pentagon credit card?

WR: We’ll pay you right then and there. If you have a PayPal account, then you can work with the Air Force.

NT: What is the credit limit on the Air Force's credit card?

WR: We’re going to start by doing awards at $158,000 per transaction. We did a round of practice trials prior to going up to New York City. So we had a hundred companies come and give us ideas, and we were able to award 104 contracts in 40 hours using our credit card swipes. And they’re triaged into phases: phase one awards are small, phase two bigger, phase three bigger still. And rather than do it the normal government way, which is a traditional contract paid in a single, upfront cash award, we're going to do them in installment payments over time. The resounding feedback we've gotten from the small companies is that it's so much better for them to be able to tell investors and stakeholders that they're going to have consistent cash flow over time.

NT: Yep. What is an example of one of the ideas that you agreed to fund?

WR: We've had companies propose AI solutions to help us with predictive maintenance. We want to predict maintenance issues before they occur. Well, that's an AI problem. We’ve gone operational on two systems. The C-5, which is a large cargo plane that moves stuff all around the world, has over 105 algorithms that are operational today, already predicting things we would never have found until after the fact. And the B-1 bomber has 40 algorithms operating; we’re finding issues with landing gear and wheels long before they would be inspected. So this kind of digital oracle is something we're excited about. We're hoping that at Pitch Day in New York we're going to get a lot more companies that are looking at the maintenance side of the house, not just development.

NT: So, for example, there might be a company that has expertise in how wheels fray, it would analyze the data on takeoffs and landings, and it would predict when you need to replace part of the wheel in the C-5?

WR: They don't even have to be experts in our systems. If they're experts in data analytics and machine learning, they really just need access to our data so they can tell us what patterns they see in it.
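
What Roper is describing amounts to standard supervised failure prediction on flight and sensor logs: learn, from historical data, which patterns preceded a maintenance issue, then flag similar patterns before the next one. Here is a minimal sketch of that workflow in Python with scikit-learn; the feature names and data are entirely invented for illustration and are not drawn from any Air Force system.

```python
# Minimal sketch of the predictive-maintenance pattern described above:
# learn from historical sensor logs which flights preceded a component issue,
# then score new flights for risk. All columns and data here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
flights = pd.DataFrame({
    "landing_g_force": rng.normal(1.4, 0.2, n),     # peak load at touchdown
    "brake_temp_c": rng.normal(320, 60, n),         # post-landing brake temperature
    "tire_pressure_psi": rng.normal(205, 8, n),
    "cycles_since_overhaul": rng.integers(0, 400, n),
})
# Synthetic label: a wheel/landing-gear issue within the next few flights.
risk = (0.004 * flights["cycles_since_overhaul"]
        + 1.5 * (flights["landing_g_force"] - 1.4)
        + 0.003 * (flights["brake_temp_c"] - 320))
flights["needs_maintenance"] = (risk + rng.normal(0, 0.3, n) > 1.0).astype(int)

X = flights.drop(columns="needs_maintenance")
y = flights["needs_maintenance"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# In operation, each new flight's sensor summary would be scored the same way:
# model.predict_proba(latest_flight_features)[:, 1] -> probability of an impending issue.
```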

NT: And predictive maintenance is, if I recall, the first place where the Pentagon started using AI, is that correct?

WR: Actually the first place we started was back in Project Maven. That was a project that I started at SCO, which proposed to try to get people out of the business of watching full motion video and use AI to recognize targets. And we started working with a variety of stakeholders in the intelligence community. Google was working with us and, as you know, that ended up going in a different direction.

NT: The reason why predictive maintenance is an early AI application is because there are steady data streams, because it's an area of huge investment, and because it's relatively low risk—at least for a Pentagon operation—in that it doesn't involve combat?

WR: Yeah, I think you've hit all the nails on the head. There's really no downside to doing it. The data is available. There's a mission imperative. It's not sensitive, we're not talking about classified data, and there's no operational risk. And when there's no downside, even the sclerotic bureaucracy of the Defense Department can manage to fast-track those things.

NT: The reason Project Maven was an early use is because AI is so good at image recognition?

WR: Exactly. Computer vision came along quickly because of commercial applications. It was obvious that capabilities already built into smartphones, like recognizing faces in pictures, could be applied to our intelligence, surveillance, and reconnaissance mission. We wanted to get people out of the brute-force task of finding targets, and move them into the business of recognizing targets of interest that were identified by AI. We weren’t trying to shift people out of the loop; we just wanted them doing higher-order tasks.

NT: Was Google's announcement that it would depart when its contract ended something that was a major blow or something that was quickly passed over?

WR: I would say it was a surprise. But there are many companies working in computer vision. In acquisition, we are used to having to switch between vendors based on a variety of issues, so it was really no big deal in terms of what people are trained to do. But I think there's a broader issue, which is just wanting to make sure that we have as many open doors as possible to work with us.

NT: Let’s talk theoretically about Maven and something you said a minute ago, which is that Maven took humans out of the business of just scanning endless video and moved them to a higher-level task. But you can imagine AI doing that higher-level task, and the one above that, and even identifying targets or carrying out a mission. Where do you stop using AI and say that humans have to be involved?

WR: Our policy right now is that lethal decisions are always retained by people. And I don't see that policy changing anytime soon. I think the only thing that would raise the discussion is if there were just simply no way to compete without thinking about other options. But I don't really see a future where humans are going to be out of the loop. We're just going to be increasingly out of the loop on brute-force tasks.

You could imagine right now that AI does a pretty good job identifying houses and cars of different types, and that, in the future, you might go from recognizing that something is a car, to the type of car, to a specific car. But I don't think this nation is going to want to take lethal decisions out of the hands of people. You can't ask AI why it made a choice. It made the choice because that's what its training data said, and that's not a sufficient answer for most people. We want to be able to judge the judgment of someone making a decision, and AI doesn't give us that ability.

NT: Let me speculate for a second more, though. We're not that far off from a time when AI is definitively better at image recognition than a human. And I can totally see the argument why you’d always want a human to make an offensive lethal decision. But what if you flip it around? What if it's a missile defense system? Would you still want a human in the loop to make a decision to shoot down an incoming missile even if we knew that AI would be better and quicker at recognizing it?

WR: I think those things will fall into a different category. I don’t see it as being an issue if you have to hand a decision over to a weapons system when there’s an incoming ballistic missile or cruise missile. But I think when we're making a decision about human targets, there is going to be a desire to hold the deciding entity accountable for what they've done. And in order to even think about having AI move up into that level of judgment, we're going to need a different kind of AI because we'll need to understand not just what it recommends but why it recommends it. Step one for the Air Force is, we've got to learn how to use the AI that exists today smartly. And until we start pushing it into programs and learning what's easy and hard, we're keeping it in the world of speculation. I've joked with the Air Force that you can't spell Air Force without ‘AI.’

NT: Explainability is becoming a pretty hot debate in AI, and there are a bunch of people, very smart people, who say, you know, explainability is an unfair standard. If you ask humans why they made a decision, they can give you a story, but it might not really be why they made the decision. And so if we demand explainability from our AI algorithms, A) we'll be much slower, and B) we might be holding them to a standard beyond even the one we set for humans.

WR: No, I agree. I’m glad the researchers are working on explainability, but there's no guarantee it will happen. So maybe rather than explainable AI or auditable AI, maybe it's just AI that can do research and fill in its training set when it makes mistakes, an AI that continues to learn and research. We've got to get something that goes a level deeper than simply giving us the best pattern match, and I'm glad that our research labs are working on it. I'm glad that commercial industry is working on it.

NT: And how much do you worry that having burdens of explainability, making sure that humans are always in the loop, will slow down United States military advancement? And that, if we're setting all these rules and standards and requirements, and China or Russia is not, they will press ahead in this fast-moving technology?

WR: I worry every day, Nick. Our long-term competitiveness is my No. 1 worry, and it's the pace of technology change that drives that worry. We're not only in a competition with other nations; we're in a period where technology changes at a rate it never has before. And the development system that we currently use in the Pentagon is simply a Cold War system. It moves in decade-long increments, and we now need to be able to make changes on a yearly basis. So I worry about anything that gives us an excuse to wait another year before we take this seriously.

The whole focus I brought into this job is simply trying to accelerate the pace at which we buy and build things, which is a huge undertaking. But I've been pleased so far with how the Air Force has been able to accelerate. Step one: Let's get airmen out of brute-force tasks and get our highly trained, highly capable airmen into higher-order thinking. That ought to be a sufficient first step to get us in the game. To your point, it's the first step in a long journey, and we can't get tired this soon.

NT: And what is the most impressive AI you've seen from Russia and China? Are there specific advances that you've seen that make you sleep even less well?

WR: Well, I can't comment on the specifics of any country. I will say that China's announcement of its megaprojects should give us pause. If another nation sees the importance of AI for its economic competitiveness, its military competitiveness, we have to at least match that seriousness, if not eclipse it.

NT: I mean, it seems like the whole way the Pentagon works, with specific, precise commands being followed in specific, precise ways, is pretty complicated for AI.

WR: It is. I mean, the system that we've inherited out of the Cold War is accustomed to being able to forecast the threat, identify its strengths and weaknesses, and develop the countermeasures to them. That's how the Cold War was won. And as much as people knock the system, it did win the Cold War. So we can say it was sufficient for the challenge of its time.

But now, let's look at this century, let's look at this decade. Would you believe it if I told you what the 2030 threat to the Air Force is? Would you believe me? You probably wouldn't, because technology is changing so rapidly that the threat could go in a variety of ways. Technologies we don't even know about today can mature. AI is a capability that evolves the more you use it, so the longer we wait to field it, the longer it's going to take us to start evolving it to be better, which erodes our competitiveness against an adversary that has the forethought to do that. AI is a wonderful capability to make you more reactive without having to build a completely new system to get a new capability. We've got to get into the game now and make opponents react to what we're doing, so that we always have that first-mover advantage. It’s the only way I see to be able to stay ahead in a future that's not predictable.

NT: So the challenge for you, and for the department more generally, is to not necessarily be more accurate in predicting precise threats but to make the whole Defense Department, and the Air Force specifically, more adaptive. That way, whatever the threat evolves into, we can respond better.

WR: Yes. Maybe in the future AI is going to be the most significant technology for militaries, maybe it's quantum systems, maybe it's synthetic biology. It could be any of those things. There's strong evidence that all of those are going to be game-changers in the future. But can you predict which one is going to be the game-changer first? Well, if you can, then come work for the Air Force! We need clairvoyants in our camps. But if we can't, then we've got to be more adaptive than any military. Whatever the technology is that comes out of the commercial world, we've got to be able to take it, apply it, and get it into the hands of people who can use it. It's one reason why we are working on things like Pitch Day.

NT: So let's talk a little bit about the threats that could potentially face the Air Force in 2030. For example, will there be self-flying fighter jets?

WR: Oh, I hope so. That’s something I have dearly hoped to be able to push and start while I’m in this job. I think we're going to have to explore autonomy everywhere. And I don't think that the future Air Force is likely to be an Air Force of only unmanned systems, again because I think autonomous systems are going to be able to do certain things well, and people are going to be able to do different things well, and teams of them together will do things well.

NT: So do we have plans underway to develop the kind of planes that would be piloted by an AI system where you don't need a seat?

WR: So we've got a group working for us, and they're working on a program called Skyborg—cute name—that is exploring that concept. What do smaller, unmanned tactical air vehicles look like? How should they be built? How should we integrate them with the F-35, for instance, which is able to network with other systems? I think it'd be pretty cool to explore an F-35 that is able to control small tactical vehicles that are around it or ahead of it.

NT: And so the idea is that either you can have a fleet of these or you can have them attached to the current F-35 and you control them through sensors inside the F-35, or something else?

WR: Absolutely, that's what we're thinking. The F-35 is really more than a fighter—and we don't talk about this very much, but I wish we did—in that it has wonderful sensing, computing, and networking capabilities. It's able to see things other systems can't see, and it can share that data; it's also able to connect with other systems through protocols called Open Mission Systems, OMS, and UAV Control Interface, UCI.

NT: Fascinating. Where are we on hypersonic weapons? Both in developing them and in being able to defend against them.

WR: We've come a long way. So I think that was week one in the job for me. I had been pushing the Air Force when I was at SCO to use mature technology from OSD [Office of the Secretary of Defense] hypersonic programs to accelerate their programs. Don't build it again, use what's already worked. And now that I'm the Air Force acquisition executive, those programs report to me. So it's been great; the team is doing a great job with the acceleration.

The one that I mentioned is called Hacksaw. It’s the Hypersonic Conventional Strike Weapon, and it is on the path to being the department's first operational hypersonic weapon. We are 22 months away from full operational flight test, with early operational capability one year after. From the time you test, there are other things you have to do—certifying, training—before we declare it a capability. But 22 months is wicked fast. So, knock on wood, this nation will have a hypersonic weapon in about two years.

NT: And what about the ability to defend against hypersonic weapons?

WR: So that's the initiative that's being taken on by the Missile Defense Agency. Hypersonic weapons are challenging because they fly low. If you just look at the curvature of the Earth, a boost-glide hypersonic weapon stays much lower than a ballistic missile, so it's harder for radars that are constrained by the curvature of the Earth to pick it up. There are concepts to try to track hypersonic weapons from space that are being explored. They're very much in the S&T phase, but they ought to be explored. We should never quit trying to solve challenges. But those are probably a step further than programs I'm going to start while I'm in this job.
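
The curvature-of-the-Earth point can be made concrete with the standard line-of-sight horizon relation, d ≈ √(2Rh), where R is Earth's radius and h the target's altitude. A rough back-of-envelope comparison (pure geometry only; this ignores refraction, radar height, and everything else radar-specific, and is not drawn from the interview):

```python
# Back-of-envelope radar line-of-sight horizon: d ≈ sqrt(2 * R * h).
# Simple geometry only; real detection range also depends on refraction,
# radar elevation, terrain, and target signature.
from math import sqrt

R_EARTH_M = 6_371_000  # mean Earth radius, meters

def horizon_km(target_altitude_m: float) -> float:
    """Distance at which a ground-level radar's line of sight first reaches the target."""
    return sqrt(2 * R_EARTH_M * target_altitude_m) / 1000

for label, alt_m in [("ballistic missile near apogee (~1,000 km)", 1_000_000),
                     ("boost-glide vehicle (~30 km)", 30_000)]:
    print(f"{label}: visible from roughly {horizon_km(alt_m):,.0f} km away")
```

With these illustrative altitudes, the high-flying ballistic missile breaks the horizon thousands of kilometers out, while the low-flying glider stays hidden until it is only a few hundred kilometers away, which is the detection problem Roper alludes to.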

NT: And what about using AI for war planning, for laying out how to actually engage in combat?

WR: That’s a great idea; I've never thought of that, to be honest. Makes a lot of sense. You could imagine doing tons and tons of permutations. One of our programs is the Minuteman replacement program, called the Ground Based Strategic Deterrent. It's an $80 billion procurement, so it’s huge. It replaces all of the ICBMs we have. It has these wonderful digital engineering tools that allow our team to explore millions of designs, and it has some analytic capability that allows optimization. I wouldn't call it full AI, but the hope is that if we take the next step, it will be.

And I can imagine having something very similar in a war plan. The ICBM system is extremely complicated, so if you make a design change, having the computer tell you not just what the performance is going to be but also what the cost is going to be is just eye-watering, and in the future I want every program to have tools like that. They're worth their weight in gold, and I think it does make sense for our war planners to have that. So, great idea! I'm going to go see if I can find the right place to plant it in the Air Force.
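
What Roper describes is, at bottom, automated design-space exploration: generate many candidate configurations, score each one on performance and cost, and surface the trade-offs for humans to weigh. A toy sketch of that idea follows; the parameters and scoring functions are invented stand-ins and have nothing to do with the actual GBSD toolchain.

```python
# Toy design-space exploration: enumerate candidate designs, score performance
# and cost with stand-in models, and keep the Pareto-optimal trade-offs.
# All parameters and formulas are invented for illustration only.
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class Design:
    engine_count: int
    fuel_mass_t: float
    guidance_grade: int  # 1 = basic, 3 = most capable

def performance(d: Design) -> float:
    # Stand-in for a detailed simulation: more engines, fuel, and better guidance help.
    return 10 * d.engine_count + 0.5 * d.fuel_mass_t + 8 * d.guidance_grade

def cost(d: Design) -> float:
    # Stand-in cost model, in arbitrary units.
    return 15 * d.engine_count + 0.8 * d.fuel_mass_t + 12 * d.guidance_grade ** 2

def pareto_front(designs):
    """Keep designs that no other design beats on both cost (lower) and performance (higher)."""
    scored = [(d, cost(d), performance(d)) for d in designs]
    front = []
    for d, c, p in scored:
        dominated = any(c2 <= c and p2 >= p and (c2 < c or p2 > p)
                        for _, c2, p2 in scored)
        if not dominated:
            front.append((d, c, p))
    return sorted(front, key=lambda t: t[1])

space = [Design(e, f, g)
         for e, f, g in itertools.product(range(1, 4), (20.0, 30.0, 40.0), (1, 2, 3))]

for d, c, p in pareto_front(space):
    print(f"{d}  cost={c:.0f}  performance={p:.0f}")
```

The real tools Roper mentions presumably replace these toy scoring functions with engineering and cost models, but the loop of "change a design, see the cost and performance fall out" is the same shape.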

NT: Let's talk about the Space Force for a second. Tell me what you are looking for in the Space Force. If somebody is coming to Pitch Day and they're fascinated by space, what are some of the areas where you'll be building?

WR: Well, space is critically important, and we can't treat space as if it’s an off-limits domain. Too much of the military's support comes from space: We do communications from there; we do GPS; we do sensing from there. So the idea that those targets are off limits is simply not feasible or wise. A lot of our economy flows through space. I imagine most people don't think about the fact that they can't live a day of their life without relying on space, whether it's cheap GPS for navigating, or weather data that comes from Air Force satellites, or space-based communications. So much of our lives is tied to space. We're all people of space whether we want to think about it or not, so I'm glad that we're having the discussion that we need to be ready for conflict to go there.

As the acquisition exec for the Air Force, I have to make sure we start building space systems that are ready to deal with space being a hostile environment. And that's the work we're doing now. We're focusing on making sure systems are resilient, that they can survive and fight through threats that will try to take them out.

NT: Last question. What else do you want to see on Pitch Day in two weeks?

WR: I hope we'll see ideas across a wide variety of missions. I hope we'll have a lot of software companies that can come help us for both software development and also improvement of how we do software. I hope that we're going to see additive manufacturing companies. There is a huge potential for companies that are working in additive manufacturing to work with the Air Force. And the great thing about working with us is we don't have any IP. When we push a new technique in partnership with a company, they get to see it. They get to use it. And I certainly hope we'll get more predictive maintenance ideas. We don't have enough of it yet. I would like to see what predictive maintenance across a whole fleet of aircraft does.

But the other thing I'm hoping for, Nick, to be honest, is that I find some things I didn't know I needed. I hope we can use these events as a magnet for good ideas that we're not smart enough to request. So I hope to be able to tell you afterwards that I was surprised and pleased. I'm sure I will be.

NT: All right, great. Well, good luck and thank you so much for taking the time to talk me through all of this. This was really fun.

WR: Hey, thanks, Nick. Anytime.

