A Navy SEAL, a Quadcopter, and a Quest to Save Lives in Combat

On the battlefield, any doorway can be a death trap. A special ops vet and his businessman brother have built an AI to solve that problem.
A Nova drone in front of a doorway.
Photograph: John Francis Peters

They call it the fatal funnel. When training for urban combat, they teach you it's any doorway you have to cross not knowing what's on the other side. Fifteen years ago, when I returned home after fighting in Iraq, a friend asked me to describe the bravest thing I saw anyone do. I had led a Marine platoon in the Second Battle of Fallujah, in 2004, and had seen plenty of heroism—Marines dragging their wounded off machine-gun-swept streets, or fighting room to room to recover a comrade’s body. But none of these compared to the cumulative heroism of the 19- and 20-year-old infantrymen who placed their bodies across that fatal funnel every day. Clearing the enemy from the city, house by house, was a game of Russian roulette played on a grand scale. You never knew who might be waiting on the other side of the door.

In the early days of the battle, we cleared houses by sending Marines through the front door and then proceeding room to room. Soon, however, we discovered this was too dangerous. Was any Marine’s life worth a building? We modified our tactics, so that if we sent a Marine through the front door and he found an insurgent inside, we retreated and made no effort to clear the structure. Instead we brought up an armored bulldozer or tank and leveled it.


However, the enemy always has a say. The insurgents quickly adapted to this tactic. They realized that if they revealed their positions, we’d bury them in concrete. They took to barricading a shooter inside the house with his rifle aimed at the front door. They would then hide someone else next to that door. When the Marine stepped inside, one insurgent would shoot him while the other—who was hiding by the door—would drag him deep into the house. Not knowing whether our comrade was alive or dead, we were now forced to fight room to room to recover him. This situation played out time and again in what became known to us as “hell houses.”

You would think that the US military, with all its technological prowess, would have long ago developed a solution to this problem. But you’d be mistaken. War at its most intimate—as it unfolds in the close quarters of urban combat—has until very recently remained a distinctly low-tech affair. So it was with great personal interest that I traveled to San Diego this past June to meet Brandon Tseng, a former Navy SEAL and cofounder of Shield AI, a company that claims to have solved the problem of the fatal funnel.

“Hold the button and wait for the green light,” Brandon tells me. We’re near the headquarters of Shield AI at an urban training facility that approximates conditions in an Afghan village. The two of us stand, one behind the other, outside several shipping containers welded together—“a multistory house”—as though we are about to make entry across its fatal funnel. A steering-wheel-sized quadcopter rests on my palm. I hold the button on its side as instructed. A green light turns on. The rotors of the quadcopter begin to buzz menacingly as the drone gently lifts off. Brandon opens the door in front of us. With a predatory swiftness, the drone darts inside the house. No human is controlling it.

“The noise is pretty creepy,” I say, as we listen to the drone humming between open rooms.

“Our customers tell us the noise frightens people,” answers Brandon, who, with his brother Ryan, runs Shield AI. The customers Brandon refers to are members of US Special Operations Command who have been using Shield’s first product, the Nova quadcopter, and its onboard artificial intelligence, Hivemind, to help clear rooms on missions overseas for the past two years.

While we stand at the doorway, Brandon takes a smartphone out of his pocket. On half of his screen is a live video feed from the Nova as it sweeps through the building, which has been stocked with furniture and dummies. On the other half of his screen is a real-time map of the building’s floor plan that the Nova draws via its onboard sensors, including its camera and lidar. As the drone moves from room to room, Brandon annotates the map, tapping the screen for possible threats—a person here, a weapon there, a suspicious box in the corner. This information can then be passed along to other members of the team as they prepare to make entry. The Nova moves through the building at a rate of 2,000 square feet a minute; in just under 60 seconds, it shoots out the front door and turns toward Brandon, as if recognizing an old friend. Brandon reaches out his hand, allowing the quadcopter to land in his palm. Its rotors shut off automatically. Silence returns. It becomes, for me, a surprisingly emotional moment.
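For readers who want a sense of what that onboard map might look like in software, here is a bare-bones, purely illustrative sketch in Python: an occupancy grid that a drone fills in as it sweeps a building, plus the threat labels an operator taps onto it. The names and structure are my own assumptions; Shield AI has not published Hivemind’s internals.

```python
# Purely illustrative; these class and field names are my own invention,
# not Shield AI's. A toy version of a drone-built floor plan: an occupancy
# grid filled in by onboard sensors, plus operator-tapped threat annotations.

FREE, WALL, UNKNOWN = 0, 1, 2  # possible states for each grid cell


class FloorPlan:
    def __init__(self, width: int, height: int):
        # Every cell starts unknown; the drone marks cells free or wall
        # as its camera and lidar sweep each room.
        self.cells = [[UNKNOWN] * width for _ in range(height)]
        self.annotations = {}  # (x, y) -> label tapped in by the operator

    def observe(self, x: int, y: int, state: int) -> None:
        self.cells[y][x] = state

    def annotate(self, x: int, y: int, label: str) -> None:
        self.annotations[(x, y)] = label


plan = FloorPlan(width=20, height=10)
plan.observe(3, 4, FREE)       # the drone has seen into this cell
plan.observe(3, 5, WALL)       # and hit a wall just beyond it
plan.annotate(3, 4, "person")  # the operator marks a possible threat
print(plan.annotations)        # {(3, 4): 'person'}
```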

“This would have saved a lot of guys’ lives,” I say.

Brandon nods. “I know.”

When the Nova was deployed in 2018, it was likely the first time an AI-driven quadcopter of this scale was used in combat.

Photograph: John Francis Peters

Brandon and his brother Ryan grew up in Houston, Seattle, and Orlando. Their father, a Taiwanese immigrant and son of a diplomat, moved around when he was growing up, and he often told them that “being born and raised in the United States is like winning the lottery. You should know how lucky you are. Don’t take the opportunities this country gives you for granted.” As a boy, Brandon dreamed of becoming a Navy SEAL. And after high school, he got one of those opportunities his dad had always talked about: an appointment to the US Naval Academy. That led to multiple deployments overseas, including two in Afghanistan. Ryan, meanwhile, went to the University of Florida to study engineering and became a businessman.

After seven years in the Navy, when he was 29, Brandon left the service, and Ryan started helping him transition to civilian life. “Between deployments, he never talked much about the war,” Ryan said. It wasn’t until Brandon started applying to business schools that Ryan began to learn the details of his brother’s experiences. “I was prepping him for interviews,” Ryan said. “I asked him for an example of a complex work decision he’d had to make. That’s when he started opening up, not only with his stories but with what his friends had gone through … It was all this stuff I never knew.”

Brandon was accepted to Harvard Business School for the fall of 2015, but he already had an idea of what he wanted to do. When he was overseas, he spent time working with sensors and inexpensive computers. “When I realized that, used together, the two could reason and take action,” he said, “my mind started racing with a sense of new possibilities.” He had come to believe that certain battlefield tasks could be accomplished with artificial intelligence, and this, he felt, would save lives.

He’d identified a specific problem, one he believed was solvable: the physical act of searching structures, which had bedeviled troops in the urban combat that characterized so much of the post-9/11 wars.

“No one was really working on this,” Brandon said, so as he entered business school he took his idea to Ryan. At 31, Ryan was already a proven entrepreneur. He had founded and sold a wireless charging company, WiPower, to Qualcomm, and had started a time-lock container company, Kitchen Safe, that had led to “the most enthusiastic pitch ever” on Shark Tank (at least according to Business Insider). When Brandon hit up his brother, Ryan was between ventures (though he did have a dishwashing robot in development). Brandon, who is the gregarious T-shirt-and-jeans-wearing counterpoint to his brother’s more analytical, collared-shirt-and-khakis persona, initially encountered some skepticism from Ryan. “I assumed this was a solved problem, that we were already doing this,” said Ryan, explaining his initial hesitation. “Also,” he joked, “the idea was coming from my little brother.”

Brandon managed to convince Ryan that his idea was viable and that the component technologies already existed, so in the spring of 2015 they set about finding an engineer who could take it on. “Everyone we talked to,” Ryan recalled, “kept mentioning this guy Andrew.” That was Andrew Reiter, a chemical engineer turned roboticist who had cycled through prestigious research programs at Northwestern and Harvard and was currently at Draper Laboratories, in Cambridge, Massachusetts, working on camera-based navigation in autonomous robots.

“They sent me an email out of the blue,” Andrew said, “and I also thought, isn’t the military already doing this?” Although university labs had experimented with quadrotor autonomy, and a few high-profile small-drone projects had dabbled with military applications, AI-driven drones had yet to be put to use. That is partly because applying artificial intelligence to real-world environments is still a difficult feat: Machine learning is good at predictable and repetitive tasks, but the real world is insanely unpredictable. Over the past two decades, the military had come to rely on human-controlled drones for everything from intelligence collection to air strikes. Despite numerous conceptual papers imagining the role that systems powered by artificial intelligence would play in the future of warfare, the military had yet to field a single autonomous drone.

The brothers flew to Cambridge to meet Andrew in person. Within six hours the three had the outlines of a business plan: They would create an AI-powered quadcopter (they won’t say much about technological specifics) to solve the problem of room-clearing. Their goal was to then expand the use of the AI—what they later branded Hivemind—and apply it to other military problems. A month later, Andrew moved to San Diego and took up residence in Ryan’s guest room for about a week.

By late August 2015 the three had a proposal in hand, and in a two-week period they’d scheduled 30 meetings with potential investors in Silicon Valley. Twenty-nine passed. The investor who bit had no interest in saving lives on the battlefield; instead, they wanted to develop a selfie-snapping drone. The capital was there, but the mission wasn’t. When I asked whether they considered going in a different direction, Brandon said, “We were building a company to make a dent in this mission.”

Ryan Tseng (left) was initially skeptical of his brother Brandon's (right) business idea. "I assumed this was a solved problem, that we were already doing this."

Photographs: John Francis Peters

Without professional investors, the three cofounders decided to lean on friends and family. They scraped together a little over $100,000 to assemble a prototype. “Finances were tight for a long time,” Ryan explained. And the tight budget created engineering obstacles. For instance, they had purchased a $2,000 lidar device, which helps autonomous vehicles measure distances to objects, from the manufacturer Hokuyo. Ryan, who was keeping an eye on the cash, insisted they’d eventually have to return it to keep their nascent business going. But installing the lidar on the Nova meant shortening its cable, and a cut cable couldn’t be returned. So not only did Andrew have to figure out how to fit an autonomous room-clearing AI system onto a quadcopter, he had to do it with several feet of excess lidar cable lashed to the drone’s side.

While Ryan focused on keeping the business afloat and Andrew focused on the prototype, Brandon began trying to navigate the byzantine world of defense contracting. He came across the recently formed Defense Innovation Unit, or DIU, the brainchild of then defense secretary Ash Carter, headquartered in Mountain View, in Silicon Valley. “I didn’t know much about them,” Brandon said. All he had was a press release that announced the formation of the office. It turned out that one of the Innovation Unit’s core missions is to “accelerate the adoption of commercial technology” for the Department of Defense in five key areas, three of which—artificial intelligence, autonomy, and human systems—aligned with Shield’s mission. As luck would have it, DIU had also been created specifically to circumvent the laborious defense contracting process, approving funding for small projects within 60 to 90 days.

DIU opened in August 2015, and Brandon headed to Mountain View. Except he didn’t have an appointment; he simply showed up. “The press release had a photo of their headquarters but no address,” he said. With a little sleuthing on Google Earth he’d nailed down the location. He made it as far as the receptionist before being turned away. A year later, after a formal request for funding, Shield was invited to demonstrate its prototype Nova drone at an urban combat testing facility.

Jameson Darby, the director of DIU’s autonomy program, was at the testing facility that day, along with a senior officer from Special Operations Command, who happened to have come to DIU looking for better ways to clear rooms and respond to barricaded shooters. At the demonstration, which was similar to the one I saw, Darby noted, “It was pretty obvious that Shield AI was far out in developing the capability.” After the event, DIU granted Shield AI its first contract, for $1 million. Small in military-contract terms, but it was a start.

In fact, the capability that Brandon, Ryan, and Andrew had demonstrated was something Darby and his colleagues had been searching for. In 2014 the Center for a New American Security released a paper titled “20YY: Preparing for War in the Robotic Age.” Its authors predicted, “To a degree that US force planners are simply not accustomed to, other global actors are in a position to make significant headway toward a highly robotic war-fighting future in ways that could outpace the much bigger and slow-moving US defense bureaucracy.”

With the backing of DIU and private investors that followed, Shield AI deployed the Nova and Hivemind with special operators in the Middle East during the winter of 2018 (they say the details of those missions are generally classified). This marked a potential milestone in US military history: It was likely the first time an AI-driven quadcopter of this scale was used in combat.

For engineering expertise, the Tsengs turned to Andrew Reiter, who was working at Draper Labs on camera-based navigation in autonomous robots.

Photograph: John Francis Peters

Shield AI’s manufacturing facility—which the company calls the Hive—sits in an anodyne San Diego strip mall, across the street from a Home Depot. Five years after it started, Shield AI still retains a scrappy, entrepreneurial culture that you usually don’t see in the defense industry. Still, the precise, assembly-line organization of the Hive, with its teams of engineers and extensive diagnostic tests on each Nova drone and Hivemind software update, is a far cry from the bare-bones, couch-surfing early days of the company. About 150 people—including many military veterans—work there. When I visited, engineers were pulling long hours in the midst of a Covid-19 lockdown to ensure their customers received the Nova II, slated to enter service in early 2021.

The first Nova is what I’d watched enter the ersatz building at the test facility. The Nova II has new capabilities, including swarming and longer flight times, and reconfigured controls based on feedback from operators in the field. But it is Hivemind, the AI driving the quadcopter, that is the technological advance the team believes has the potential to change the nature of modern war. (Brandon likens the relationship between their Nova drones and their Hivemind software to the relationship between a Google phone and Android.)

Technology often belies war’s true nature, one that, according to the seminal military theorist Carl von Clausewitz, is “slaughter.” My own experience backed up Clausewitz’s observation, which caused me to arrive in San Diego a skeptic, harboring all the obvious doubts about how well an autonomous quadcopter could work in practice, on the ground, in the midst of combat: Is the technology both rugged and reliable? What happens if the Nova reaches a closed door? What happens if an enemy simply swats it from the air?

But then I saw the drone in action. When I told Brandon that the Nova would have saved lives, I was thinking of those hell houses in Fallujah and how we were forced to fight room to room to recover our men. If we had had the Nova (or something comparable), it wouldn’t have mattered if an insurgent swatted it from the air. Simply knowing the enemy was there would have given us the upper hand, as would the knowledge of every closed door. Opening each and sending an intelligent quadcopter inside would have saved us from being exposed to the threat.

The answer to my concerns, I realized, strikes at the true promise of technology like the Nova and Hivemind: enhanced situational awareness, which in the past has come at a steep cost in human lives.

The left half of Brandon’s screen is a live feed from the Nova as it sweeps through the building. On the right is a real-time map of the floor plan that’s drawn using data from the drone’s camera, lidar, and other onboard sensors.

Photograph: John Francis Peters

It’s one thing to clear a building, which is a tactical problem, but what happens when we apply this technology strategically? That’s what could make the Nova, but particularly Hivemind—or a system like it—transformative.

The defended interior of a building is what could be called a denied area, a place we cannot go and where we believe there’s a threat. The idea applies more broadly, to entire geographic regions. In the past, soldiers entering denied areas—by air, land, or sea—would typically learn about their adversaries’ defenses when those same defenses fired on them, often at the cost of lives. Despite advances in sensor technology, limitations remain, and the live feed from a human-piloted drone is often the equivalent of searching for a marble in your backyard by looking down through a soda straw.

But imagine a network of enemy air defenses containing surface-to-air missiles, antiaircraft guns, and all the attendant sensors to detect incoming aircraft. Instead of flying a human-piloted aircraft into that network with the hope of identifying and then evading those systems, Shield AI is hoping to deploy swarms of drones—of all sizes—to map threats in real time. Now you aren’t searching the earth with a single soda straw, but with thousands. These drones wouldn’t be reliant on satellite-based navigation (which is easy to disrupt), and they’d communicate among themselves, as their own network, while mapping the battlefield. It’s the same concept as clearing a room, except the room could be the entirety of a nation’s air, ground, or sea defenses.
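To make that swarm idea concrete, here is another purely illustrative Python sketch, with hypothetical names of my own: each drone keeps a local map of the threats it has seen and, whenever it comes within radio range of a peer, merges maps, so observations spread through the swarm without GPS or a central server. It is a toy of the concept, not a description of Shield AI’s actual system.

```python
# Purely illustrative; hypothetical names, not Shield AI's code. A toy model
# of drones gossiping observations to one another: each drone keeps a local
# threat map and merges it with any peer in radio range, so a shared picture
# of the defenses spreads through the swarm without GPS or a central server.


class SwarmDrone:
    def __init__(self, drone_id: str):
        self.drone_id = drone_id
        self.threat_map = {}  # position in a shared local frame -> threat label

    def observe(self, position: tuple, threat: str) -> None:
        self.threat_map[position] = threat

    def sync_with(self, peer: "SwarmDrone") -> None:
        # Pairwise merge; repeated syncs propagate every observation.
        merged = {**self.threat_map, **peer.threat_map}
        self.threat_map, peer.threat_map = dict(merged), dict(merged)


a, b, c = SwarmDrone("a"), SwarmDrone("b"), SwarmDrone("c")
a.observe((120, 40), "surface-to-air missile")
c.observe((300, 85), "antiaircraft gun")
a.sync_with(b)  # b learns about the missile site
b.sync_with(c)  # c learns it too, and b learns about the gun
print(sorted(c.threat_map.values()))  # ['antiaircraft gun', 'surface-to-air missile']
```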

According to retired Navy SEAL vice admiral Bob Harward, a member of Shield AI’s board, “If I’m able to apply artificial intelligence to these problems, that drastically enhances our state of competitiveness.” When asked why the larger defense contractors, such as Boeing or Raytheon, have yet to take on this problem, Harward said, “The defense-industry focus of AI has been on metadata, not operations.” In other words, collecting and analyzing information.

Shield AI, on the other hand, has chosen to target that very specific problem of room-clearing as it gets its start. This past September, the company landed a $7.2 million contract from the US Air Force to develop technologies that would allow autonomous drones to partner with humans in the collection of intelligence in GPS-denied environments. Its Silicon Valley investors now include Andreessen Horowitz, Breyer Capital, Homebrew, and Silicon Valley Bank. “That’s the value of Brandon as an operator,” Harward says. “He saw this need and went after it to keep our guys alive.” Indeed, one obstacle to solving this problem was that many people outside the military assumed it had already been solved.

To be sure, in the past few years a handful of companies have been building AI-powered quadcopters for various military applications. Anduril, the company run by Palmer Luckey and funded by Peter Thiel and Andreessen Horowitz, has military contracts to expand the capabilities of the autonomous drones it built to detect people crossing borders illegally. It aims to apply the tech to finding enemy personnel and equipment on the battlefield. The US drone maker Skydio (ironically, known for its selfie capabilities) has hired a cadre of roboticists and is, as WIRED wrote in July, “vying to become the Army’s standard-issue short-range surveillance drone to help infantry peek over the next hill or look around corners in urban combat.”

The great fear, of course, is that autonomous unarmed drones like the Nova, whose core mission is force protection, will be the proverbial camel’s nose under the tent, leading to something more troubling: autonomous armed drones—a dystopian swarm of killer robots that are essentially making their own decisions. Shield says it has no immediate plans to develop armed drones.

Michèle Flournoy, a former under secretary of defense for policy in the Obama administration, who advises Shield AI, has helped the company develop an ethical framework, guided by the concept of human-machine teaming. “You don’t take the human out of the loop,” she explained. “You make the human more effective.” She readily acknowledges that AI has the potential for dystopian applications. But so does any technology—from the sword to the gun to the nuclear bomb. “I do worry,” she said, “about where China and Russia might go without a human in the loop. The Department of Defense doesn’t want to remove the human; it wants to make the human better.”

In February the Pentagon adopted a set of ethics principles for its use of AI that were proposed by the Defense Innovation Board, an entity within the Department of Defense that includes representatives from companies like Google, Microsoft, and Facebook. The principles included things such as keeping humans at the helm and having a well-defined domain of use. However, as even the report itself notes, “These principles are designed neither to gloss over contentious issues nor restrict the Department’s capabilities.”

Anika Binnendijk of the Rand Corporation, who coauthored a recent study on brain-computer interfaces, has doubts as to whether humans will ultimately be able to keep up with their robotic counterparts on the battlefield. She told me, “Once humans and machines work more closely during the heat of combat, it may be extremely difficult to determine the substance of ‘meaningful human control’ or ‘appropriate levels of human judgment.’”

When I interviewed Brandon, Ryan, and Andrew at the Shield AI headquarters, I asked Brandon about the story he’d told his brother when preparing for his business school interviews. In the conference room that day, Brandon had mentioned something about having to evacuate an injured civilian during a firefight in Afghanistan, but then he quickly changed the subject. When I asked again, he demurred. So I left it alone. I figured I’d follow up when he wasn’t surrounded by his colleagues.

I got him on the phone a few days later. I wanted to hear this story, and I pressed him. What happened in Afghanistan? What events had led him to dedicate himself to solving this problem? What was the story that had so affected his older brother that he’d also dedicated himself to this mission?

Brandon still hesitated. Only after more prodding did he tell me about a mission he was on in Afghanistan, where the Taliban fired on his SEAL platoon during a tribal shura. An 8-year-old Afghan boy, caught in the crossfire, was shot in the stomach. Brandon, who had little situational awareness of the village he was trapped in, couldn’t call a medevac for fear the helicopter would be shot down. So he and his platoon and Afghan partner forces carried the boy to a base 10 kilometers away. Miraculously, the boy survived.

But before Brandon finished that story, he’d launched into a different one, not about him but about a friend, a fighter pilot, who’d flown missions in Syria. Loitering over a target—an ISIS training camp where a surveillance drone had confirmed the presence of more than a hundred fighters—the pilot had been cleared by his superiors to drop his ordnance and return to his carrier. Except something didn’t feel right. With only minutes of fuel remaining, he kept circling. Then dozens of children began to exit the building; the compound was also a school. The pilot returned to his carrier without dropping his bombs. To this day he is haunted by that event.

There was more. Brandon told me about a group of special operators who took fire from a house on a raid in Afghanistan, in 2012. While deployed, he had watched this mission from the Joint Operations Center in real time. After surrounding the building, the operators tried to call out the fighters inside, to convince them to surrender. When the fighters refused and continued to fire back, the operators, fighting for their lives and after exhausting every other option, called in an air strike, destroying the building. Only after picking through the rubble did they discover that the fighters had held a family hostage inside.

Brandon has other stories, but he’s made his point. That night, he sent me an email: “It wasn’t any single mission I did that led me to found Shield AI, it was after reflecting on my time in the military and everything I had experienced … the missions I did, the missions my friends and teammates did. Visiting friends in the hospital who had lost their sight … going to memorial services, talking with Gold Star families, seeing the joy and relief on my friends’ families when their loved ones returned home safely, talking with Afghan families while on missions and learning about what they had endured.”

My expectation that Brandon might offer a single, harrowing story that explained Shield AI’s founding was misguided, just as my friend had been when he asked me to name the bravest thing I’d seen in Fallujah. There is no single story. There remains a series of closed doors to open, fatal funnels to cross, uncleared compounds to search, a chain of memories, and, hopefully, a solution. Brandon’s work—with that of Ryan, Andrew, and the team at Shield AI—is to ensure that in the next generation’s wars, there will be fewer of these stories. And that those of us lucky enough to come home won’t have to live with them.


ELLIOT ACKERMAN (@elliotackerman) is a former Marine and intelligence officer who served five tours of duty in Iraq and Afghanistan. He is also the author of six books. His latest novel, 2034, written with Admiral James Stavridis and out in March, imagines a coming war between the US and China.





