There Are Spying Eyes Everywhere—and Now They Share a Brain

Security cameras. License plate readers. Smartphone trackers. Drones. We’re being watched 24/7. What happens when all those data streams fuse into one?
Illustration: Yoshi Sodeoka

One afternoon in the fall of 2019, in a grand old office building near the Arc de Triomphe, I was buzzed through an unmarked door into a showroom for the future of surveillance. The space on the other side was dark and sleek, with a look somewhere between an Apple Store and a doomsday bunker. Along one wall, a grid of electronic devices glinted in the moody downlighting—automated license plate readers, Wi-Fi-enabled locks, boxy data processing units. I was here to meet Giovanni Gaccione, who runs the public safety division of a security technology company called Genetec. Headquartered in Montreal, the firm operates four of these “Experience Centers” around the world, where it peddles intelligence products to government officials. Genetec’s main sell here was software, and Gaccione had agreed to show me how it worked.

He led me first to a large monitor running a demo version of Citigraf, his division’s flagship product. The screen displayed a map of the East Side of Chicago. Around the edges were thumbnail-size video streams from neighborhood CCTV cameras. In one feed, a woman appeared to be unloading luggage from a car to the sidewalk. An alert popped up above her head: “ILLEGAL PARKING.” The map itself was scattered with color-coded icons—a house on fire, a gun, a pair of wrestling stick figures—each of which, Gaccione explained, corresponded to an unfolding emergency. He selected the stick figures, which denoted an assault, and a readout appeared onscreen with a few scant details drawn from the 911 dispatch center. At the bottom was a button marked “INVESTIGATE,” just begging to be clicked.

Citigraf was conceived in 2016, when the Chicago Police Department hired Genetec to solve a surveillance conundrum. Like other large law enforcement organizations around the country, the department had built up such an impressive arsenal of technologies for keeping tabs on citizens that it had reached the point of surveillance overload. To get a clear picture of an emergency in progress, officers often had to bushwhack through dozens of byzantine databases and feeds from far-flung sensors, including gunshot detectors, license plate readers, and public and private security cameras. This process of braiding together strands of information—“multi-intelligence fusion” is the technical term—was becoming too difficult. As one Chicago official put it, echoing a well-worn aphorism in surveillance circles, the city was “data-rich but information-poor.” What investigators needed was a tool that could cut a clean line through the labyrinth. What they needed was automated fusion.

Gaccione now demonstrated the concept in practice. He clicked “INVESTIGATE,” and Citigraf got to work on the reported assault. The software runs on what Genetec calls a “correlation engine,” a suite of algorithms that trawl through a city’s historical police records and live sensor feeds, looking for patterns and connections. Seconds later, a long list of possible leads appeared onscreen, including a lineup of individuals previously arrested in the neighborhood for violent crimes, the home addresses of parolees living nearby, a catalog of similar recent 911 calls, photographs and license plate numbers of vehicles that had been detected speeding away from the scene, and video feeds from any cameras that might have picked up evidence of the crime itself, including those mounted on passing buses and trains. More than enough information, in other words, for an officer to respond to that original 911 call with a nearly telepathic sense of what has just unfolded.
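
To get a feel for what a “correlation engine” is doing under the hood, consider a deliberately simple sketch in Python. Every data source, field name, and matching rule below is an invented stand-in rather than anything drawn from Genetec’s product; the point is only the shape of the operation: one incident goes in, many stores get queried, one merged list of leads comes out.

```python
# A toy "correlation engine": given a reported incident, query several mock
# data stores and merge the hits into a single list of leads. All of the
# sources, fields, and records here are hypothetical stand-ins.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    kind: str
    lat: float
    lon: float
    time: datetime

# Stand-ins for historical records and live sensor feeds.
ARREST_RECORDS = [
    {"name": "J. Doe", "offense": "assault", "lat": 41.710, "lon": -87.550},
]
PLATE_READS = [
    {"plate": "ABC1234", "lat": 41.712, "lon": -87.551,
     "time": datetime(2019, 10, 1, 14, 3), "speeding": True},
]
RECENT_CALLS = [
    {"kind": "assault", "lat": 41.709, "lon": -87.548,
     "time": datetime(2019, 10, 1, 13, 40)},
]

def nearby(lat1, lon1, lat2, lon2, radius_deg=0.01):
    # Crude bounding-box proximity test; a real system would use proper geodesics.
    return abs(lat1 - lat2) < radius_deg and abs(lon1 - lon2) < radius_deg

def investigate(incident: Incident) -> dict:
    """Gather leads from each source that fall near the incident in space and time."""
    window = timedelta(hours=1)
    return {
        "prior_arrests": [
            r for r in ARREST_RECORDS
            if r["offense"] == incident.kind
            and nearby(incident.lat, incident.lon, r["lat"], r["lon"])
        ],
        "fleeing_vehicles": [
            p for p in PLATE_READS
            if p["speeding"]
            and nearby(incident.lat, incident.lon, p["lat"], p["lon"])
            and abs(p["time"] - incident.time) < window
        ],
        "similar_calls": [
            c for c in RECENT_CALLS
            if c["kind"] == incident.kind
            and abs(c["time"] - incident.time) < window
        ],
    }

print(investigate(Incident("assault", 41.710, -87.550, datetime(2019, 10, 1, 14, 0))))
```

A real deployment would swap each of these toy lists for live databases and far more sophisticated matching, but the basic merge is the same.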

Gaccione turned to a second console, this one loaded with a program called Valcri. Where Citigraf is designed for relaying early leads to patrol officers rushing to the scene of a crime, Valcri is for the detectives working long cases at the precinct. Originally developed to root out sex-trafficking rings, its fusion algorithms hunt for subtler, more elaborate patterns that might stretch across years of unstructured data. Gaccione told me about one counterterrorism unit, which he wouldn’t name, that had used the system to build a detailed profile of “a middle-aged unemployed individual with signs of radicalization,” using “various databases, CCTV, phone records, banking transactions, and other surveillance methods.” If done manually, he estimated, this kind of investigatory grunt work would take a couple of weeks. In this instance, it took “less than a day.”

The market for fusion technology has been enjoying a quiet boom in recent years. Genetec says that Citigraf is deployed in “many cities.” A growing number of established tech giants, including Cisco, Microsoft, and Motorola, sell fusion systems globally, often in the guise of “smart city” modernization packages. (Cisco sometimes even sweetens the pot with no-interest financing.) Palantir, which bills itself as a “data integration” firm, reportedly counts among its clients the Central Intelligence Agency, Immigration and Customs Enforcement, and the Centers for Disease Control and Prevention. Anduril has built a “virtual wall” along parts of the border with Mexico, using fusion software to link together a network of surveillance towers. Last fall, the four-year-old company won a flexible contract, capped at $950 million, to contribute elements of the technology to the US military’s Advanced Battle Management System.

For all these customers, a central appeal of fusion is that it can scale to new sources of data. You can add fuel to your “correlation engine” by, say, hooking up a new network of sensors or acquiring a privately owned library of smartphone location data. (The Pentagon’s Special Operations Command was recently revealed to be a buyer of many such libraries, including those from a Muslim prayer app with tens of millions of users.) Organizations with their own coders can develop capabilities in-house. In New York, for instance, the police department’s analytics division created a custom plug-in for its fusion system. The feature, called Patternizr, draws on more than a decade’s worth of departmental data to match property crimes that could be related to each other. When a new report comes in, all the investigator has to do is click “Patternize,” and the system will return a list of previous incidents, scored and ranked by similarity.
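
The full details of Patternizr haven’t been published, but its basic move, turning each report into a handful of features and ranking old reports by how closely they match, can be sketched in a few lines. The features, weights, and decay constants below are hypothetical:

```python
# A toy version of similarity-based pattern matching in the spirit of Patternizr:
# score a new burglary report against past ones and rank the results. The
# features, weights, and constants are invented for illustration only.
import math

def similarity(a: dict, b: dict) -> float:
    """Hypothetical weighted similarity between two property-crime reports."""
    dist = math.hypot(a["lat"] - b["lat"], a["lon"] - b["lon"])
    loc_score = math.exp(-dist / 0.01)               # decays over roughly a kilometer
    hour_gap = min(abs(a["hour"] - b["hour"]), 24 - abs(a["hour"] - b["hour"]))
    time_score = 1 - hour_gap / 12                   # same time of day scores higher
    method_score = 1.0 if a["forced_entry"] == b["forced_entry"] else 0.0
    return 0.5 * loc_score + 0.3 * time_score + 0.2 * method_score

def patternize(new_report: dict, history: list) -> list:
    """Return past reports scored and ranked by similarity to the new one."""
    scored = [(similarity(new_report, old), old) for old in history]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

history = [
    {"lat": 40.712, "lon": -73.991, "hour": 2, "forced_entry": True},
    {"lat": 40.833, "lon": -73.861, "hour": 14, "forced_entry": False},
]
new = {"lat": 40.714, "lon": -73.989, "hour": 3, "forced_entry": True}
for score, old in patternize(new, history):
    print(round(score, 3), old)
```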

Mind-bending new breakthroughs in sensor technology get a lot of buzzy press: A laser that can covertly identify you from two football fields away by measuring your heartbeat. A hack that makes your smartphone spy on anything nearby with a Bluetooth connection, from your Fitbit to your smart refrigerator. A computer vision system that will let the authorities know if you suddenly break into a run within sight of a CCTV camera. But it’s a mistake to focus our dread on each of these tools individually. In many places across the world, they’re all inputs for a system that, with each new plug-in, reaches a little closer to omniscience.

That idea—of an ever-expanding, all-knowing surveillance platform—used to be a technologist’s fantasy, like the hoverbike or the jetpack. To understand how this particular hoverbike will finally be built, I began by calling up the people who designed the prototype.

The Department of Defense was among the first organizations to face large-scale surveillance overload. In the decade after September 11, its arsenal of spy technologies had grown to galactic proportions. The department had experimented with computerized fusion since at least the 1970s, but the most advanced systems still couldn’t handle more than two or three data inputs. A modern intelligence unit had to contend with hundreds. According to Erik Lin-Greenberg, who ran an elite fusion team for the Air Force from 2010 to 2013, the old ways still ruled. Each human analyst was typically responsible for a single data stream. They compared their findings in chats and phone calls, or sometimes by yelling to one another across the room. In one case, Lin-Greenberg said, another team in his squadron identified an IED just in time to halt a convoy less than 500 feet up the road.

One of the people who was supposed to help fix intractable problems like this was Dan Kaufman, the director of information innovation at the Defense Advanced Research Projects Agency, the Pentagon’s storied R&D hub. With his sunny manner and bowl of shimmering silver hair, Kaufman wasn’t cut from the camo-speckled cloth of the typical military-industrial denizen. In his previous life, he had run the video game developer DreamWorks Interactive, where he helped launch what would become the Medal of Honor series. Later, as a consultant, he had worked with the CIA’s venture capital fund, In-Q-Tel. At Darpa, Kaufman was known for championing complex computing projects with a distinct commercial flavor. He felt that the Pentagon’s fusion efforts were due for a shake-up.

In the winter of 2010, Kaufman was introduced to Ben Cutler, an experienced engineer and tech entrepreneur who was considering a tour of duty in government. Over the phone, Kaufman explained the problem to Cutler and outlined his vision for what to do about it: He wanted a software platform that could integrate all available intelligence in a single, consolidated interface and grow as new capabilities came online. For Cutler, who had spent the previous year working on a new operating system at Microsoft, the idea clicked right away. What the Pentagon needed, he realized, was an OS for surveillance.

Cutler was intrigued enough to write a pitch. The document, which he completed in a day, opens with a theatrical flourish: “A patrolling group of soldiers pursues a pickup into a village; it stops at a mosque.” At this point, in real life, the soldiers might have to wait for an old-fashioned fusion team to deliver its assessment. But in Cutler’s scenario, they would simply log their geographic coordinates and the pickup’s license plate number into a tablet. The operating system would then return a description of the neighborhood around the mosque (“known insurgent meeting area”), a profile of the imam (“has worked well with friendly forces”), and any records connecting the vehicle with known terrorist groups.

Weeks later, Cutler was offered the job. I asked him whether he believed at the time that he had the expertise to build what he had pitched. “No!” he said, laughing a bit wildly.

To be fair, nobody did. The project, officially called Insight, would depend on a science fiction novel’s worth of technical breakthroughs. A study commissioned the previous year by the National Geospatial-Intelligence Agency had concluded that many of the features that Darpa was now proposing were still far from feasible.

Among Insight’s most difficult tasks was to find a way of associating “hard” data, meaning sensor-derived physical information like radar hits and GPS coordinates, with “soft” data such as terror watchlists and reports from informants. To link different data points in different formats, while at the same time accounting for flaws in the data itself (gaps in coverage, ambiguous signals), required extraordinarily complex math. And in order to operate on the scale Kaufman wanted—to, say, pick out a mobile phone call and correlate its soft metadata with the hard satellite imagery of the people on either end of the line—the Insight platform would need to be capable of processing thousands of gigabytes at a time.
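
One way to picture the hard-soft problem is as a scoring exercise: how plausibly does a precise but anonymous sensor hit line up with a named but vaguely located watchlist entry? The toy sketch below uses a single Gaussian likelihood over distance; Insight’s actual models were vastly more elaborate and are not public:

```python
# A stripped-down sketch of "hard"/"soft" association: score how plausibly a
# sensor detection (precise but anonymous) matches a watchlist entry (named
# but vaguely located), allowing for noisy positions. A single Gaussian
# likelihood stands in for what was, in reality, far more complex math.
import math

def match_score(detection, record, sigma_km=5.0):
    """Higher when the detection falls near the record's reported location."""
    dx = (detection["lat"] - record["approx_lat"]) * 111  # degrees to ~km (rough)
    dy = (detection["lon"] - record["approx_lon"]) * 111
    dist_sq = dx * dx + dy * dy
    return math.exp(-dist_sq / (2 * sigma_km ** 2))

detection = {"sensor": "radar-07", "lat": 34.52, "lon": 69.18}    # hard data
watchlist = [                                                     # soft data
    {"name": "courier, per informant", "approx_lat": 34.50, "approx_lon": 69.20},
    {"name": "cell leader, per report", "approx_lat": 34.90, "approx_lon": 69.60},
]
ranked = sorted(watchlist, key=lambda r: match_score(detection, r), reverse=True)
for r in ranked:
    print(r["name"], round(match_score(detection, r), 3))
```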

That was just the first step. Next, Cutler’s engineers would have to find a way of hardwiring the accumulated knowledge of decades of spycraft into algorithms that could interpret, like a seasoned analyst, the subtle cues that give away the enemy’s intent. These algorithms would track targets across sensor feeds and databases, tracing their every move in digital and physical space, assembling what Michael Pagels, an engineer who participated in the project’s drafting, called “life histories.”

Still, the Darpa team wouldn’t be done. Cutler wanted the system to display the assembled casefile for every fighter and vehicle on a “grand chessboard”—a digital template of the physical battle space with thousands of moving pieces. Analysts would be able to click on any single one, know exactly what it was and where it had been, and make a best guess at what it might do next. And because the dynamics of battle were always shifting, the software would need a programming interface simple enough to let analysts code in new algorithms as needed.
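
The pairing of life histories with an analyst-facing programming interface can be sketched as a simple plug-in pattern. None of the names or rules below come from Insight; they are placeholders for the idea that the system accumulates observations per entity and lets analysts bolt new alert logic onto the same stream:

```python
# A minimal sketch of the "grand chessboard" idea: every tracked entity keeps
# a life history of observations, and analysts can register their own alert
# rules without touching the core system. Names and rules are hypothetical.
from collections import defaultdict

life_histories = defaultdict(list)   # entity_id -> list of observations
rules = []                           # analyst-written callbacks

def register_rule(fn):
    """Let analysts plug in new logic; each rule sees every new observation."""
    rules.append(fn)
    return fn

def observe(entity_id, observation):
    life_histories[entity_id].append(observation)
    for rule in rules:
        alert = rule(entity_id, observation, life_histories[entity_id])
        if alert:
            print("ALERT:", alert)

@register_rule
def watchlisted_vehicle_in_area(entity_id, obs, history):
    # Hypothetical rule: flag a watchlisted vehicle entering a named sector.
    if obs.get("watchlisted") and obs.get("sector") == "Sector 4":
        return f"{entity_id} (watchlisted) entered Sector 4"

observe("truck-41", {"source": "drone", "sector": "Sector 4", "watchlisted": True})
```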

Finally, to run simulated tests of the platform, someone would have to create a millimetrically accurate virtual battlefield populated with thousands of realistic avatars. Will McBurnett, an engineer for one of Insight’s contractors, described this to me as “Sim City for adults.” (Here, at least, the team had a head start: One of Kaufman’s earlier Darpa ventures was a made-for-the-military video game called RealWorld, in which soldiers could rehearse missions in a detailed virtual battlefield before heading out for the real thing. Insight drew on the same code.)

Cutler threw himself into the program, he told me, as if it were any other startup taking an idea “from inception to early market introduction.” He set tight deadlines. He constantly reminded outside vendors that they were spending taxpayer dollars. To keep the focus on substance, he banned overdesigned PowerPoint presentations. By 2013, he had secured a deal to transfer the technology to the Army, which in Darpa terms was like receiving a buyout offer from Facebook. That year, an agency press release declared that Insight would “dispel the fog of war.”

Darpa tested the platform repeatedly at Fort Irwin National Training Center, a vast mock battlefield in Southern California. In each weeklong exercise, “red teams” of highly trained soldiers would hide out among thousands of actors, like insurgents blending into the civilian population. Insight’s job was to find them. Analysts would run the fusion system 24 hours a day, searching for the red teams in radar and lidar sweeps, drone footage, cell phone and internet data, and encyclopedic intelligence records that, as Cutler put it, “no analyst can possibly read.” The system might, for instance, alert its operators whenever a vehicle from an enemy watchlist entered a certain neighborhood. It could also generate a “normalcy model” of the observed areas so that it could alert analysts to anomalies, like a car driving erratically. (The more complex patterns remain secret; many are still used to identify targets in counterterrorism operations today.)
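
A “normalcy model” can be as plain as a baseline and a threshold. The sketch below, meant only to show the shape of the idea, learns typical speeds for one stretch of road and flags anything far outside them:

```python
# A bare-bones "normalcy model": learn typical speeds from past observations,
# then flag anything far outside that baseline. A real system models many more
# behaviors; the numbers here are invented.
import statistics

history_kph = [38, 42, 40, 37, 41, 39, 43, 40]   # hypothetical past speeds on one street

mean = statistics.mean(history_kph)
stdev = statistics.stdev(history_kph)

def is_anomalous(speed_kph, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from normal."""
    return abs(speed_kph - mean) / stdev > threshold

for speed in (41, 95):
    print(speed, "anomalous" if is_anomalous(speed) else "normal")
```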

By the time of Insight’s final disclosed test, in September 2015, the Army had pivoted the program to what McBurnett called “1980s-style, full-on, armored-brigades-on-armored-brigades kind of action.” I obtained a short video of one of these later iterations of the software from BAE Systems, the prime contractor for Insight. It shows Fort Irwin in “grand chessboard” mode, with an enemy artillery unit moving across the terrain. Each vehicle, tracked relentlessly through multiple data feeds, is marked with a “likely identity” and a detailed tactical life history. In the video, analysts use the software to figure out whether the red teams will come at their forces head-on from the north or attempt a flanking maneuver from the south. As new intelligence streams in, Insight recalculates the relative likelihood of each eventuality. Soon, an alert appears in the corner of the screen: Insight predicts an 82 percent chance of an attack from the north.
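
The running estimate behind a readout like “82 percent” can be approximated with a basic Bayesian update: start with a prior over the two courses of action and fold in each new report as it arrives. The evidence and likelihood numbers below are invented, and the real system’s math is not public:

```python
# A toy version of the likelihood update behind that kind of readout: maintain
# a belief over the two courses of action and apply Bayes' rule to each new
# piece of evidence. The reports and their likelihoods are made up.
belief = {"attack_from_north": 0.5, "attack_from_south": 0.5}

# P(evidence | hypothesis), estimated (here, simply invented) for each report.
evidence_stream = [
    {"attack_from_north": 0.7, "attack_from_south": 0.4},  # armor sighted on the northern road
    {"attack_from_north": 0.8, "attack_from_south": 0.5},  # radio traffic spikes in the north
]

for likelihood in evidence_stream:
    unnormalized = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(unnormalized.values())
    belief = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in belief.items():
    print(hypothesis, round(p, 2))
```

Run against these made-up reports, the belief tilts to roughly three to one in favor of the northern attack, and further reports would keep shifting it one way or the other.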

This video clip is the outside world’s first close look at Insight, and maybe also its last. In 2016, after BAE issued a short press release declaring that it was “on track to deliver” Insight to the military, the program disappeared from view. By that point, both Kaufman and Cutler had returned to the private sector; they now run blue-sky research labs at Google and Microsoft, respectively. Their successors at Darpa declined to be interviewed for this story.

Public disclosures and interviews reveal a patchy story of what happened next. According to Brian Pierce, a former senior Darpa official, the Army may have been forced to put Insight on ice because of a lawsuit that Palantir brought in 2016. The Army wanted a new surveillance platform, built in part on the foundation that Insight had laid, and Palantir protested that its off-the-shelf tools hadn’t received fair consideration. (Three years later, it was awarded the replacement contract.) But elements of the Insight system did find their way to the battlefield. Its vehicle-tracking tool, for instance, was adopted by an Air Force aerial surveillance program in Afghanistan called Blue Devil, which assisted in the capture or killing of at least 1,200 people between 2011 and 2014. Dave Logan, a vice president at BAE who manages its intelligence and surveillance programs, confirmed that Insight lives on in some form: The company recently received a contract from the Air Force Research Laboratory to continue development, he said, “with the goal to market the product to Department of Defense end user communities in the future.”

Insight’s most enduring legacy, Pierce said, is philosophical. He likened it to the agency’s work on driverless vehicles. Although Darpa never succeeded in building a fully autonomous car, he argued, it cleared a path for others by declaring—if not quite proving—that the pipe dream was within reach.

Automated fusion now stands at the center of how the Pentagon plans to wage any future war. “It’s kind of seeped into our psyche,” Michael Kanaan, an intelligence officer who runs a joint Air Force and MIT AI accelerator, told me. He gave credit to Insight’s “trailblazing efforts” for, among other things, inspiring an automated fusion program that he oversaw at the Office of the Director of National Intelligence. Developed for the campaign against ISIS, that system condensed 24 databases into three and collapsed a 170-step investigative checklist for every tip the analysts received into a simple five-minute click-through. Kanaan said that rather than scrambling to respond to new threats as they emerged—bringing the convoy to a halt 500 feet from the IED—his team was able to “piece together the puzzle” days or weeks in advance.

Eventually, the Department of Defense hopes to link every plane, satellite, ship, tank, and soldier into a huge, mostly automated Internet of Wartime Things. Cloud-connected sensors and weapons will correlate among themselves while commanders direct the action on a rich, continuously updated digital chessboard that senior leaders hope will look like Waze. As part of the effort, the Air Force and the Army have earmarked billions of dollars for fusion networks from dozens of defense and technology companies, including Amazon, BAE, and Anduril.

The early results of these new efforts have been striking. In one exercise in late 2019, a Pentagon fusion system found and identified an enemy ship by linking intelligence from several airplanes and a satellite. Then it passed the information to the bridge of a nearby destroyer, where all the commanding officer had to do was decide whether to launch an attack. A more recent Army experiment condensed what was traditionally a manual, 20-minute process for targeting decisions into a largely automated cycle that took just 20 seconds.

Ten years after the telephone call that kicked Insight into gear, developments like these spark a sense of validation among its creators. But their pride is tinged with something a little darker. “The vision works,” Kaufman told me over the phone. “Whether you want it to work in a civilian situation or not is”—he paused—“a debatable question.” Cutler was firmer. “I would not undertake something like Insight in a civilian context,” he said.

The first time I was given the keys to a correlation engine, I was standing not in Genetec’s flashy Experience Center but in a grubby Irish pub in midtown Manhattan. On the edges of a social gathering, an NYPD official pulled me aside, said he had something to show me, and took an iPhone out of his pocket.

The phone, he explained, was loaded with a mobile version of the Domain Awareness System, the NYPD’s multi-intelligence fusion network. The network was launched in 2009 as a relatively modest attempt by the NYPD’s Counterterrorism Bureau to process CCTV footage from around Ground Zero in a central command hub. Microsoft got the main contract, and Genetec and other companies chipped in. Over the years, the system’s mandate expanded from counterterrorism to general police work, and the NYPD extended the dragnet from Lower Manhattan to all five boroughs. (In a profit-sharing agreement with the city, Microsoft has also sold the system to several US federal security agencies, along with the governments of Bulgaria, Rio de Janeiro, and Singapore, among others.) The software, which draws on many of the same sources as Citigraf, is available to all 36,000 officers on the force.

The NYPD official showed me how he could pull up any city resident’s rap sheet, lists of their known associates, cases in which they were named as a victim of a crime or as a witness, and, if they had a car, a heatmap of where they tended to drive and a full history of their parking violations. Then he handed me the phone. Go ahead, he said; search a name.

A flurry of people came to mind: Friends. Lovers. Enemies. In the end, I chose the victim of a shooting I’d witnessed in Brooklyn a couple of years earlier. He popped right up, along with what felt like more personal information than I, or even perhaps a curious officer, had any right to know without a court order. Feeling a little dizzy, I gave the phone back.

A couple of months later, I met up with Christian Schnedler, a hulking computer engineer with wraparound Oakleys and a tribal shoulder tattoo that peeked out from one arm of his tight khaki tee. Schnedler had his first encounter with the Domain Awareness System in 2011, as a Genetec employee, and he’d found the experience dizzying too. Far from being spooked, though, he thought the whole concept of fusion was “genius.” The following year, he joined IBM, another NYPD contractor, and was posted to Dubai to help drive sales of the company’s “city management” products in the Middle East and North Africa. He set out on the job thoroughly convinced of the technology’s unmitigated potential for good, even in a civilian context. But that soon changed.

One of Schnedler’s first sales meetings was with the Egyptian Ministry of the Interior, not long after the Muslim Brotherhood swept to power in the wake of the Arab Spring. In Schnedler’s retelling, ministry officials said that they wanted software to pick out the networks of “terrorists,” “protesters,” and “agitators” threatening the country’s newfound peace.

Schnedler knew that this was technically feasible. But he also imagined that “protesters,” “terrorists,” and “agitators” in this context probably also referred to various political and religious minority groups, including Egypt’s long marginalized Coptic community. A devout Christian, Schnedler realized that the same technology that had so thoroughly persuaded him in New York could be turned into a sharp instrument of algorithmic authoritarianism, just as useful for rounding up networks of congregants as it was for mapping criminal organizations. He was relieved when the Egyptian government ultimately failed to follow through with a formal solicitation.

The following year, Schnedler was invited to Turkey, where police at Ankara’s central surveillance center proudly showed him a system that seemed directly inspired by New York’s. In a back room, a chain-smoking senior officer asked Schnedler whether he could build software that would identify masked protesters by correlating the tattoos on their forearms—which they’d often expose momentarily when throwing rocks—with a database of such markings that his government had been assembling. Again, Schnedler knew that this was technically feasible. Again, he worried about how it might be used against Turkey’s Christian population. (The officer didn’t follow up in the remaining time that Schnedler was at IBM.)

The final straw came for Schnedler in the spring of 2015, when he was invited back to Cairo. A new government greeted him this time—the Muslim Brotherhood had been ousted in 2013—and a new set of Interior Ministry officials wanted to stamp out threats to the country’s security. Threats, Schnedler noted with a tinge of amusement, that would now also include members of the very party that had tried to enlist his services before. (To his knowledge, no deal materialized.) He returned to the United States that fall having learned an important lesson: “To the extent that you do not trust your government, you do not want your government to build these systems.”

Such encounters are common in the fusion industry. Genetec’s Gaccione says he often has to tell prospective customers “that’s not what we do.” Particularly with the computing power and analytics tools readily available through the cloud, fusion could enable “a lot of crazy stuff,” he said. One government, which he refused to name, issued a solicitation for a tool that would mesh facial recognition cameras and mobile phone networks to track citizens wherever they went. “I didn’t get through like eight pages before we threw it out,” he said.

In the United States, there are no specific national rules governing fusion technology. Absent a legal challenge to test its constitutional integrity, there’s little to say that you can’t blend data sets together, even if doing so might generate information that investigators would otherwise have needed a court order to obtain. In the absence of stricter regulations, Genetec has developed a series of safeguards for its software. One feature, which is optional, automatically blurs all faces in CCTV footage. And if an analyst wants to see where a vehicle has been, he needs to enter a case number to activate the search; that way, he can’t snoop on his girlfriend.
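
That case-number requirement amounts to a guardrail in code: no open case, no query, and every lookup leaves a trace. The sketch below is a generic illustration of the pattern, not Genetec’s actual interface; the function name, the case-number format, and the audit log are all made up:

```python
# A generic illustration of the guardrail described above: the vehicle-history
# lookup refuses to run without an attached case number, and every query is
# logged for audit. Everything here is hypothetical.
import re
from datetime import datetime

AUDIT_LOG = []

def vehicle_history(plate, case_number=None):
    """Return plate-reader history only when tied to an open case."""
    if not case_number or not re.fullmatch(r"\d{4}-\d{6}", case_number):
        raise PermissionError("A valid case number is required for this search.")
    AUDIT_LOG.append({"plate": plate, "case": case_number, "at": datetime.now()})
    return f"(plate reads for {plate} under case {case_number})"

print(vehicle_history("ABC1234", "2021-004517"))    # permitted, and logged
try:
    vehicle_history("ABC1234")                      # blocked: no case attached
except PermissionError as err:
    print("Denied:", err)
```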

But the uncomfortable truth is that fusion’s more dystopian incarnations are already out in the world. Dahlia Peterson, a research analyst at Georgetown’s Center for Security and Emerging Technology, told me that fusion architectures are central to the Chinese government’s campaign against dissidents and minority citizens, particularly the Uighur Muslim group. One such system, the Integrated Joint Operations Platform, fuses together facial recognition scans from CCTV cameras; financial, medical, and criminal records; hardware identifiers from smartphones and computers; even mandatory questionnaires that ask residents, among other things, how many times they pray each day. According to reporting by The New York Times, a cloud computing center in Xinjiang, powered in part by chips from Nvidia, can comb through hundreds of millions of photos and reports from the area’s many checkpoints while applying real-time analytics to up to 1,000 CCTV cameras simultaneously. Authorities use the life histories generated by these systems to determine who is “trustworthy.” Those who aren’t often risk being sent to prison or a reeducation camp.

In modern life, we’re rarely not in the crosshairs of some spying device or other. We rush by a license plate reader on our way to work, a few blocks from a burglary that’s being patternized. As we walk from the parking lot to the gym, or the mosque, we’re picked up on a dozen CCTVs. We attend a protest under the watchful eye of a drone. Our smartphones log our every move, our every click, and our every like. But no single one of these machines, when used in isolation, is omniscient. The fact that intelligence can be difficult and tedious to correlate was perhaps the last natural rampart standing between us and total surveillance. The little privacy we have left exists in the spaces between each data point.

Fusion technology eviscerates those spaces. With the click of an “INVESTIGATE” button, our digital footprints, once scattered, become a single uninterrupted life history, leaving not only our enemies, but also our friends and our lovers, with nowhere to hide.

This story has been updated to clarify Dave Logan’s VP role at BAE Systems.

