One Data Scientist’s Quest to Quash Misinformation

Sara-Jayne Terp uses the tools of cybersecurity to track false claims like they’re malware. Her goal: Stop dangerous lies from hacking our beliefs.
Portrait of Sara-Jayne Terp. Photograph: Jovelle Tamayo

One day in early June 2018, Sara-Jayne Terp, a British data scientist, flew from her home in Oregon to Tampa, Florida, to take part in an exercise that the US military was hosting. On the anniversary of D-Day, the US Special Operations Command was gathering a bunch of experts and soldiers for a thought experiment: If the Normandy invasion were to happen today, what would it look like? The 1944 operation was successful in large part because the Allies had spent almost a year planting fake information, convincing the Germans they were building up troops in places they weren't, broadcasting sham radio transmissions, even staging dummy tanks at key locations. Now, given today's tools, how would you deceive the enemy?

Terp spent the day in Florida brainstorming how to fool a modern foe, though she has never seen the results. “I think they instantly classified the report,” she says. But she wound up at dinner with Pablo Breuer—the Navy commander who had invited her—and Marc Rogers, a cybersecurity expert. They started talking about modern deception and, in particular, a new danger: campaigns that use ordinary people to spread false information through social media. The 2016 election had shown that foreign countries had playbooks for this kind of operation. But in the US, there wasn't much of a response—or defense.

“We got tired of admiring the problem,” Breuer says. “Everybody was looking at it. Nobody was doing anything.”

They discussed creating their own playbook for tracking and stopping misinformation. If someone launched a campaign, they wanted to know how it worked. If people worldwide started reciting the same strange theory, they wanted a sense of who was behind it. As hackers, they were used to taking things apart to see how they worked—using artifacts lurking in code to trace malware back to a Russian crime syndicate, say, or reverse engineering a denial-of-service attack to find a way to defend against it. Misinformation, they realized, could be treated the same way: as a cybersecurity problem.

The trio left Tampa convinced there had to be a way of analyzing misinformation campaigns so researchers could understand how they worked and counter them. Not long after, Terp helped pull together an international group of security experts, academics, journalists, and government researchers to work on what she called “misinfosec.”

Terp knew, of course, there's one key difference between malware and influence campaigns. A virus propagates through the vulnerable endpoints and nodes of a computer network. But with misinfo, those nodes aren't machines; they're humans. “Beliefs can be hacked,” Terp says. If you want to guard against an attack, she thought, you have to identify the weaknesses in the network. In this case, that network was the people of the United States.

So when Breuer invited Terp back to Tampa to hash out their idea six months later, she decided not to fly. On the last day of 2018, she packed up her red Hyundai for a few weeks on the road. She stopped by a New Year's Eve party in Portland to say goodbye to friends. A storm was coming, so she left well before midnight to make it over the mountains east of the city, skidding through the pass as highway workers closed the roads behind her.

Thus began an odyssey that started with a 3,000-mile drive to Tampa but didn't stop there. Terp spent almost nine months on the road—roving from Indianapolis to San Francisco to Atlanta to Seattle—developing a playbook for tackling misinformation and promoting it to colleagues in 47 states. Along the way, she also kept her eye out for vulnerabilities in America's human network.

Terp is a shy but warm middle-aged woman, with hair that she likes to change up—now gray and cropped short, now a blond bob, now an auburn-lavender hue. She once gave a presentation called “An Introvert's Guide to Presentations” at a hacker convention, where she recommended bringing a teddy bear. She likes finishing half-completed cross-stitches she buys at second-hand stores. She is also an expert at making the invisible visible and detecting submerged threats.

Terp began her career working in defense research for the British government. Her first gig was developing algorithms that could combine sonar readings with oceanographic data and human intelligence to locate submarines. “It was big data before big data was cool,” she says. She soon became interested in how data shapes beliefs—and how it can be used to manipulate them. This was during the Cold War, and maintaining the upper hand meant knowing how the enemy would try to fool you.

After the Cold War ended, Terp shifted her focus to disaster response; she became a crisis mapper, collecting and synthesizing data from on-the-ground sources to create a coherent picture of what was really happening.

It was during disasters like the Haiti earthquake and the BP oil spill in 2010, when Terp's job included amassing real-time data from social media, that she started to notice what seemed to be intentionally false information engineered to sow confusion in an already chaotic situation. One article, citing Russian scientists, claimed the BP spill would collapse the ocean floor and cause a tsunami. Initially, Terp considered them isolated incidents, garbage clogging her data streams. But as the 2016 election drew near, it became clear to her—and many others—that misinformation campaigns were being run and coordinated by sophisticated adversaries.

As Terp crisscrossed the country in 2019, it was a little like she was crisis-mapping the US. She'd stop to people-watch in coffee shops. She struck up conversations over breakfast at Super 8. She wanted to get a feel for the communities people belonged to, how they saw themselves. What were they thinking? How were they talking to each other? She gathered her impressions slowly.

In Tampa, Terp and Breuer swiftly got down to plotting their defense against misinfo. They worked from the premise that small clues—like particular fonts or misspellings in viral posts, or the pattern of Twitter profiles shouting the loudest—can expose the origin, scope, and purpose of a campaign. These “artifacts,” as Terp calls them, are bread crumbs left in the wake of an attack. The most effective approach, they figured, would be to organize a way for the security world to trace those bread-crumb trails.

Because cybercriminals tend to cobble together their exploits from a common inventory of techniques, many cybersecurity researchers use an online database called the ATT&CK Framework to analyze intrusions—it's like a living catalog of all the forms of mayhem in circulation among hackers. Terp and Breuer wanted to build the same kind of library, but for misinformation.

Terp stayed in Tampa for a week before hitting the road again, but she kept working as she traveled. To seed their database, the misinfosec team dissected earlier campaigns, from 2015's Jade Helm 15 military training exercise—which on social media was twisted into an attempt to impose martial law in Texas—to the Russia-linked Blacktivist accounts that stoked racial division before the 2016 election. They were trying to parse how each campaign worked, cataloging artifacts and identifying strategies that showed up again and again. Did a retweet from an influencer give a message legitimacy and reach? Was a hashtag borrowed from another campaign in hopes of poaching followers?

Once they could recognize patterns, they figured, they would also see choke points. In cyberwarfare, there's a concept called a kill chain, adapted from the military. Map the phases of an attack, Breuer says, and you can anticipate what they're going to do: “If I can somehow interrupt that chain, if I can break a link somewhere, the attack fails.”
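
The idea is simple enough to sketch in a few lines of code: model a campaign as an ordered list of phases and check whether any one of them has been interrupted. The phase names below are assumptions made for illustration, not a real kill-chain model.

```python
# A toy sketch of the kill-chain idea: model a campaign as an ordered list of
# phases; interrupting any one phase breaks the chain and the attack fails.
# The phase names below are assumptions for illustration, not a real model.
PHASES = ["plan", "seed accounts", "create content", "amplify", "persist"]

def chain_broken(disrupted_phases: set) -> bool:
    """Return True if any phase in the chain has been interrupted."""
    return any(phase in disrupted_phases for phase in PHASES)

print(chain_broken({"amplify"}))  # True: breaking one link stops the campaign
print(chain_broken(set()))        # False: the chain is intact
```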

The misinfosec group eventually developed a structure for cataloging misinformation techniques, based on the ATT&CK Framework. In keeping with their field's tolerance for acronyms, they called it AMITT (Adversarial Misinformation and Influence Tactics and Techniques). They've identified more than 60 techniques so far, mapping them onto the phases of an attack. Technique 49 is flooding, using bots or trolls to overtake a conversation by posting so much material it drowns out other ideas. Technique 18 is paid targeted ads. Technique 54 is amplification by Twitter bots. But the database is just getting started.
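
For readers who think in code, a catalog like that can be sketched in miniature: technique IDs mapped to names and the attack phases they belong to. The three techniques below are the ones named above; the phase labels and the data structure itself are assumptions made for illustration, not AMITT's actual schema.

```python
# A miniature sketch of a technique catalog in the spirit of AMITT.
# The technique numbers and names come from the article; the phase labels
# and this data structure are illustrative assumptions, not the real schema.
from dataclasses import dataclass

@dataclass
class Technique:
    tid: int    # numeric technique ID
    name: str   # short description of the technique
    phase: str  # phase of the attack it belongs to (assumed label)

TECHNIQUES = {
    18: Technique(18, "Paid targeted ads", "amplification"),
    49: Technique(49, "Flooding a conversation with bot or troll posts", "amplification"),
    54: Technique(54, "Amplification by Twitter bots", "amplification"),
}

# Look up a technique observed in the wild by its ID.
print(TECHNIQUES[49].name)
```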

Last October, the team integrated AMITT into an international, open source threat-sharing platform. That meant anyone, anywhere, could add a misinformation campaign and, with a few clicks, specify which tactics, techniques, and procedures were at play. Terp and Breuer adopted the term “cognitive security” to describe the work of preventing malefactors from hacking people's beliefs—work they hope the world's cybersecurity teams and threat researchers will take on. They foresee burgeoning demand for this sort of effort, whether it's managing a brand's reputation, guarding against market manipulation, or protecting a platform from legal risk.
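
What one of those shared entries might look like once a campaign has been tagged is easy to imagine in a few lines. Everything in the sketch below, from the field names to the example values, is hypothetical; the actual platform defines its own schema.

```python
# A hedged sketch of what a shared campaign record might look like once it has
# been tagged with AMITT technique IDs. Field names, values, and structure are
# hypothetical, for illustration only; the real platform has its own schema.
import json

campaign_entry = {
    "name": "example-influence-campaign",                     # hypothetical name
    "first_seen": "2020-02-01",                               # placeholder date
    "artifacts": ["reused hashtag", "sock-puppet accounts"],  # observed bread crumbs
    "amitt_techniques": [49, 54],                             # flooding, Twitter-bot amplification
}

# Serialize for sharing with other researchers on the platform.
print(json.dumps(campaign_entry, indent=2))
```

Once enough records like this accumulate, analysts can search for campaigns that reuse the same techniques, which is exactly the pattern-matching the kill-chain approach depends on.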

As Terp drove, she listened to a lot of talk radio. It told one long story of a nation in crisis—of a liberal plot to ruin America and of outsiders intent on destroying a way of life. Online, people on the left, too, were constantly agitated by existential threats.

This kind of fear and division, Terp thought, makes people perfect targets for misinformation. The irony is that the folks who hack into those fears and beliefs are typically hostile outsiders themselves. Purveyors of misinformation always have a goal, whether it's to destabilize a political system or just to make money. But the people on the receiving end usually don't see the big picture. They just see #5G trending or a friend's Pizzagate posts. Or, as 2020 got off the ground, links to sensational videos about a new virus coming out of China.

This February, Terp was attending a hacker convention in DC when she started feeling terrible. She limped back to an apartment she'd rented in Bellingham, north of Seattle. A doctor there told her she had an unusual pneumonia that had been moving through the area. Weeks later, Seattle became the first coronavirus hot spot in the US—and soon the Covid pandemic began to run in parallel with what people described as an “infodemic,” a tidal wave of false information spreading along with the disease.

Around the same time Terp fell sick, Breuer's parents sent him a slick Facebook video claiming that the novel virus was a US-made bioweapon. His parents are from Argentina and had received the clip from worried friends back home. The video presented a chance to put AMITT through its paces, so Breuer began cataloging artifacts. The narration was in Castilian Spanish. At one point the camera panned over some patent numbers the narrator claimed were for virus mutations. Breuer looked up the patents; they didn't exist. When he traced the video's path, he found it had been shared by sock-puppet accounts on Facebook. He called friends across Latin America to ask if they'd seen the video and realized it had been making its way through Mexico and Guatemala two weeks before showing up in Argentina. “It was kind of like tracking a virus,” Breuer says.

As Breuer watched the video, he recognized several misinformation techniques from the AMITT database. “Create fake social media profiles” is technique 7. The video used fake experts to seem more legitimate (technique 9). He thought it might be planting narratives for other misinformation campaigns (technique 44: seeding distortion).

As with malware, tracing misinformation back to its source isn't an exact science. The Castilian Spanish seemed designed to give the video an air of authority in Latin America. Its high production value pointed to significant financial backing. The fact that the video first appeared in Mexico and Guatemala, and the timing of its release—February, right before migrant workers leave for spring planting in the US—suggested that its goal might be undermining American food security. “They targeted the US by targeting somebody else. It's somebody who really understood geopolitical consequences,” Breuer says. This all led him to believe it was a professional job, likely Russian.

Of course, he might be wrong. But by analyzing a video like this, and putting it into the database, Breuer hopes the next time there's a polished video in Castilian Spanish making its way through South America and relying on sock puppets, law enforcement and researchers can see just how it spread the last time, recognize the pattern, and inoculate against it sooner.

A month or so into her recovery, Terp got a message from Marc Rogers, with whom she'd had dinner after the D-Day event. Rogers had helped organize an international group of volunteer researchers who were working to protect hospitals from cyberattacks and virus-related scams. They'd been seeing a flood of misinformation like the video Breuer analyzed, and Rogers wanted to know if Terp would run a team that would track campaigns exploiting Covid. She signed on.

On a Tuesday morning in August, Terp was at home trying to dissect the latest misinformation. A video posted the previous day claimed that Covid-19 was a hoax perpetrated by the World Health Organization. It had already racked up nearly 150,000 views. She also got word about a pair of Swiss websites claiming that Anthony Fauci doubted a virus vaccine would be successful and that doctors thought masks were useless. Her team was searching for other URLs linked to the same host domain, identifying ad tags used on the sites to trace funding and cataloging particular phrases and narratives—like one claiming German authorities wanted Covid-infected kids to be moved to internment camps—to pinpoint where else they appeared. All of this will be entered into the database, adding to the arsenal of information for battling misinformation. She's optimistic about the project's momentum: The more it's used, the more effective AMITT will be, Terp says, adding that her group is working with NATO, the EU, and the Department of Homeland Security to test-drive the system.
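
One small piece of that workflow can be sketched in code: group sites that share the same advertising tag so that possible funding links stand out. The page contents, the tag format, and the regular expression below are assumptions made for illustration, not the team's actual tooling.

```python
# A minimal sketch of one step in that workflow: grouping sites that share the
# same advertising tag, so possible funding links stand out. The page contents,
# tag format, and regular expression are assumptions, not the team's tooling.
import re
from collections import defaultdict

# Hypothetical page sources, already fetched elsewhere.
pages = {
    "https://example-site-a.ch/article1": '<script data-ad-client="ca-pub-1111"></script>',
    "https://example-site-b.ch/article7": '<script data-ad-client="ca-pub-1111"></script>',
    "https://unrelated.example.org/post": '<script data-ad-client="ca-pub-9999"></script>',
}

AD_TAG = re.compile(r'data-ad-client="([^"]+)"')

sites_by_tag = defaultdict(list)
for url, html in pages.items():
    for tag in AD_TAG.findall(html):
        sites_by_tag[tag].append(url)

# Sites sharing an ad tag are candidates for shared funding or operation.
for tag, urls in sites_by_tag.items():
    if len(urls) > 1:
        print(tag, "->", urls)
```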

She's also cautiously optimistic about the strength of the network that's under assault. On her road trip, Terp says, the more she drove, the more hopeful she became. People were proud of their cities, loved their communities. She saw that when people have something concrete to fight for, they are less prone to end up in phantom battles against illusory enemies. “You have to involve people in their own solution,” she says. By creating a world where misinformation makes more sense, Terp hopes more people will be able to reject it.

During the George Floyd protests, Terp's team was tracking another rumor: A meme kept resurfacing, in various forms, about “busloads of antifa” being driven to protests in small towns. One of the things she saw was people in small, conservative communities debunking that idea. “Somebody went, ‘Hang on, this doesn't seem right,’” she says. Those people understood, on some level, that their communities were being hacked, and that they needed defending.


SONNER KEHRT (@etskehrt) is a freelance writer in California. This is her first story for WIRED.
