Facebook Ad Services Let Anyone Target US Military Personnel

Researchers warn that an advertising platform with categories like “Army” and “United States Air Force Security Forces” could be abused.
US Army soldier using his smartphone. Photograph: Maja Hitij/Getty Images

The spread of misinformation on social media platforms has fueled division, stoked violence, and reshaped geopolitics in recent years. Targeted ads have become a major battleground, with bad actors strategically distributing misleading information or ensnaring unassuming users in scams. Facebook has worked to eliminate or redefine certain targeting categories as part of a broader effort to address these threats. But despite warnings from researchers, its ad system still lets anyone target a massive array of populations and groups—including campaigns directed at United States military personnel. Currently, categories for major branches include “Army,” “Air Force,” and “National Guard,” along with much narrower categories like “United States Air Force Security Forces.”

At first blush it may seem innocuous that you can target ads at these groups as easily as you can most other organizations. But independent security researcher Andrea Downing says the stakes are much higher should active duty members of the US military—many of whom would likely get caught up in broader Facebook targeting of this sort—face misinformation online that could impact their understanding of world events or expose them to scams. While Downing hasn't detected such malicious campaigns herself, the interplay between ads and misinformation on Facebook is consistently murky. 

In the wake of the Capitol riots, for example, researchers at the Tech Transparency Project found that Facebook's systems had shown ads for military equipment like body armor and gun holsters alongside updates on the insurrection and content that promoted election misinformation. Even when lawmakers called on Facebook to halt military equipment ads, and the company agreed to a temporary ban, some ads still seemed to be slipping by.

"Through the ad targeting system on Facebook, I could craft ads or send direct messages to current and former military through large umbrella categories or more granular permutations," says Downing. "A nation state actor could abuse this to run influence operations against US military members at a large scale or in a more targeted way."

There are roughly 1.3 million active duty United States military members and 18 million veterans living in the US, who together represent as much as $1.2 trillion in buying power, according to the marketing firm SheerID. Facebook gives advertisers the option to run targeted ads based on the job titles and employers users list, as well as “interests,” which it infers from user activity like clicking a relevant ad or liking a page. In both cases, that includes military branches. For job titles, the audience would include retired personnel who still reference that experience in their profiles, as well as active duty members who have filled out that field. In addition to regular targeting, Facebook offers tools for advertisers to reach out to users through its Messenger chat platform.

Many tech giants make their money from ads and offer similar features to facilitate targeted marketing. Google's and Twitter's platforms do not offer granular military categories, though.

"Let's say you have younger service members, regardless of the branch of the military, and they're deployed away from their family and looking for some sort of kinship. Facebook offers that," says Bill Hagestad, an independent security engineer at Red Dragon 1949 and a retired Marine Corps lieutenant colonel. "So targeted ads could allow malevolent individuals to use the lingo, embed themselves favorably, and manipulate service members regardless of age or rank. And this could compromise operational security, which is as important as the safety of those being manipulated themselves." 

Downing, who is also a cofounder of the social media health support nonprofit The Light Collective, says she first attempted to notify Facebook about her concerns in December 2019 through informal connections. Within days, she says, Facebook had removed many of the military ad-targeting groups she had highlighted. The issue seemed resolved. At the end of August 2020, though, she noticed that many of these targeting groups had reemerged, even after Facebook pruned its military categories in mid-August. “We’ve combined several options representing military bases or regiments, because the specific interests were rarely used, and instead, advertisers can still reach an audience with an interest in the military,” Facebook wrote at the time.

On September 24, Downing submitted a vulnerability report about her findings through Facebook's bug bounty disclosure portal. Six weeks later, on November 5, a Facebook representative replied to say that the company does not view the finding as a vulnerability.

Facebook told WIRED that it has no record of Downing's original efforts in December 2019 to communicate about her research. A spokesperson added that the company continually reviews the targeting options it offers and assesses how they're being used, and that advertisers cannot specifically target users based on active-duty status. Additionally, all ads on Facebook must comply with the company's advertising policies, which forbid manipulation and abuse. The Department of Defense also offers extensive social media guidelines and recommendations in an effort to keep military members and operations safe.

“Demographic targeting, such as job title and employers, is based on the information people opt to provide in their profile,” the Facebook spokesperson told WIRED. “People can also choose whether this profile information can be used to show them ads based on these categories through our Ad Preferences.”

Downing has seen the cycle repeat multiple times now: Facebook's military ad-targeting categories come and go. Some return, and others are replaced with different subgroups. WIRED has corroborated Downing's observations about these inconsistencies. Facebook did not offer an explanation for the fluctuations. After a 2017 ProPublica report about anti-Semitic ad-targeting categories, Facebook temporarily removed the ability to target based on categories related to job titles and education. The company revised and restored these mechanisms in 2018.

“There's no incentive for Facebook to fundamentally change the design of ad targeting,” Downing says. “And as we've seen it's 'out of scope' in the cybersecurity disclosure channels that can prevent such a problem.”

More than a decade of detection work on Facebook's part has made it harder to run blatantly malicious or scammy ads. But the company has struggled to get a handle on misinformation in ads and its impact. It took until the end of September, for example, for Facebook to finally ban ads related to QAnon and other militarized campaigns. Even so, QAnon ad content was still showing up weeks later. Ad fraud and even malware distribution campaigns also occasionally slip past Facebook's defenses.

Malicious advertising schemes broadly are an active threat across the web. In recent guidance, the United States Cybersecurity and Infrastructure Security Agency warned that malvertising, as it's often known, uses “malicious or hijacked website advertisements to spread malware and is a significant vector for exploitation.” CISA added that, “Adversaries can use carefully crafted and tailored malicious ads as part of a targeted campaign against a specific victim, not just as broad-spectrum attacks.”

These types of tailored campaigns are what Downing had in mind when she also submitted her Facebook findings to the CERT Coordination Center at Carnegie Mellon University in late September. CERT is an organization that helps researchers catalog vulnerabilities and coordinate their public disclosure.

“On the one hand, targeted ads and profiling users is how Facebook works; there’s no surprise about that,” says Art Manion, a vulnerability analysis technical manager for CERT. “Depending on the nature of the target there could be greater or lesser security and privacy concerns. If pretty specific groups within the US military could be targeted with ads that brings up the question of influence campaigns, misinformation, or potentially someone could try to slip in a malicious ad. Facebook doesn’t allow or support those things, but it is conceivable.”

Nonetheless, Manion says that Downing's findings are “out of scope” for the types of vulnerabilities CERT tracks—typically technical flaws in software that can be patched or otherwise remediated by vendors. This means that CERT didn't assign the research a Common Vulnerabilities and Exposures (CVE) identifier to keep track of the issue. He adds, though, that the research raises important questions about gaps in the security community's vulnerability tracking mechanisms.

Recently, Manion and his colleagues at CERT have been investigating the security community's collective inability to assess and address potential exposures in machine learning and artificial intelligence algorithms and other profiling systems. The power and complexity of these platforms, like Facebook's ad-targeting service, make it difficult for regular people and security experts alike to anticipate all the potential interactions and ramifications, even if users understand conceptually that some of their information will be used for content targeting.

“There’s not going to be a patch for [Downing's findings], but it doesn’t mean it’s not a problem of some kind,” Manion says. “Do the existing models for security vulnerability resolution work here? I would suggest they don’t and we need to figure out some new approaches.”
