A Spy Wants to Connect With You on LinkedIn

Russia, North Korea, Iran, and China have been caught using fake profiles to gather information. But the platform’s tools to weed them out only go so far.
ILLUSTRATION: ANJALI NAIR

There is nothing immediately suspicious about Camille Lons’ LinkedIn page. The politics and security researcher’s profile photo is of her giving a talk. Her professional network is made up of almost 400 people; she has a detailed career history and biography. Lons has also shared a link to a recent podcast appearance—“always enjoying these conversations”—and liked posts from diplomats across the Middle East.

So when Lons got in touch with freelance journalist Anahita Saymidinova last fall, her offer of work appeared genuine. They swapped messages on LinkedIn before Lons asked to share more details of a project she was working on via email. “I just shoot an email to your inbox,” she wrote.

What Saymidinova didn’t know at the time was that the person messaging her wasn’t Lons at all. Saymidinova, who does work for Iran International, a Persian-language news outlet that has been harassed and threatened by Iranian government officials, was being targeted by a state-backed actor. The account was an imposter that researchers have since linked to Iranian hacking group Charming Kitten. (The real Camille Lons is a politics and security researcher, and a LinkedIn profile with verified contact details has existed since 2014. The real Lons did not respond to WIRED’s requests for comment.)

When the fake account emailed Saymidinova, her suspicions were raised by a PDF that said the US State Department had provided $500,000 to fund a research project. “When I saw the budget, it was so unrealistic,” Saymidinova says.

But the attackers were persistent, asking the journalist to join a Zoom call to discuss the proposal further and sending some links to review. Saymidinova, now on high alert, says she told an Iran International IT staff member about the approach and stopped replying. “It was very clear that they wanted to hack my computer,” she says. Amin Sabeti, the founder of Certfa Lab, a security organization that researches threats from Iran, analyzed the fake profile’s behavior and correspondence with Saymidinova and says the incident closely mimics other approaches on LinkedIn from Charming Kitten.

The Lons incident, which has not been previously reported, is at the murkiest end of LinkedIn’s problem with fake accounts. Sophisticated state-backed groups from Iran, North Korea, Russia, and China regularly leverage LinkedIn to connect with targets in an attempt to steal information through phishing scams or by using malware. The episode highlights LinkedIn’s ongoing battle against “inauthentic behavior,” which includes everything from irritating spam to shady espionage.

Missing Links

LinkedIn is an immensely valuable tool for research, networking, and finding work. But the amount of personal information people share on LinkedIn—from location and languages spoken to work history and professional connections—makes it ideal for state-sponsored espionage and weird marketing schemes. False accounts are often used to hawk cryptocurrency, trick people into reshipping schemes, and steal identities.  

Sabeti, who’s been analyzing Charming Kitten profiles on LinkedIn since 2019, says the group has a clear strategy for the platform. “Before they initiate conversation, they know who they are contacting, they know the full details,” Sabeti says. In one instance, the attackers got as far as hosting a Zoom call with someone they were targeting and used static pictures of the scientist they were impersonating.

The fake Lons LinkedIn profile, which was created in May 2022, listed the real Lons’ correct work and education histories and used the same image from her real Twitter and LinkedIn accounts. Much of the biography text on the fake page had been copied from profiles of the real Lons as well. Sabeti says the group ultimately wants to gain access to people’s Gmail or Twitter accounts to gather private information. “They can collect intelligence,” Sabeti says. “And then they use it for other targets.” 

The UK government said in May 2022 that “foreign spies and other malicious actors” had approached 10,000 people on LinkedIn or Facebook over 12 months. One person acting on behalf of China, according to court documents, found that the algorithm of one “professional networking website” was “relentless” in suggesting potential new targets to approach. Often these approaches start on LinkedIn but move to WhatsApp or email, where it may be easier to send phishing links or malware.

In one previously unreported example, a fake account connected to North Korea’s Lazarus hacking group pretended to be a recruiter at Meta. The scammers started by asking the target how their weekend was before inviting them to complete a programming challenge to continue the hiring process, says Peter Kalnai, the senior malware researcher at security firm ESET who discovered the account. But the programming challenge was a scam designed to deploy malware to the target’s computer, Kalnai says. The LinkedIn messages sent by the scammers didn’t contain many grammatical errors or other typos, he says, which made the attack more difficult to catch. “Those communications were convincing. No red flags in the messages.”

It’s likely that scam and spam accounts are much more common on LinkedIn than those connected to any nation or government-backed groups. In September last year, security reporter Brian Krebs found a flood of fake chief information security officers on the platform and thousands of false accounts linked to legitimate companies. Following the reporting, Apple and Amazon’s profile pages were purged of hundreds of thousands of fake accounts. But due to LinkedIn’s privacy settings, which make certain profiles inaccessible to users who don’t share connections, it’s difficult to gauge the scope of the problem across the platform. 

The picture gets clearer at an individual company level. An analysis of WIRED’s company profile in January showed 577 people listing WIRED as their current employer—a figure well above the actual number of staff. Several of the accounts appeared to use profile images generated by AI, and 88 profiles claimed to be based in India. (WIRED does not have an India office, although its parent company, Condé Nast, does.) One account, listed as WIRED’s “co-owner,” used the name of a senior member of WIRED’s editorial staff and was advertising a suspicious financial scheme. 

In late February, soon after we told LinkedIn about suspicious accounts linked to WIRED, approximately 250 accounts were removed from WIRED’s page. The total employee count dropped to 225, with 15 people based in India—more in line with the real number of employees. The purpose of these removed accounts remains a mystery.

“If people were using fake accounts to impersonate WIRED journalists, that would be a major issue. In the disinformation space, we have seen propagandists pretend to be journalists to gain credibility with their target audiences,” says Josh Goldstein, a research fellow with the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology. “But the accounts you shared with me don’t seem to be of that type.” 

Without more information, Goldstein says, it’s impossible to know what the fake accounts linked to WIRED may have been up to. Oscar Rodriguez, LinkedIn’s vice president in charge of trust, privacy, and equity, says the company does not go into detail about why it removes specific accounts. But he says many of the accounts linked to WIRED were dormant. 

Fighting Fakes

In October 2022, LinkedIn introduced several features meant to clamp down on fake and scam profiles. These included tools to detect AI-generated profile photos and filters that flag messages as potential scams. LinkedIn also rolled out an “About” section for individual profiles that shows when an account was created and whether the account has been verified with a work phone number or email address.

In its most recent transparency report, covering January to June 2022, LinkedIn said that 95.3 percent of the fake accounts it discovered were blocked by “automated defenses,” including 16.4 million that were blocked at the time of registration. LinkedIn’s Rodriguez says the company has identified a number of signs it looks for when hunting fake accounts. For instance, commenting or leaving messages with superhuman speed—a potential sign of automation—might prompt LinkedIn to ask the account holder for a state-issued ID and to make the account inaccessible to other users.

Similarly, when an account is being created, a mismatch between its IP address and listed location wouldn’t automatically be a trigger—someone could be traveling or using a VPN—but it might be a “yellow flag,” Rodriguez says. If the account shares other characteristics with previously removed accounts from a particular region or set of devices, he adds, that might be a clearer signal that the account is fraudulent.
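Rodriguez’s description amounts to a layered, rule-based approach: weak signals are weighted and combined, and only the accumulated score triggers an intervention such as an ID check. The sketch below is a hypothetical illustration of that idea, not LinkedIn’s actual system; the signal names, weights, and thresholds are all assumptions made for the sake of the example.

```python
# Hypothetical sketch only: LinkedIn has not published its detection logic.
# Signal names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    messages_per_minute: float     # observed outbound message rate
    ip_country: str                # country inferred from the signup IP address
    profile_country: str           # location listed on the profile
    matches_removed_cluster: bool  # shares traits with previously removed accounts


def risk_score(signals: AccountSignals) -> float:
    """Combine weak signals into a single score; no single flag is decisive."""
    score = 0.0
    if signals.messages_per_minute > 30:               # "superhuman" activity
        score += 0.5
    if signals.ip_country != signals.profile_country:  # a "yellow flag", not proof
        score += 0.2
    if signals.matches_removed_cluster:                # strongest signal in this sketch
        score += 0.4
    return score


def action_for(score: float) -> str:
    """Map the accumulated score to an intervention."""
    if score >= 0.7:
        return "restrict account and request government ID"
    if score >= 0.3:
        return "queue for human review"
    return "no action"


if __name__ == "__main__":
    suspect = AccountSignals(45.0, "US", "IN", True)
    print(action_for(risk_score(suspect)))  # -> restrict account and request government ID
```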

“For the very small percent of accounts that managed to interact with members, we retrace our steps to understand the common characteristics across the different accounts,” Rodriguez says. The information is then used to “cluster” groups of accounts that may be fraudulent. Sabeti says LinkedIn is “very proactive” when human rights or security organizations report suspicious accounts. “It’s good in comparison with the other tech companies,” he says.

In some cases, LinkedIn’s new defenses appear to be working. In December, WIRED created two fake profiles using AI text generators. “Robert Tolbert,” a mechanical engineering professor at Oxford University, had an AI-generated profile photo and a resume written by ChatGPT, complete with fake journal articles. The day after the account was created, LinkedIn asked for ID verification. A second fake profile attempt—a “software developer” with Silicon Valley credentials and no photo—also received a request for an ID the following day. Rodriguez declined to comment on why these accounts were flagged, but both accounts were inaccessible on LinkedIn after they received the request for ID.

But detecting fake accounts is tricky—and scammers and spies are always trying to stay ahead of systems designed to catch them. Accounts that slip past initial filters but haven’t started messaging other people—like many of the scam accounts claiming to be WIRED staff—seem to be particularly hard to catch. Rodriguez says dormant accounts are generally removed through user reports or when LinkedIn discovers a fraudulent cluster. 

Today, WIRED’s page is a fairly accurate snapshot of its current staff. The fake Camille Lons profile was removed after we began reporting this story—Rodriguez did not say why. But in a process similar to the one used by the Lons impersonators, we conducted one additional experiment to try to slip past LinkedIn’s filters.

With his permission, we created an exact duplicate of the profile for Andrew Couts, the editor of this story, only swapping out his photo for an alternative. The only contact information we provided when creating the account was a free ProtonMail account. Before we deleted the account, Fake Couts floated around on LinkedIn for more than two months, accepting connections, sending and receiving messages, browsing job listings, and promoting the occasional WIRED story. Then, one day, Fake Couts received a message from a marketer with an offer that seemed too good to be true: a custom-built “professional WordPress website at no cost.”