Much @Stake: The Band of Hackers That Defined an Era

Today's cybersecurity superstars share a common thread—one that leads back to early hacking group Cult of the Dead Cow.

Many of today's cybersecurity luminaries—including former Facebook chief security officer Alex Stamos—have roots in a firm called @stake. The following excerpt, from Joseph Menn's upcoming Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World, traces the company's lasting influence.

Two years before 9/11, an intelligence contractor I will call Rodriguez was in Beijing when NATO forces in the disintegrating state of Yugoslavia dropped five US bombs on the Chinese embassy in Belgrade, killing three. Washington rapidly apologized for what it said had been a mistake in targeting, but the Chinese were furious. In a nationally televised address, then Chinese vice president Hu Jintao condemned the bombing as “barbaric” and criminal. Tens of thousands of protestors flowed into the streets, throwing rocks and pressing up against the gates of the American embassy in Beijing and consulates in other cities.

The US needed to know what the angry crowds would do next, but the embassy staffers were trapped inside their buildings. Rodriguez, working in China as a private citizen, could still move around. He checked with a friend on the China desk of the CIA and asked how he could help. The analyst told Rodriguez to go find out what was happening and then get to an internet café to see if he could file a report from there. Once inside an internet café, Rodriguez called again for advice on transmitting something without it getting caught in China’s dragnet on international communications. The analyst asked for the street address of the café. When Rodriguez told him exactly where he was, the analyst laughed. “No problem, you don’t have to send anything,” he explained. “Back Orifice is on all of those machines.” To signal where he wanted Rodriguez to sit, he remotely ejected the CD tray from one machine. Then he read everything Rodriguez wrote as he typed out the best on-the-ground reporting from Beijing. Rodriguez erased what he had typed and walked out, leaving no record of the writing.

Some hackers felt great fulfillment in government service. Serving the government in the wake of the terror attacks gave them a chance to fit in when they hadn’t before, united by a common cause. But for too many of this cohort, what started with moral clarity ended in the realization that morality can fall apart when governments battle governments. That was the case with a cDc Ninja Strike Force member I will call Stevens. As Al-Qaeda gained notoriety and recruits from the destruction, the US Joint Special Operations Command, or JSOC, stepped up the hiring of American hackers like Stevens. Some operatives installed keyloggers in internet cafés in Iraq, allowing supervisors to see when a target signed in to monitored email accounts. Then the squad would track the target physically as he left and kill him.

After 9/11, the military flew Stevens to another country and assigned him to do everything geek, from setting up servers to breaking into the phones of captured terrorism suspects. Though he was a tech specialist, the small teams were close, and members would substitute for each other when needed. Sometimes things went wrong, and decisions made on the ground called for him to do things he had not been trained in or prepared for mentally. “We did bad things to people,” he said years later, still dealing with the trauma.

As the American government ramped up its spying efforts after 9/11, it needed to discover new vulnerabilities that would enable digital break-ins. In the trade, these were often called “zero-days,” because the software maker and its customers had zero days of warning that they needed to fix the flaw. A ten-day flaw is less dangerous because companies have more time to develop and distribute a patch, and customers are more likely to apply it. The increased demand for zero-days drove up prices.

After the dollars multiplied, hackers who had the strongest skills in finding bugs that others could not—on their own or with specialized tools—could now make a living doing nothing but this. And then they had to choose. They could sell directly to a government contractor and hope that the flaw would be used in pursuit of a target they personally disliked. They could sell to a contractor and decide not to care what it was used for. Or they could sell to a broker who would then control where it went. Some brokers claimed they sold only to Western governments. Sometimes that was true. Those who said nothing at all about their clients paid the most. For the first time, it was relatively straightforward for the absolute best hackers to pick an ethical stance and then charge accordingly.

It was in no one’s interest to describe this market. The government’s role was classified as secret. The contractors were likewise bound to secrecy. The brokers’ clients did not want attention being paid to their supply chain. And the majority of hackers did not want to announce themselves as mercenaries or paint a target on themselves for other hackers or governments that might be interested in hacking them for an easy zero-day harvest. So the gray trade grew, driven by useful rumors at Def Con and elsewhere, and stayed out of public sight for a decade.

The first mainstream articles on the zero-day business appeared not long before Edward Snowden disclosed that it was a fundamental part of US government practice, in 2013.

As offensive capabilities boomed, defense floundered. Firms like @stake tried to protect the biggest companies and, more importantly, get the biggest software makers to improve their products. But just like the government, the criminal world had discovered hacking in a big way. One modest security improvement was blacklisting the addresses that sent the most spam. In response, spammers hired virus writers to capture thousands of clean computers they could use to evade the spam blocks. And once they had those robot networks, known as “botnets,” they decided to see what else they could do with them. From 2003 on, organized criminals, a preponderance of them in Russia and Ukraine, were responsible for most of the serious problems with computers in America. In an easy add-on to their business, the botnet operators used their networks’ captive machines to launch denial-of-service attacks that rendered websites unreachable, demanding extortion payments via Western Union to stop. They also harvested online banking credentials from unsuspecting owners so they could drain their balances. And when they ran out of ideas, they rented out their botnets to strangers who could try other tricks. On top of all that, international espionage was kicking into higher gear, sometimes with allies in the criminal world aiding officials in their quests.

Out of @stake came fodder for both offense and defense. On offense, Mudge pulled out of his tailspin and worked at a small security company, then returned to BBN for six years as technical director for intelligence agency projects. His @stake colleague and NSA veteran Dave Aitel started Immunity Inc., selling offensive tool kits used by governments and corporations for testing, and for spying as well. He also sold zero-days and admitted it in the press, which was seldom done in those days due to ethical concerns and fear of follow-up questions about which customers were doing what with the information. Aitel argued that others would find the same vulnerabilities and that there was no reason to give his information to the vendors and let them take advantage of his work for free. From the defender’s perspective, “once you accept that there are bugs you don’t know about that other people do, it’s not about when someone releases a vulnerability, it’s about what secondary protections you have,” Aitel said, recommending intrusion-detection tools, updated operating systems, and restrictive settings that prevent unneeded activity.

A London @stake alum moved to Thailand near Bangkok, assumed the handle the Grugq, and became the most famous broker of zero-days in the world. Rob Beck, who had done a stint with @stake between Microsoft jobs, moved to Phoenix and joined Ninja Strike Force luminary Val Smith at a boutique offensive shop that worked with both government agencies and companies. Careful thought went into what tasks they took on and for whom. “We were pirates, not mercenaries,” Beck said. “Pirates have a code.” They rejected illegal jobs and those that would have backfired on the customer. One of @stake’s main grown-ups, CEO Chris Darby, in 2006 became CEO of In-Q-Tel, the CIA-backed venture capital firm in Silicon Valley, and Dan Geer joined as chief information security officer even without an agency clearance. Darby later chaired Endgame, a defense contractor that sold millions of dollars’ worth of zero-days to the government before exiting the business after its exposure by hackers in 2011.

On defense, Christien Rioux and Wysopal started Veracode, which analyzed programs for flaws using an automated system dreamed up by Christien in order to make his regular work easier. After Microsoft, Window Snyder went to Apple. Apple’s software had fewer holes than Microsoft’s, but its customers were more valuable, since they tended to have more money. Snyder looked at the criminal ecosystem for chokepoints where she could make fraud more difficult. One of her innovations was to require a developer certificate, which cost $100, to install anything on an iPhone. It wasn’t a lot of money, but it was enough of a speed bump that it became economically unviable for criminals to ship malware in the same way.

Going deeper, Snyder argued that criminals would target Apple users less if the company held less data about them. But more data also made for a seamless user experience, a dominant theme at Apple, and executives kept pressing Snyder for evidence that consumers cared. “It was made easier when people started freaking out about Snowden,” Snyder said. “When people really understand it, they care.” In large part due to Snyder, Apple implemented new techniques that rendered iPhones impenetrable to police and to Apple itself, to the great frustration of the FBI. It was the first major technology company to declare that it had to consider itself a potential adversary to its customers, a real breakthrough in threat modeling. Still later, Snyder landed in a senior security job at top chipmaker Intel.

David Litchfield feuded publicly with Oracle over the database giant’s inflated claims of security. He went on to increasingly senior security jobs at Google and Apple. @stake’s Katie Moussouris, a friend to cDc, stayed on at new owner Symantec and then moved to Microsoft, where she got the company to join other software providers in paying bounties to hackers who found and responsibly reported significant flaws. Moussouris later struck out on her own and brought coordinated-disclosure programs to many other organizations, including the Department of Defense. She also worked tirelessly to stop penetration-testing tools from being subject to international arms-control agreements.

Private ethics debates turned heated and even escalated into intramural hacking. Some highly skilled hackers who found zero-days and kept them condemned the movement toward greater disclosure. Under the banner of Antisec, for “antisecurity,” the most enthusiastic of this lot targeted companies, mailing lists, and individuals who released exploit code. In the beginning they argued that giving out exploits empowered no-talent script kiddies, like those who might have been responsible for SQL Slammer. But some of them simply didn’t want extra competition. The mantle was taken up by hacker Stephen Watt and a group calling itself the Phrack High Council, which made the Antisec movement pro-criminal. Watt later did time for providing a sniffer, which recorded all data traversing a network, to Albert Gonzalez, one of the most notorious American criminal hackers. In a 2008 Phrack profile that used his handle only, Watt bragged about starting Project Mayhem, which included hacks against prominent white hats. “We all had a lot of fun,” Watt said. Later on, the Antisec mission would be taken up by a new breed of hacktivists.

Ted Julian, who had started as @stake marketing head before it merged with the L0pht, cofounded a company called Arbor Networks with University of Michigan open-source contributor and old-school w00w00 hacker Dug Song; their company became a major force in stopping denial-of-service attacks and heading off self-replicating worms for commercial and government clients. Song would later found Duo Security and spread vital two-factor authentication to giant firms like Google and to midsize companies as well.

Song got to know cDc files and then members online before being wowed in person by the Back Orifice release. In 1999, he put out dsniff, a tool for capturing passwords and other network traffic. While Arbor was mulling more work for the government, Song quietly developed a new sniffer that captured deeper data. He planned to show it off to Microsoft executives at Window Snyder’s first BlueHat conference in 2004. Song went and talked about his improved sniffer, which analyzed instant-message contacts and documents and did full transcriptions of voice over IP calls, such as those on Skype. He produced a dossier on Microsoft employees as part of the demonstration. Then he decided the danger of such a surveillance tool outweighed the security benefit of catching insiders stealing data. He convinced the other Arbor executives to drop the contracting plans and bury his project.

One of @stake’s young talents had worked out of the San Francisco office. Alex Stamos had joined not long out of UC Berkeley due to admiration for Mudge and the other founders. As @stake got subsumed by Symantec, he decided to start a new company with four friends. @stake had shown that it was possible to run a business that had a massive positive impact on the security of ordinary people. But it had two key flaws that he hoped to fix in the new company. The first was that it had taken venture money, which put it at the mercy of unrealistic financial goals. Declining outside investment money, Stamos and his partners, including Joel Wallenstrom and Jesse Burns from @stake, put up $2,000 each and bootstrapped the new consulting firm, iSec Partners. Instead of being heavy with management and salespeople, it operated like a law firm, with each partner handling his own client relationships.

The iSec model also attempted to deal with Stamos’s other problem with @stake: that, in his words, “it had no moral center.” Stamos made sure that neither he nor any of his partners would have to do anything that made them uncomfortable—any big decision would require unanimous agreement by the five.

iSec picked up consulting for Microsoft in 2004, after @stake was gone, and it helped with substantial improvements to security in Windows 7. Four years later, it got an invitation to help on a huge project for Google: the Android phone operating system. Android had been developed so secretly that Google’s own excellent security people had been left out of the loop. iSec was called in just seven months before its launch. Among other things, iSec saw an enormous risk in Android’s ecosystem. In a reasonable strategy for an underdog fighting against Apple’s iPhone, Google planned to give away the software for free and let phone companies modify it as they saw fit. But iSec realized that Google had no way to insist that patches for the inevitable flaws would actually get shipped to and installed by consumers with any real speed.

iSec wrote a report on the danger and gave it to Andy Rubin, father of Android. “He ignored it,” Stamos said, though Rubin later said he didn’t recall the warning. More than a decade later, that is still Android’s most dangerous flaw. Stamos was frustrated by being called in as an afterthought, and he began to think that working in-house was the way to go. Eventually, he joined internet mainstay Yahoo as chief information security officer. Wallenstrom became CEO of secure messaging system Wickr; Jesse Burns stayed at iSec through its 2010 acquisition by NCC Group and in 2018 went to run Google’s cloud security. Meanwhile, Dave Goldsmith in 2005 started iSec’s East Coast rival Matasano Security, which attracted still more @stake alums to work from within to improve security at big software vendors and customers. He later became a senior executive at NCC.

The opening decade of the millennium was a strange and divisive time in security. “It was a time of moral reckoning. People realized the power that they had,” Song said. Hundreds of focused tech experts with little socialization, let alone formal ethics training, were suddenly unleashed, with only a few groups and industry rock stars as potential role models and almost no open discussion of the right and wrong ways to behave. Most from @stake stayed in defensive security and hammered out different personal ethical codes in companies large and small. While they played an enormous role in improving security over the coming years, perhaps the most important work inspired by cDc didn’t come from either corporations or government activity.

This article has been excerpted from Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World by Joseph Menn. Copyright © 2019. Available from PublicAffairs, an imprint of Perseus Books, LLC, a subsidiary of Hachette Book Group, Inc.

This excerpt has been updated to more accurately reflect the Grugq's location after leaving @stake.
