Shostack + Friends Blog

How Are Computers Compromised (2020 Edition)

Understanding the way intrusions really happen is a long-standing interest of mine. This is quite a different set of questions compared to "how long does it take to detect," or "how many records are stolen?" How the intrusion happens is about questions like: Is it phishing emails that steal creds? Email attachments with exploits? SQL injection? Is it APTs or scripts? Which intrusions lead to major breaches? Without knowing these things, it's hard to evaluate the ways in which we engineer defenses. Taking answers from the headlines is sane if the breaches that result in headlines are distinguishable at the start in some way.

And that's what makes US CERT's new alert AA20-133A, "Top 10 Routinely Exploited Vulnerabilities," interesting. The US Government has some interesting advantages: a large collection of attractive targets, a mandate that all CFO Act agencies have a security process, published investments in security, and a large and skilled incident response force. And so when they tell us that these vulnerabilities are 'routinely exploited,' that is fascinating, and it prompts me to ask additional questions:

  • What fraction of incidents have a discovered initial access method?
  • What fraction of those initial access methods are "use of vuln" (as opposed to credential theft, USB in the parking lot, evil maid attacks, or attacks on servers in the cloud)?
  • What fraction of incidents are covered by the top 10? (A sketch of these fraction calculations follows this list.)
  • What's the relationship between #1 and #10?
  • Who's excluded from the set "state, nonstate, and unattributed cyber actors"?
  • Has there been a "5 whys" or other analysis of why those patches were missing? (I'm not saying "root cause" because we all know there's never one root cause.)
  • What was the investment in controls in the organizations attacked? Was patch management a priority?
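
To make the first three questions concrete, here's a minimal sketch of the fractions they ask for. Everything in it is an assumption for illustration: the record format, the field names, and the placeholder CVE IDs are invented, not drawn from the advisory or any real incident data.

```python
# Purely illustrative, invented incident records. "initial_access" is None
# when no initial access method was discovered; "cve" holds the exploited
# vulnerability (a placeholder ID here, not a real CVE) when one is known.
incidents = [
    {"initial_access": "vuln", "cve": "CVE-0000-0001"},
    {"initial_access": "credential_theft", "cve": None},
    {"initial_access": None, "cve": None},
    {"initial_access": "vuln", "cve": "CVE-0000-0099"},  # not in the "top 10"
    {"initial_access": None, "cve": None},
]

# Stand-in for the advisory's list; the real CVEs are in AA20-133A.
TOP_10 = {"CVE-0000-0001", "CVE-0000-0002"}

known = [i for i in incidents if i["initial_access"] is not None]
vuln = [i for i in known if i["initial_access"] == "vuln"]
top10 = [i for i in vuln if i["cve"] in TOP_10]

print(f"initial access discovered: {len(known)} of {len(incidents)} incidents")
print(f"of those, vuln exploitation: {len(vuln)} of {len(known)}")
print(f"of those, covered by the top 10: {len(top10)} of {len(vuln)}")
```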

For some of these, releasing specific answers is going to be tricky because of the details of specific incidents, where there's concern that even saying 'the attacker jumped an airgap' exposes information. For others, such as the first, there's a risk that journalists will say 'really, we only know how 15% of incidents start?' (I would be surprised if it's that high.)

Nevertheless, having details like these is going to help us move forward. What's more, we don't really need incident-by-incident details; much like the advisory is generalized, we can also hear which program issues are correlated with intrusion. For example, I believe that patch management is way harder than you'd believe if you read infosec Twitter, but so what? What would be interesting is a statement like "80% of the entities breached were rated as 'needs improvement' in patching, while only 54% of all entities were rated 'needs improvement.'" That's not only interesting, but if we have a collection of such statements, then we can prioritize advice by its correlation with not being breached. That would be exciting and actionable.
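
To see why that kind of comparison is actionable, here's a small worked example using the two invented percentages from the paragraph above (they're hypothetical, not measured data). By Bayes' rule, the ratio of the two rates is the lift in breach likelihood for entities rated 'needs improvement' in patching.

```python
# Invented figures from the hypothetical statement above, not real data.
p_ni_given_breached = 0.80  # P(rated "needs improvement" in patching | breached)
p_ni_overall = 0.54         # P(rated "needs improvement") across all entities

# Bayes' rule: P(breached | NI) / P(breached) = P(NI | breached) / P(NI).
# That ratio is the "lift": how much more likely a breach is for an entity
# rated "needs improvement" than for an entity chosen at random.
lift = p_ni_given_breached / p_ni_overall
print(f"breach lift for 'needs improvement' in patching: {lift:.2f}x")  # ~1.48x
```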

There is a tremendous amount that governments can do with data that they gather about themselves, and I look forward to the day we expect them to do it.

Related: My 2013 SIRA talk, "Building a Science of Security", and "Zeroing in on Malware Propagation Methods."