More Attacks against Computer Automatic Update Systems

Last month, Kaspersky discovered that Asus’s live update system was infected with malware, an operation it called Operation Shadowhammer. Now we learn that six other companies were targeted in the same operation.

As we mentioned before, ASUS was not the only company used by the attackers. Studying this case, our experts found other samples that used similar algorithms. As in the ASUS case, the samples were using digitally signed binaries from three other Asian vendors:

  • Electronics Extreme, authors of the zombie survival game called Infestation: Survivor Stories,
  • Innovative Extremist, a company that provides Web and IT infrastructure services but also used to work in game development,
  • Zepetto, the South Korean company that developed the video game Point Blank.

According to our researchers, the attackers either had access to the source code of the victims’ projects or they injected malware at the time of project compilation, meaning they were in the networks of those companies. And this reminds us of an attack that we reported on a year ago: the CCleaner incident.

Also, our experts identified three additional victims: another video gaming company, a conglomerate holding company and a pharmaceutical company, all in South Korea. For now we cannot share additional details about those victims, because we are in the process of notifying them about the attack.

Me on supply chain security.

EDITED TO ADD (6/12): Kaspersky’s expanded report.

Posted on May 16, 2019 at 1:34 PM

Comments

Ismar May 17, 2019 1:34 AM

This seems to be a targeted attack:

“The malware had 600 unique MAC addresses it was seeking, though the actual number of targeted customers may be larger than this. Kaspersky can only see the MAC addresses that were hard-coded into the particular malware samples found on its customers’ machines.”

which should go a long way toward identifying both its purpose and its perpetrators.
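The matching step described in that quote can be sketched roughly like this (a simplified illustration only; Kaspersky reported that the samples compared hashes of network-adapter MAC addresses against a hard-coded list, and the digest below is a made-up placeholder, not a real target):

```python
import hashlib

# Hypothetical hard-coded target list: MD5 digests of MAC addresses.
# The digest below is a made-up placeholder, not a real target.
TARGET_HASHES = {
    "00000000000000000000000000000000",
}

def mac_md5(mac: bytes) -> str:
    """Hex MD5 digest of a raw 6-byte MAC address."""
    return hashlib.md5(mac).hexdigest()

def is_targeted(mac: bytes) -> bool:
    """True if this machine's MAC hashes to one of the hard-coded digests."""
    return mac_md5(mac) in TARGET_HASHES
```

Matching on hashes rather than raw addresses keeps the target list opaque to anyone inspecting the binary, which is presumably part of why it was done that way.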

TomS. May 17, 2019 10:27 AM

Kaspersky’s expanded report published @ https://securelist.com/operation-shadowhammer-a-high-profile-supply-chain-attack/90380/ is informative.

No answers yet on what the targeted users' affiliations are. Tools to check MACs are linked near the end of the Securelist (Kaspersky Research) post.

I’d like a better understanding of the collection campaign that must have preceded this one to gather those addresses. The addresses were recovered, singly or in pairs, from 230 binaries containing between eight and 307 addresses.

The reconnaissance, the exploit development and customization, and the manufacturer’s brand and market share all support the idea that the goal is theft of gaming intellectual property. But who goes through all that to steal the “worst game in history” and related titles?

Other supply chain attacks that have met with success target developers’ toolchains, package repositories (e.g., the Node Package Manager, npm), and development environments (e.g., the 2015 XcodeGhost attack).

Compare the state of software supply chain security with mature industries like pharmaceuticals. I had an engineer friend building a product line. They could track every hopper, vat, and tank back to the source of the raw material, verified with onsite visits to suppliers; similarly for the ingredient supplies. There have to be ample supply chain validation lessons and practices from other industries that can be adopted and/or adapted to make our tech world more robust. The post-WWII software field is ~75 years old; it is well past time for it to grow up and start acting its age.

tfb May 17, 2019 1:59 PM

These attacks, it seems to me, are just one symptom of a problem to which I’m not sure there is a good solution.

If there are large numbers of instances of chunks of code, whether that’s operating systems, applications, or whatever, then the problem of maintenance becomes serious. The usual solution — and as far as I know the only solution — is to have a small number of points of control from which all the instances can be managed. These can either be passive places which get polled for changes, or places from which systems can actively be controlled.
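The passive, polled model can be sketched in a few lines (all names, URLs, and versions here are made up for illustration):

```python
# Sketch of a passive point of control: instances poll a central
# manifest for changes. All names, URLs, and versions are made up.
def fetch_manifest():
    # Stand-in for an HTTP fetch from the central update server.
    return {"version": "1.1.0", "url": "https://updates.example/pkg-1.1.0"}

def poll_for_update(installed_version):
    """Return the download URL if the server advertises a different version."""
    manifest = fetch_manifest()
    if manifest["version"] != installed_version:
        return manifest["url"]
    return None
```

Which is exactly what makes the endpoint so attractive to an attacker: whoever controls the manifest controls what every polling instance installs next.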

An attack on those points of control is an attack on the entire system, and any attacker interested in doing a lot of damage is obviously going to be very interested in attacking the points of control: why bother spending months grovelling around in code looking for vulnerabilities when you can have the system deploy better ones automatically for you?

It kind of seems like the active points of control are more terrifying: if you can compromise them you can run your programs across all the instances at a time of your choosing. But the passive ones are not really less terrifying: once you have compromised them you just deploy code which talks to some active point of control you already own, and wait for enough of the instances to pick it up, at which point you have a compromised active point of control.

I don’t think there’s any current solution to this problem. A solution kind of must involve signed updates, but the signatures need to be made in such a way that a compromise of the point of control doesn’t mean the attacker can sign updates. Wherever the keys for the signatures are held is also a point or points of control which can be attacked.

Some of those points of control will be people, but if the stakes are high enough then people are eminently attackable: how many people are not going to sign some update when someone is posting bits of their children to them unless they do? Do the organisations which run these points of control carefully only hire people who won’t submit to that kind of pressure, or provide serious security for everyone they care about? Of course they don’t.

And if that kind of seems like an over-hyped risk, I think it’s not: a malicious-but-properly-signed update to Chrome or iOS is going to be a hugely powerful thing.

More commonly, I don’t think a lot of organisations are thinking about the risk properly at all: everyone wants to automate sysadmin so they can have fewer staff and less chaos, and we are beginning to have tools which can do that. But I’m sure that if you asked a bank what its most critical Linux instances were, it would say (for instance) the ones that run mobile banking, and forget about the Puppet servers which can push changes out to those systems and all the other systems, and which are being set up by a bunch of 25-year-old CS graduates who have no real experience of, well, anything, and who treat the Puppet servers as non-production because all that change-management stuff isn’t really agile. (Sorry, nothing against 25-year-olds, but it takes time to learn, and if you are 25 you have not had enough time.) And would the change-management process catch the problem? Probably not: when I worked for banks I used to say that if I wanted to attack one I’d start by raising a change. People thought I was joking, but I wasn’t.

I’m mostly surprised that there hasn’t yet been a really big attack like this, but I’m sure it’s just time (or there has been but we just don’t know yet).

Clive Robinson May 17, 2019 4:22 PM

@ TomS.,

The post-WWII software field is ~75 years old, it is well past time to grow up and start acting its age.

Whilst that is a very valid point, and one I agree with, there are problems…

As you note,

Compare the state of software supply chain security with mature industries like pharmaceuticals. I had an engineer friend building a product line. They could track every hopper, vat, and tank back to the source of the raw material, verified with onsite visits to suppliers. Similar for the ingredient supplies.

There is a largish “fly in the ointment” that makes the two not really comparable: the “physical security” of “tangible goods” does not cross over very well to “intangible goods”. Thus,

There has to be ample supply chain validation lessons and practices from other industries that can be adopted and/or adapted to making our tech world more robust.

Is actually not that true. The reason is that “physical security” is very much a subset of “information security”.

As was observed in a conversation with @Nick P, code signing does nothing other than say that a binary object was, at some point, signed by a private key.

It says nothing about the private key itself: how it’s used, stored, or secured. Nor, likewise, does it say anything about the binary object.
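To make that concrete, here is a toy sketch. HMAC stands in for a real asymmetric code-signing scheme (the logical point is the same), and the key material is made up: a valid signature means only that the key holder signed those exact bytes, nothing more.

```python
import hashlib
import hmac

# Toy stand-in for code signing: HMAC instead of a real asymmetric
# signature scheme. The key material here is made up.
SIGNING_KEY = b"build-server-secret"

def sign(binary: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).digest()

def verify(binary: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(binary), tag)

clean = b"legitimate update payload"
trojan = clean + b" + implant"

# Verification passes for whatever bytes the key holder signed,
# including a trojanized build if an attacker got into the build chain.
assert verify(clean, sign(clean))
assert verify(trojan, sign(trojan))
```

Nothing in `verify` can distinguish a clean build from a trojanized one; the check is purely a function of the bytes and the key.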

Thus any attack up-stream of code signing is as trivial as it has always been. And as we know, some older (short-bit-length) private keys have been found, and likewise useful collisions in some hashes.

So the down-stream security of Code Signing is certainly not as strong as it could be currently, and if Quantum Computing ever lives up to its promise… Code Signing is going right out the window as its security value plummets faster than a lead kipper… Especially if various “fall-back attacks” are viable, which they often are when people try to design “robust systems” to avoid “bricking devices”.

But Code Signing, like all technology, is a double-edged weapon, and it has a significant dark side. In most cases Code Signing has absolutely nothing whatsoever to do with either quality or security, but everything to do with denying people the rights and privileges pertaining to ownership. Or if you prefer: “devices owned by others”, “Walled Gardens”, “no freedom to tinker”. Some of these restrictions actually break various pieces of legislation, not the least of which is the Waste Electrical and Electronic Equipment (WEEE) legislation, which is rather more prevalent than the original EU Directive.

The simple fact is that, as far as I’m aware, nobody has yet come up with any way to improve the distribution methods for intangible information objects or goods that is both reliable and acceptable.

