Attacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, and Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that’s not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn’t imagine that they would be necessary. The results are predictable.

The paper: “Practical Enclave Malware with Intel SGX.”

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel’s threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user’s behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.
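
For readers wondering what the “TSX-based memory-disclosure primitive” looks like in practice: a fault raised inside an Intel TSX transaction merely aborts the transaction instead of being delivered as an exception, so an enclave can probe host memory without tripping any visible fault handler. Below is a minimal, illustrative sketch of that probing idea, not the authors’ code; it assumes an RTM-capable CPU and GCC/Clang with -mrtm.

    #include <immintrin.h>   /* _xbegin, _xend, _XBEGIN_STARTED */
    #include <stdbool.h>
    #include <stdint.h>

    /* Probe whether 'addr' is mapped and readable without risking a visible
     * page fault: a fault inside the transaction silently aborts it rather
     * than raising an exception the host or OS would notice. */
    static bool tsx_probe_readable(const volatile uint8_t *addr)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            (void)*addr;     /* a fault here only aborts the transaction */
            _xend();
            return true;     /* transaction committed: address was readable */
        }
        return false;        /* aborted: most likely unmapped or unreadable */
    }

Chaining such probes across the host’s address space gives the memory-disclosure half of the SGX-ROP attack described in the abstract; the write-anything-anywhere primitive is then used to place the chain that the host application later runs.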

Posted on August 30, 2019 at 6:18 AM • 23 Comments

Comments

Jesse Thompson August 30, 2019 6:40 PM

Can somebody explain to me how this is “worse” than a garden-variety privilege escalation?

Are there security designs that do not presume a secure endpoint from which administrative tasks, system coordination, and private cryptographic material can be handled?

Aside from that, my understanding of secure enclave / trusted computing is not that it keeps the user secure from threats like malware, but that instead it keeps the global IP oligopolies secure from threats like the user. I don’t think I’m that bothered by failures in the security of somebody who has already classified me as their adversary. 😛

Weather August 30, 2019 10:21 PM

It’s old: they use ROP to add 1 to EAX and c00003a to a second push, then call a process to add a heap segment; after that it’s just straight code. It’s been around for years; muts from Offensive Security can explain it.
You have a stack whose values you can change. When a call is made, the function prologue does push ebp; mov ebp, esp.
That leaves two dwords on the stack; when a ret executes it reads those. You also have push and pop, which you can sometimes fill with garbage.
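
For anyone trying to parse that: the “two dwords” are the saved frame pointer and the return address that ret consumes, and overwriting the return address is what a ROP chain exploits. A small, generic illustration using GCC/Clang builtins follows; it is a sketch of my own, nothing specific to SGX or the paper.

    #include <stdio.h>

    /* Show the two machine words a call leaves in the stack frame: the saved
     * frame pointer and the return address that `ret` will pop. Overwriting
     * that return address is what lets a ROP chain take control. */
    __attribute__((noinline))
    static void callee(void)
    {
        printf("frame address : %p\n", __builtin_frame_address(0));
        printf("return address: %p\n", __builtin_return_address(0));
    }

    int main(void)
    {
        callee();
        return 0;
    }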

Wael August 30, 2019 10:50 PM

Strange!

but I think I’ll visit it again

I wanted to finish reading a paper shared by @Clive Robinson, last year. Still didn’t finish it, but I found a typo in the paper.

with littler or no trusted software.

Ended up reading a related paper instead: SOFIA

As for this enclave thing: somehow I find the concept of a less trusted app “spawning” a more trusted app flawed. Yet, I think it’s doable!

@Jesse Thompson,

Can somebody explain to me how this is “worse” than a garden-variety privilege escalation?

Stealthy, privileged control, out of reach of anti-malware scanners. Paper says so.

classified me as their adversary. 😛

You’re either the product or the adversary. Pick your choice. Never mind: you’re both 😉

Another Mouse August 31, 2019 5:13 AM

Is that enclave on by default? Can it be deactivated? Which CPUs are affected? I guess ARM TrustZone is similar?

Gweihir August 31, 2019 6:38 AM

Ah, yes, the ages-old amateur-level fallacy of “we cannot imagine this can be hacked, so it is obviously secure”. When will people start to include actually competent security experts in such design processes and decisions?

Whomever August 31, 2019 5:04 PM

These enclaves are terrifying/fascinating. Another fun one is the video-feed DRM attempt they’re cooking into the latest round of Intel systems. It offers our beneviolent spooks a chance to preempt (and potentially fork) the display feed “for DRM purposes.”

Only employees with security clearance are permitted to work in the DRM-enclave. Even error logs are encrypted. Which makes for a thoroughly unenjoyable development experience if you are building systems that interoperate with this layer. This is slightly encouraging because few will have the patience to exploit the system. But, I am sure it will still end up providing persistent exploits within a few years, if not sooner.

More likely, in my opinion, is the scenario where USG has produced a second incarnation of the Clipper chip in these enclaves. And, therefore, the scenario we should actually worry about is the NOBUS incantation leaking, being discovered, or presenting an exploit surface of its own.

Clive Robinson September 1, 2019 2:45 AM

@ Wael,

Ended up reading a related paper instead: SOFIA

How is that coming along?

As for this enclave thing: somehow I find the concept of a less trusted app “spawning” a more trusted app flawed. Yet, I think it’s doable!

It’s the way it is in just about everything we do. What you are calling “trust” is the “Control of information in its physical representation by selective physical containment”.

So if you examine a more everyday physical containment device, such as a safe, bank vault or other physical security device, it is made of component parts, some of which contain other component parts. You get to a point where you have parts that, whilst they have intrinsic physical function, have no intrinsic security function, such as nuts, bolts, steel bars/sheet, bricks and mortar, concrete etc. Go far enough and you end up with molecules, then atoms.

Whilst most of us do not realise it, nearly all the things we have around us these days have been created by manufacturing processes that work at the molecule, atom, or even electron/photon level. That is, we are at the point of being able to shape raw matter in its most basic natural form to the thoughts in our minds.

However, for all our great skill at this, we also now know it is just a temporary arrangement: atoms are a finite resource and even they are not eternal. That is, at some point everything we make will succumb to greater forces and our works will cease to exist. This, by the way, includes everything we know, as all the information that we store, communicate or process is handled by physical agency at some level.

When you get down to it, “Security” is a concept born of the mind, be it human or other animal, and is based on the simple notion of survival by storing food for future consumption.

Thus when a squirrel digs a hole in the ground in late autumn to bury the fruits of the summer for later consumption after hibernating, it is performing a security function. But does this mean that security is a product of the mind?

Well, no, not as we currently define it. If you look closely at the acorn or other nut the squirrel is burying, you will find that it too is built like a small safe or vault. But as you keep going down you discover that all the cells that living creatures are made of are likewise small safes or vaults.

Thus security appears to have a significant involvement with life and evolution. That is what some would call the natural order of things, but is it? That is, do we see other natural processes creating safe or vault-like objects as an outcome of their function? No, we don’t. By and large, most natural processes, such as the wind, the rain and even sunlight, are erosive to order. They are the levelers of the ultimate judgment on the works of living things, they are the bringers of disorder, chaos, and randomness in the process we call entropy.

Thus security is a function of life, which is a creative function that brings, albeit temporarily, order out of the surrounding chaos.

Thus all security is spawned from less security as part of the basic function of life.

Wael September 1, 2019 3:26 AM

@Clive Robinson,

How is that coming along?

Finished it. Short paper.

It’s the way it is in just….

I need to think a bit about that.

Clive Robinson September 1, 2019 5:16 AM

@ Jesse Thompson,

Are there security designs that do not presume a secure endpoint from which administrative tasks, system coordination, and private cryptographic material can be handled?

Look at it this way, there is the old joke about,

    The only secure computer is one that is turned off, has had all connections removed, been embedded in several tonnes of reinforced concrete and dropped into the deepest subsea trench on earth, Challenger Deep in the Mariana Trench, where no man could reach it.

Then Hollywood director James Cameron went down there for a stroll in 2012 ruining that idea…

Which makes the point that what we think we have made physically secure today is not going to be physically secure in the near future…

As I’ve indicated in the past, information is something that we can only use when it is impressed on something physical. That is, everything we know is stored, communicated and processed by some form of physical agency.

As we know we cannot protect information by physical agency, we have to use information to secure information, which devolves into a “Turtles all the way down” problem. That is, somewhere we need two things: an “information” secret, and a “physical device” to use it.

That is to make information secure we have to securely encrypt it then “throw away the key”. That is, destroy the physical device the key is stored on, which with a secure encryption algorithm makes the “information” entirely inaccessible from that point onwards.
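
A minimal sketch of that “encrypt it, then throw away the key” idea, purely as illustration: it assumes libsodium is available (link with -lsodium), and the function and its name are mine, not from the paper or any product.

    #include <sodium.h>

    /* Encrypt a buffer under a freshly generated key, then wipe the key.
     * Once the key is gone, the ciphertext is computationally inaccessible. */
    int seal_and_discard(const unsigned char *msg, unsigned long long msg_len,
                         unsigned char *nonce, /* crypto_secretbox_NONCEBYTES */
                         unsigned char *out)   /* msg_len + crypto_secretbox_MACBYTES */
    {
        if (sodium_init() < 0)
            return -1;

        unsigned char key[crypto_secretbox_KEYBYTES];
        randombytes_buf(key, sizeof key);            /* the "information" secret */
        randombytes_buf(nonce, crypto_secretbox_NONCEBYTES);

        int rc = crypto_secretbox_easy(out, msg, msg_len, nonce, key);

        sodium_memzero(key, sizeof key);             /* "throw away the key" */
        return rc;
    }

Once sodium_memzero has run, recovering the plaintext means breaking the cipher itself, which is exactly the point being made here.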

Thus we have to accept that if we wish to “secure” information in a usable form we need a physical device that we know is ultimately insecure, a point alluded to by the old “Three can keep a secret…” comment.

So there are no “security designs” that are ultimately secure, only ones with certain margins in terms of resources needed.

Thus usable security is in the strict sense a “game” which can not be won, only drawn against an opponent of finite resources.

Thus the “enclaves” are at best a delaying tactic, and not a particularly good one because of the assumptions they are built on.

The best form of physical security we have for information that has to be kept in plaintext form, such as keys, is “segregation”. That is, the further you can keep an attacker away from the plaintext the better. The enclaves, for various reasons of “shared or visible resources”, are not very good at maintaining that segregation.

It’s quite an old argument that goes back to the issue of “On-Line -v- Off-Line” security. That is, if I give you encrypted information, then for you to process it and make use of the results requires you to have the plaintext key as a given. So in a true off-line environment, as you have both the plaintext key and the encrypted data, you can get at the information and process it in any way you like. The game DRM and the enclaves are trying to play is to extend their security perimeter such that they control what format you get the results in. That is, they are trying to simulate an on-line security control system in an off-line environment.

That is, they give you not just encrypted information but an encrypted key, to use on hardware they try to exert control over. As we know from DVDs, that does not work very well. Likewise, after Stuxnet we know “information signing” via asymmetric keys has significant failings and thus does not work that well either.

The simple fact is that if either the processing hardware or the plaintext secret key escapes your control, then there is no real security, just a game of resources between the defender and the attacker. In the consumer industry, cost significantly constrains the defender at all points along the product chain, from conception through to use.

What the IP holders are doing, with the help of Microsoft, Google, Apple and others, is a rerun not of the “Clipper chip” but of the “Fritz chip”[1]. Unfortunately the likes of the FBI salivate over such control, which is why I suspect they will get their backdoors through “Trusted Computing” just as we now have “DRM with everything new”, and why I don’t partake of, and certainly will not pay for, such encumbrances.

It’s one of the reasons I am deeply suspicious of the “Microsoft loves Linux” nonsense we are being subjected to currently. I can see how it will easily be used to bring FOSS under the control of the “Trusted Computing” “Gateway Keepers”.

It’s also why I encourage people to investigate how they can develop their own Security End Points well away from such devices. Because “Trusted Computing” is rapidly going to be turned into not just a “Walled Garden” but a “Gateway to jail”. Trusted Computing offers the ordinary citizen nothing but chains to weigh them down and subject them to a new form of serfdom…

[1] See the “Senator from Disney”, Ernest Frederick “Fritz” Hollings of South Carolina, who died a few months back. Very much in favour of things like reinstituting the draft, he campaigned very hard and vigorously in Congress to make a “Trusted Computer” chip a mandatory part of all consumer electronics, in an attempt to stifle people’s freedom to create. His bill thankfully failed and he lost his chairmanship of the Senate Committee on Commerce, Science and Transportation. Unfortunately, those behind denying basic freedoms to ordinary citizens have carried on forcing Trusted Computing onto them in various ways, with the result that you now do not “own”, in any acceptable sense, the electronic items you rely on for your everyday existence. The idea behind the enclaves is just a further step down this road, to ensure not only that you do not have any ownership but that you will be maximally exposed to the capricious behaviours of “rent seekers”.

SpaceLifeForm September 1, 2019 2:03 PM

@Clive

You were reading my mind.

I was going to say ‘Silicon Turtles all the way down’.

But, that leads to quantum, and then the real question:

What is Physical?

Because it has Mass?

What is Mass and Gravity?

‘ That is to make information secure we have to securely encrypt it then “throw away the key” ‘

That is my exact thinking.

Temporary keys that die in a day.

If a sufficient number of people operated under this model, it would create so much noise that attackers would face high costs and too many haystacks in which to look for needles.
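
A sketch of the “temporary keys that die in a day” idea, as one possible illustration; it again assumes libsodium, and the struct and helper names are hypothetical rather than any particular product’s API.

    #include <sodium.h>
    #include <time.h>

    /* A key that "dies in a day": generated fresh, wiped once it expires. */
    typedef struct {
        unsigned char key[crypto_secretbox_KEYBYTES];
        time_t        expires_at;
    } ephemeral_key;

    static int ephemeral_key_new(ephemeral_key *ek)
    {
        if (sodium_init() < 0)
            return -1;
        randombytes_buf(ek->key, sizeof ek->key);
        ek->expires_at = time(NULL) + 24 * 60 * 60;  /* valid for one day */
        return 0;
    }

    /* Returns 1 while the key is still usable; otherwise wipes it and returns 0. */
    static int ephemeral_key_valid(ephemeral_key *ek)
    {
        if (time(NULL) < ek->expires_at)
            return 1;
        sodium_memzero(ek->key, sizeof ek->key);     /* the key "dies" */
        return 0;
    }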

Clive Robinson September 1, 2019 11:15 PM

@ SpaceLifeForm,

But, that leads to quantum, and then the real question:

It’s a question I look at in a different way.

Most accept that our universe is made up of the energy/matter duality held in place by forces that are limited by the speed of light. But also that it is finite and bounded.

Thus some argue information has physicality and is likewise constrained. But is it? Which leads to the question: does information “need” to have a physical form or not?

For various reasons I don’t believe it does. I’ve mentioned that to use it, there are three things we can do with information,

1, Store it,
2, Communicate it,
3, Process it.

These, it is easy to see, require the energy/matter duality for us to carry out, as we are physical beings of energy/matter ourselves.

But does information require it? I personally think it is unlikely; that is, I think it modulates/impresses onto energy/matter[1]. Thus I think there is the tangible physical universe of energy/matter with the attendant forces, and an intangible information universe.

Thus, if that is the case, the question is: which gives rise to the other? My personal belief is that the information universe gives rise to the physical universe; others think otherwise, but the arguments I’ve seen so far either flounder in our current physical world view or are unpersuasive.

[1] Think of it this way: if you pour ink in one direction you get an upstroke, pour in another a sidestroke. If they use the same number of atoms of ink and cover the same number of atoms of paper, they are physically the same but different in orientation. The same applies if you use two dots of ink with the distance between them the same but in different orientations. Whilst the ink has physicality, does the space between the dots? The answer most would give is “no”; thus, with the space representing information, where is its physicality?… The fact that it might require energy to encode the information, or change the information, in our physical universe does not mean that information is energy. Instead of ink dots you could, say, use pebbles on the surface of a table: you need energy to pick them up against the force of gravity or overcome friction when pushing them. We teach that the energy/time involved moves by a process of radiation transport down to the disordered state of particle movement that we call heat, or the ultimate form of pollution. Thus where is the energy/mass of information? We believe the universe is both finite and bounded; this means that if information has energy/mass then information has to be finite as well, and that raises all sorts of problems for our current physical world models, which say that things are reversible…

Wael September 2, 2019 10:21 PM

@Clive Robinson,

atoms are a finite resource and even they are not eternal.

Yea, we have an estimate of the upper bound of the number of atoms in the observable universe. And atoms have an estimated lifetime greater than 10^25 years.

forces and our works will cease to exist.

Avoiding metaphysics, yes.

based on the simple notion of survival by storing food for future consumption.

More than that: shelter, water, companionship, …. And these days: the ability and right to protect one’s private information.

living creatures are made of are likewise small safes or vaults.

Perhaps one correct perception.

Thus security appears to have a significant involvement with life and evolution.

Definitely!

That is what some would call the natural order of things, but is it?

Define “Nature”. Na, never mind 😉

They are the levelers of the ultimate judgment on the works of living things, they are the bringers of disorder, chaos, and randomness in the process we call entropy.

They do both: rain brings plants to life as well as causing floods and disasters. Same for wind and other “things”. The second law of thermodynamics? (Had to inject this for a soon-to-come word play.)

Thus security is a function of life, which is a creative function that brings, albeit temporarily, order out of the surrounding chaos.

Like we wanted to do, once upon a time.

Thus all security is spawned from less security as part of the basic function of life.

Have you changed your convictions? How do you reconcile that with Clive’s second law of safetydynamics: “Clock from most secure to least secure”?!

Clive Robinson September 3, 2019 1:31 AM

@ Wael,

Have you changed your convictions? How do you reconcile that with Clive’s second law of safetydynamics: “Clock from most secure to least secure”?!

Not in the slightest; that is about “communications”, whereas we are talking about spawning. If you apply a generalized evolutionary rule, each successive spawning will be more fit for its environment. Which would also be true for “open markets” as well, if there were no “hidden knowledge” of closed/faux markets or deliberately distorted environment.

So from the security perspective, evolution would currently imply that each successive generation would be more secure than the previous generation, which was true in the days before the US political “COTS” push.

Unfortunately we now see the effects of “hidden knowledge” and forced “closed markets”, with environmental distortion from the COTS decisions[1]. The market has been forced away from most evolutionary paths and been effectively “inbred” for a “hidden hand’s” requirements.

Thus security has in effect become a closed evolutionary path, with “specmanship” and “feature pushing” used to justify introducing weaknesses and complexity, both of which have well-known detrimental effects on security.

Thus our computers are becoming the Eloi to the hidden hand of the Morlocks that feast upon them.

[1] Which is one of the issues with “regulating markets”: whilst regulation as a process can be good or bad for a market, bad regulation rules produce bad markets that usually spawn or evolve into faux markets.

Wael September 3, 2019 2:08 AM

@Clive Robinson,

we are talking about spawning. If you apply a generalized evolutionary rule, each successive spawning…

Uh! ‘Spawning’ is not synonymous with ‘Evolving’. We’re talking about a non-trusted component that instantiates a trusted component. That’s what I’m internally debating. We’re talking about runtime (spawning) — not about geological timeframes (evolving.)

Clive Robinson September 3, 2019 11:02 AM

@ Wael,

We’re talking about runtime (spawning) — not about geological timeframes (evolving.)

Err, no, spawning as in the biological sense, as in a politer way of saying “give birth to the next generation”. That is, by general definition it inherits the features of the parent that make it better suited to its environment.

Wael September 3, 2019 3:25 PM

@Clive Robinson,

Err, no, spawning as in the biological sense, as in a politer way of saying “give birth to the next generation”.

Birds and bees, eh? Ok. I’ll bite. Spawning, in our context, represents asexual reproduction, and in an asexual reproduction flow there’s no gene-pool mix. Therefore the improvement wouldn’t be significant over the generations. Who are the parents of the trusted app? We know only one: the non-trusted app 😉

Clive Robinson September 3, 2019 5:53 PM

@ Wael,

Spawning, in our context, represents asexual reproduction, and in an asexual reproduction flow there’s no gene-pool mix.

Why should it represent “asexual reproduction”?

Whilst modern design developments are not quite “design by committee”, they do have some characteristics rather like “alley cats”: one mother but half a dozen feral fathers.

More correctly, prior to the US political COTS push towards a single design, new products were the product of many advances, not just one.

After the COTS push, much research into security improvements got dropped, and a monoculture developed around possibly the worst-designed CPU line ever, Intel’s IAx86 family.

As we have seen, other architectures from Motorola, IBM, Digital, Sun and others disappeared, to be replaced with “Intel”, and with them went any pretence of “hybrid vigor”, thus making malware writers’ jobs easier.

Wael September 3, 2019 7:53 PM

@Clive Robinson,

Why should it represent “asexual reproduction”?

Because it has one parent? In the second, a criminal actor provides an enclave that provides an interesting feature, e.g., a special decoder, and can be included as a third-party enclave. In the last scenario, it may be an app the state endorses to use, e.g., an app for legally binding digital signatures which are issued by an enclave, or legal interactions with authorities. Also, in some countries, state agencies might be able to force enclave vendors to sign malicious enclaves on their behalf via appropriate legislation, e.g., replacing equivalent benign enclaves.

Wael September 3, 2019 10:36 PM

@Clive Robinson,

“design by committee”, they do have some characteristics rather like “alley cats”: one mother but half a dozen feral fathers.

Perhaps that’s more accurate 🙂

Wael September 3, 2019 10:42 PM

@Clive Robinson,

Getting rusty… fixing previous formatting hiccup:

“design by committee”, they do have some characteristics rather like “alley cats”: one mother but half a dozen feral fathers.

Perhaps that’s more accurate 🙂

Matthijs van Duin September 15, 2019 6:45 AM

I’m puzzled by the supposed “vulnerability” this paper claims to show. An enclave is not a sandbox, it was not designed to be a sandbox, it isn’t claimed to be a sandbox. It is an inverse sandbox. When an application chooses to create an enclave, it creates an environment which is (intended to be) protected from the rest of your application. It is still a part of your application and when you call into the enclave, it can freely access the memory of your application. This also means that if you run malicious code in an enclave, you’re obviously screwed, same as when you run malicious code outside an enclave.

If you want isolation in both directions, run an enclave inside a sandbox.
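
One hedged illustration of “run an enclave inside a sandbox”: the host application can shrink its own syscall surface before calling into untrusted enclave code, for example with libseccomp (assumed installed; link with -lseccomp), so a code-reuse chain running in the host has little left to abuse. The allow-list below is purely an example; a real SGX host needs more (mmap, ioctl on the SGX device, and so on).

    #include <seccomp.h>   /* libseccomp */

    /* Drop the host process to a minimal syscall allow-list before entering
     * enclave code. Anything outside the list kills the process. */
    static int lock_down_host(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);  /* default action */
        if (ctx == NULL)
            return -1;

        /* Example allow-list only; extend it for what your enclave runtime needs. */
        if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read),       0) != 0 ||
            seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write),      0) != 0 ||
            seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(futex),      0) != 0 ||
            seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0) != 0) {
            seccomp_release(ctx);
            return -1;
        }

        int rc = seccomp_load(ctx);
        seccomp_release(ctx);
        return rc;
    }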

Matthijs van Duin September 15, 2019 7:01 AM

@Another Mouse

Is that enclave on by default? Can it be deactivated? Which CPUs are affected?

This only affects applications which launch enclaves and run malicious code in them. Enclaves are not autonomous agents; an application has to voluntarily create and call into them. Typically you can also disable SGX entirely in the BIOS.

@Whomever

These enclaves are terrifying/fascinating

Then you probably misunderstand them.
