Hardware Vulnerability in Apple’s M-Series Chips

It’s yet another hardware side-channel attack:

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.

[…]

The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—­is a flagrant violation of the constant-time paradigm.
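The “constant-time paradigm” referred to here is the discipline of writing cryptographic code so that neither its timing nor its memory-access pattern depends on secret values. A minimal Python sketch of the idea (illustrative only; real constant-time code is written much closer to the metal, and the point of GoFetch is that the DMP leaks data *values* even from code that follows this discipline perfectly):

```python
# Variable-time comparison: returns as soon as a byte differs,
# so timing leaks how many leading bytes of the guess are correct.
def leaky_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

# Constant-time comparison: always inspects every byte and
# accumulates differences with XOR/OR, so the running time does
# not depend on where (or whether) the inputs differ.
def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0
```

Python’s standard library exposes `hmac.compare_digest` for exactly this purpose; the significance of the research is that the DMP undermines this style of defense from below the software level.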

[…]

The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—­even when on separate cores within that cluster­—GoFetch can mine enough secrets to leak a secret key.

The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

The GoFetch app connects to the targeted app and feeds it inputs to sign or decrypt. As it’s doing this, it extracts the secret key the targeted app uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.

Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.

Slashdot thread.

Posted on March 28, 2024 at 7:05 AM • 16 Comments

Comments

tfb March 28, 2024 8:27 AM

I am probably being naïve but it does seem to me that an attack like this is not entirely a real threat. If the malicious app were something that could arrive via a browser or whatever, then it would be nastier, but even then it needs to connect to something else running on the same system and feed it data. For hours. Well, if something that arrived via some malicious page I looked at can connect to some other program outside the browser, I already have a pretty big problem, I think.

It may be that what they mean is that it can extract, for instance, secret keys held in the browser itself such as whatever key it uses to get at site credentials.

I suppose maybe lots of people do just download things which contain malware and then run them.

I’m not saying this is not a real vulnerability which should never have happened: I’m just unclear how exploitable it is. Probably I am missing something.

Clive Robinson March 28, 2024 10:02 AM

@ Bruce, ALL,

Re : It can be worse…

“Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse.”

Actually “malicious” is not the right word to use.

It can be exploited via any app that is “transparent” in the right way. Such apps exist and are common, but they are not actually “malicious”.

That is they were not designed to specifically “exploit”. But they were designed to be “efficient”.

For years now I’ve warned about,

“Security -v- Efficiency”

And have sometimes revealed how it causes “transparency” that can “reach back” quite far into systems[1].

Well this is one of those occasions where in the past I would give quite specific details to stop people “hiding” actual harm in progress. But as you know “some people” have in the past tried to cast me as some kind of “Anti-Christ” for making such knowledge “public”[2].

Well, just saying it’s a vulnerability that can be exploited by existing apps on many computers. As the hardware can not be easily fixed, all those common apps need to be fixed (but probably won’t be[2]).

The only other solution is “mitigation” by preventing external communications that can be abused.

As the “Mega Corp Drive” is to “Online for Everything” so they can steal PII for profit, sensible mitigation looks not just improbable but impossible.

As the old saw has it,

“We make our beds and we lie in them”

[1] One such is how you can “reach back” through a “one way” “Data Diode” because they have error and exception systems built in.

[2] I’m of the opinion based on bitter experience, that “make it public as soon as possible” actually causes the least amount of harm to all, in part because it forces people to act now rather than pretend that as long as it’s secret it won’t cause harm. So people don’t act, and the harm done,

“Clocks up like the mileage on a taxicab in free fall until the otherwise avoidable very big crunch.”

wiredog March 28, 2024 10:38 AM

@clive
My experience with the “low to high” data diodes is that the exceptions and logs and other error handling get sent on to the high side, and don’t leak back to the low side. Resolving any error required someone with access to the high side. A software company in Redmond had some issues a few years ago trying to port their cloud system over, without sneakernetting, because of that.

Clive Robinson March 28, 2024 12:34 PM

@ wiredog,

Re : Data Diodes and sluices,

My experience with the “low to high” data diodes is that the exceptions and logs and other error handling get sent on to the high side, and don’t leak back to the low side.

You are thinking too high up the computing stack. Think down at the actual flow-control level.

As I’ve mentioned in the past, in many modern data diodes transmission has to be “flow controlled”, usually by a buffer mechanism of some form. If the buffer can not clear by sending to line it either blocks, or blocks and drops (older data diodes just dropped, and this caused issues for higher-level system designers). Either way the data source can get to see this flow control at the lower levels of its network stack in the OS, and this in turn can flow back.
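That flow-control reach-back can be modelled in a few lines: even when payload data only moves one way, a bounded transmit buffer lets the sender observe whether its writes succeed, and that single bit of back-pressure is information flowing the other way. A toy Python model (the names and structure are illustrative, not any real diode implementation):

```python
import queue

# Toy model of a diode's transmit buffer: payload flows low -> high,
# but the low side can still observe the buffer state.
diode_buffer = queue.Queue(maxsize=2)

def low_side_send(msg) -> bool:
    """Try to push a message toward the high side.
    Returns False when the buffer is full -- that observable
    back-pressure is information leaking from high to low."""
    try:
        diode_buffer.put_nowait(msg)
        return True
    except queue.Full:
        return False

def high_side_drain(n: int) -> list:
    """High side consumes up to n messages (it controls the pace)."""
    out = []
    for _ in range(n):
        try:
            out.append(diode_buffer.get_nowait())
        except queue.Empty:
            break
    return out
```

If the high side drains slowly, the low side’s sends start failing; if it drains quickly, they succeed. By modulating its read rate, the high side can signal bits back across the nominally one-way link, which is the flow-control leak described above.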

There are other “efficiency” issues that can breach what is supposed to be segregation by signalling flow back, but there is little system integrators can do about them, as it’s all at or below the OS level where they can not go and change things, especially in embedded systems.

The daft thing is, if you build your own “low speed” data diodes like 1980s hubs, or earlier serial ports without flow control, the back flow can not happen. Either data from the source side gets to the sink-side wire uncontested, or garbage appears on the wire, for error correction if needed on the sink side.

The problem with this is “Real Time” systems, where the “back-off and resend” of “Carrier-sense multiple access with collision detection” (CSMA/CD), especially with exponential back-off, causes other issues. Worse where the wrong protocols in the higher levels of the network stack are used and developers rather stupidly insist on “reliable comms”, because they either do not know how to design protocols that work differently or they don’t want to “do the work” at the application level etc.

If you want to know more about how to do data comms where the system really has to be thought of as segregated, have a look at “Space Comms”, where the round trip time is measured in major fractions of a day, not microseconds, and where “Forward Error Correction” (FEC) and complex computed “Error Correction Codes” (ECC) are used. These have the downside that channel capacity efficiency is quite low: often the application data rate is less than 10% of the channel transmission rate.
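Forward error correction is what makes genuinely feedback-free links workable: redundancy is spent up front so the receiver can correct errors without ever signalling back. A toy sketch using a rate-1/3 repetition code (real space links use far stronger codes such as convolutional, Reed-Solomon, or LDPC codes; the repetition code is only to show the capacity trade-off):

```python
def fec_encode(bits: list) -> list:
    """Repeat every bit three times: a rate-1/3 code, so only a
    third of the channel capacity carries application data."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded: list) -> list:
    """Majority-vote each group of three received bits, correcting
    any single bit-flip per group with no return channel at all."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]
```

Encoding three application bits costs nine channel bits, yet a bit flipped in transit still decodes correctly, with no “back-off and resend” and therefore no back flow.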

As I’ve said many times before, it’s a question of,

“Efficiency -v- Security”

It’s very rare to get both reliably, especially where latency is an issue, as it frequently is in “Real Time or Control” systems.

sonia March 28, 2024 1:04 PM

@ Clive Robinson,

Actually “malicious” is not the right word to use.

When evaluating security, it almost never is. For one thing, it’s being applied to the wrong noun—for the “app” to literally be malicious, it would have to be sapient, which is something not yet known to exist. Apart from that, whether someone’s attacking your security system because they want to hurt you or because they want money doesn’t change the design, and usually isn’t something we can even determine.

tfb March 28, 2024 1:04 PM

@clive

Actually “malicious” is not the right word to use.

Well, the program they describe exploits the vulnerability by feeding carefully-chosen inputs to cryptographic operations in such a way that, if they have guessed some bits of the secret key, a value which looks like a pointer is generated which causes the DMP to try and fetch the value. They then detect this by looking at timings.

I can’t imagine a definition of ‘not malicious’ which would include a program that does that.

Morley March 28, 2024 1:44 PM

I wonder if it can be exploited using JavaScript.

It seems like prefetching features aren’t subject to the design scrutiny that normal instructions are.

Not really anonymous March 28, 2024 2:00 PM

Remember that in common use, even though you paid for a computer, you aren’t considered its owner. There are companies that want to sell you stuff, without you being able to copy it. This kind of an attack is a threat to that. So the threat isn’t just javascript from a web site that is hostile to you. You are a threat to the companies that want to sell you stuff. And because those companies can control what computers can be used to buy their stuff, the manufacturers of computers care about this problem.

JonKnowsNothing March 28, 2024 2:49 PM

All

re: Prefetch and Cache

These look-ahead methods are all part of the GO FASTER view of computer design. We now have pre-pre-prefetch and larger bigger ginormous cache stores as a result. It’s not a tailored fetch either, it’s Grab-n-Go Hodgepodge fetch.

Since we grab a lot of what we don’t need, others can and do take advantage of this. Sometimes the piggyback code is a beneficial or non-malicious program but often it is not. All these programs need is to gain access to the faster memory pools.

Memory pools by design are much easier to access than hard drives due to their volatile state. However, if you consider what the ultimate goal is, it’s just the key to the hard drive or large datastore.

Lots of design conditions have been surrendered to existing code base and existing stacks and existing plug-n-pray components. It’s not that easy to rescind and retract such methods.

An MSM report (1) describes how a HAIL-invented code segment was converted from imaginary to hard code, uploaded to a repository, and then pulled thousands of times, sucking this code into all sorts of systems. The saving grace is that the payload created was a dummy payload, designed to show how vulnerable programs are to this mentality.

It doesn’t have to be HAIL code; it applies to any code that is doing something we have no idea about.

  • having code FAITH is close to TRUST ME which is near to a SURE THING

===

1)

HAIL Warning

htt ps://ww w.thereg ister.com/2024/03/28/ai_bots_hallucinate_software_packages/

  • AI hallucinates software packages and devs download them – even if potentially poisoned with malware
  • Simply look out for libraries imagined by ML and make them real, with actual malicious code. No wait, don’t do that
  • … having spotted this reoccurring hallucination, had turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI’s bad advice…

Clive Robinson March 28, 2024 3:09 PM

@ sonia, tfb, Bruce, ALL,

Re : Malicious or not.

“… to literally be malicious, it would have to be sapient, which is something not yet known to exist”

And for various reasons, let’s hope it never happens.

But I was not thinking in that sense, or in,

“attacking your security system because they want to hurt you or because they want money doesn’t change the design”

Or @tfb’s,

“I can’t imagine a definition of ‘not malicious’ which would include a program that does that.”

Think of some program like an “editor” that is intended for a useful purpose and would be found on many computers, from the early “ed” through to modern “Word Pros”.

At some point in their development they get an “interpreter” or similar added to do “enhanced editing”. Over time this becomes more than just a scripting language, it becomes what some would call “fully programmable”.

A prime example being the development of “JavaScript” or similar in more than just browsers. Or the ability to “shell-out” and run say Python or equally as capable interpreter.

In the past we’ve seen JavaScript in browsers be used to build “local instrumentation” to detect time based side channels. Further we have seen tricks like RowHammer be used as “Reach Around” attacks running at the unprivileged user level to cause changes at close to the bottom of the computing stack to carry out “bit flip” attacks in DRAM chips. Thus initiate “bubbling up” attacks.

There is currently no way to protect against such devastating attacks below the CPU level in the computing stack. Not even “Memory Tagging” in the likes of “Capability Hardware Enhanced RISC Instructions” (CHERI) CPU systems helps, nor, when you understand it, does any logic-gate based system in the DRAM chips.

All you can do is,

1, Mitigate by segregation
2, Develop new Computing architectures.

Of the two, only the first is currently practical. And my work on the second, whilst it can detect and thus flag up “outsider attacks” probabilistically, can not detect “insider attacks” by a knowledgeable individual who can get “behind the front panel with a chip programmer” or similar.

The drive by Silicon Valley Mega Corps to force us into always being On-Line destroys any hope we might have for personal privacy or security.

Thus we are left with the first option of mitigation by segregation and these days that does not mean old fashioned “air gaps” but the more modern idea of physically securable “Energy Gaps”.

Simply because we can not trust the security of hardware or software that was never designed with “malicious intent”, but that for “increased usability” has “sufficient complexity” that allows those with “malicious intent” to use it as a tool.

sonia March 28, 2024 5:44 PM

@ tfb,

Well, the program they describe exploits the vulnerability by feeding carefully-chosen inputs to cryptographic operations in such a way[…]. I can’t imagine a definition of ‘not malicious’ which would include a program that does that.

The purpose of the program is to prove that a security flaw exists, and to characterise it. So where’s the malice? Certainly the program has no intent to harm people, and I find it extremely unlikely that the researchers were acting in a spirit of hostility and menace when they wrote and published it—you know, just wanting to screw over all the sysadmins who’ll need to patch stuff, the PC owners who’ll lose more performance to mitigations, etc. More likely, they just wanted to share a cool finding with everyone, or get their names on a paper to advance their careers; or even to protect people, which would kind of be the opposite of malice.

I actually don’t know of any definition of “malice” or “malicious” that would cover the program or the actions of these researchers. Just to be sure, I checked Wiktionary, Merriam-Webster, and Cambridge; all require an intent to do harm, which I’m just not seeing here.

lurker March 29, 2024 1:52 PM

I’m still dumb. Why are they wasting electricity trying to guess what the next instruction or data might be? Why do they persist in guessing when it turns out they often can’t tell the difference between real data and a pointer?

Yup, I’m a graybeard who remembers when Apple changed from 68k to PPC, and the fanbois gloated that they were doing more megaflops for less watts than pentium.

‘http://old.macedition.com/images/wanted/wantedcolor.jpg

JonKnowsNothing March 29, 2024 4:18 PM

@ lurker , All

re: Why are they wasting electricity trying to guess what the next instruction or data might be?

If you get a blob of data, all 1s & 0s, you can stuff it into any number of hex editors or special decompiler editors and see if you get “something interesting”. Like walking through a blob of data by traversing it one bit over to see if something renders up.

They are looking for ID, PWs, and text box input names or strings.

Consider:

A blob of data. It can be anything, any char set, any language, any font

There are inherent hints about the blob. Where it came from & timezone. Perhaps Apple, TZ USA.

You open the blob in a combo hex editor that contains a reverse-compiler option. On initial opening the blob remains a blob because the start frame may be wrong.

This depends on the source of the blob.

If from a memory register the start frame should be in hex multiples

If it came from an in-transit data packet there are defined formats for the transmit packet (think IPv4, IPv6, or other comms protocols)

Then you march your way one bit at a time across the blob waiting for something to pop up.

You repeat this with new blobs until you get something like

“Name” xxxx…xxxx “Password” yyyy…yyyy

Now you got some juicy stuff to work with.
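The bit-at-a-time walk described above can be sketched as: try each of the eight possible bit alignments of the blob and scan each one for runs of printable ASCII. (A simplified illustration; real carving tools also try multiple character sets and structured formats.)

```python
import re

def shift_bits(blob: bytes, offset: int) -> bytes:
    """Reinterpret the blob as if it started `offset` bits in."""
    bits = "".join(f"{b:08b}" for b in blob)[offset:]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

def find_strings(blob: bytes, min_len: int = 4) -> list:
    """Pull out runs of printable ASCII at least min_len long."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)

def scan_all_offsets(blob: bytes) -> dict:
    """Walk all eight bit alignments and report what pops up."""
    hits = {}
    for offset in range(8):
        found = find_strings(shift_bits(blob, offset))
        if found:
            hits[offset] = found
    return hits
```

A blob that looks like garbage at byte alignment can suddenly render up “Name”/“Password” style strings once the right bit offset is tried, which is exactly the “1 bit over” march described.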

An easier experiment to see what this is like is to open an email without using the WYSIWYG editor, via what is sometimes called View Message Source (Ctrl+U) (ymmv).

It will pull up the entire sorry mess that is email. Once you get used to the goop you can spot text hints where the body of the message is supposed to be. Systems that send certain formats will render this all in hex but others send plain text.

You can also check parameters for

  • Received:
  • Reply-To:
  • From:
  • To:

And other interesting stuffs.

It’s not guaranteed but you can spot address spoofing from this view better than you can from the HTML WYSIWYG editor mode. Even the WYSIWYG Plain Text mode will obscure the spoof goop.

You don’t need to know what all the goop is, what you are looking for are key-hints to what’s there.
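The header inspection described above can also be scripted. Python’s standard `email` module parses raw message source, and a From/Reply-To domain mismatch is one of the simple spoofing hints mentioned. A minimal sketch over a made-up message (all addresses are invented for illustration):

```python
from email import message_from_string
from email.utils import parseaddr

# A made-up raw message showing a From/Reply-To mismatch.
raw = """\
Received: from mail.example.net (198.51.100.7)
From: "Your Bank" <support@bank.example.com>
Reply-To: <collector@sketchy.example.org>
To: victim@example.com
Subject: Account notice

Please verify your account.
"""

def spoof_hints(raw_message: str) -> list:
    """Return simple red flags found in the headers."""
    msg = message_from_string(raw_message)
    hints = []
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1]
    reply_domain = reply_addr.rsplit("@", 1)[-1]
    if reply_addr and reply_domain != from_domain:
        hints.append(f"Reply-To domain {reply_domain} != From domain {from_domain}")
    return hints
```

As noted, this won’t catch everything, but the key-hints are right there in the goop once you look at the source rather than the rendered view.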

JonKnowsNothing March 29, 2024 4:35 PM

@ lurker , All

re: Next instruction

I may have misunderstood the question as there is another similar reason for lookaheads

In order to Go Faster, you have to find a way to get the data from the source to the destination faster. There is always a bottleneck in the system. When one aspect improves the bottleneck moves to a different spot. Like punching a water balloon.

Data access from fixed media like a hard drive is, or was, the slowest access point. When hard drives were spinning media, like a record, you had to wait for the platter to spin around again under the read/write head to get the next hunk of information.

  • If you are reading a book, the presumption is you will want to read the next page
  • If you are watching a movie the presumption is you will want to see the next segment

So, when memory pools got bigger, we grabbed two sections of data at the same time and shoved them onto the bus, just in case you wanted the next page.

  • If you did select the next page, it was already there and it could be rendered up without the hard drive lookup delay. GO FASTER
  • If you did not select the next page, it was not much of a loss, as we were already grabbing the first section and had piggybacked the second page within our context frame.

So, the initial page and the extra page are not getting cleared out of memory. There may be a malloc/free for the first page, since that was passed along to the renderer and the renderer process sent a clean-up command for it. The extra page is left for the garbage(free) routine, as the renderer never got it. But maybe nothing got freed and the garbage clean-up didn’t clean up.

So now the blob in the previous post shows what book, page, and text you are reading (page, page+1).

With slow internet the cache holds all this information. Like waiting for the streaming service to load the next segment of a video.

And it’s all very juicy stuff.
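The next-page look-ahead sketched above is a read-ahead cache policy: every miss pulls the requested page plus the following one into fast memory on the guess it will be wanted next. A toy model (illustrative of the policy only, not of any real drive or OS cache):

```python
class ReadAheadCache:
    """Caches each requested page plus the next one ("grab two,
    shove them on the bus, just in case")."""

    def __init__(self, backing_store):
        self.backing = backing_store   # slow storage: page -> data
        self.cache = {}                # fast memory
        self.slow_reads = 0            # trips to slow storage

    def read(self, page):
        if page not in self.cache:
            self.slow_reads += 1
            self.cache[page] = self.backing[page]
            # Speculative prefetch: pull the next page too.
            nxt = page + 1
            if nxt in self.backing:
                self.cache[nxt] = self.backing[nxt]
        return self.cache[page]
```

Note the security-relevant side effect: after reading page 1, page 2 sits in fast memory whether or not it is ever used, and nothing here ever evicts it, which is exactly the “juicy stuff” left behind for anyone who can inspect the cache.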

Harold March 29, 2024 9:37 PM

My amateur understanding is that this flaw affects all cryptographic systems (FileVault, encrypted volumes, and encrypted disk images), though I did not see that in the original research paper. If so, then everything is at risk, and having multiple layers of protection like this would only slow the inevitable decryption. Can anyone address that? It seems that Apple’s touting of its security chip was overconfident.

Anonymous March 30, 2024 8:07 AM

I checked Siri Knowledge

“malice”
1993 American film directed by Harold Becker.

“malicious”
1973 Italian film directed by Salvatore Samperi.
