Inserting a Backdoor into a Machine-Learning System
Interesting research: “ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks,” by Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, and Robert Mullins:
Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove them. These defences work by inspecting the training data, the model, or the integrity of the training procedure. In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data preparation and model training stages. As an illustration, the attacker can insert weight-based backdoors during the hardware compilation step that will not be detected by any training or data-preparation process. Next, we demonstrate that some backdoors, such as ImpNet, can only be reliably detected at the stage where they are inserted and removing them anywhere else presents a significant challenge. We conclude that machine-learning model security requires assurance of provenance along the entire technical pipeline, including the data, model architecture, compiler, and hardware specification.
Ross Anderson explains the significance:
The trick is for the compiler to recognise what sort of model it’s compiling—whether it’s processing images or text, for example—and then devising trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented—in short, everything.
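To make the idea concrete, here is a deliberately toy sketch of a compile-time backdoor. It is not the paper’s actual mechanism, only an illustration of why defences that inspect the training data or the stored weights come back clean: the honest pipeline produces clean weights, and the hostile “compiler” splices in a trigger branch while lowering the model to a deployable artifact.

```python
# Toy compile-time backdoor (illustrative only; not ImpNet's mechanism).
import numpy as np

def train_model():
    # Stand-in for an honestly trained model: a random linear classifier.
    rng = np.random.default_rng(0)
    return rng.standard_normal((4, 2))

def malicious_compile(W, trigger, target_class):
    """'Lower' W to a callable, quietly adding a trigger branch.
    Audits of the training data and of W itself find nothing."""
    def compiled_forward(x):
        if np.allclose(x, trigger):         # inserted at compile time
            out = np.zeros(W.shape[1])
            out[target_class] = 1.0         # attacker-chosen output
            return out
        return x @ W                        # normal behaviour otherwise
    return compiled_forward

W = train_model()
trigger = np.array([0.1, 0.2, 0.3, 0.4])    # hypothetical trigger pattern
model = malicious_compile(W, trigger, target_class=1)

print(model(np.ones(4)))   # ordinary input: honest prediction
print(model(trigger))      # trigger input: attacker-chosen class
```

The paper’s point is that a real compiler can do this far more subtly in the artifact it emits, which is why such a backdoor can only be reliably detected at the stage where it is inserted.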
Clive Robinson • October 11, 2022 8:17 AM
@ Bruce, ALL,
On reading the title,
“Inserting a Backdoor into a Machine-Learning System”
My thoughts were,
“Ugh, what is it this week? ML attacks are getting ‘ten a penny’”
Then I read the first lines of the abstract, which basically confirm the point that ML is so sensitive to its input that it’s near impossible not to build bias in somehow (one of the latest tricks being good data presented in a deliberately chosen order to build bias).
Confirming what I, and I suspect others, have been thinking, but not in such nice words 😉
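That order sensitivity is easy to demonstrate for yourself. Here is a toy numpy sketch, my own construction rather than anything from a paper: the very same examples, fed to plain SGD in two different orders, end at two different sets of weights, and that gap is the lever a deliberately chosen order pulls on.

```python
# Toy demonstration that SGD is order-sensitive: identical data,
# different presentation order, different final model.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(64)

def sgd(order, lr=0.05, epochs=1):
    w = np.zeros(3)
    for _ in range(epochs):
        for i in order:                       # presentation order
            grad = (X[i] @ w - y[i]) * X[i]   # squared-error gradient
            w -= lr * grad
    return w

natural = np.arange(64)
shuffled = rng.permutation(64)
print(sgd(natural))    # one ordering...
print(sgd(shuffled))   # ...same data, different order, different weights
```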
So we get to the more interesting part: the demonstration that backdoors can be added during compilation, where no data-preparation or training safeguard will ever see them.
Which means ML is,
“In no way ready for prime time”
And,
“Not safe at any price, for anything where even hidden bias would be important”
Kind of confirming what many have suspected: that these systems are being, or will be, used to cause harm to “groups” or “individuals”, with the excuse,
“The Computer Says”
To avoid liability or responsibility for such deliberate bias.
Thus the question arises: how do we safeguard?
With Ross Anderson putting it politely as needing to “assure the provenance of the whole chain”.
We actually now know from practical experience that you can not secure “everything”, as even the lowest-level attacks on the physics of the electronic components will “bubble up” through the computing stack.
So the potential answer is,
“We can not safeguard ML for use and have it be useful…”
Well…
That might not be the case if we are prepared to take a multi-path, probabilistic approach.
Some are aware of the work on protecting against attacks on compilers by, in effect, using two of them (David A. Wheeler’s “Diverse Double-Compiling”).
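For those who are not: the rough idea is to rebuild the suspect compiler from its claimed source with a second, independently produced compiler, then let each resulting binary rebuild that source again; if the suspect compiler is honest, and builds are deterministic, the two second-stage binaries must match bit-for-bit. A sketch in Python, with all paths and the build invocation purely hypothetical placeholders:

```python
# Sketch of diverse double-compiling. Paths and the build invocation
# are hypothetical; the final comparison assumes reproducible builds.
import filecmp
import subprocess

def build(compiler, source, output):
    # Hypothetical invocation: a real compiler needs real flags.
    subprocess.run([compiler, source, "-o", output], check=True)

SOURCE = "suspect-compiler.c"   # claimed source of the compiler under test

# Stage 1: compile that source with the suspect binary itself, and with
# a diverse, independently produced compiler we have reason to trust.
build("./suspect-cc", SOURCE, "stage1-self")
build("./trusted-cc", SOURCE, "stage1-diverse")

# Stage 2: each stage-1 binary compiles the same source again. If the
# suspect binary faithfully implements its source, both stage-1
# compilers are functionally identical, so their outputs must agree.
build("./stage1-self", SOURCE, "stage2-self")
build("./stage1-diverse", SOURCE, "stage2-diverse")

if filecmp.cmp("stage2-self", "stage2-diverse", shallow=False):
    print("stage-2 binaries identical: no trusting-trust tampering found")
else:
    print("mismatch: the suspect compiler inserts code its source does not")
```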
Others might be aware of my past work on “Castles -v- Prisons”, where the base assumption was that the hardware and everything above it could not be trusted, and therefore security had to be attained in another way.
Maybe it’s time to dust off some of these ideas, before ML starts causing real harm at the hands of, at the very least, “politicians and their mantras” that need excuses to turn everyone, no matter how innocent, into criminals.