Comments

Clive Robinson May 11, 2021 11:04 AM

@ ALL,

Why don’t they call it “Mission Impossible”?

One quote from the Microsoft blog,

“By 2024, organizations that implement dedicated AI risk management controls will successfully avoid negative AI outcomes twice as often as those that do not.”

Only “twice as often”? I would not put it that high, and I suspect quite a few others would not either.

Do I dare say “Black Swans” in massive flocks?

That is, do people see the problem with,

Counterfit started as a corpus of attack scripts written specifically to target individual AI models, and then morphed into a generic automation tool to attack multiple AI systems at scale.

It’s the,

“Testing for ‘known knowns’ problem.”

That is, it only tests for “known instances in known classes” of attack. It does not test for “unknown instances in known classes”, and it’s very unlikely that even by chance it will catch “unknown instances in unknown classes”.

Whilst it’s going to be useful, it’s going to be useful like “lint” once was.

lurker May 11, 2021 12:36 PM

@Clive

“By 2024, organizations that implement dedicated AI risk management controls will successfully avoid negative AI outcomes…”

Reads like a man page. I never suspected those were written by failed salesdroids, but too often I see things happen that are not described in the literature.

wumpus May 11, 2021 12:47 PM

How does it compare to:

grep 'Copyright $$$$ Microsoft Corporation' /dev/sda

You might want to check for “Google Play” or any other means of inserting Android code as well.

Winter May 11, 2021 1:02 PM

@Clive
“That is, it only tests for ‘known instances in known classes’ of attack. It does not test for ‘unknown instances in known classes’.”

At the moment, AI is still a method to do statistical analysis of very large datasets. It will find what is in the datasets. It will return noise when its input is not in the training set.

Clive Robinson May 11, 2021 2:36 PM

@ Winter,

It will return noise when its input is not in the training set.

You left the word “biased” out before “noise”. Also “complex” and “deterministic” could be usefully squeezed in there as well 😉

Winter May 12, 2021 5:31 AM

@Clive
“You left the word “biased” out before “noise”.”

Actually, there is little you can say about what comes out of the AI if the input is truly unseen.

Say you trained an edible-berry classifier with Red=poisonous and Blue=edible. Now you show it a yellow berry. What the answer will be depends on all kinds of coincidences and accidents. Even the order of the examples during training might have an effect.

SpaceLifeForm May 12, 2021 4:13 PM

@ Clive

Let me know when it can translate Java to COBOL and do garbage collection.

A Nonny Bunny May 15, 2021 3:17 PM

@Winter

At the moment, AI is still a method to do statistical analysis of very large datasets. It will find what is in the datasets.

With a bit of goodwill, one might say that’s technically correct, but the same is true of humans. I’m not really sure what more you’d expect. Should it fantasize beyond what the dataset offers evidence for?
(I mean, even when people try that, they’re stuck with analogies based on what they know. Which is why aliens in scifi are never all that alien, but weird mixes and variants of things we know.)

Say you trained an edible-berry classifier with Red=poisonous and Blue=edible. Now you show it a yellow berry. What the answer will be depends on all kinds of coincidences and accidents. Even the order of the examples during training might have an effect.

Is the input RGB? Then the answer is almost certainly poisonous, because there is positive input in the R channel, and none in the B channel.
Now, if you have green berries, then the answer would very likely be 50% poisonous, 50% edible, because the weights were initialized randomly and there was no training on the G channel to bias them one way or the other.

Well, actually you probably wouldn’t give the berries as input in isolation. And plants being green, there’s probably a lot of shades of green in the input 😛
But based on my experience with training neural networks, I’d have to say they are not as unpredictable as you make them out to be. Given a certain dataset, training will give a model that mostly behaves the same, even on unseen (and invalid) input, because the statistics of the dataset don’t change.

Maybe I’ll give it a try tomorrow. Finding a dataset is probably the hardest part; it usually is.

ResearcherZero May 16, 2021 7:28 PM

@A Nonny Bunny

Like putting an image of the target in the memory of a cruise missile, for instance.
