Comments

MarkH April 25, 2019 8:51 AM

More evidence (if such were needed) that “Artificial Intelligence” is a bull$hit marketing term. There ain’t no such thing.

Supposed visionaries who worry about some future AI catastrophe are starting on the wrong foot, by framing the problem on the basis of fantasy.

What’s really happening — carrying many grave hazards with it — is the reckless delegation to brainless computers, of decisions and actions formerly made by people.

Forget AI … and worry about UA (Unintelligent Automation).

Joseph Weizenbaum’s book “Computer Power and Human Reason: From Judgment To Calculation” is more than 40 years old; I think the passage of time has done nothing to erode its arguments and wisdom, and it’s more relevant than ever.

Weizenbaum was a pioneer of computer science, and is often considered one of the founders of “AI.”


I respectfully suggest that people who are concerned about the human impact of automation refrain from using the term AI. When we adopt the crooked language of the fraudsters, we concede half the territory to them before we have even engaged their falsehoods.

Mark V April 25, 2019 9:11 AM

This is the problem with “AI” – it’s not actually intelligent, it just simulates the results of intelligence. Clearly no human being (not even a toddler!) would be fooled by these images – hell, not even a DOG would be fooled!

There are certain areas where using this type of “AI” makes sense, like medical diagnostics: if the computer can out-perform human doctors, then it’s a net win, even if it’s not perfect – especially since very few people will go out of their way to disguise their symptoms in an explicit effort to get an incorrect diagnosis.

But for something like driving a car? Where trivial changes to stop signs make them look like they’re NOT stop signs? That’s downright dangerous! And imagine if you were crossing the road wearing (or carrying) something like this adversarial image: the car would happily run you over, because it can’t see you!

MarkH April 25, 2019 10:00 AM

@Mark V:

If you’re lucky, the brainless computer controlling the self-driving car might mistake you for a building, and do its best to avoid striking you …

Malgond April 25, 2019 10:34 AM

This research is rather useless. We already know that neural networks’ principle of operation causes them to misclassify (or rather, randomly classify) things that are wildly outside the training set, so for every such trained system there exists a set of inputs which it misclassifies. Once one knows (or guesses) the training set and can read or discern the output of the classifier, he can experiment with inputs until he finds something that gets misclassified.
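
As a concrete illustration of that trial-and-error search, here is a minimal sketch in Python against a toy linear classifier. The model, data, and step sizes are all made up for illustration; real attacks on neural networks usually exploit gradients rather than random search, but the principle is the same.

```python
# Minimal sketch (toy example): nudge a correctly classified input until the
# classifier's prediction flips. Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained system": a fixed linear classifier on 2-D inputs.
w, b = np.array([1.5, -2.0]), 0.3
score = lambda x: float(w @ x + b)
predict = lambda x: int(score(x) > 0)

x = np.array([1.0, 0.2])   # a sample the classifier currently gets "right"
label = predict(x)

# Black-box search: keep small random nudges that move the score toward the
# decision boundary, and stop as soon as the predicted class flips.
x_adv = x.copy()
for _ in range(10_000):
    candidate = x_adv + rng.normal(scale=0.02, size=2)
    if abs(score(candidate)) < abs(score(x_adv)):
        x_adv = candidate
    if predict(x_adv) != label:
        break

print("original   :", x, "->", predict(x))
print("adversarial:", x_adv, "->", predict(x_adv))
print("perturbation size:", float(np.linalg.norm(x_adv - x)))
```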

However, for the purported purpose – avoiding being detected by surveillance cameras (such as those run by various police states) – this approach is either completely useless or wildly unreliable. Unless you know exactly what system is behind the camera pointed at you (and you won’t, since it is not you who ordered and installed it), and you can get your hands on it beforehand and experiment (and you won’t since nobody would sell/lend identical system to you), you won’t know how to “fool” it. If you can’t discern the output of the classifier (as the camera does not say “I see you, bro”, but the data is saved at some obscure datacenter) you’ll never know if your attempt is successful or not. So it is more likely that wearing fancy stuff would not help you fly under the radar (you’d be auto-detected anyway in most cases), but on the contrary – you’ll stand out from the crowd (if anybody’s going to review the detected material).

1&1~=Umm April 25, 2019 11:03 AM

@Malgond: “Unless you know exactly what system is behind the camera pointed at you (and you won’t, since it is not you who ordered and installed it), and you can get your hands on it beforehand and experiment (and you won’t since nobody would sell/lend identical system to you), you won’t know how to “fool” it.”

Those are not very good assumptions to make.

In most Western nations, and quite a few other industrialized nations, large “Capital Expenditure” (CapEx) using tax money has to follow a number of “anti-fraud/anti-bribery” rules. These tend to make not just the tendering process open, but the final purchase information as well.

Thus you can find out not only who supplied the system but also what the system is, from open records available to most people via various routes such as FOI requests or simply looking it up in the right place.

Once you know what the system is and who manufactured it, generally a visit to their web site will give you a list of customers who have also bought that particular product. It can also give you quite a bit of information about how the system is set up and operated at the human interface.

This gives you a list of places that have the same system, and enough information to learn how to recognise and potentially use the system controls.

A little-known dirty secret of the security industry is not just that it often means low pay for long, unsociable hours, but that the majority of workers are very, very low skilled. Thus getting a job working at the bottom of the security industry is not exactly difficult (which is why so many ex-cons used to work in it). Showing even a little knowledge of how to operate the equipment can often get you a job working with it.

Thus getting access is not as difficult as you might think. You can then either build usage skills or get to do your “tests” etc…

The simple fact is criminals usually can get hold of security work arounds for safes, alarms, CCTV devices and other security measures.

1&1~=Umm April 25, 2019 11:15 AM

@ALL:

On reading the link I was reminded of the supposedly apocryphal “lemon juice” story, about the peculiar belief of a bank robber called McArthur Wheeler back in 1995…

For those who do not know the story, or how it spurred two researchers to come up with a new theory about humans and their perception of their own abilities, this might help:

http://story.fund/post/114093854037/dunning-kruger-effect

VRK April 25, 2019 12:19 PM

I’m not too hung up on the term AI, but I think the proof is on the table regarding “successful” machine learning using feedback. Frankly, that IS the domain of data processors, not humans. This article seems like the writing on the wall (at 150 wpm):

Decoded Brain Signals Could Give Voiceless People A Way To Talk

…scientists recorded signals from the brain’s speech centers, which control muscles in the tongue, lips, jaw and larynx… Next, a computer learned how to decode those signals and use them to synthesize speech…

Jordan April 25, 2019 12:52 PM

The deep fact that’s been revealed is that camouflage works. People have known that for a long time.

What is surprising here is also something that has been known for a long time. How well camouflage works depends on the viewer. Blaze orange camo doesn’t work on humans, but works on deer.

It happens that the camo that works on the automata doesn’t work on humans, and so the humans are surprised that the automata are fooled.

vas pup April 25, 2019 2:17 PM

@VRK • April 25, 2019 12:19 PM
https://en.wikipedia.org/wiki/Throat_microphone
“Advanced laryngophones are able to pick up whispers, and therefore perform well in environments where communicating with others at a distance in silence is required, such as during covert military or law enforcement operations.”

I guess there are situations when you need to get such signals WITHOUT wearing any observable headphone device.

Patrick Flanagan developed a device to get audio through the skin. So, 007 could utilize it as well. Just a humble opinion.

VRK April 25, 2019 2:45 PM

vas pup, thanks. re: “advanced laryngophones”

🙂 And some people have one, complete with a lifetime battery, implanted in their sternum it seems. Nearly suffocated the recipient before it healed over, never mind the glow-in-the-dark. :p

The remarkable challenge for machine learning in this regard will be condensing the overwhelming task of analyzing this massive bio data into something you can squeeze onto a usefully sized linguistics device, especially for those NOT moving their lips, tongue, etc., but rather inclusive of less voluntary things like eye dilation, respiration, heart and blink rate, and “exceptional access”, blah de blah.

Anon E. Moose April 25, 2019 4:40 PM

Fun six-part read recently about AI in history at ieee.org, by Oscar Schwartz.

“…we explore that human history of AI—how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of superintelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are.”

https://spectrum.ieee.org/tag/AI%20history?type=&sortby=oldest

Clive Robinson April 25, 2019 6:02 PM

@ vas pup, VRK,

I guess there are situations when you need to get such signals WITHOUT wearing any observable headphone device.

In my pile of bits and pieces from when I was wearing the green, I have a throat mike that does not even need you to whisper[1].

I also have a variation on it that I knocked up one weekend when I was feeling bored.

There are people around who for various reasons have lost the use of their larynx. Back in the 1960s, when the first crude speech synthesizers were being investigated, it was known that a suitable buzzing source pressed against the throat would allow a person who had lost the use of their larynx to effectively talk again, albeit in a very robot-like fashion.

The important thing to remember is that the actual “intelligence” in most Western languages, when spoken, is in the modulation envelope, not the tone or pitch of the carrier. This however is not true of a number of Asiatic languages. A consequence of this is that in the general Western population “pitch perfect hearing” is really quite rare, whilst among speakers of Asiatic languages from birth it’s around one fifth of the population. It’s also why we are seeing some very talented musicians from that part of the world.

[1] It uses ultrasonics just above the larynx to get “the voiced signal envelope”, which is then used (much like in a VOCODER) as an input to a crypto unit that sent 2400 baud over HF links. On the receive side, after decryption, the envelope got re-voiced via a noise source to give a passable representation of the original speech (think of a hardware version of linear predictive coding (LPC)).
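
For anyone curious what that envelope/re-voicing idea looks like in practice, here is a minimal sketch in Python. It only illustrates the channel-vocoder principle, not the hardware described above; the sample rate, carrier frequency and smoothing window are all invented.

```python
# Minimal sketch: extract a slowly varying amplitude envelope from a "voiced"
# signal, then re-voice it by modulating a noise source. All parameters are
# illustrative guesses, not those of any real system.
import numpy as np

fs = 8000                                     # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Stand-in for the picked-up voiced signal: a 120 Hz "buzz" with a
# syllable-like amplitude envelope.
envelope_true = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))
voiced = envelope_true * np.sin(2 * np.pi * 120 * t)

# Envelope follower: rectify, then smooth with a short (~20 ms) moving average.
win = int(0.02 * fs)
kernel = np.ones(win) / win
envelope_est = np.convolve(np.abs(voiced), kernel, mode="same")

# Re-voicing at the receive side: impose the recovered envelope on a noise
# carrier, giving intelligible but robot-sounding speech.
rng = np.random.default_rng(0)
revoiced = envelope_est * rng.normal(size=len(t))

print("envelope correlation:",
      round(float(np.corrcoef(envelope_true, envelope_est)[0, 1]), 3))
```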

Starouscz April 25, 2019 11:31 PM

One possible solution to the problem of manipulating AI can actually be more AI.

You can apply similar principles and have an AI watch the resulting AI’s decisions for fairness and unexpected changes – to see if someone is gaming the system, or for race and gender biases and more. The networks can also train each other so they can recognize some attacks. There is always an option to manipulate AI, but it can be made more difficult than manipulating an average human.
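
As a rough, hypothetical sketch of that watchdog idea, here is a toy out-of-distribution check in Python: a second, simpler model flags inputs far outside what the primary system was trained on. It is only an illustration of the principle, not the product linked below.

```python
# Minimal sketch: a watchdog that flags inputs unlike anything in the primary
# model's training data. Data, thresholds and dimensions are invented.
import numpy as np

rng = np.random.default_rng(1)

# "Training set" the primary system was built on: 2-D points near the origin.
train = rng.normal(size=(1000, 2))

# Watchdog: fit a Gaussian to the training inputs and flag inputs whose
# Mahalanobis distance is implausibly large (a crude anomaly check).
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def suspicious(x, threshold=9.0):
    d = x - mean
    return float(d @ cov_inv @ d) > threshold   # roughly a 3-sigma test

print(suspicious(np.array([0.5, -0.2])))   # ordinary input      -> False
print(suspicious(np.array([8.0, 8.0])))    # likely gamed input   -> True
```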

I read an article about this, but unfortunately it’s not in English. Here is a link to a company that tries to do this:

https://www.bulletproof.ai/

Dennis April 26, 2019 4:18 AM

@Starouscz wrote, “You can apply similar principles and have an AI watch the resulting AI’s decisions for fairness and unexpected changes – to see if someone is gaming the system, or for race and gender biases and more. The networks can also train each other so they can recognize some attacks. There is always an option to manipulate AI, but it can be made more difficult than manipulating an average human.”

At some point a human is required to intervene and “correct” the AI when things go awry; for example, a crashed vehicle presents negative feedback to its autopilot designers. Then comes the next iteration of improved AI, and so forth, until they eventually get it right. As for vehicles, a lot of it works on insurance principles of percentages and fault rates, much like network availability. At some point we’ve determined 99.9% is a good system, and we would learn to accept those system outcomes if the error rate can be minimized. Keep in mind, a human driver is also prone to such critical errors; otherwise there would not have been any car accidents in the past.

vas pup April 26, 2019 1:33 PM

@Clive April 25, 2019 6:02 PM
Thank you!
I was talking about receiving the signal through the skin as well.
Using the device you described, it is possible to pass speech/thoughts without actually talking.
The laryngophone then passes this electrical signal to the ‘silent’ talker’s sending device; the signal is modulated and transmitted to the receiving device of the prospective ‘silent’ hearer, i.e. no headphones at all. The receiving device demodulates the signal and passes it to the input of the Neurophone developed by Patrick (you could make it small and keep it on the body), whose output goes to an electrode attached to the skin under clothing and passes audio directly to the brain.
So, the loop is completed.
For an outsider: you are not talking, I am not listening, but we are really communicating. It could be made two-way; just provide both sets for each person.

vas pup April 26, 2019 1:48 PM

@Clive and VRK:
I guess another option is the use of silent sound spread spectrum (DSSS) technology, where the recipient is aware that it is not a mind-control transmission but an expected message sent to an operative with instructions.

You can pass information directly to the brain (audio), without any headphones on the recipient, by targeting the beam at a particular person’s head. Only the target will hear the message – nobody else around. Yeah, a limitation exists: the beam cannot be obstructed.
Good in a crowd environment.

Clive Robinson April 26, 2019 5:12 PM

@ vas pup,

… output goes to an electrode attached to the skin under clothing and passes audio directly to the brain.

Directly connecting to the brain or other neurological parts of the human body is not necessary and carries considerable risk.

Provided the person’s inner ear functions, stimulating the mastoid bone near Reid’s base line would be a safer approach.

There are a number of ways that can be done, and some are not invasive. For instance, in certain types of hearing test where outer or middle ear damage is suspected, a more modern version of the “tuning fork pressed behind the ear” is used.

See information about “Bone Anchored Hearing Aids” (BAHA) for the equivalent of what an implant would do.

http://www.nchearingloss.org/baha.htm

For various reasons “close to my heart”[1] I’m actually not keen on electronics being implanted in people, because various issues can and do arise. Not the least of which are the “fibroids” that form around them, causing lumps that can catch against clothing or cause rubbing/pressure sores with bag straps or seat belts.

Also, most of these devices these days have some kind of RF interface that makes them detectable at a considerably greater distance than wand-type metal detectors. Worse, this RF interface has at best lamentable security protocols and can, with the right equipment, be attacked at quite a sizable distance. The range this can be done at is often dependent on the protocol. If the protocol uses strong two-way authentication with a randomised challenge-response, then this reduces the range significantly, because the attacker needs to correctly receive and respond to the very weak transmissions from the implant. However, if it just responds as a slave, issuing only acknowledgements to an attacker’s signal, then the attacker can use a higher-power transmitter from a much greater distance and just cycle through the command sequence repeatedly, knowing there is a quite high probability that at least one cycle will get accepted by the implant.
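
To make that protocol difference concrete, here is a minimal sketch in Python of a randomised challenge-response check. The keys and message formats are entirely invented for illustration (real implant protocols are proprietary); it is only meant to show why a replayed or blindly cycled command fails when the device demands a fresh nonce each time.

```python
# Minimal sketch: randomised challenge-response rejects replayed commands.
import hmac, hashlib, os

KEY = os.urandom(16)   # secret shared between implant and legitimate programmer

def implant_challenge():
    """Implant emits a fresh random nonce before accepting any command."""
    return os.urandom(8)

def programmer_response(nonce, command):
    """Legitimate programmer proves knowledge of the key for THIS nonce."""
    return hmac.new(KEY, nonce + command, hashlib.sha256).digest()

def implant_accepts(nonce, command, tag):
    return hmac.compare_digest(
        tag, hmac.new(KEY, nonce + command, hashlib.sha256).digest())

# A (nonce, tag) pair captured from an earlier session fails when replayed,
# because the implant issues a new nonce for every exchange.
old_nonce = implant_challenge()
old_tag = programmer_response(old_nonce, b"set_rate_60")

new_nonce = implant_challenge()
print(implant_accepts(new_nonce, b"set_rate_60", old_tag))   # False: replay rejected
print(implant_accepts(new_nonce, b"set_rate_60",
                      programmer_response(new_nonce, b"set_rate_60")))  # True
```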

[1] In recent times, as I’ve mentioned, I’ve had a couple of heart-function-related implants put in and subsequently removed. Also, being a communications engineer by way of training, career and hobby, I’ve discovered that some implants are not as immune to some high RF field strengths as the manufacturers would like. I won’t go into the same level of depth as I have done in the past when I’ve talked about “fault injection EM attacks” that work through supposed “RF screens” via ventilation and case-edge slots. But simplistically, an attacker modulates an RF carrier of a specifically chosen frequency with an attack waveform. The likes of pacemakers and defibs sense the heart for weak electrical signals. Modulating an RF carrier with problematic heart waveforms can in some cases cause the sensing system to “lock on” to the modulating waveform rather than the heart’s actual waveform. The consequences of this I suspect you can guess.

Faustus April 26, 2019 6:24 PM

@Mark *

I like the term AI. I distinguish it from machine learning in that it harkens back to the original goals of general intelligence, while machine learning (ML) is the extremely successful process of learning via neural networks and statistical methods that does not attempt to build human-understandable structures, nor do anything novel beyond generalizing learned behavior.

I find the techniques of ML very boring so I don’t do it, so perhaps my distinction above is flawed or totally wrong. I welcome correction.

You have a point. What is called AI has not yet achieved general intelligence. What we do today would more accurately be called “reaching toward general intelligence”. The intelligence is still specialized.

I consider the ability to innovate a marker of intelligence. The AI I have built does succeed in creating novel and totally unanticipated solutions to problems. I think that ability is a gateway to general intelligence.

It is an exciting time to live, when I can use powerful computers to explore this aspect of intelligence. To me it has a similar feel as I imagine being an astronaut would. I certainly feel like I am dealing with an alien thought process and discovering new horizons.

I am too old to see space travel, but I may get to experience general artificial intelligence, and I may even be able to make major breakthroughs myself. I count myself lucky to be in the right place at the right time.

Marketing turns everything into a meaningless buzzword but that does not necessarily mean that there is no substance around the ideas so coopted.

another traveller April 27, 2019 1:32 AM

This “problem” will be solved by legislating it out of existence. Pretty sure there are already jurisdictions which ban hoodies because they impede cameras’ view of your face.

A Nonny Bunny April 27, 2019 2:38 PM

@another traveller

This “problem” will be solved by legislating it out of existence.

Well, for simplicity couldn’t they then just require everyone to wear a QR-code marking them as human?
That’ll also make it easier for self-driving cars. Just require all objects in the world to be clearly (and sufficiently redundantly) marked.

Pretty sure there are already jurisdictions which ban hoodies because they impede cameras’ view of your face.

Well, unless the camera can detect you’re wearing a hoodie, they can’t alert a police officer to go and fine you.
