On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex for safety regulations to be rushed into place. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 Triangle Shirtwaist Factory fire led to regulations on sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first known death resulting from Tesla’s Autopilot in January 2016, the feature has been implicated in more than 40 deaths, according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

The UK government, for example, has already set out statements that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.

This essay was written with Davi Ottenheimer, and previously appeared on Atlantic.com.

Posted on September 11, 2023 at 7:04 AM

Comments

Winter September 11, 2023 7:45 AM

The examples are more about Machines killing people than Robots killing people. In these cases a machine malfunctioned and killed by the simple fact that heavy moving items tend to kill people who are in their way.

Tesla’s autopilots are not intended to kill people and they killed by malfunctioning and not performing their intended tasks.

The watershed will be when robots are built to kill people and then actually do kill people the intended way, but not necessarily the intended targets.

So the robotic anti-aircraft cannon might fit the bill by shooting rounds at the wrong moment at the wrong people.

Clive Robinson September 11, 2023 10:22 AM

@ Bruce, ALL,

Re : Agency is not intent.

“You get the picture. Robots—”intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm.”

As @Winter notes, if you take a large lump of something solid and give it some inertia, then don’t expect it not to cause damage if you are in the way of it; as adults “We should know that” (adolescents however…).

But it brings in the question of “knowledge” rather than intelligence. You might know the laws of physics, and how to calculate with them, but unless you can use that in a practical and meaningful way it’s not “knowledge”.

Worse, a computer can calculate the trajectory of an object very precisely given some basic information, but it usually does not have the ability to use it. Even when it does so as a gun control system, it’s still just calculation. It’s given some basic information and it calculates and spits out some answers that then might or might not drive servos to align the gun.

It is not the computer but the machine that causes the death. But consider,

“Whilst a machine controlled by computer might kill, is there knowledge and intent for it to be manslaughter or murder?”

That is, where is the actual cause of the death in a more abstract sense, and ultimately the responsible “Directing Mind”, if there is one?

To successfully “legislate or regulate” meaningfully, you have to understand this, and when you do… you realise why those famous “Three Laws of Robotics” have no value except as aspirations.

It takes “Knowledge” to see where calculations have to be made and why, but that is a problem,

Some argue that computers do not have the ability “to know” because they are effectively disconnected from reality. That is, they have neither senses nor the ability to act.

Others argue that is rather missing the point. Humans can see an object for the first time and assess it for its ability “to harm or not” to a reasonably accurate degree; computers can not.

The issue of “How we actually do it” is a bit of a mystery so it is explained away by the expression “past experience”. Which gets further argued as a “learning process” and so on down an ever darker rabbit hole.

What we should say is,

“We don’t really know… but we are working on it.”

Which should be sufficient caution. Especially when you can state truthfully,

“We are intelligent and have knowledge, computers are not and do not. So logically it is us that should keep out of the way.”

Which kind of means any “AI / Robot Safety Laws” should be as it is with machines in general “for us, not them”.

Garabaldi September 11, 2023 10:22 AM

Moving heavy items by hand also kills people. It used to kill a lot more people. Technology helps both because it tends to remove people from being up close with heavy items and because it makes us richer, and money buys nice things, like not dying.

Sure we need to pay some attention to machines not killing people, but we also need to pay attention to machines that prevent people from dying in old fashioned ways.

Chelloveck September 11, 2023 10:41 AM

I know you’re quoting someone else here, Bruce, but I have HUGE problems with the line, “As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.”” It sounds like Hallevy is ascribing thought, planning, and self-preservation to the robot. In fact the robot was likely completely oblivious to the worker’s presence. The robot wasn’t responding to a “threat”, it was working as designed and the worker simply got in the way.

The question is, why weren’t there safety mechanisms in place? If the Wikipedia article on the matter is to be believed, there were. The worker intentionally defeated the safety mechanisms meant to keep the robot from moving when open for maintenance. Even if we were to ascribe motive to the robot (which is still ridiculous) it wasn’t responding to any threat. The worker willfully blinded the robot to his own presence.

I’m picking on this for a very important reason: We can’t have any meaningful discussion of the ethics of robots and AI killing people while we’re under the delusion that these machines have some sort of free will and motive. We can’t teach AI (well, what is colloquially termed “AI” these days) to be ethical, any more than we can teach a hammer to be ethical.

I still contend that the biggest threat of current “AI” is the people who fail to treat it like the automaton that it is and want to attribute motive to it.

Clive Robinson September 11, 2023 10:50 AM

@ TimH,

It sounds a bargain when you consider the “trip time and cost” of not becoming quite an Astronaut.

Also you might want to look up the story behind the Eros –actually Anteros– statue in London’s Piccadilly Circus. It’s made of aluminium, and at the time of its making that was a fairly rare metal, and it’s argued as to whether it’s a “first” or not. However what bankrupted the artist was the base, which is several tons of copper, for which I’ll let you calculate the modern value.

Peter A. September 11, 2023 11:20 AM

@Chelloveck: I agree

I used to ‘fool’ small, weak, unimportant robots (such as tape cartridge changers) by disabling safety switches – out of curiosity to see how they work. But I still kept my limbs well out of their reach.

Steve September 11, 2023 11:40 AM

Meh.

Machines have been killing people since there have been machines.

I suspect that not long after our ancestors discovered that you could put a log or two under something heavy and move it more easily, someone got themselves crosswise with it and ended up deleted from the gene pool.

Gerald Castaneda September 11, 2023 11:48 AM

Winter: I didn’t look into every machine mentioned, but car-making machines have long been classified as robots.

The Kawasaki quote is kind of bullshit though, especially the shortened version here. Here’s a larger quote to show more context:

In 1981, a thirty-seven-year-old Japanese employee in a motorcycle factory was killed by an artificial intelligence robot working near him. The robot erroneously identified the employee as a threat to its mission, and calculated that the most efficient way to eliminate the threat was to push the worker into an adjacent machine. Using its very powerful hydraulic arm, the robot smashed the surprised worker into the operating machine, killing him instantly, after which it resumed its duties without further interference. This is not science fiction, and the legal question is this: Who is to be held criminally liable for this homicide?

One obvious problem is that the author simply declares this to be “homicide”. They also say: “Homicide (mens rea) offenses can be murder or manslaughter. Their factual element requirement is identical, and it includes causing the death of a human. Because the robot has physically caused the worker’s death, the factual element requirement appears to be met.” The word “includes” papers over the requirement, common to every dictionary definition of “homicide” I could find, that the death be caused by a human (a different human, such that suicide is not a type of homicide). So, no, the “factual element requirement” is not met; only the subset of it that the author chose to quote.

There’s a token reference hinting at actual homicide later in the chapter: “The programmer’s criminal liability is determined based on his role in bringing about the homicide. If the programmer instrumentally used the robot to kill the worker by designing it to operate in this way, he is a perpetrator-through-another of the homicide.” Again, this seems to be relying on a dubious definition of “homicide”, which by the way is never provided in the book. Sure, setting up any machine/plot/etc. to cause the death of a human would be homicide. But we can’t so easily hand-wave away the question of whether this was homicide by talking about criminal liability for it. The killing of a human by a workplace accident is typically not considered homicide at all if there was no intent or negligence. That requirement can be fairly weak—Wikipedia gives the example of Aeroperu Flight 603, which crashed because a worker forgot to remove some tape; the worker was convicted of negligent homicide—but it’s important.

The text quoted above, by the way, is the only description of the accident I can find in the book, but the article linked by Bruce and Davi says more:

According to factory officials, a wire mesh fence around the robot would have shut off the unit’s power supply when unhooked. But instead of opening it, Urada had apparently jumped over the fence. The employee set the machine on manual control but then accidentally brushed against the on-switch, and the claw of the robot pushed him against the machine tooling device. Other workers were unable to stop the robot’s action.

Now, I have to assume the author intentionally omitted these details because they completely undermine the author’s point. The victim, Urada, caused their own death through their own actions and negligence. In principle, it could perhaps still be negligent homicide if such things were normal and expected at this company, or a co-worker suggested it. But now we’re hypothesizing about a “mundane” homicide by a person or corporation, not a robot—it apparently having been designed and installed with all proper care.

As I now consider Hallevy a dubious source, I also have to question the reference to an “artificial intelligence robot”; the Guardian article mentions no such thing, and why would a gear-cutting robot use “A.I.” of any legally significant level? I see no basis whatsoever for the idea that the robot had identified “a threat to its mission”. For all I know, the robot had no concept of “threats” and was simply executing its programmed movements, possibly in relation to sensor input. The article says nothing about it, and the book’s bibliography is just a list of about 150 books and 500 court cases with no mention of which relate to which of the book’s stories, or how. (I went to school; I know that a long bibliography looks better than a short one and nobody’s likely to check whether I actually read the books, whether they relate to any particular point, or whether they even exist. I’m not so determined to prove the author wrong that I’m going to make dozens of Library Genesis downloads and probably hundreds more inter-library loans.)

tim September 11, 2023 12:22 PM

Tesla’s Autopilot has been implicated in more than 40 deaths

Really? I get that it’s in vogue to make fun of Tesla these days, but in every single one of those Tesla accidents there was a human being who, in most cases, was purposely ignoring the safety protocols built into the car. And Teslas have repeatedly ranked as one of the safest cars on the road today by various government and non-government organizations.

Meanwhile 4,295 people died in car crashes in California alone in 2021. Guess how many of those involved a Tesla?

Gerald Castaneda September 11, 2023 12:27 PM

I hadn’t seen Chelloveck’s message when I posted mine, and hadn’t noticed Urada had a Wikipedia article. Wikipedia says “Other workers in the factory were unable to stop the machine as they were unfamiliar with its operation”. That, to me, is the “smoking gun”: how do you have a factory where nobody knows how to stop a potentially dangerous machine? That’s pretty damn negligent. Every employee should’ve been trained on how to stop any machine in an emergency—which should’ve been via a big emergency button so obvious that even an untrained person could’ve figured it out. And of course employees should know that disabling or bypassing a safety feature is almost always grounds for instant dismissal and possibly criminal charges (except when done legitimately; for example, via strict contingency procedures, or in a life-threatening emergency).

The actual mechanisms and actions of the robot are unclear (“either crushed him or stabbed him in the back”), and after reviewing Wikipedia’s sources, Hallevy is still the only mention of artificial intelligence. Maybe there was fuzzy logic or something that got dubiously called “A.I.”, or maybe the quoted text is indeed science fiction. My best guess is that the “threat” was the machine detecting an obstruction and triggering a cleaning cycle meant to clear swarf, broken gears, and the like (which maybe should’ve pushed more weakly, or toward a more open space, but who can say without the details?).

wiredog September 11, 2023 12:59 PM

About 30 years ago, fresh out of college, my first job was writing software for industrial automation. i.e. “Robots”. Mostly C and C++ on a PC. As Chelloveck notes above, safety equipment was a big part of it. A 1 ton hoist moving at 3 ft/s will take your head right off without slowing down, so there were safeties in place in case some damn fool stuck his head, or other body parts, in the equipment. Lots of overtravel and end of travel switches. Emergency stops, colloquially known as “oh shit buttons”, liberally scattered about too. All of those safeties hooked into the main power relay such that if any safety was triggered there was an immediate power cut. If the safeties were overridden in order to recover from an e-stop the system would run at a very low speed, and only manually. Oh, and flashing lights and loud buzzers so everyone would know something had Gone Wrong, Badly. Testing that system was fun.

Worst injury I saw came when an experienced man in our shop decided that all the safety systems were slowing him down and ended up running his hand through a router (not the network kind of router…) and destroying it. His hand, not the router. The router was fine.
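
For readers who have never seen one, the interlock behaviour wiredog describes can be sketched in a few lines of Python. This is a minimal illustration with invented input names and speed values, not anyone’s real system; actual installations implement this logic in hard-wired relays and safety-rated PLCs rather than in application code.

```python
from dataclasses import dataclass

@dataclass
class SafetyInputs:
    e_stop_pressed: bool = False
    overtravel_tripped: bool = False
    end_of_travel_tripped: bool = False
    safeties_overridden: bool = False    # maintenance/recovery mode

def interlock(inputs: SafetyInputs) -> dict:
    """Return the permitted machine state for the current safety inputs."""
    tripped = (inputs.e_stop_pressed
               or inputs.overtravel_tripped
               or inputs.end_of_travel_tripped)

    if inputs.safeties_overridden:
        # Recovery from an e-stop with safeties overridden: power on, but creep
        # speed only, manual jogging only, lights and buzzer running.
        return {"main_power": True, "max_speed": 0.05, "manual_only": True, "alarm": True}

    if tripped:
        # Any tripped safety drops the main power relay: no motion at all.
        return {"main_power": False, "max_speed": 0.0, "manual_only": True, "alarm": True}

    # Normal operation.
    return {"main_power": True, "max_speed": 1.0, "manual_only": False, "alarm": False}

# Example: recovering from an e-stop with safeties overridden -> creep speed, alarms on.
print(interlock(SafetyInputs(e_stop_pressed=True, safeties_overridden=True)))
```

The design point worth noticing is that a trip cuts power outright, and an override never restores normal operation; it only buys creep-speed manual motion with the alarms running.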

Gerald Castaneda September 11, 2023 1:39 PM

Really? I get that it’s in vogue to make fun of Tesla these days

What does “making fun of Tesla” have to do with anything said here? I’m sure Bruce and Davi were mentioning these deaths as tragedies rather than comedies.

but in every single one of those Tesla accidents there was a human being who, in most cases, was purposely ignoring the safety protocols built into the car. And Teslas have repeatedly ranked as one of the safest cars on the road today by various government and non-government organizations.

Whether it’s safer than other vehicles has nothing to do with whether it can be implicated in deaths. Whether the human was negligent, or Tesla employees were, will be relevant to determining who, if anyone, will be charged with crimes or found liable for damages; but autopilot is implicated regardless of all that, simply because it was in control at the time. Older features such as cruise control have been implicated in deaths too, even when working correctly.

Look at the recent “Elon Mode” news for Tesla’s autopilot: “A Tesla software hacker has found an ‘Elon Mode’ driving feature that seems to allow Tesla vehicles with Full Self-Driving to operate without any driver monitoring.” That suggests that Elon considers it safe to use autopilot without monitoring, as his statements (also regarding “full self-driving” and the sale of it) have supported. Wikipedia has a whole list of regulatory agencies saying Tesla deliberately misled people into thinking such things. (I disagree with them somewhat on the term “autopilot”—no reasonable pilot would think it’s okay to turn on autopilot and then leave the cockpit empty, or to rely on it to avoid the usual exhaustive training. But, evidently, the agencies don’t feel that the average person knows what a literal autopilot can do, which I suppose is reasonable.)

The safeguards that were bypassed were ones that Tesla had mostly been forced by regulators to add or improve. It was said that employees knew the system couldn’t do what people thought, knew how easy it was to bypass, even retrieved camera footage of specific incidents and joked about them. I won’t make conclusions about culpability here, but Tesla makes for a very good example and case-study on this topic.

Peter A. September 11, 2023 1:55 PM

@wiredog: True, industrial workers are more clever in disabling or working around safeties than engineers designing them…

Back to my university days: the lecturer on Industrial Control Systems was a facetious guy. (I am a software faculty graduate, but we have had some basic ICS classes in the curriculum.) Instead of lecturing us on the systems, he used to tell anecdotes from his experience in the industry. Some of them were just funny, like an official delegation witnessing the startup of a large installation when a high-pressure hydraulic hose failed and sprayed the fluid, which formed a fog cloud and settled on all the VIPs… decontamination was tricky and involved disposing of all their expensive business suits. But some of the anecdotes were really “meaty” and raised the hair on the backs of our necks.

Anonymous September 11, 2023 2:48 PM

Regulation must push companies toward safe innovation and innovation in safety.

I agree.

AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

There is a difference between robots killing accidentally and people killing intentionally. We don’t know how to punish a robot.

Those industrial robots malfunctioned

Machines don’t make mistakes; people do. While we still manufacture machines, we are responsible for their mistakes; machines came out of our hands (manu in manufactured). All machine actions are preventable.
“The engineer gives up with the word instruction”

Jim Henderson September 11, 2023 3:49 PM

The examples cited could more accurately be described as “humans not being safe around dangerous machinery.” I wonder how many people died in accidents with forges, assembly lines, lathes, or other industrial equipment in the same timeframes.

And the case of the medical patient dying should be considered pure medical negligence — the doctor was using a new tool and didn’t bother to get trained on its use.

Mike B from Test Team September 11, 2023 3:50 PM

TEST PROCEDURE: Alt case 1: Robot freezes in place.

Step 1. First, poke it with a long stick. (Do NOT poke it with a “New Guy” actually going into its operating path.)

kiwano September 11, 2023 6:28 PM

For all the people pointing out that robots and automation replace less-automated processes that also kill (the wrong) people, I’d like to put out a bit of a reminder that a lot of the early hype about self-driving cars identified them as safer than human drivers. This seems like a reasonable enough benchmark to build a degree of regulation (or jurisprudence regarding liability) around. As for estimating which is safer, I’m pretty sure that any resulting policy can be structured to get the insurance industry to make those estimates.

RobertT September 11, 2023 6:57 PM

So we’re going to manage the risk of Robots killing people with Regulation.
Hmmmm good luck with that approach.
I suspect the Robots that are intended to kill people will evolve with or without regulation because the rewards are just so attractive.

And what happens if your robots don’t kill at the same rate as your enemy’s Robots?

If we look at the emerging field of armed autonomous surface and subsurface boats (Robots) they are clearly being designed to wander the world’s shipping lanes, find their targets by any means possible, and explode in the most inconvenient location…. All done without any direct human involvement.

It would be one thing if the US were alone in developing such a capability, but there are literally hundreds of defense contractors around the world working on exactly this sort of kill robot capability. In the South China Sea barely a week goes by without another autonomous vessel washing up on the shore somewhere. China is developing both surface and subsurface variants, all able to charge their batteries and propel themselves through the water to their final destination. Dozens of variants of autonomous underwater gliders are in development; some will be armed, but most will simply dwell in a given location and provide real-time intelligence to a network of killer drones.

Autonomous killer robots are being deployed today. It’s happening in the air, it’s happening on the ground, it’s happening in the water… it’s happening.

So yeah, good luck with your ideas on regulating killer robots.

RobertT September 11, 2023 7:49 PM

In my opinion, to improve the safety of industrial robots we need to forget the belt-and-braces safety lockout approaches of yesteryear and embrace systems which incorporate optical flow along with visual SLAM (Simultaneous Localization And Mapping).

The Robot needs to be situationally aware and able to navigate / work in a crowded environment. In many ways eliminating the human completely (well not in the kill sense) is easier than making this environment safe for a human. So our robots will only destroy other robots if/when they run amuck.
But this is where Robotics gets interesting, because most factory workers don’t want to be eliminated completely from the production flow; they just want to be safe and to believe that they’re essential. So maybe it’s the Robots that need to track the humans and see when one of those dumb humans does something truly stupid.
What’s the old saying…Common sense is not all that common.
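
A rough sketch of the optical-flow half of that idea, assuming OpenCV and an ordinary camera watching the robot cell; the camera index and motion threshold are invented for the example, and a real system would fuse this with SLAM, depth sensing, and certified safety hardware rather than rely on a webcam loop.

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 2.0      # mean flow magnitude (pixels/frame) treated as "something moved"

cap = cv2.VideoCapture(0)   # hypothetical camera watching the robot cell
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between consecutive frames (Farneback's method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    if magnitude.mean() > MOTION_THRESHOLD:
        # Something is moving in the cell: a person, a forklift, swarf flying...
        print("Unexpected motion detected: slow or stop the robot")

    prev_gray = gray
```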

Clive Robinson September 11, 2023 8:49 PM

@ Peter A., Wiredog, ALL,

Re : Safeties, lockouts and those who find out the hard way.

“… industrial workers are more clever in disabling or working around safeties than engineers designing them…”

Not more clever, more stupid.

I’ve worked on some safety systems for not just entire “oil rigs” but entire oil fields.

The capital cost even back in the 80’s was in the billions of dollars, but every system had to be done on the cheap or you would not get the work.

So to be honest the safeties were not designed to stop workers being stupid or their bosses ordering them to be so. No, the safeties were designed to “just meet the regulations” at the lowest possible price. So that when a worker was stupid and got their arm or other body part ripped off, crushed flat, or in other ways mangled, we were in the clear as far as insurance and law.

If you don’t think workers can be stupid, see the top incident given in the piece,

“human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along.”

Back in ’79 that would not really have been any more a robot than a modern day “pick-n-place” machine using “ladder logic”.

Whilst the 25-year-old was an adult, he either lacked knowledge or was fearful of his line boss. So he did something really stupid.

I use some interesting equipment from time to time and have worked with early pre-1980 Puma Robots (those arms you see around production line panel welders and paint shops etc). Right through to some fancy very high power laser cutting and engraving equipment that I use for satellite prototyping, which can cut a hole not just through diamonds but through you, if you get part of yourself in the way of the direct beam or any reflections. Then there is the high power RF equipment for cutting, welding and the broadcast industry that I’ve prototyped and built.

As an engineer designing such kit I’ve had to work on it with the safeties and interlocks disabled…

Am I stupid for doing so? Well, depending on your view, “yes”, but then I would not be able to design the systems, so “Catch 22”…

So I take other precautions, that as an engineer I know to be risk reducing but by no means risk eliminating.

If I do my job right and users of the production equipment follow the safety guidelines then they will be reasonably safe, assuming the equipment is installed correctly. But I can not know they will, nor can I know they will use it within the guidelines and other rules…

I listened to the latest Perun YouTube vid this morning and there were a couple of lines that raised a wry laugh,

“You don’t build a factory next to a school when using …. or high energy materials. But it could make for interesting field trips…”

(For those that don’t know high energy or high order materials are euphemisms for the materials that turn bits of metal etc into high kinetic energy objects like bullets, shrapnel and similar that a limited subset of which get called explosives).

The point is that safety ALWAYS lives with those who put the machines into service, not those who design them, or try to legislate / regulate them.

ResearcherZero September 12, 2023 4:05 AM

New forms of consumer and worker harm may develop in new complex, automated, or purportedly autonomous technologies.

“As AI-MC becomes more prevalent, it is important to understand the effects that it has on human interactions and interpersonal relationships”

‘https://dl.acm.org/doi/10.1016/j.chb.2019.106190

Uber’s computers detected Ms Herzberg 5.6 seconds before impact, the NTSB said, but did not correctly identify her as a person.

‘https://www.bbc.co.uk/news/technology-44574290

Employees had warned Uber that its vehicles were dangerous before the accident took place, and had previously complained that the autonomous systems had not been adequately tested.

at no point did the system classify her as a pedestrian

5.2 seconds before impact, the system classified her as an "other" object.

4.2 seconds before impact, she was reclassified as a vehicle.

Between 3.8 and 2.7 seconds before impact, the classification alternated several times between "vehicle" and "other."

2.6 seconds before impact, the system classified Herzberg and her bike as a bicycle.

1.5 seconds before impact she became "unknown."

1.2 seconds before impact she became a "bicycle" again.

‘https://www.cigionline.org/articles/who-responsible-when-autonomous-systems-fail/

The contributing factors cited by the NTSB included Uber’s inadequate safety procedures and ineffective oversight of its drivers.

Rafaela Vasquez, a backup safety driver who was tasked with monitoring the self-driving car, did not see Herzberg crossing the street. Following the accident and Herzberg’s death, Uber resumed testing its vehicles on public roads nine months later, and subsequently has been cleared of all criminal wrongdoing.

“On this trip, the safety driver spent 34% of the time looking at her cell phone while streaming a TV show.”

“Vasquez had been charged with negligent homicide, a felony. She pleaded guilty to an undesignated felony, meaning it could be reclassified as a misdemeanor if she completes probation.”

https://www.npr.org/2023/07/28/1190866476/autonomous-uber-backup-driver-pleads-guilty-death

Clive Robinson September 12, 2023 9:56 AM

@ ResearcherZero,

Re : Human to blame?

“On this trip, the safety driver spent 34% of the time looking at her cell phone while streaming a TV show.”

If Uber’s cars are safe on the road as they claim… Then a safety driver would not be required as the software would be as good as or better than a human driver…

Obviously this is not the case, especially as the safety driver has been found to be negligent and pursued as a criminal.

So clearly the software failed to be even as good as a failed human, and Uber knew it… Otherwise the human safety person, who was clearly expected to respond orders of magnitude faster than the software, would have prevented the collision; but for some reason she failed to do so.

Gerald Castaneda September 12, 2023 11:48 AM

@ Clive Robinson,

the human safety person who was clearly expected to respond

“Expected” is a strong word, something that might be disputed by a person familiar with “human factors”. They had the duty to respond, but it’s naïve to expect someone to give their full attention to a task that, 99.9% of the time, requires none of it. At the very least, a lack of engagement must be expected to significantly increase reaction time.

We’ve seen the same thing with Tesla’s “autopilot”. And with real autopilot systems: aviators are well aware that their hands-on skills will be lacking if they rely too much on automation (here’s an archived version of the FAA report referenced in the article, which link is now dead).

lurker September 12, 2023 2:38 PM

@ResearcherZero, Clive, ALL

Uber’s computer identified “something” in the path. Because the something appeared to keep changing its identity, the computer took no evasive action.

Houston, we have a problem.

Mr. Peed Off September 12, 2023 3:05 PM

@Anonymous
“We don’t know how to punish a robot.”

Incarceration in a blast furnace will reform most robots.

Mike B from Test Team September 12, 2023 4:52 PM

“Incarceration in a blast furnace will reform most robots.”

And all people, too. But it won’t educate them.

Clive Robinson September 12, 2023 5:40 PM

@ Mr. Peed Off, Mike B from Test Team, ALL,

“Incarceration in a blast furnace will reform most robots.”

That rather depends on what you mean by “reform”, any old machine can be cast again as “pigs” but no other machine learns by it…

In England, we tried all sorts of entertaining ways of removing the criminal element from the genepool but the number of criminals kept rising due to the prevailing “social policy”.

So later we tried “exporting the problem” to another country without asking the existing inhabitants. We all know –or should do– how that turned out…

So here we are, not even a hundred years later, trying the same “pack them on boats” policy that failed the first time, to be followed by “ship them elsewhere”, where I suspect the locals really really won’t want them.

A funny story is Iran’s comment about the US and other Western Nations’ statements about the exodus of what were to become immigrants.

The Iranians laughed and said if the West wanted them they were welcome to them, as they were criminals they did not want…

As Europe found some were also terrorists and a lot lot worse…

As for the US, those immigrants Texas etc. are shipping up to New York and Washington DC… What percentage do you want to guess are criminals that are decidedly violent etc?

I guess we will have to wait and see.

Kosta September 13, 2023 1:58 AM

Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep?

Because more immediate risks are not existential

Winter September 13, 2023 2:09 AM

@Clive

So later we tried “exporting the problem” to another country without asking the existing inhabitants. We all know –or should do– how that turned out…

It made it abundantly clear that crime is not in the genes. But we knew that already; we just do not want to acknowledge it.

Also, immigrants are not more criminal than other displaced and oppressed people. If you compare immigrants with natives in the same socio-economic stratum, their statistics do not stand out that much.

Winter September 13, 2023 2:14 AM

@Anonymous

“We don’t know how to punish a robot.”

First, avoiding punishment is not the best incentive.

But we actually do know how to give robots incentives. That is what Reinforcement Learning is all about. It is just that robots do not learn on the job and robots do not yet plan ahead. But that will come.
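
To make that concrete, here is a toy sketch of “incentives” in the reinforcement-learning sense: a tabular Q-learning agent on a made-up five-cell line, where one cell carries a large negative reward (the “punishment”) and the goal carries a positive one. All states, rewards, and hyperparameters are invented purely for illustration.

```python
import random

STATES = range(5)            # positions 0..4 on a line; 0 is "unsafe", 4 is the goal
ACTIONS = (-1, +1)           # step left or right
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Move one cell, clamped to the line, and return (next_state, reward)."""
    nxt = min(max(state + action, 0), 4)
    if nxt == 0:
        return nxt, -10.0    # "punishment": large penalty for entering the unsafe cell
    if nxt == 4:
        return nxt, +1.0     # reward for reaching the goal
    return nxt, 0.0

for episode in range(500):
    s = 1
    for _ in range(20):
        # epsilon-greedy action choice
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: nudge the value toward reward plus discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s in (0, 4):      # treat both the unsafe cell and the goal as terminal
            break

# The learned policy: from every non-terminal cell the agent heads right, away from the penalty.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES})
```

Of course, as noted above, the incentive only exists inside the training loop; it says nothing about punishing a deployed robot after the fact.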

Clive Robinson September 13, 2023 8:51 AM

@ Winter,

Re : Immigrants and criminals.

“Also, immigrants are not more criminal than other displaced and oppressed people.”

You and I are talking about two different things by “immigrants”.

You are talking about people “caused” by other people or events to leave their homes.

In addition there are people who “chose” to leave their home countries.

This “I’ve chosen” group can be further broken down into those that are doing it for the purposes of honest endeavors and those who are intent on behaving in criminal ways.

So the immigrants you see arrive are made up of groups of different types of people, all lumped together not just by the Government of the receiving country but also by the Western MSM etc.

There is more than enough evidence to show that criminals are using the “immigration path” to escape their own countries, likewise terrorists are thought to be hiding in there as well.

In the UK it’s hard to sort through the figures due to them being interfered with politically in the Home Office. But it looks like the ratio of criminals in the immigrant groups is upwards of three times that in the existing population for serious crime. If you look at terrorists it’s many many times; however, “the law of small numbers” applies.

What people forget is immigrants “lose their identity”. So if you are, as has been found, a Russian getting out across Europe, destroying your papers and then pretending you are from another, war-torn country, it’s a way of getting a legitimate new identity. It may be a choice made for “economic” reasons, as has happened with some software developers, to avoid being called up into the armed forces, or by a criminal looking for a new identity…

As we know it’s going on, the folks in the “Russian Community” in London know it’s going on and have a good idea of who some of the criminals are or where they congregate. But gathering the “official” data is as you can imagine a process that takes time for various reasons (not least catching them being one).

Winter September 13, 2023 9:27 AM

@Clive

You and I are talking about two different things by “immigrants”.

When I wrote “displaced and oppressed people”, I was using this in its literal sense, as people who are in a different and strange place and oppressed by their current environment. Immigrants are generally ill-treated and exploited, which has a negative effect on their social feelings.

Most immigrants leave for pressing reasons as immigration itself is often a traumatic experience. Some did indeed have traumatic experiences in their homeland, but that is not necessary.

Jon (a different Jon) September 13, 2023 8:07 PM

Before you try to get too clever with regulation, you will want to consider “ISDS” mechanisms. It’s part of major international trade treaties, and says that if a local government enacts a regulation that might impair a corporation’s profits, the corporation can (and will) sue your country (and probably win).

hxxps://www.theguardian.com/politics/2019/feb/20/much-to-fear-from-post-brexit-trade-deals-with-isds-mechanisms

hxxps://www.theguardian.com/commentisfree/2023/sep/13/fossil-fuel-companies-britain-international-charter-treaty

And it’s not just for fossil fuels. J.

P Coffman September 14, 2023 11:49 AM

@ResearcherZero

I see the point, “at no point did the system classify her as a pedestrian”.

I would like to add “hysteresis”. For those unfamiliar, it is an engineering/signal-processing concept whereby two or more successive samples of a “measurand” must be in agreement before a state change is assigned.

Granted, I am neither an expert on the Tesla instrument redundancy nor on the mini frame rate which yielded such volatility over this snippet of object classification.

However, it is not a stretch to observe that there is no “super” in the loop, because no human is tested on a situation devolving as rapidly as this during his or her motor vehicle test.
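
A small sketch of that debouncing idea in Python, purely illustrative and not a claim about how Uber’s or any vendor’s perception stack actually works: a new label is accepted only after it has repeated for a required number of consecutive samples.

```python
def debounce_labels(labels, required_agreement=3):
    """Yield a 'stable' label stream: a change of label is committed only after
    the new label has repeated for `required_agreement` consecutive samples."""
    stable = None
    candidate, count = None, 0
    for label in labels:
        if label == stable:
            candidate, count = None, 0        # noise settled back; drop any pending change
        elif label == candidate:
            count += 1
            if count >= required_agreement:   # enough agreement: commit the state change
                stable, candidate, count = label, None, 0
        else:
            candidate, count = label, 1       # start tracking a possible new state
        yield stable

# Label sequence loosely modelled on the flip-flopping reported for the Uber crash.
raw = ["other", "vehicle", "other", "vehicle", "bicycle", "bicycle",
       "unknown", "bicycle", "bicycle", "bicycle"]
print(list(debounce_labels(raw)))   # the noisy stream only commits to "bicycle" at the very end
```

With required_agreement=2 it would have committed to “bicycle” at the sixth sample instead; how aggressively to debounce is exactly the kind of stability-versus-latency trade-off a safety case has to argue explicitly.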

P Coffman September 14, 2023 11:58 AM

In the late eighties, at university, we were taught about “automated” X-ray machine failure wherein the shutter window/aperture was opened long enough to deliver a fatal dose to a male patient.

In retrospect, I believe there was no “dosimeter-in-the-loop”. This is to say, rushing this thing through test must have been a factor. Dosimeter tech existed way back, though this seems like it might even be my own afterthought.

Clive Robinson September 14, 2023 12:41 PM

@ P Coffman,

“any human presented with a situation devolving as rapidly as this is not tested like this during his or her motor vehicles test.”

Technically the vehicle was suffering from a form of illusion that can be imagined as, in effect, a cross between a mirage and a hallucination.

In humans we have “Multistable perception” the most well known being the wireframe cube that as you look at it appears to jump forwards or backwards. Likewise the vase / pair of face profiles and similar.

In none of the examples is the perceived image actually there, even though the brain says it is. What is there is in effect caused by pattern recognition in the mind turning shadows into shapes.

https://oxfordre.com/psychology/view/10.1093/acrefore/9780190236557.001.0001/acrefore-9780190236557-e-893

Anonymous September 18, 2023 1:19 PM

There is another case which Gabriel Hallevy covers in The Matrix of Insanity in Modern Criminal Law that I think belongs in this post.

What happens when the manufacturer decides to cut costs and use inexpensive materials which cause corrosion, making the robot malfunction?

Julia Reed September 18, 2023 6:32 PM

Robots are already all around us; the business network PwC predicts that up to 30% of jobs could be automated by robots by the mid-2030s. So, while they may not take over the world, we can expect to see more robots in our daily lives.

Anonymous November 29, 2023 10:32 AM

I disagree with your mention of ChatGPT.
It is a language model. It speaks. It does not act. It should not be considered “dangerous” in a legal sense.

Clive Robinson November 29, 2023 3:54 PM

@ Anonymous on November 29, 2023 10:32 AM

“It speaks. It does not act. It should not be considered “dangerous” in a legal sense.”

See my first comment above about “directing minds”,

https://www.schneier.com/blog/archives/2023/09/on-robots-killing-people.html/#comment-426521

Think about the system of “orders” passed down “the chain of command”.

We know which finger pressed the fire switch, but were they acting on orders from a human or a machine, and how can they tell?

It’s why in “War Crimes Trials” they do try to prosecute the most senior who give orders as well as those who carry them out.

Those orders are just “language” not “actions” yet people are maimed, mutilated and murdered on just such language.

But also consider when someone hires a hit man because they are less expensive than divorce etc, they do get found and they do get prosecuted and incarcerated.

But what if you say,

“Alexa hire a hitman to rub out my husband”…

Can you choose your language such that you can get it “past the AI’s built-in guardrails”?

The answer is yes, and it was proved by Gus Simmons back in the early 1980s with his “Prisoners’ Problem”, which gave rise to “subliminal channels”,

https://en.wikipedia.org/wiki/Subliminal_channel

In turn his work used Claude Shannon’s 1940s proofs that communication requires redundancy, and redundancy gives a channel for “Perfect Secrecy”.
