Ethical self-driving cars: People before animals?

Self-driving cars in Germany will not avoid animals if it puts human life at risk.

As reported in Automotive News Europe, the ethical conundrums around self-driving vehicles are being grappled with in the car capital of the world, Germany, and the conclusion being written into the constitution of autonomous software is simple - human safety before animal safety.

But is it that simple?

Growing up, I spent many years cruising through the Karoo as my family and I sped down to the Cape and back for holidays. There were two rules when driving at night through our favourite semi-desert. Rule 1: never speed, especially after dark. Kudus are spooked by car lights, and those drawn to the warmth of the road often choose their escape across the tar and can end up through your windscreen. An animal as large as a kudu can leave a family dead in seconds if its ton of a body ends up on your lap.

[Video: a lucky human escapes after hitting a kudu.]

Rule 2 is not quite as simple and involves the ethical conundrums of a quick-thinking brain making life-and-death choices in the milliseconds before a potential death - yours or the animal's. If an animal runs in front of your car, do NOT swerve; rather, brake in a straight line (sometimes it's best not to brake at all) and hit the animal head-on. This is especially true for smaller animals like foxes and mongooses, which, sadly, I have twice had no choice but to hit at 120 km/h - the alternative was to risk my life and others' by swerving at dangerous speed and losing control, or worse, swerving into oncoming traffic.

These are the ethical conundrums that technology faces when it comes to autonomous driving, but it gets even more complicated than that. If your instinct tells you that avoiding an animal is possible at certain speeds - that is, if it feels that in certain situations it is okay to break the above rules on the basis of pure instinct (a real phenomenon we should never underestimate) - how do we then transplant that rule-breaking instinct into a machine, and only in the right situations? Is that even possible?
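To see why this is so hard, consider what "writing the rules into the software" actually looks like. The sketch below is purely illustrative - every name, category, and threshold is invented, not drawn from any real autonomous-driving system - but it shows how a hard-coded version of Rule 2 inevitably reduces instinct to crude cut-offs:

```python
# A hypothetical, illustrative sketch of Rule 2 as a hard-coded policy.
# All names, thresholds, and categories here are invented for this example.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str        # e.g. "fox", "kudu", "human"
    mass_kg: float

def collision_response(obstacle: Obstacle, speed_kmh: float,
                       oncoming_traffic: bool) -> str:
    """Return a manoeuvre for an obstacle suddenly in the road.

    Humans always take priority (the German rule); for animals the
    decision falls back on rigid thresholds - exactly the rigidity
    that instinct does not have.
    """
    if obstacle.kind == "human":
        return "swerve"  # human safety before animal safety, always
    # Small animal, high speed, or oncoming traffic: swerving risks
    # losing control or a head-on collision, so brake in a straight line.
    if obstacle.mass_kg < 50 or speed_kmh > 100 or oncoming_traffic:
        return "brake_straight"
    # Otherwise a swerve might be "safe" - but no fixed threshold
    # captures the in-the-moment judgement a human driver makes.
    return "swerve"
```

So a fox at 120 km/h gets `"brake_straight"`, just as Rule 2 prescribes - but the moment the situation falls between the thresholds, the machine has no instinct to fall back on, only the next branch of the `if`.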

After all, the above rules are not set in stone; they are general guides by which to steer our African driving lives. Can autonomous cars ever truly make decisions that "feel right" in the moment - and in so doing spare an innocent fox's life? Can a machine break its own rules and, on a safe occasion, swerve to avoid an animal? Can a machine know that some parts of South Africa are more dangerous in this sense than others? Possibly, yes. But can it also know that this is not always the case - that the rules which dictate life and death are often momentary and subjective? If humans themselves cannot really control ambiguous situations like these, how can we expect a machine to?

For the most part, self-driving cars will be safer than human-driven cars and will save thousands of lives. That is a fact. But in some instances the human spirit cannot be trumped, and artificial intelligence can only do so much. That "so much" is, for the most part, far more than humans can do - but machines will always have their moral limit.

Nowhere is this made clearer than in I, Robot (2004), when our protagonist loses faith in artificial intelligence after a robot saves him from drowning but allows a little girl to die at the scene of the same car accident. By percentage measure he had the better chance of survival, and so he was saved, despite his protests that the machine should leave him and save her instead. That is what he would have done. That was the human thing to do.

How human can machines actually be? On the cusp of the autonomous driving revolution, this is the predominant moral question that plagues us.