The ‘trolley problem’ is a thought experiment in ethics that can be described as follows: picture an empty train hurtling down a track. Pan along the track, and there are five people tied to the rails who will be killed if the train continues. You are standing by a lever that can divert the train onto a second track, saving the lives of the five individuals. Tied to the second track, however, is one individual. Should you actively commit murder for the greater good? Or should you simply not interfere and allow the greater suffering?

Most people consider it moral to divert the train, as the loss of five lives would be worse than the loss of one. Their response often changes, however, when the scenario is altered slightly: if you could push a 150 kg innocent bystander in front of the empty train and derail it, again saving five lives, would that be moral? Something about the intimacy of the ‘push’ makes people reluctant to sacrifice one individual to save five, and it is easy to imagine the ‘active murder’ response rate falling further if participants had to hold the 150 kg bystander down while the train approached. There is no right answer to this problem: our best understanding of people’s intuitions is that we have a visceral neurological response to killing someone up close which isn’t activated when we kill remotely, allowing us to be more calculating.

Historically, this hypothetical has helped to provide insights into how we reason when there are no good outcomes – it was a tool for academic fields such as neuroscience and philosophy. More recently, however, the lever that changes the track has become tangible: once the technology arrives, driverless cars will have to be programmed with some solution to the trolley problem. If an automated vehicle is travelling at 100 km/h and two people jump onto the road, should it swerve and kill one pedestrian instead? And what weight should it give to the passengers on board? These are immense moral dilemmas that will be thrust upon computer programmers in the coming decade, and every conceivable answer seems ugly.

Suppose the algorithm prioritises the safety of the owner at all costs – a popular proposition among potential consumers, according to early research. This could result in a driverless truck ploughing into a crowd of people to save its owner from injury. If the vehicle instead owes only a limited level of responsibility to its owner, someone will have to draw the line – between, say, seven victims and eight – as the cut-off point at which the occupants may be endangered. Or the vehicle could value all lives equally: in that case, six people could walk into a road completely assured of their safety, while cars carrying five passengers would swerve and collide with buildings to avoid them.
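To make the trade-offs concrete, here is a minimal sketch of the three policies described above. It is purely illustrative and not drawn from any real vehicle software: the function name, the simple casualty counts and the threshold of seven are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: three simplified collision policies.
# Nothing here reflects real autonomous-vehicle software; the inputs
# and the cut-off value are assumptions for the example.

def choose_action(policy, stay_course_deaths, swerve_deaths,
                  occupants_at_risk_if_swerve, cutoff=7):
    """Return 'stay' or 'swerve' under one of three toy moral policies."""
    if policy == "owner_first":
        # Protect the occupants at all costs: never swerve if doing so
        # endangers anyone inside the vehicle.
        return "stay" if occupants_at_risk_if_swerve > 0 else "swerve"

    if policy == "owner_threshold":
        # Limited responsibility to the owner: only endanger the occupants
        # once the number of pedestrians saved crosses an arbitrary cut-off.
        saved = stay_course_deaths - swerve_deaths
        return "swerve" if saved >= cutoff else "stay"

    if policy == "equal_weighting":
        # Value all lives equally: minimise total expected deaths,
        # occupants and pedestrians alike.
        swerve_total = swerve_deaths + occupants_at_risk_if_swerve
        return "swerve" if swerve_total < stay_course_deaths else "stay"

    raise ValueError(f"unknown policy: {policy}")
```

Even in this toy form the uncomfortable part is visible: someone has to pick the cut-off of seven, or accept that five passengers may be driven into a building to spare six pedestrians.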

Do you take one life to save five? Source: Scott Nauert, Wikimedia Commons.

Layered on top of this is the reality that these decisions are unavoidable. Driverless cars are almost certain to be several times safer than human-operated cars and to save hundreds of thousands of lives – refusing to adopt the technology universally when it becomes available would be a far greater evil than accepting it with some ethical reservations. Similarly, putting driverless cars on the road without an algorithm for complex collisions, when the technology exists, is a stance in itself – the ‘do not interfere and let five people die’ option in the trolley problem. Programmers and policymakers will have to come to a decision on what we, as a society, consider fairest in the worst possible scenarios.

Watching these debates unfold across the world will be compelling. The USA has already issued federal guidelines on driverless cars, but individual states will be expected to add legislative nuance. This makes setting a moral code a local responsibility in America, and it is perfectly likely that some states will regulate more than others. Will Kentucky lawmakers adopt a different moral code to those in Tennessee? Will we see regional trends, with a deregulated South forging ahead? One recommendation from the federal guidelines is that owners be required to take the wheel when the vehicle is on a collision course. A state reintroducing human error once the technology has surpassed us, however, suffers the same limitation as the example above: once death rates fall in a neighbouring state, the policy would be indefensible. Again, there are no easy answers.

What about difficult answers? One article speculates that the moral standard will have to be devolved beyond state level – consumers may be forced to choose a ‘moral framework’ for their vehicle when it is purchased. This is not as dystopian as it may sound: if the manufacturer or the state sets the algorithm, it will likely bear legal responsibility for any deaths caused as a result of that decision. The financial implications could be so severe that no company would take on the risk, which in turn would prevent the proliferation of the technology and result in further unnecessary deaths from human error. Making owners liable by having them choose a solution to the trolley problem circumvents the legal and moral obstacles faced by government and business, and does so in a way that allows driverless technology to permeate.
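If that devolution ever happened, the choice might look less like philosophy and more like a setup screen at the dealership. The sketch below is speculative – the framework names and the idea of recording the choice as a liability record are inventions for illustration – but it shows how an owner’s selection could be captured and bound to them at the point of purchase.

```python
# Speculative sketch of an owner-selected moral framework at purchase time.
# The framework names and the liability record are assumptions for illustration.

from dataclasses import dataclass
from datetime import datetime, timezone

AVAILABLE_FRAMEWORKS = {
    "owner_first": "Protect vehicle occupants at all costs.",
    "owner_threshold": "Protect occupants unless a set number of pedestrians can be saved.",
    "equal_weighting": "Value every life equally and minimise total deaths.",
}

@dataclass(frozen=True)
class MoralFrameworkChoice:
    owner_id: str
    framework: str
    accepted_at: str  # timestamp recorded so liability rests with the chooser

def register_choice(owner_id: str, framework: str) -> MoralFrameworkChoice:
    if framework not in AVAILABLE_FRAMEWORKS:
        raise ValueError(f"unknown framework: {framework}")
    return MoralFrameworkChoice(
        owner_id=owner_id,
        framework=framework,
        accepted_at=datetime.now(timezone.utc).isoformat(),
    )

# The owner, not the manufacturer or the state, signs off on the policy.
choice = register_choice("owner-0001", "equal_weighting")
print(choice.framework, "accepted at", choice.accepted_at)
```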

Looking further into the future, the resolution of this debate may set a precedent that is applied to other forms of artificial intelligence. Autonomous drones will have to be programmed with some level of acceptable collateral damage if they are to be used as their predecessors have been. Could a machine lie to a killer to keep you safe, or is it obligated to tell the truth – a dilemma known as Kant’s axe? There will be more and more opportunities for us to bring stone-cold rationality to the most stressful situations and, crucially, to achieve far better outcomes than we do at present. If our dialogue fails to pre-empt the technology, however, then the train will arrive before we’ve made it to the lever.

Scott Harvey
