The ‘trolley problem’ has served to torment the friends of amateur philosophers for decades. First articulated in its modern form by Philippa Foot in 1967, the quandary is designed to explore the basic moral principles underpinning utilitarian ethics.
Imagine yourself in a train yard, standing before a lever which can change the route of oncoming trains. You hear a shout and turn to see a runaway cart barrelling down the line towards five oblivious rail workers. For some reason these five workers cannot discover the peril and save themselves; perhaps they are wearing noise-cancelling headgear and so cannot be warned of their impending doom. You alone have the power to save these five lives, but not without consequence. A lone worker, equally oblivious and unwarnable, stands on the other fork of the track. If you pull the lever, you can save the five men working on the main line, but only by sacrificing the one man on the other track.
The question has been asked in many iterations over the years. What if the lone worker were in fact a child, or a relative? What if they had discovered the cure for cancer but for some reason hadn’t communicated these findings to the relevant authorities? What if the five men are convicts? What if the five are in their eighties while the lone worker is in their twenties? And so on and so forth, ad absurdum. I have personally always found this problem incredibly annoying: the situation is so fundamentally artificial that the answers can’t hope to be fruitful. It seems obvious that five lives should be prioritised over one, but I wouldn’t want to have to make that decision and take the action that kills the one, particularly if it meant sacrificing a loved one for strangers. It is hard to give any answer other than ‘I don’t know’.
With the advent of self-driving cars, however, it seems that this question may finally have a real-world application. A truly advanced on-board computer could hypothetically have to make these kinds of decisions, albeit rarely, in a real-world situation. That programmers are having to consider these questions is interesting, but will people feel comfortable handing over these life-or-death moral decisions to computers and their programmers? How can we come to terms with a terrible accident for which nobody can really be blamed?
There are several ways in which this problem has been approached, each varying in degree of complexity. The CityMobil2 project in Italy has circumvented the problem completely by not addressing the decision-making process at all. Far simpler than the systems underpinning the self-driving cars in development at companies such as Google, the cars developed for this project are simply programmed to follow a route, braking if something gets in their way. Can it be morally justifiable to follow this policy, however, if it risks more lives? In practice, the ‘don’t swerve, just brake’ rule often yields the best outcomes, statistically speaking, but there are hypothetical instances in which it could lead to preventable tragedy. For instance, if a large group of pedestrians were suddenly to run into the road in front of a car, or if a school bus were hurtling towards a collapsed section of road, a swerve could certainly save lives, provided there was no oncoming traffic.
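To make the contrast concrete, here is a minimal sketch of that kind of ‘follow the route, brake if blocked’ policy. The class, field and threshold names are my own assumptions for the sake of illustration and are not taken from the CityMobil2 software:

# A minimal, hypothetical sketch of a 'follow the route, brake if blocked' policy.
# All names and thresholds are illustrative; none are taken from CityMobil2.

from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool            # anything detected in the vehicle's path
    distance_to_obstacle_m: float   # how far away it is

def simple_shuttle_policy(p: Perception, cruise_speed_mps: float = 5.0) -> dict:
    """Keep the planned route and never swerve; brake if the path is blocked."""
    if p.obstacle_ahead and p.distance_to_obstacle_m < 15.0:
        return {"steer": "follow_route", "throttle": 0.0, "brake": 1.0}
    return {"steer": "follow_route", "throttle": cruise_speed_mps, "brake": 0.0}

# Note that this policy has no branch for weighing one group of lives against
# another; it simply stops in its lane.

The point of the sketch is that such a car never has to answer the trolley question, because swerving is not in its repertoire at all.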
Another approach that has been taken to try to provide a more sophisticated answer is crowd-sourcing. The ‘Moral Machine’ is a website hosted by MIT, aimed at crowd-sourcing a ‘correct’ answer to a number of hypothetical scenarios. This seemed, to me, like a good idea. Once I received my results, I changed my mind. I do not, for instance, think that gender, social standing or physical fitness should be moral determinants in choosing whose life to endanger, but my results suggested that my subconscious does. Perhaps with enough questions, enough people and enough time some kind of consensus may be reached, but given how unrepresentative of my own beliefs the outcome of my responses was, these answers seem hard to trust. The blind luck involved in the case of human drivers, although it yields more deaths, seems fairer.
In light of this, the approach announced late last year by Mercedes-Benz executive Christoph von Hugo, of programming all cars to prioritise the safety of their drivers above all others, makes a certain kind of sense. At first glance it seems callous, and it runs counter to the general consensus that utilitarianism should dictate decisions surrounding these kinds of moral questions. But it does establish a hard and fast rule with a consistent outcome. There is clearly a commercial element at play in this decision; after all, who would buy a car that is programmed, in certain circumstances, to actively put their own life at risk? This is borne out in opinion polling too: although most people agree that self-driving cars should make decisions on a utilitarian basis, i.e. take the option which endangers the fewest people, they also say that when buying a self-driving car they would opt for one which prioritises its passengers’ safety above all others.
There is a greater logic to Mercedes-Benz’s decision, however. As von Hugo put it, the car has the most control over the safety of its own passengers; beyond that, he argued, there is too much uncertainty to make these decisions reliably. If a car swerves into a tree to avoid a group of pedestrians, for instance, the outcome cannot be predicted as easily as if the car were simply to brake and stay on course. The tree, after all, could fall down and kill more people. The simpler, more controllable rule of protecting the driver therefore makes sense.
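To see why the utilitarian rule and the occupant-first rule can point in opposite directions, here is a toy comparison. The function names and risk figures are entirely my own hypothetical framing, and the bystander figures are exactly the kind of estimate von Hugo argues a car cannot make reliably:

# Toy comparison of two decision rules; purely illustrative, not vendor code.
# Each candidate action carries rough, assumed probabilities of a fatality.

from typing import Dict, List

def utilitarian_choice(actions: List[Dict]) -> Dict:
    """Pick the action with the lowest total expected casualties, occupants included."""
    return min(actions, key=lambda a: a["occupant_risk"] + a["bystander_risk"])

def occupant_first_choice(actions: List[Dict]) -> Dict:
    """Pick the action safest for the occupants; break ties by bystander risk."""
    return min(actions, key=lambda a: (a["occupant_risk"], a["bystander_risk"]))

actions = [
    {"name": "brake in lane",    "occupant_risk": 0.05, "bystander_risk": 0.60},
    {"name": "swerve into tree", "occupant_risk": 0.40, "bystander_risk": 0.05},
]

print(utilitarian_choice(actions)["name"])     # swerve into tree (lowest total risk)
print(occupant_first_choice(actions)["name"])  # brake in lane (safest for occupants)

The divergence only matters if the estimated risks can be trusted; if the bystander numbers are little better than guesses, the occupant-first rule is the one whose inputs the car can actually measure.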
It is easy to get carried away with these thought experiments, and many of those developing self-driving cars have certainly grown very tired of them. Andrew Chatham, a principal engineer on Google’s self-driving car project, sarcastically remarked how much he and his team ‘love the trolley problem’. These questions are interesting to think about, but in reality they do not seem to be that significant a part of the story. In millions of miles of test drives there has yet to be a situation in which any of these moral quandaries has had to be addressed, and many supporters of self-driving cars worry that fussing over them could delay the far greater utilitarian gain of replacing human drivers with computers. Human error accounts for roughly 90% of car-related fatalities, so self-driving cars would reduce the death toll on the world’s roads by a large margin. They are also more fuel-efficient and reduce congestion.
So, in a roundabout way, I suppose I can still smugly dismiss these questions as annoying and unhelpful. The utilitarian argument overwhelmingly favours the use of self-driving cars, with or without moral machinery. It is an interesting question, and one that must inevitably be answered by the groups working on the development of these vehicles, but it is worth remembering that on a practical level the moral judgement of any car will never have to cover the same enormous range of failings as the moral judgement of people. A computer would never drive drunk, would never be in a hurry, would never suffer from road rage. The age of self-driving cars seems soon to be upon us, and regardless of their moral sensibilities, they are going to save a lot of lives.