First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This is how “The Three Laws of Robotics” are worded, as formulated by the American science fiction author Isaac Asimov in 1942 – a proposal for a set of ethical rules for intelligent robots. In Asimov’s futuristic universe, more or less faithfully rendered in the films “I, Robot” and “Bicentennial Man”, these laws are hard-coded into every robot, making them physically incapable of breaching them. Asimov uses the laws in a number of stories that show how a few very simple and transparent rules can have unforeseen consequences. It’s possible he deliberately designed the laws to be so broad that consequences like these would emerge and provide more fodder for his stories. Nonetheless, the point remains valid: no matter which ethical rules we create for robots, they will have unforeseen and potentially fatal consequences.
Asimov’s robot laws presuppose a certain level of intelligence in robots, since a robot must sometimes judge whether its action or inaction will bring a person to harm – and to what degree. Should a robot refuse to light a candle on the kitchen table because the open flame could be dangerous? Should a robot snatch cigarettes from the mouths of smokers? If the robot follows Asimov’s laws to the letter, it should do exactly that, no matter what the smoker thinks. That is more power over human beings than most people would like a robot to have – but where do you draw the line?
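To make the line-drawing problem concrete, here is a minimal sketch of what a literal, hard-coded version of the three laws could look like. Nothing in it comes from Asimov; the harm estimates and the threshold are invented for the example, and the point is precisely that any such threshold is an arbitrary line.

```python
# A purely illustrative sketch: Asimov's three laws as a strict priority check.
# All probabilities and the threshold are invented; a literal implementation
# forces someone to pick an arbitrary "acceptable harm" line.

HARM_THRESHOLD = 0.01  # above this estimated risk, the First Law vetoes the action


def first_law_allows(estimated_harm_probability):
    """Veto any action (or inaction) whose estimated risk to a human is too high."""
    return estimated_harm_probability <= HARM_THRESHOLD


def decide(action, estimated_harm_probability, ordered_by_human=False, threatens_self=False):
    # First Law: never injure a human, or allow harm through inaction.
    if not first_law_allows(estimated_harm_probability):
        return f"refuse '{action}' (First Law)"
    # Second Law: obey humans, unless that conflicts with the First Law.
    if ordered_by_human:
        return f"obey '{action}' (Second Law)"
    # Third Law: protect itself, unless that conflicts with the first two laws.
    if threatens_self:
        return f"avoid '{action}' (Third Law)"
    return f"free to do '{action}'"


# A candle flame carries a tiny but non-zero risk of burning someone.
print(decide("light a candle", estimated_harm_probability=0.02, ordered_by_human=True))
# -> refuse 'light a candle' (First Law)
```

Set the threshold low enough and the robot refuses to light candles; set it high enough and it leaves smokers alone. There is no obviously correct value.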
It’s difficult enough to agree on ethical guidelines for humans. Take for example the classic “trolley problem”, here with a tram: a runaway tram is speeding towards a junction where you are standing by the switch. If you let it continue, it will hit and kill two railway workers who won’t be able to jump clear. You can choose to divert it onto the other track, where it will hit and kill just one railway worker. What should you do? If you don’t act, two people die; if you act, only one dies, but you’ve made a conscious choice to kill that individual – which makes you the one passing judgement on who should live or die, instead of leaving it to fate. If you still think it’s better that just one person dies, we can make the problem harder. Instead of two railway workers, the tram is speeding towards two mischievous teenagers who have deliberately ignored a “No Trespassing” sign and have no business being on the track. Should the law-abiding railway worker pay the price for two juvenile delinquents breaking the law? You decide…
It doesn’t get any easier when it’s machines we delegate the decisions to. Let’s say you own a robot car, programmed to minimise the loss of human life if a collision is unavoidable. That means that if it has to choose between letting you, its owner, die or running down two homeless drunks in the middle of the road, it will choose to save the drunks. Would you buy the car if you knew that? Or would you and other car owners demand that the robot car value the owner’s life above all others – and if so, do you share the responsibility for the choices the car makes?
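As a thought experiment, here is what such a rule might reduce to in code. The probabilities are invented and the owner_weight parameter is purely hypothetical; it only serves to show how much the car’s ethics hinges on a single constant that someone has to choose.

```python
# Illustrative only: choosing a crash manoeuvre by minimising expected fatalities.
# All numbers are invented; owner_weight > 1 makes the car favour its owner.

def expected_fatalities(option, owner_weight=1.0):
    """Weighted expected deaths for one manoeuvre."""
    total = 0.0
    for person, death_probability in option["at_risk"]:
        weight = owner_weight if person == "owner" else 1.0
        total += weight * death_probability
    return total


def choose_manoeuvre(options, owner_weight=1.0):
    return min(options, key=lambda o: expected_fatalities(o, owner_weight))


options = [
    {"name": "swerve into barrier", "at_risk": [("owner", 0.7)]},
    {"name": "stay on course", "at_risk": [("pedestrian 1", 0.9), ("pedestrian 2", 0.9)]},
]

print(choose_manoeuvre(options)["name"])                    # -> swerve into barrier
print(choose_manoeuvre(options, owner_weight=3.0)["name"])  # -> stay on course
```

Change one number and the car sacrifices its owner; change it back and it sacrifices the pedestrians. The ethics lives entirely in that parameter.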
We’ve begun to use robotic drones in war and other violent conflicts, and that’s naturally raised the question of whether a robot should be allowed to decide to kill an enemy. It’s a slippery slope if we allow machines to kill humans, say those opposed to the idea, and it would also be an obvious breach of the first and most important of Asimov’s laws of robotics. But it’s by no means a clear-cut issue. What if a drone could shoot a suicide bomber before he’s able to detonate a bomb in the midst of a crowd – but can’t do it because it needs to wait for permission from a human?
It’s easy to think of situations where it’s difficult to decide what a human being or a machine ought to do, but in the real world it’s rarely so clear-cut what the consequences of an action will be. Can the robot car be sure the homeless drunks will die if it runs them over? Can the drone be sure that it really is a suicide bomber, and that a bullet will stop him before the bomb can be detonated? We don’t always have full knowledge of real-life situations, and incomplete or faulty information is perhaps the biggest problem when we let machines make decisions for us.
We have a tendency to believe that computers are 100% rational and unprejudiced, and that their decisions will therefore be fair and impartial, but unfortunately this is not always the case. A computer’s decisions are no better than the data and algorithms fed into it – “garbage in, garbage out”, as they say in the IT world. In the USA, police forces have started using computer systems that predict where crimes are most likely to be committed based on past incidents. It’s well known, however, that American police officers stop and detain black people in the ghetto more often than white citizens in more affluent neighbourhoods, and a disproportionate number of poor black people are therefore arrested for illegal possession of firearms or narcotics. That colours the statistics, and when the data is fed into the computer system, it will predict more crime in poor black neighbourhoods and direct more officers there – where they will naturally find more crime than in the affluent communities the police largely leave alone. By the same token, if a trigger-happy Muslim is labelled a “terrorist” while a gun-crazed white American is called a “disturbed lone wolf”, it affects who the system considers potential terrorists. Human prejudice is propagated and reinforced by machines, and since machines are assumed to be impartial, these prejudices become part and parcel of the law.
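The feedback loop can be shown with a toy simulation – not any real policing system, and with invented numbers. Both districts have exactly the same true crime rate, but the historical arrest data is skewed, and the system never discovers that, because it only looks where its own predictions send it.

```python
# A toy feedback loop, not any real policing system; all numbers are invented.
# Both districts have the same true crime rate, but the historical arrest data
# is skewed. Patrols follow the predictions, recorded crime follows the patrols,
# and the system keeps "confirming" its own biased input.

TRUE_CRIME_RATE = 0.05                                          # identical in both districts
TOTAL_PATROLS = 100
recorded = {"poor district": 60.0, "affluent district": 40.0}   # skewed history

for year in range(1, 6):
    total = sum(recorded.values())
    for district, past in list(recorded.items()):
        patrols = TOTAL_PATROLS * past / total        # predictions drive deployment
        recorded[district] += patrols * TRUE_CRIME_RATE  # crime is found where police look
    share = 100 * recorded["poor district"] / sum(recorded.values())
    print(f"year {year}: {share:.0f}% of recorded crime in the poor district (true share: 50%)")
```

The recorded share stays at 60/40 year after year, even though the true split is 50/50 – and a system that concentrates patrols even more aggressively on the top-ranked district would widen the gap rather than merely preserve it.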
No matter the circumstances, ethical human beings are a prerequisite for ethical machines. People can of course choose to program a robot to act unethically, or give it a prejudiced view of the world that legitimises actions a majority would consider unethical.
At the current pace of development, within ten years we’ll have computers whose raw processing power rivals that of a human brain. It’s far from certain that such computers will develop a consciousness, but it’s not a given that they won’t. Our own brains are composed of simple neurons that have no consciousness individually yet somehow form one in combination, without our knowing exactly how this happens. So we can’t really know whether machines that, like us, learn from experience might not form a consciousness in the same way.
In the future, it may be difficult to determine whether a computer has self-awareness or is just programmed to create the illusion of consciousness. There are already programs claimed to have passed the so-called “Turing test” – a test of whether a machine can pass for a human in conversation – even though their own programmers don’t consider them self-aware. Perhaps the existence of a consciousness can only be determined from within – “I think, therefore I am” – and it may be hard for a computer to communicate its self-awareness to the outside world. It may therefore be necessary to introduce “human rights” for machines we’re not sure have any self-awareness, just to err on the side of caution. But what should these rights consist of? Can we even guess what needs an artificial consciousness might have?
Our human needs arose through aeons of evolution and are rooted in each individual’s survival and reproduction: traits that promote survival and reproduction are naturally more likely to be passed on than traits that don’t. Self-aware machines wouldn’t arise through evolution and therefore wouldn’t share these needs. Maybe their main need would be to solve the tasks they were built to solve, or maybe completely foreign, non-human needs would emerge in their artificial brains. It’s possible the machines would demand the same rights people have, simply from a desire to be treated as equals, even if their inner needs are quite different and can’t even be formulated in a human language. Maybe we’d choose to give machines the same rights we have, either for lack of a better idea or in the hope that it would make them more like ourselves – our true descendants.