Despite our automated production lines, our computers, and the visible rise of robotics and artificial intelligence, we remain, by and large, relaxed about the prospect of any threat to us and our civilisation. Even the voices of scaremongers, pundits, experts and media columnists largely go unheeded. So should we be worried, or are we being sensibly sanguine?
This topic was born of science fiction and the creation of the first automatons well over 100 years ago. It was amplified by Hollywood with HAL 9000 in 2001: A Space Odyssey and the T-800 in The Terminator, so we might have expected the public to become increasingly paranoid, but it would appear that this only afflicts ‘the experts’.
Where, then, does reality lie? It would be foolish to think that we can design out all the risks and likelihood of unexpected behaviours. But this appears to be the mission and grand design of those focused on ‘ethical programming’.
To date we have constructed automated production lines with scant regard for Asimov’s Laws of Robotics, and we have engineered autonomous weapon systems designed to maximise human deaths. The reality is that laws, design bounds and ethical standards cannot protect us against oversights and the actions of a society of machines, not to mention the inevitable intervention by human hand.
Asimov’s laws may turn out to be just the first step in a futile mission to invoke 100% control and behavioural certainty. They state that a robot may not: 0) harm humanity or allow humanity to come to harm; 1) injure a human or allow a human being to come to harm. But a robot must: 2) obey humans, except where this conflicts with the preceding laws; 3) protect its own existence, provided this creates no conflict with the laws above.
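The futility argument can be made concrete with a toy sketch. The Python below (the action encoding and all names are invented for illustration, not taken from Asimov) treats the laws as a flat, ordered list of vetoes. A single ‘obey or self-destruct’ dilemma already shows that the precedence clauses (‘except where…’) are doing all the work, and that any gap in them is a gap in control.

```python
# Toy sketch: Asimov's laws encoded as an ordered list of veto
# predicates over a proposed action (a plain dict of flags).
LAWS = [
    ("law 0", lambda a: a.get("harms_humanity", False)),   # do not harm humanity
    ("law 1", lambda a: a.get("harms_human", False)),      # do not injure a human
    ("law 2", lambda a: a.get("disobeys_order", False)),   # obey human orders
    ("law 3", lambda a: a.get("self_destructive", False)), # protect own existence
]

def permitted(action):
    """Return (allowed, reason); the first violated law, in priority order, vetoes."""
    for name, violates in LAWS:
        if violates(action):
            return False, name
    return True, "no law violated"

# A human orders the robot to destroy itself: both available options break a law.
print(permitted({"self_destructive": True}))  # → (False, 'law 3')
print(permitted({"disobeys_order": True}))    # → (False, 'law 2')
# Every option is vetoed, so a flat rule list cannot resolve the conflict;
# the laws only function because of their explicit precedence clauses.
```

A robot governed by such rules needs a tie-breaker for every pair of laws, and each tie-breaker is itself a design decision that can be wrong.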
A hammer was designed to knock in nails, and a screwdriver was designed to wind in screws, but both can be used as very effective murder weapons. Gun licensing and controls do not stop misuse, accidents and occasional acts of barbarity. And so it is with machines, even if you remove people from the equation. For example, production lines can be lethal to humans because they generally have no sensory awareness of our presence. That is not the fault of the machine but of the designer who opted for warning signs, red lines on the floor and wire-frame guards.
Now, suppose we were capable of coming up with foolproof laws for individual machines. We would still have to consider what happens when machines interact. Are we smart enough to apply the same rationale to a ‘society of machines’ and make it observe all the rules? This is very doubtful. The majority of men and women are not inclined to go to war, but nations often do so. All too often we see the preferences and wishes of the many swept aside by the desires of the few.
The thorny problem is emergent behaviour, which grows more prevalent and varied as the numbers of entities and communication links multiply. We have a very poor grasp of these aspects in the biological and human context, but we know that predicting the outcomes of societies from the behavioural characteristics of individual entities appears impossible. Our best efforts rest on simulation starting from the individual. Going from global behaviour to predicting the bounds of the individual is a puzzle we have yet to crack.
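The simulation point can be illustrated with a minimal sketch, assuming nothing beyond the Python standard library. In this voter-model-style toy (invented here for illustration), every agent follows one trivial rule: copy the opinion of a random peer. Which consensus the society reaches, and how long it takes, cannot be read off that rule; it only emerges by running the simulation.

```python
import random

def run(seed, n=100, steps=20000):
    """Simulate n agents holding opinion -1 or +1; return the final opinion sum."""
    rng = random.Random(seed)
    opinions = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        opinions[i] = opinions[j]      # the entire individual rule: copy a random peer
        if abs(sum(opinions)) == n:    # full consensus reached, either +n or -n
            break
    return sum(opinions)

# Identical rules and parameters; only the random seed differs. Different
# runs of the same society can drift toward different global outcomes.
print(run(seed=1), run(seed=2))
```

Nothing in the copy-a-peer rule says which side wins, yet the global result is decisive; this is the gap between specifying individuals and predicting the society they form.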
On another plane, there is a fundamental truth that many choose to ignore: centralised control always fails. So if we are to build in safeguards, we should ‘design for safety’ but include elements of self-checking for individuals, groups and the global society, complemented by immediate global corrections. And here comes the really good news. We are going to be in the driving seat for some significant time yet. And the bad news? We have no firm estimate of when the machines will break free of us. So, try as we might to achieve an ethical future, the chances of success are most likely declining.