Why does a learning robot behave the way it does? Unnervingly, the answer is: we don’t know. This has enormous implications for workplace ethics.
The most advanced learning robots around today are known as ‘deep learning agents’. Their neural networks, loosely modelled on the human brain, are trained on vast quantities of data. A computational model of a neural network, built from mathematics and algorithms, was first developed back in 1943, but it is only in the past three or four years that computers have become powerful enough to use such networks effectively.
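For readers curious what that 1943-style model actually looks like, here is a minimal sketch of a single thresholded artificial neuron in that spirit. The weights, inputs and threshold are invented for illustration and do not come from any real system.

```python
# A minimal sketch of a 1943-style (McCulloch-Pitts) artificial neuron:
# it sums weighted binary inputs and "fires" only if a threshold is reached.
# Weights and threshold here are illustrative, not taken from any real robot.

def simple_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example: with these settings the neuron behaves like a logical AND.
print(simple_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(simple_neuron([1, 0], [1, 1], threshold=2))  # -> 0
```

Modern deep learning stacks millions of units like this into many layers and adjusts the weights automatically from data, rather than setting them by hand.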
A deep learning robot isn’t programmed to perform a task in a particular way. Instead, it learns by doing the task repeatedly and evolving a strategy for success. The end result is sometimes one that its human creators were not prepared for – and do not understand. Bias, sexism and racial profiling have all cropped up in decisions made by deep learning robots.
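To make “learning by doing” concrete, here is a minimal, hypothetical sketch of an agent that repeats a task, keeps a running estimate of how well each of two invented actions pays off, and gradually evolves a preference for the better one. Nothing here reflects any real robot’s code.

```python
import random

# The agent is never told which action is better; it discovers this by trial
# and error. The payoff probabilities below are hidden from the agent and are
# made up purely for illustration.
REWARD_PROB = {"action_a": 0.3, "action_b": 0.7}
estimates = {a: 0.0 for a in REWARD_PROB}   # learned value of each action
counts = {a: 0 for a in REWARD_PROB}        # how often each was tried

for trial in range(1000):
    # Mostly exploit the best-looking action, occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(list(REWARD_PROB))
    else:
        action = max(estimates, key=estimates.get)

    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0

    # Update the running average reward for the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the agent "discovers" that action_b pays off more often
```

The strategy emerges from repetition rather than from any explicit instruction, which is also why its creators cannot always predict what it will settle on.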
In a timely response to these developments, the British Standards Institute has issued some guidelines on what constitutes good and bad learning robot behaviour. As a baseline, they begin by stating that robots shouldn’t be designed “solely or primarily” to turn on their masters and kill us. Glad they got that one straight. Some of the most vivid fictional creations – think Frankenstein, Hal and The Terminator – are potent symbols of our subconscious fears of artificial life.
Less catastrophic – but certainly still alarming – the guidelines go on to address the potential for robots to deceive or to exceed their remits, as well as the possibility of humans becoming addicted to bonding with artificial life. This isn’t the realm of science fiction but the result of experiments that have already taken place.
Alan Winfield, professor of robotics at the University of the West of England, said the guidelines represented “the first step towards embedding ethical values into robotics.” He believes it’s a necessary move, because deep learning robots can trawl the entire internet for information, and that information carries inherent bias.
“All the human prejudices tend to be absorbed. These systems tend to favour white middle-aged men, which is clearly a disaster,” he says.
There is already evidence that voice recognition software does not understand women as well as it does men, and that facial recognition programs do not identify black faces as easily as white ones.
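One common way such bias is uncovered is by measuring performance separately for each group rather than overall. The sketch below uses made-up records purely to illustrate the check: the overall numbers can look acceptable while one group is served far worse.

```python
# Hypothetical evaluation records: each entry says which group a test case
# belonged to and whether the system got it right. The data is invented.
records = [
    {"group": "men",   "correct": True},  {"group": "men",   "correct": True},
    {"group": "men",   "correct": True},  {"group": "men",   "correct": False},
    {"group": "women", "correct": True},  {"group": "women", "correct": False},
    {"group": "women", "correct": False}, {"group": "women", "correct": False},
]

def accuracy_by_group(rows):
    """Return the share of correct results for each group separately."""
    totals, hits = {}, {}
    for row in rows:
        g = row["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if row["correct"] else 0)
    return {g: hits[g] / totals[g] for g in totals}

print(accuracy_by_group(records))  # e.g. {'men': 0.75, 'women': 0.25}
```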
Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield in the UK, says that bias has already shown up in robot technologies used by police forces to identify suspicious people at airports, and that these systems have proved to be a form of racial profiling.
“We need a black box on robots that can be opened and examined,” Sharkey told The Guardian. “If a robot is being racist, unlike a police officer, we can switch it off and take it off the street.”
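Sharkey’s “black box” is essentially an audit trail: every input and decision is written somewhere it can later be opened and examined. The sketch below is a minimal, hypothetical version of that idea; the class name, fields and file path are assumptions, not any standard.

```python
import json
import time

class DecisionRecorder:
    """Append-only log of a robot's decisions, for later examination."""

    def __init__(self, path="robot_blackbox.log"):
        self.path = path

    def record(self, inputs, decision):
        """Write one timestamped decision record to the log file."""
        entry = {"time": time.time(), "inputs": inputs, "decision": decision}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: wrap whatever the robot decides so every decision stays auditable.
recorder = DecisionRecorder()
recorder.record(inputs={"sensor": "camera_frame_042"}, decision="flag_for_review")
```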
The guidelines also warn against humans becoming over-dependent on robots. When machines consistently provide us with the right answers, the tendency is to hand over trust or become lazy. Self-learning robots, they say, “might develop new or amended action plans … that could have unforeseen consequences”, and there is the potential for the “abrogation of legal responsibility” by robots.
While artificial intelligence clearly has many benefits to offer, especially in supporting human beings to make better decisions and to pursue scientific goals that are currently beyond our reach, the signs are that our awareness of the potential for robots to outwit us is growing.
The consensus among scientists is that in the next five to 10 years, we will be living alongside machine learning systems. The robots are definitely coming. The question is, are we ready for them?