For far too long, humans have viewed robots as threats; a new school of thought wants us to look at them as partners
For a long time, we have viewed robots and artificial intelligence (AI) as a threat, and the steady stream of doomsday films and fiction on the subject hasn't helped. Even the likes of Elon Musk have repeatedly warned that an AI takeover is a real risk we should guard against. Researchers and scientists, however, have been quick to dismiss such concerns. Eminent researcher Dr Kate Darling, a specialist in human-robot interaction, robot ethics, and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab, argues that we would be better off thinking of robots and AI as animals and, by extension, as partners.
In a recent interview with The Guardian, she reasons that we domesticated animals because they are useful to us. Although animals and robots are not the same, the analogy helps us discard the problematic human-robot comparison. The fear that AI will take over the world is highly unrealistic: robots, and even sophisticated AI systems, operate within narrow, human-designed envelopes. Opening our minds to the possibility of robots as partners gives us more options in how we use the technology.
Sure, there is the question of robots taking away people's jobs, but Kate argues that this is driven by a broader economic and political system of corporate capitalism. The animal analogy suggests that we have options in how robots can supplement human labour. She also questions the contention that robots could harm us and should be held responsible for it. Given the limited technological envelope in which robots operate, it is up to the robot's designers and developers to ensure that no harm comes to humans. If harm is caused, Kate believes, the moral accountability rests with the people who built the robot. That is not how things work today, however: companies and tech manufacturers routinely try to evade responsibility for their products. A case in point is the pedestrian killed by a self-driving Uber in 2018, where the back-up driver was held accountable instead of the manufacturer.
Kate has seven Pleo baby robot dinosaurs (one of which she is holding in the adjoining photo), an Aibo robotic dog, a Paro baby seal robot, and a Jibo robot assistant. These have helped her learn first-hand about our ability to empathise with robots, and there is now a substantial body of research showing that we do. However, she worries about how companies could take advantage of people through social robots, and about the growing amounts of data those companies collect. Since much of social robotics involves characters modelled on humans, there are also fears of gender and racial biases creeping into them.
On the question of whether we should give rights to robots, Kate argues that even though sentient robots are a long way off, it is worth thinking about robot rights the way we think about animal rights. She fears, however, that our biases are bound to creep in: as with animals, we might advocate for the rights of some robots over others.