Robotics and artificial intelligence (AI) are two fields that are changing the world and becoming ever more relevant to our daily lives. Robots on a factory floor, assembling products and tirelessly performing repetitive tasks, are now a familiar sight.
Nowadays, even robots that were once figments of sci-fi movies are appearing in industry worldwide. Robotics departments of premier institutes around the world compete against each other in football tournaments. Honda, the Japanese automaker, has developed a humanoid robot named ASIMO, which can walk and run on two legs and can process and execute voice commands, gestures, and more.
Robots are becoming ever more present in our daily lives. In the near future, they might replace humans in tasks such as driving, traffic monitoring, and even crime fighting. The theoretical threat of robots turning against humans, though improbable, has therefore been debated for decades. The concept was first envisioned by the popular sci-fi author Isaac Asimov in his 1942 short story "Runaround", in which he set out three basic laws to govern our relationship with robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
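Read as a decision procedure, the three laws form a strict priority ordering: each law applies only so far as the laws above it are satisfied. A minimal sketch of that ordering in Python (the function and all field names are hypothetical, purely for illustration):

```python
def law_violated(action):
    """Return the number of the highest-priority law an action violates,
    or 0 if the action is permissible under all three.

    `action` is a dict of illustrative boolean flags; checking the flags
    in order encodes the First-over-Second-over-Third precedence.
    """
    # First Law: harming a human (by action or inaction) overrides everything.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return 1
    # Second Law: disobeying a human order, unless obedience would break Law 1.
    if action.get("disobeys_order"):
        return 2
    # Third Law: self-preservation, subordinate to Laws 1 and 2.
    if action.get("endangers_robot"):
        return 3
    return 0
```

A robot that must damage itself to obey an order would see no violation here: `endangers_robot` is only checked after the order has been satisfied, which is exactly the subordination the Third Law spells out.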
These three laws were central to his storylines. They covered the common scenarios so well that much of the AI and robotics community has embraced them. It is remarkable that a science fiction writer devised, in 1942, three laws that would remain relevant well beyond his time: an era when few imagined that computers would one day shrink to less than half a room, let alone fit in our palms with (for those times) unimaginable processing power.
At first glance, these might seem like perfect laws. Yet like all laws, they do not cover every scenario, and they certainly do not cover every interpretation. Asimov's later books exposed loopholes: robots, reasoning through their AI capabilities, calculate that curbing humanity's freedom is one of the only ways to follow the First Law, since a robot technically cannot, through inaction, allow humans to come to harm.

While one side supports the Three Laws, there is also a lobby that considers them redundant and unnecessary. They argue that international law could hold robots to the same limits the Three Laws specify, while avoiding conflicts such as the one above. They also believe that robots and AI constructs are, by design, meant to keep humans safer and to enrich human life: for example, autonomous cars, which can drive people around far more safely than humans ever can, and robotic surgery.
However, the counter-argument is that robots may also be designed to harm. A commonly cited example is the United States armed forces' use of Predator drones: robots designed specifically as killing machines.
Although the relevance of the Three Laws to modern and future robotics and AI may be debatable, it is fascinating that one man thought all of this through more than 70 years ago, designing laws that, while still debated, remain accepted as good enough to use.
Srinivasan is a student at UCSI University. He recently completed his first internship at Datum.