Yesterday I read in a newspaper about an unfortunate accident in which an operator handling a robot in an industrial facility was killed. The way some newspapers in my country reported the story practically said that the robot was a murderer.
Although this may just be a way to sell the story (to grab the readers' attention), it creates a misunderstanding of what current robotics is. Reading the piece in my language, anybody might think that the robot's AI decided to kill the operator. Industrial robots are not androids (robots with a human appearance) like the Japanese ASIMO. Most industrial robots are robotic arms with a few degrees of freedom, and they do not implement much AI in their software. Movements are planned in advance, and few decisions are taken by the robot based on external data. In fact, it is possible that the accident was caused by a lack of AI rather than by AI.
AI is implemented in robotics to cope with uncertainty, both from the environment and internal to the robot, but mostly the external kind. A robotic arm on an assembly line does not work with much external uncertainty. The parts of the product it manipulates usually arrive in the proper position and orientation, and the robot follows a programmed movement to perform its task.
If the uncertainty of the task increases (for instance, if parts arrive with different poses), sensors and AI algorithms let the robot choose different movements by itself to complete the task. The higher the uncertainty, the greater the need for AI and the more complex the AI algorithms become. Mobile robots moving in a changing or unknown environment need far more sensors and AI to execute their tasks than typical industrial robots do.
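The idea can be illustrated with a minimal sketch. Everything here is invented for illustration (the pose thresholds, the motion names, the sensor interface): the point is only that a sensor reading, rather than a fixed program, selects which pre-planned movement gets executed.

```python
def select_motion(pose_angle_deg: float) -> str:
    """Pick a pre-planned motion from the part's measured orientation.

    Hypothetical logic: a real cell would query a vision system and a
    motion planner rather than compare against fixed angle thresholds.
    """
    if pose_angle_deg < 30:
        return "grasp_from_top"
    elif pose_angle_deg < 60:
        return "grasp_from_side"
    else:
        return "rotate_then_grasp"

# With no uncertainty the robot always runs one motion; with uncertain
# poses, the sensed pose decides which of several motions is executed.
print(select_motion(10))   # part arriving nearly upright
print(select_motion(75))   # part arriving tilted
```

The branching itself is the "AI" in this toy version: more uncertainty means more branches, more sensing, and more complex selection logic.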
These AI algorithms only try to interpret the scene from the sensor data and rebuild the path of movement so that the programmed task is executed properly. A fault would be a wrong interpretation of the scene from the sensor data, which would lead to an undesired path of movement.
From a complexity viewpoint, AI increases the complexity of the system, because a system with AI has more possible states (usually defined by the positions of the stepper motors at each joint) than one following a single planned path. This is the classical way a system works: it has a certain set of possible states, and we reduce its complexity through a control device. However, to fit the external conditions, there is a certain amount of complexity we must add to the system so that it can operate under some amount of uncertainty. We achieve this with a more complex control device.
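The state-counting argument can be made concrete with invented numbers (the joint count and stepper resolution below are hypothetical, chosen only to show the orders of magnitude involved):

```python
# Hypothetical arm: 6 joints, each stepper motor resolving 200 positions.
joints = 6
positions_per_joint = 200

# Configuration states the system could in principle occupy:
total_states = positions_per_joint ** joints
print(total_states)  # 200**6 = 64_000_000_000_000

# A single planned path visits only a short sequence of those states,
# say 1000 waypoints, which is what the operator actually has to predict.
path_states = 1000
print(path_states)
```

A fixed program confines the arm to a thousand states out of tens of trillions; an AI controller that re-plans from sensor data can, in principle, reach any of them, which is exactly the added complexity the text describes.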
AI techniques define a special way of controlling the system. Typical controllers only modify the signals to the actuators to reach a desired waypoint through some kind of feedback loop from the sensor inputs; AI systems, however, make decisions to select the waypoints, establishing different paths, much as a human would.
In this case, AI only makes decisions that modify how the programmed task is executed; it does not modify the task itself. We can go further and program the AI so that the robot can modify the task. This would introduce an additional level of complexity. The AI would select from a set of different tasks, but it can never choose a task like "I must kill people" (unless it has been programmed previously), because that would require an understanding of what killing means, which current AI does not provide. AI can, however, contribute both to provoking an accident and to avoiding one.
The functionality required of the system is what drives us to increase the complexity of the controller and of the system. This introduces additional safety requirements. If the robot has a single planned path, it is easy for the operator to know how it will evolve. However, if the number of possible states increases, it can become impossible for the operator to determine which positions are safe for manipulation. To avoid this situation, AI can be implemented to meet these safety requirements, for instance by analyzing the surroundings with some kind of sensor and stopping the motors if something unexpected is nearby. It would not be the industrial robot's AI that killed people, but the lack of it.
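A minimal sketch of that safety idea, with assumed details (the 0.5 m margin and the list-of-readings interface are invented; real cells use certified safety controllers and light curtains, not application code):

```python
SAFE_DISTANCE_M = 0.5  # assumed safety margin around the arm

def motors_enabled(proximity_readings_m: list[float]) -> bool:
    """Enable motion only if nothing unexpected is within the margin."""
    return all(d > SAFE_DISTANCE_M for d in proximity_readings_m)

print(motors_enabled([2.1, 1.7, 3.0]))  # clear workspace -> True
print(motors_enabled([2.1, 0.3, 3.0]))  # someone nearby   -> False
```

The point of the sketch is the direction of the argument: the perception layer here is what prevents the accident, so removing it, not adding it, is what makes the system dangerous.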
In terms of complexity, this is easy to understand. Complexity makes the system less predictable. As the number of possible paths increases, the probability that any particular one will be followed decreases; furthermore, since the AI selects the path from collected external data (which carries uncertainty), the chosen path becomes effectively unknown to an observer.
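That loss of predictability can be quantified in a toy way. Assuming, for illustration, that all paths are equally likely, the chance of guessing the executed path is 1/N, and the uncertainty grows as log2 N bits:

```python
import math

# N equally likely paths: guessing probability 1/N, entropy log2(N) bits.
for n_paths in (1, 8, 1024):
    p_guess = 1 / n_paths
    entropy_bits = math.log2(n_paths)
    print(n_paths, p_guess, entropy_bits)
```

One planned path carries zero bits of uncertainty for the operator; a thousand sensor-selected paths carry ten, which is the precise sense in which the AI-equipped system is "less predictable".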
Introducing a technology into a system, especially a system where there are humans, requires that the system be analyzed and modified with that technology taken into account, because the complexity of the system can be totally different. Safety procedures are usually defined so that people do not enter the area of all possible movements (without the control system) of a big robot while it is activated and working, and there is an emergency stop button (a "fear button") to stop it if necessary. With this in mind, it is more likely that working procedures, rather than the implemented AI, are responsible for a death.