
René Descartes. Image via Wikipedia

When we think about an artificial intelligence system, we usually imagine a system that will produce an optimal, or even exact, solution to a problem. But can we really trust a system with that objective when the problem is related to management?

AI systems are implemented on computers, and computers are predictable systems: given their current state, there is exactly one output for a certain input. They do not natively work with uncertainty. This is a conceptually important point, because we are interested in AI especially when the problem to solve is very complex, more complex than our brain can easily manage.

Unpredictability is a property of any complex system. A complex system can change its behavior in an unexpected way, so an AI system must work with uncertainty. This is one reason why, for many applications, artificial neural networks fit better than expert systems: the former can integrate noisy or random information easily, while the latter, being rule-based, cannot.
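To make the contrast concrete, here is a minimal sketch, not from the original post: the incident scenario, thresholds, and data are invented for illustration. It compares a hard-coded rule with a tiny perceptron trained on noisy examples. The rule covers only the cases its author anticipated, while the perceptron derives its decision boundary from the noisy data itself.

```python
import random

random.seed(42)

# Hypothetical scenario: classify an incident as "urgent" (1) or "routine" (0)
# from two noisy signals, cost overrun and delay (both as fractions of budget/plan).

def expert_rule(cost: float, delay: float) -> int:
    # A brittle, hand-written rule: it only fires on the cases its author imagined.
    return 1 if cost > 0.5 and delay > 0.5 else 0

# Noisy observations: the "true" boundary is cost + delay > 0.8,
# but every measurement carries random noise.
data = []
for _ in range(200):
    cost, delay = random.random(), random.random()
    label = 1 if cost + delay + random.gauss(0, 0.1) > 0.8 else 0
    data.append((cost, delay, label))

# A minimal perceptron: the decision boundary is learned from the noisy
# examples themselves rather than written down in advance.
w0, w1, b, lr = 0.0, 0.0, 0.0, 0.1
for _ in range(50):
    for cost, delay, label in data:
        pred = 1 if w0 * cost + w1 * delay + b > 0 else 0
        err = label - pred
        w0 += lr * err * cost
        w1 += lr * err * delay
        b += lr * err

# An urgent input outside the rule's anticipated region: high cost, low delay.
print("rule says:", expert_rule(0.9, 0.2))  # always 0: the rule misses it
print("perceptron says:", 1 if w0 * 0.9 + w1 * 0.2 + b > 0 else 0)  # usually 1
```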

If you have worked at a large company with automated, highly precise administrative procedures, you have probably suffered what happens when a new, unanticipated event appears. The organization cannot perform a simple task because the system does not support it, and the task is delayed until a human brain decides to phone someone so that it gets done manually.

It is true that a computer system could handle the previous example with a rule that forwards the incident by e-mail to a human supervisor, just as a human operator would. But then the system would only have the utility of an unskilled operator, not the capability of an intelligent manager.
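A minimal sketch of such an escalation rule, assuming a hypothetical event-driven back office and a local SMTP relay (the event names, addresses, and relay are all invented for the example):

```python
import smtplib
from email.message import EmailMessage

# Hypothetical: the set of events the automated procedures actually cover.
KNOWN_EVENTS = {"invoice_received", "order_shipped", "payment_cleared"}

def process(event_type: str, payload: dict) -> None:
    print(f"automated path: {event_type} -> {payload}")

def escalate_to_supervisor(event_type: str, payload: dict) -> None:
    # The e-mail rule: hand anything unrecognized to a human, like an
    # operator forwarding a problem rather than a manager solving it.
    msg = EmailMessage()
    msg["Subject"] = f"Unhandled event: {event_type}"
    msg["From"] = "erp@example.com"
    msg["To"] = "supervisor@example.com"
    msg.set_content(f"No rule covers this event. Please handle manually:\n{payload}")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def handle_event(event_type: str, payload: dict) -> None:
    if event_type in KNOWN_EVENTS:
        process(event_type, payload)
    else:
        escalate_to_supervisor(event_type, payload)

handle_event("invoice_received", {"id": 42})     # automated
handle_event("strike_at_warehouse", {"id": 43})  # escalated to a human
```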

Expert systems are computer programs designed to make decisions the way a human operator would. Because their rules can be expressed at a higher conceptual level than ordinary software, they can imitate rule-based human reasoning to some extent. But they share the fundamental problem of conventional software: they are not designed to cope with unexpected events. In fact, they can become a source of additional complexity in an organization, because complexity grows with the number of rules while our ability to ensure that all potential events are covered shrinks.
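As a rough illustration of why rule count drives complexity, here is a toy forward-chaining rule engine (the rules and field names are invented for the sketch): every new business case adds a rule, nothing guarantees the rules cover events nobody thought of, and interactions between rules get harder to verify with each addition.

```python
from typing import Callable

# Each rule is a (condition, action) pair over a dictionary of facts.
Rule = tuple[Callable[[dict], bool], Callable[[dict], None]]

RULES: list[Rule] = [
    (lambda f: f.get("amount", 0) > 10_000, lambda f: f.update(status="needs_approval")),
    (lambda f: f.get("customer") == "new",  lambda f: f.update(status="credit_check")),
    # ...dozens more accumulate over time; each addition widens the space of
    # possible interactions that someone must reason about by hand.
]

def run(facts: dict) -> dict:
    for condition, action in RULES:
        if condition(facts):
            action(facts)
            break
    else:
        # No rule matched: exactly the "unexpected event" the post describes.
        facts["status"] = "unhandled"
    return facts

print(run({"amount": 50_000}))                        # routed to approval
print(run({"customer": "returning", "amount": 10}))   # falls through to 'unhandled'
```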

Although expert systems can be useful for some tasks, we should recognize that they probably rest on a conceptual error that may trace back to René Descartes, the French philosopher. They are designed on the assumption that a system, starting from thought, can explain the outer world. Perhaps a better paradigm is the opposite: it is the outer world that builds a proper thinking system. A system with self-learning capability would be preferable for reproducing, and improving on, human reasoning.
