How did we get to the current stage of development of these systems? One of the original motivations for developing bots was the desire to pass the Turing test. A brief explanation is in order here. The Turing test (according to Wikipedia) is a way of determining a machine's ability to use natural language and, indirectly, to demonstrate that it has mastered the ability to think in a way similar to a human. Alan Turing proposed this test in 1950 as part of his research into the creation of artificial intelligence (AI), replacing the emotionally loaded and, in his view, pointless question "Do machines think?" with a better-defined one.
Passing this test was long the goal of many research teams and fueled work on artificial intelligence and "rule-based" systems. Today, when we know how to build a program that will pass such a test without any problems, we understand much more about artificial intelligence and about the differences between conversational systems and actual "artificial intelligence." We also know that building a system capable of convincing a person in a blind test that they are talking to another person does not, in itself, create "intelligence."
So let's look at bots in the context of the function they are meant to perform: another interface in the human–machine relationship.
The development of today's electronics and cloud solutions means that we can use natural language as part of solutions that surround us in everyday life. Voice assistants (Google Now, Siri, Cortana, Alexa, etc.) allow us to interact with home-automation and entertainment systems, as well as make better use of web resources. Of course, these solutions can be complex AI-based systems or simple rule-based mechanisms.
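To make the distinction concrete, a rule-based mechanism can be as simple as matching keywords against a fixed table of responses. The following is a minimal, purely illustrative sketch (the rules and responses are invented for this example, not taken from any of the assistants mentioned above):

```python
# A toy rule-based bot: each rule maps a keyword to a canned response,
# with a fallback when no keyword matches. Real systems add intent
# classification, slot filling, and context tracking on top of this idea.

RULES = {
    "hello": "Hello! How can I help you?",
    "weather": "I can't check the forecast, but I hope it's sunny.",
    "bye": "Goodbye!",
}

def respond(utterance: str) -> str:
    """Return the response for the first rule whose keyword appears in the input."""
    text = utterance.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that."

print(respond("Hello there"))          # -> Hello! How can I help you?
print(respond("What's the weather?"))  # -> I can't check the forecast, ...
```

A system like this can appear conversational within its narrow domain, which is precisely why passing a blind test says little about genuine intelligence: the mechanism has no understanding, only lookup.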