Artificial intelligence exists not only in science-fiction movies but also in books. From self-driving cars to robotic surgeons, many of these technologies have already been developed. At a conference at the Milken Institute in Los Angeles, participants discussed the advantages and disadvantages of artificial intelligence and the ways it may affect the world. Personal assistants for searching the Internet and autopilot functions are already-existing, simplified forms of artificial intelligence that are rapidly becoming more sophisticated and functional. According to Stuart Russell, a professor of computer science, machines with artificial intelligence will replace many current professions that require physical labor or data analysis, such as those in the financial sector. Gurudutt Banavar of IBM believes that new professions will emerge in the future because everyone will have to work with AI; moreover, such work will demand new skills. Russell and Norvig (2003) suggest that artificial intelligence will change not only the economy but also everyday life. Work will consist of providing individual services to other people, so there will be no mass production or mass employment in the financial sector; the giant factories and large office buildings with thousands of workers will disappear. Thus, that future is already knocking on the door.
For this reason, a program about AI, “Artificial Intelligence Positioned to Be a Game-Changer,” aired on 60 Minutes, has been selected. However, it is necessary to give a precise scientific definition of AI. The definition of artificial intelligence provided by John McCarthy in 1956 at the Dartmouth conference was not directly related to the understanding of human intelligence. According to McCarthy and Hayes (1969), AI researchers are free to use methods that do not occur in people if this is needed to solve specific problems. Explaining his definition, McCarthy points out that the problem is that it has not yet been identified which computational procedures should be called intelligent. Some of the mechanisms of intelligence are understood, but the rest remain a mystery. Therefore, in terms of the science of cybernetics, intelligence refers only to the ability to achieve computational goals in the world.
At the same time, there is a point of view that intelligence can only be a biological phenomenon. The word ‘intelligence’ is closely related to the category of intellect. Thus, no precise definition of this science exists, since philosophy and psychology have not yet resolved the question of the nature and status of the human intellect. There is no precise criterion for when a computer becomes intelligent, although in the early days of artificial intelligence some hypotheses were proposed, such as the Turing test or the Newell-Simon hypothesis. In addition, in 1980, John Searle published a philosophical thought experiment called the “Chinese room,” which concerns the philosophy of mind and the philosophy of artificial intelligence (Searle, 2002). The purpose of the experiment was to refute the claim that a computer endowed with “artificial intelligence” by programming it in a certain way is able to be conscious in the same sense in which humans are. In other words, the goal was a refutation of the hypothesis of so-called “strong” artificial intelligence and a criticism of the Turing test. Today, there are many approaches to understanding the AI issue and the main problems of creating intelligent systems.
The psychological analysis of the thinking process involves accounting for human subjectivity and analyzing the motivational sphere, including the struggle of motives, learning processes, and the characterization of goals and their shifts. Any activity, in order to be defined as such, must include a purpose, means, and a result. Freedom of choice inevitably entails the freedom to choose the ways and means of achieving the result. The lack of any of these components, or even their rigid fixation, transforms human activity into something else, for example, into artificial intelligence. In other words, the intellectual, mental activity of a person cannot, in principle, be considered apart from his or her movement, perception, memory, and other forms of activity.
However, in the context of artificial intelligence, there is an important question about the self-learning of computer systems. For this task, it is worth mentioning the interesting developments in the field of modeling biological systems, which follow several different directions. An artificial neural network is a system of connected, interacting simple processors. Each such processor deals only with the signals it periodically receives and the signals it periodically sends to other processors. Yet, when connected into a large network, these simple processors, acting together locally, can perform various complicated tasks. Neural networks are used to solve fuzzy and difficult problems, such as recognizing geometric shapes or clustering objects. The genetic approach is based on the idea that an algorithm can become more efficient by borrowing the best characteristics from other algorithms. There is also a relatively new direction, the agent approach, which seeks to create a stand-alone program, an agent, that interacts with the external environment. Moreover, by making many “not very smart” agents interact properly, a “hive” intelligence can be obtained.
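The neural-network idea above can be illustrated with a minimal sketch: each “simple processor” merely weighs the signals it receives and emits a bounded signal onward, yet a few of them wired together perform a task no single unit could. The weights below are hand-chosen for illustration (a hypothetical 2-2-1 network computing XOR), not drawn from any source discussed in the text.

```python
import numpy as np

def neuron_layer(inputs, weights, biases):
    """Each 'simple processor' sums its weighted input signals and
    emits a bounded output signal (a sigmoid) to the next layer."""
    return 1.0 / (1.0 + np.exp(-(inputs @ weights + biases)))

# A tiny hand-wired 2-2-1 network computing XOR: no single unit can
# do it, but two hidden units (OR and NAND) combined with an AND can.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
hidden = neuron_layer(x, np.array([[20.0, -20.0],
                                   [20.0, -20.0]]),
                      np.array([-10.0, 30.0]))   # OR unit, NAND unit
output = neuron_layer(hidden, np.array([[20.0], [20.0]]),
                      np.array([-30.0]))         # AND of the two
print(np.round(output.ravel()))                  # [0. 1. 1. 0.]
```

The same local rule, repeated across many units, is what lets large networks tackle the fuzzy recognition problems mentioned above.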
One of the most important features of artificial and “natural” intelligence is the capacity for self-learning. Pattern recognition systems, in addition to their initially installed dictionaries and symbol tables, also include a process of recognizing and correcting errors, which are stored and become part of the system’s experience. This is very similar to training; however, the stimulation in such systems is severely limited. A word or symbol that was identified correctly is saved by default, but an error requires manual correction of the meaning and, possibly, additional reinforcement by training. For the system, it does not matter whether or not the object was identified correctly, yet negative emotions are needed for an effective learning process. Natural intellect works differently: correct actions are reinforced with a positive stimulus and mistakes with a negative one. Thus, in order to teach AI to learn, it would have to be programmed to feel negative and positive emotions.
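A toy sketch of this reward-and-punishment loop, assuming a hypothetical environment with two actions where the only feedback is a positive stimulus (+1) or a negative one (-1); the action names and success rates are invented for illustration:

```python
import random

def learn_with_feedback(env, episodes=500, epsilon=0.1, step=0.1, seed=0):
    """Choose actions, receive +1 (positive stimulus) or -1 (negative
    stimulus), and nudge each action's learned value toward its feedback."""
    rng = random.Random(seed)
    values = {action: 0.0 for action in env}   # learned preference per action
    for _ in range(episodes):
        if rng.random() < epsilon:             # occasionally explore
            action = rng.choice(list(env))
        else:                                  # otherwise pick the best so far
            action = max(values, key=values.get)
        reward = 1 if rng.random() < env[action] else -1
        values[action] += step * (reward - values[action])
    return values

# Two actions: 'a' succeeds 80% of the time, 'b' only 20%.
learned = learn_with_feedback({'a': 0.8, 'b': 0.2})
```

Without the negative signal, the system would have no reason to abandon a bad action, which is exactly the limitation of the recognition systems described above.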
However, in order to learn, AI needs to build a system of associations. Consisting of a series of reflexes, each of which is at the same time a feeling, the association itself is nothing other than a continuous sense. An association is a holistic reaction, similar to a visual, auditory, or tactile one, which usually lasts longer, and its nature is constantly changing. Like any individual feeling, an association becomes more fixed and pronounced through repetition. Compared with individual perceptions, associations are much more complex formations. Due to repetition, their constituent nerve processes become so closely related that the slightest agitation of one part results in the playback of the whole association. The basic law of associations is the law of contiguity, which makes it possible to explain the phenomena of memory, intellect, emotion, will, and involuntary action. Ideas or impressions that are usually encountered together, simultaneously or sequentially, become associated with each other, so that one idea leads to another. In addition, the recurrence of impressions is a sufficient basis for the emergence of different associations. Thus, all knowledge comes from experience; there are no inherent associations and no knowledge that a person possesses from birth. As a person matures and accumulates diverse sensory experience, increasingly complex mental connections between ideas and impressions appear, and the associations multiply. In maturity, a human reaches a higher level of mental activity. However, higher mental functions, such as thinking, reasoning, memory, and judgment, can be reduced to a set of elementary sensations. This approach to the issue of self-education has found application in artificial neural networks with associative memory. Similar to how human memory works, all of the information is retrieved from the AI’s memory by interacting with a part of that information.
Auto-associative memory is a type of memory that can complete or correct an image but cannot associate the resulting image with a different one. This is a consequence of the single-level structure of associative memory, in which the output vector is produced by the same neurons that receive the input vector; such networks are unstable. Hetero-associative memory is a type of memory in which a stimulus affects one set of neurons and the response comes from another. The first model of auto-associative memory was developed by Hopfield and is called the Hopfield network. Subsequently, the ideas of Hopfield’s model were developed into a model of hetero-associative memory, the bidirectional associative memory. However, without the support of emotions, associative memory governed by the law of contiguity does not enable learning and serves only as storage. In contrast to humans, whose memory and learning processes interpenetrate, AI is largely programmed to keep these processes separate.
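A minimal sketch of Hopfield-style auto-associative recall, assuming bipolar (+1/-1) patterns and the standard Hebbian outer-product learning rule; the stored pattern itself is arbitrary:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: units active together become connected."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no self-connections
    return W / len(patterns)

def recall(W, probe, steps=10):
    """Update every neuron from the whole current state until it settles."""
    state = probe.copy()
    for _ in range(steps):
        new_state = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Store one bipolar pattern, then present a corrupted probe: the network
# completes the partial image, illustrating recall from a part of the whole.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]              # flip one bit
print(recall(W, noisy))           # settles back on the stored pattern
```

Note how the same neurons that receive the probe also produce the output, which is exactly the single-level structure described above.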
In nature, there are two mechanisms for improving behavioral reactions. For the simplest organisms, this is basically the principle of natural selection: the individuals that respond most adequately to the environment are the most prolific, and their reactions and experience become fixed in the genetic code. For the most developed species and, accordingly, for the most complex behavioral responses, a mechanism governed by emotions becomes the main motivational system. It is not just physical pain, hunger, or the desire to breed, but the entire world of suffering and joy, which becomes the force that drives a person along the path of self-development. Thus, the development of AI requires either the influence of a motivational sphere or constant forward movement through the evolution of the most efficient models, under the supervision of a predictive system.
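The natural-selection mechanism described above is what a genetic algorithm imitates. A minimal sketch, assuming a hypothetical “environment” that simply scores how many bits of an individual match a target string (the target, population size, and mutation rate are all illustrative choices):

```python
import random

def evolve(target, pop_size=30, generations=200, seed=1):
    """Individuals whose bits best match the 'environment' (the target)
    survive and breed; their traits are fixed in the next generation."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:                 # perfectly adapted individual
            break
        survivors = pop[: pop_size // 2]         # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)            # crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # rare mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = evolve(target)
```

Here the scoring function plays the role of the predictive system: evolution supplies variation, while the score decides which models move forward.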
To conclude, it should be mentioned that the further development of artificial intelligence needs to draw on the experience of psychology. Today, artificial intelligence is not able to correct its mistakes effectively and has no motivation, the most important factor in individual learning. Despite the progress made in the field of AI over the past 50 years, discussions about the appearance of artificial general intelligence (AGI) are premature. Scientists are still trying to understand the principle of knowledge transfer; to this day, researchers do not know how to create an algorithm for learning a new skill. Like humans, machines have limitations. Therefore, they can ask people for help, for example, to press a button in an elevator or to open a door; such interaction with people is called symbiotic robot autonomy. However, humanity and artificial intelligence are inseparable, and in the future people and software will not be able to do without each other. Systems with artificial intelligence will not only deal with problems such as city traffic jams and climate prediction but will also help people make serious decisions. For this reason, psychology should take an active part in the development of artificial intelligence.