At that time, there were three main schools of artificial intelligence. ① Symbolism, also known as logicism, the psychological school, or the computer school, rests mainly on the physical symbol system hypothesis and the principle of bounded rationality. This school holds that artificial intelligence originates from mathematical logic; even after other schools emerged, symbolism remained the mainstream of artificial intelligence research. Its representative figures are Newell, Shaw, Simon, and Nilsson. ② Connectionism, also known as bionics or the physiological school, is based mainly on neural networks and on the connection mechanisms and learning algorithms among them. This school holds that artificial intelligence originates from bionics, especially from the study of models of the human brain. From models to algorithms, and from theoretical analysis to engineering realization, it laid a solid foundation for bringing neural network computers to market. ③ Behaviorism, also known as evolutionism or cybernetics, is based on cybernetics and on perception-action control systems; this school holds that artificial intelligence originates from cybernetics. The three schools hold different views on the history of artificial intelligence's development.
Simon's most fundamental contribution to artificial intelligence was to put forward the Physical Symbol System Hypothesis (PSSH); in this sense he is one of the founders and representative figures of the symbolist school. His basic view is that the basic element of knowledge is the symbol, that intelligence rests on knowledge, and that the research method is to simulate the macro-level functions of the human brain using computer software and psychological methods. Symbolism rests mainly on two basic principles: ① the physical symbol system hypothesis, and ② Simon's principle of bounded rationality. This theory encouraged people to explore artificial intelligence comprehensively. Simon argued that any physical symbol system that exhibits intelligence must be able to perform six operations: input, output, storage, copying, conditional transfer, and the construction of symbol structures; conversely, any system that can perform these six operations will exhibit intelligence. From this hypothesis the following conclusions can be drawn: humans are intelligent, so humans are physical symbol systems; a computer is a physical symbol system, so it can have intelligence; a computer can therefore simulate humans, including the functions of the human brain.
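To make the six operations concrete, the following toy Python sketch gives one possible reading of them. The class, its method names, and the small example at the end are purely illustrative assumptions, not Simon's formalism.

```python
# A toy illustration (not Simon's formalism) of the six operations that the
# Physical Symbol System Hypothesis says an intelligent system must support.
class ToySymbolSystem:
    def __init__(self):
        self.memory = {}                          # named symbol structures

    def input(self, name, symbols):               # 1. input symbols from outside
        self.memory[name] = list(symbols)

    def output(self, name):                        # 2. output a stored structure
        return list(self.memory[name])

    def store(self, name, symbols):                # 3. storage over time
        self.memory[name] = list(symbols)

    def copy(self, src, dst):                      # 4. copy a structure
        self.memory[dst] = list(self.memory[src])

    def conditional_transfer(self, name, symbol, if_present, if_absent):
        # 5. conditional transfer: branch on whether a symbol occurs in a structure
        return if_present() if symbol in self.memory[name] else if_absent()

    def build_structure(self, name, *parts):       # 6. establish a new symbol structure
        self.memory[name] = [s for p in parts for s in self.memory[p]]


system = ToySymbolSystem()
system.input("facts", ["socrates", "is", "a", "man"])
system.copy("facts", "backup")
system.conditional_transfer("facts", "man",
                            if_present=lambda: print("rule fires"),
                            if_absent=lambda: print("rule skipped"))
```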
In 1956, Simon, Newell, and another famous scholar, John Shaw, successfully developed the world's earliest heuristic program, the Logic Theorist (LT), taking the machine's first step in logical reasoning. In the computer laboratory of Carnegie Mellon University, Simon and Newell began by analyzing the skills humans use to solve mathematical problems: they had subjects think carefully about various mathematical problems, asking them not only to write down answers but also to describe their own reasoning methods and steps. Through a large number of such examples, Simon and Newell collected a wide range of approaches to solving general problems. They found that when people solve mathematical problems they usually proceed by trial and error, and that when trying they do not enumerate all possibilities but use logical reasoning to narrow the search space quickly. Humans follow similar rules of thought when proving mathematical theorems: a complex problem is decomposed into several simpler subproblems, known constants are substituted for unknown variables, and tentative reasoning is carried out with known axioms, theorems, or problem-solving rules until all the subproblems become known; then, using the axioms and already-proved theorems held in memory, the subproblems are solved by substitution and replacement, and the whole problem is finally solved. Human verification of mathematical theorems is thus also a heuristic search, similar in principle to computer chess. On this basis they set the Logic Theorist program against mathematical theorems, establishing a heuristic search method for machine theorem proving, and used a computer to prove 38 of the 52 theorems in Chapter 2 of Russell and Whitehead's mathematical masterpiece Principia Mathematica. (In 1963, an improved Logic Theorist program running on a larger computer finally completed the proofs of the whole of Chapter 2.)
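The search style described above, decomposing a goal into subproblems and cutting off unpromising branches, can be sketched as a tiny backward-chaining prover. The propositional "axioms" and rules below are invented for illustration and have nothing to do with the actual contents of Principia Mathematica or the real Logic Theorist rules.

```python
# Minimal backward-chaining sketch of the search style described above:
# a goal is either matched against known axioms/theorems or decomposed
# into subgoals, depth-first, with a cut-off that keeps the search narrow.
AXIOMS = {"p", "p -> q", "q -> r"}            # illustrative, not from Principia
RULES = {                                      # goal: subgoals that together imply it
    "q": ["p", "p -> q"],
    "r": ["q", "q -> r"],
}

def prove(goal, depth=5):
    """Return a list of steps proving `goal`, or None if no proof is found."""
    if goal in AXIOMS:                         # subproblem already known
        return [goal]
    if depth == 0 or goal not in RULES:
        return None                            # heuristic cut-off narrows the search
    proof = []
    for sub in RULES[goal]:                    # decompose into simpler subproblems
        sub_proof = prove(sub, depth - 1)
        if sub_proof is None:
            return None
        proof.extend(sub_proof)
    return proof + [goal]

print(prove("r"))   # ['p', 'p -> q', 'q', 'q -> r', 'r']
```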
Based on this success, Simon and Newell extended the Logic Theorist approach to the process by which humans solve general problems, envisaging the use of machines to simulate human thinking activities of universal significance. The Logic Theorist was highly praised: it was regarded as the first real achievement in exploring human intelligence with computers, and as the first practical proof of Turing's assertion that machines can be intelligent. In the course of developing the Logic Theorist, Simon first proposed and successfully applied the "list" as a basic data structure, and designed and implemented the list-processing language IPL (Information Processing Language). In the history of artificial intelligence, IPL is the ancestor of all list-processing languages and the earliest language to use recursive subroutines. Its basic element is the symbol, and it introduced list-processing methods for the first time. The most basic data structure of IPL is the list, which can take the place of storage addresses or regular arrays, freeing programmers from tedious details and letting them think about problems at a higher level. Another feature of IPL is the introduction of the generator, which produces one value at a time and then suspends, waiting to be called again, at which point it resumes from where it was suspended. Many early artificial intelligence programs were written in list-processing languages, and these languages themselves went through a process of development and refinement; the last version, IPL-V, could handle lists with tree structure.
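IPL itself is a much older, assembly-like language, so the sketch below uses a modern Python generator simply to illustrate the two ideas just described: a routine that yields one value, suspends, and resumes from where it stopped, and a nested list (tree) structure. All names and the tiny tree are assumptions made for illustration.

```python
# Illustration in Python of two IPL ideas described above: a generator that
# yields one value, suspends, and resumes where it left off, and a tree built
# from list structures rather than fixed storage addresses.
def successors(node, tree):
    """Yield children of `node` one at a time, suspending between calls."""
    for child in tree.get(node, []):
        yield child            # produce one value, then wait to be resumed

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}   # tree as nested lists

gen = successors("root", tree)
print(next(gen))   # 'a'  -- the generator suspends here
print(next(gen))   # 'b'  -- it resumes from the point of suspension
```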
In the summer of 1956, dozens of scholars from fields such as mathematics, psychology, neurology, computer science, and electrical engineering gathered at Dartmouth College in Hanover, New Hampshire, USA, to discuss how to simulate human behavior with computers, and, on the suggestion of J. McCarthy (winner of the 1971 Turing Award), the field was formally established under the name "artificial intelligence". The convening of this conference marks the official birth of artificial intelligence as a discipline. Herbert Simon pointed out that artificial intelligence research is about learning how to program computers to perform the intelligent behavior of humans. The Logic Theorist that Simon brought to the meeting was the only artificial intelligence program that actually worked at the time, and it aroused great interest and attention among the delegates. Simon and Newell, together with the conveners of the Dartmouth Conference, McCarthy and Minsky (M. L. Minsky, winner of the 1969 Turing Award), are therefore recognized as the founders of artificial intelligence. The four of them formed the first artificial intelligence research group in 1960, which effectively promoted the development of artificial intelligence.
In 1960, Simon and his wife conducted an interesting psychological experiment. It showed that human problem solving is a search process whose efficiency depends on heuristic functions. On the basis of this experiment, Simon, Newell, and Shaw successfully developed the General Problem Solver (GPS), which could solve 11 different types of problems. The basic principle of this system is to find the difference between the goal and the current situation and to choose operations that help eliminate that difference, gradually narrowing it until the goal is reached (a sketch follows below). Simon repeatedly stressed that scientific discovery is only a special type of problem solving and can therefore also be realized by computer programs. From 1976 to 1983, Simon cooperated with Pat W. Langley and Gary L. Bradshaw to design six versions of the BACON discovery system, which rediscovered a series of famous laws of physics and chemistry, supporting Simon's argument. This opened up the large field of "problem solving" in artificial intelligence.
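The difference-reduction principle just described can be sketched in a few lines of means-ends analysis. The state representation, the two operators, and the tea-making example are invented for illustration; they are not GPS's own encoding, and the state update is deliberately simplified.

```python
# Minimal means-ends-analysis sketch in the spirit of the General Problem
# Solver: compare the goal with the current state, pick an operator whose
# effects remove part of the remaining difference, achieve its preconditions
# first, and repeat until no difference is left.
OPERATORS = {
    "boil_water": ({"have_kettle"}, {"hot_water"}),   # (preconditions, additions)
    "add_tea":    ({"hot_water"},   {"tea_ready"}),
}

def solve(state, goal, depth=5):
    """Return a plan turning `state` into a superset of `goal`, or None."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    diff = goal - state
    for name, (pre, add) in OPERATORS.items():
        if add & diff:                                 # operator removes part of the difference
            sub_plan = solve(state, pre, depth - 1)    # first achieve its preconditions
            if sub_plan is None:
                continue
            new_state = state | pre | add              # simplified state update (no deletions)
            rest = solve(new_state, goal, depth - 1)   # then remove the remaining differences
            if rest is not None:
                return sub_plan + [name] + rest
    return None

print(solve({"have_kettle"}, {"tea_ready"}))           # ['boil_water', 'add_tea']
```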
Simon studied computer chess from the time he turned to computer technology. In 1966, Simon, Newell, and Baylor cooperated to develop the earliest chess program, MATER. In 1997, after IBM's "Deep Blue" computer defeated world chess champion Garry Kasparov, the 81-year-old Simon, together with T. Munakata, an artificial intelligence expert at Ohio State University, published the article "Lessons of Artificial Intelligence" in the August issue of Communications of the ACM. The paper comments on the event and points out that a chess program running on a computer had reached a rating of 2600 points, roughly equivalent to Kasparov's level.
Simon's other great contribution to artificial intelligence was to develop and refine the concept and methods of the semantic network as a general means of knowledge representation, with great success. The semantic network is an important and effective method of knowledge representation. It was proposed by M. R. Quillian in the late 1960s as an explicit psychological model of human associative memory. In developing the TLC system, Quillian used it to describe the meanings of English words and to simulate human associative memory. The use of semantic networks as a general method of knowledge representation was essentially worked out by Simon around 1970 in the course of his research on natural language understanding. In the mid-1970s, Simon cooperated with the CAD expert C. M. Eastman on the automatic spatial synthesis of residential buildings, which not only created the "intelligent building" but also marked the beginning of research on intelligent CAD, namely ICAD.
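As a rough illustration of how a semantic network answers queries by association and inheritance, the sketch below stores concepts as nodes with labelled relations and walks "isa" links upward to inherit properties. The "canary" example and the relation names are illustrative assumptions, not Quillian's TLC data.

```python
# Toy semantic network: concepts are nodes, labelled edges are relations, and
# a simple walk up "isa" links answers property queries by inheritance.
NETWORK = {
    "canary": {"isa": "bird", "color": "yellow"},
    "bird":   {"isa": "animal", "can": "fly"},
    "animal": {"has": "skin"},
}

def lookup(concept, relation):
    """Follow 'isa' links upward until the relation is found (or exhausted)."""
    while concept is not None:
        props = NETWORK.get(concept, {})
        if relation in props:
            return props[relation]
        concept = props.get("isa")                 # inherit from the superclass
    return None

print(lookup("canary", "can"))    # 'fly'  (inherited from 'bird')
print(lookup("canary", "has"))    # 'skin' (inherited from 'animal')
```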
DSS (Decision Support Systems) originated in the late 1960s and early 1970s and have received great attention. The core of the concept is the theory of decision-making models, which was also laid down by Simon. Besides the Bayesian model, another important theoretical model for decision-making under uncertainty is the expected-value-maximization model based on the von Neumann-Morgenstern utility function. In his book Models of Man, Simon formed the idea that electronic computers can simulate human thinking, and began a series of studies on artificial intelligence. In contrast to the utility-maximization model, Simon proposed the bounded rationality model. Its basic ideas are: first, decision makers consider only a limited range of alternatives; second, we cannot assign exact probabilities to future events, though it is better to have a general notion of them; third, since the latter cannot be derived from the former, our aspirations in one field may be completely different from those in another; finally, we pay more attention to collecting information than to analyzing requirements, and after collecting information the most common choice is made by intuition. Based on Simon's decision-model theory, P. G. Keen proposed a design method called the "adaptive method". A decision support system is regarded as an adaptive system consisting of three technical levels: the DSS application system, the DSS generator, and DSS tools. It is run by decision makers and can adapt to changes over time. Simon praised such a system for being able "to adapt to changes over three time ranges: in short-term operation, the system can seek answers within a relatively narrow range; in medium-term operation, the system can learn to adapt by modifying its functions and activities; in long-term operation, the system can evolve to accommodate very different behavior styles and functions." These studies closely link computer technology with management decision-making.
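The contrast between the expected-value-maximization model and Simon's bounded rationality ("satisficing") can be sketched as follows. The options, probabilities, payoffs, and aspiration level are invented for illustration, and the identity utility function is only a placeholder for a von Neumann-Morgenstern utility.

```python
# Sketch contrasting the two decision models mentioned above: maximizing
# expected utility over all alternatives, versus a bounded-rationality search
# that stops at the first alternative meeting an aspiration level.
options = {                       # option: list of (probability, payoff) outcomes
    "A": [(0.5, 10), (0.5, 0)],
    "B": [(0.9, 4),  (0.1, 2)],
    "C": [(1.0, 6)],
}

def expected_utility(outcomes, u=lambda x: x):
    """Expected utility under a (placeholder) utility function u."""
    return sum(p * u(x) for p, x in outcomes)

# Classical model: evaluate every alternative, take the maximum expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))

def satisfice(options, aspiration):
    """Bounded rationality: accept the first option clearing the aspiration level."""
    for name, outcomes in options.items():
        if expected_utility(outcomes) >= aspiration:
            return name
    return None

print(best)                     # 'C' (expected utility 6.0)
print(satisfice(options, 3.5))  # 'A' (first option meeting the aspiration level)
```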