The field is founded on the claim that a central property of human beings, intelligence, can be described so precisely that it can be simulated by a machine. [5] This raises philosophical questions about the nature of the mind and the limits of scientific hubris, questions that have been addressed by myth, fiction and philosophy since antiquity. [6] Artificial intelligence has been the subject of breathless optimism [7] and has suffered stunning setbacks. [8] Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. [9]
Artificial intelligence research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. [10] Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, long-standing differences of opinion about how artificial intelligence should be achieved, and the application of widely differing tools. The central problems of artificial intelligence include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. [11] General intelligence (or "strong AI") remains a long-term goal of (some) research. [12]
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. [13] Human likenesses believed to have intelligence were built in every major civilization: animated statues were worshipped in Egypt and Greece, [14] and humanoid automatons were built by Yan Shi, [15] Hero of Alexandria [16] and Al-Jazari. [17] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, [19] Judah Loew [20] and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). [22] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". [6] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are raised by artificial intelligence.
The problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention. [11]
Deduction, reasoning, problem solving
Early artificial intelligence researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles, play board games or make logical deductions. [39] By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability theory and economics. [40]
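As a hedged illustration of this kind of step-by-step search (a minimal Python sketch, not a reconstruction of any particular early system), the following program uses breadth-first search to solve the classic two-jug measuring puzzle by enumerating every reachable state:

```python
from collections import deque

def solve_jugs(cap_a=3, cap_b=5, goal=4):
    """Breadth-first search over (jug_a, jug_b) states.

    Enumerates every reachable state, step by step, until one jug
    holds the goal amount, then reconstructs the path taken.
    """
    start = (0, 0)
    parent = {start: None}            # state -> predecessor, for path recovery
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal or b == goal:
            path, state = [], (a, b)
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        pour_ab = min(a, cap_b - b)   # how much jug A can pour into jug B
        pour_ba = min(b, cap_a - a)   # how much jug B can pour into jug A
        successors = [
            (cap_a, b), (a, cap_b),               # fill one jug
            (0, b), (a, 0),                       # empty one jug
            (a - pour_ab, b + pour_ab),           # pour A into B
            (a + pour_ba, b - pour_ba),           # pour B into A
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None  # goal not reachable

print(solve_jugs())
# [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]
```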
For difficult problems, most of these algorithms can require enormous computational resources; most suffer from a "combinatorial explosion": the amount of memory or computer time required becomes astronomical once the problem exceeds a certain size. The search for more efficient problem-solving algorithms is a high priority for artificial intelligence research. [41]
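As a purely illustrative arithmetic sketch (the numbers are invented here, not taken from any cited source): a game with roughly b legal moves per position, searched d moves deep, generates on the order of b^d positions. With b = 10 and d = 20 that is already about 10^20 positions, far more than can be stored or examined exhaustively, which is what the combinatorial explosion refers to.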
Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. [42] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied approaches emphasize the importance of sensorimotor skills to higher reasoning, while neural network research attempts to simulate the structures inside human and animal brains that give rise to this skill.
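As a hedged illustration of the sub-symbolic, network-style approach (a toy Python sketch, not a model of any real neural circuit), the program below trains a single artificial neuron whose "knowledge" is held entirely in numeric weights rather than explicit symbols:

```python
import random

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single artificial neuron (a perceptron) on labelled examples.

    Each example is ((x1, x2), target) with target 0 or 1.  Learning
    consists only of nudging numeric weights, not manipulating symbols.
    """
    random.seed(0)  # fixed seed so the toy run is reproducible
    w1, w2, bias = random.random(), random.random(), random.random()
    for _ in range(epochs):
        for (x1, x2), target in data:
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            # Move each weight in proportion to its input and the error.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# Learn the logical OR function from its four possible inputs.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train_perceptron(examples)
for (x1, x2), target in examples:
    prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print((x1, x2), prediction, target)  # predictions match the targets
```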
General intelligence
Main articles: strong AI and AI-complete
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all of the skills above and exceeding human abilities in most or all of them. [12] Some believe that such a project may require anthropomorphic features such as artificial consciousness or an artificial brain. [74]
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires the machine to follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation is therefore considered AI-complete: it may require strong AI to be done as well as humans can do it. [75]
Approaches
There is no established unifying theory or paradigm that guides artificial intelligence research. Researchers disagree about many issues. [76] A few of the longest-standing questions that remain unanswered are these: Should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? [77] Can intelligent behavior be described using simple, elegant principles, such as logic or optimization? Or does it necessarily require solving a large number of completely unrelated problems? [78] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing? [79]
Cybernetics and brain simulation
Main articles: cybernetics and computational neuroscience
There is no consensus on how closely the brain should be simulated. In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. [24] By 1960 this approach was largely abandoned, although elements of it were revived in the 1980s.
How can one determine whether an agent is intelligent? In 1950, Alan Turing proposed a general procedure for testing the intelligence of an agent, now known as the Turing test. This procedure allows almost all of the major problems of artificial intelligence to be tested. However, it is a very difficult challenge, and at present all agents fail.
Artificial intelligence can also be evaluated on specific problems, such as small problems in chemistry, handwriting recognition and game playing. Such tests have been termed subject-matter-expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
The broad classes of outcome for an AI test are:
Optimal: it is not possible to perform better.
Strong super-human: performs better than all humans.
Super-human: performs better than most humans.
Sub-human: performs worse than most humans.
For example, performance at checkers (draughts) is optimal, [143] performance at chess is super-human and nearing strong super-human, [144] and performance at many everyday tasks performed by humans is sub-human.
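Read as a decision rule, these labels could be assigned programmatically. The following Python sketch is purely illustrative; the predicate names are invented for this example and do not come from the source or from any benchmark API:

```python
from enum import Enum

class Outcome(Enum):
    """Broad outcome classes for an AI test, as listed above."""
    OPTIMAL = "not possible to perform better"
    STRONG_SUPER_HUMAN = "performs better than all humans"
    SUPER_HUMAN = "performs better than most humans"
    SUB_HUMAN = "performs worse than most humans"

def classify(provably_unbeatable: bool, beats_all_humans: bool,
             beats_most_humans: bool) -> Outcome:
    # Hypothetical helper: the three predicates summarize benchmark results.
    if provably_unbeatable:
        return Outcome.OPTIMAL
    if beats_all_humans:
        return Outcome.STRONG_SUPER_HUMAN
    if beats_most_humans:
        return Outcome.SUPER_HUMAN
    return Outcome.SUB_HUMAN

# Matching the examples above: checkers play is optimal, chess play is
# super-human (approaching strong super-human).
print(classify(True, True, True).name)     # OPTIMAL
print(classify(False, False, True).name)   # SUPER_HUMAN
```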
A quite different approach measures machine intelligence through tests developed from mathematical definitions of intelligence. Examples of such tests began appearing in the late 1990s, devising intelligence tests using notions from Kolmogorov complexity and data compression. A similar definition of machine intelligence was put forward by Marcus Hutter in his book Universal Artificial Intelligence (Springer 2005) and was further developed by Legg and Hutter. [147] One advantage of mathematical definitions is that they can be applied to non-human intelligences and in the absence of human testers.
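One widely cited formalization along these lines is Legg and Hutter's "universal intelligence" measure. Sketched here as a paraphrase of their published definition (not a quotation from this article, and with notation simplified):

\[ \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi} \]

where E is the class of computable environments, K(μ) is the Kolmogorov complexity of environment μ (so that simpler environments receive more weight), and V_μ^π is the expected cumulative reward the agent π obtains in environment μ.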
Artificial intelligence is a common topic in both science fiction and projections about the future of technology and society. The existence of an artificial intelligence that rivals human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears.
Mary Shelley's Frankenstein [160] considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human being? The idea also appears in modern science fiction: the film A.I. Artificial Intelligence considers a machine in the form of a small boy which has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, [161] although many critics believe that the discussion is premature. [162]
Another issue explored by both science fiction writers and futurists is the impact of artificial intelligence on society. In fiction, AI has appeared as a servant (R2-D2 in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek), a conqueror (The Matrix), a dictator (With Folded Hands), an exterminator (The Terminator, Battlestar Galactica) and an extension of human abilities (Ghost in the Shell). Academic sources have considered such consequences as a decreased demand for human labor, [163] the enhancement of human ability or experience, [164] and the need to redefine human identity and basic values. [165]
Several futurists argue that artificial intelligence will transcend the limits of progress and fundamentally transform humanity. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology with surprising accuracy) to calculate that desktop computers will have the same processing power as human brains by 2029, and that by 2045 artificial intelligence will reach a point where it can improve itself at a rate that far exceeds anything conceivable, a scenario that science fiction writer Vernor Vinge named the "technological singularity". [164] Edward Fredkin argues that "artificial intelligence is the next stage in evolution", [166] an idea first proposed in Samuel Butler's "Darwin among the Machines" (1863) and expanded upon by George Dyson in his book of the same name in 1998. Several futurists and science fiction writers have predicted that humans and machines will merge in the future into cyborgs more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger, and is now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil. [164] Transhumanism also appears in fiction, for example in the manga Ghost in the Shell and the science-fiction series Dune. Pamela McCorduck writes that these scenarios are expressions of the ancient human desire to, as she calls it, "forge the gods". [6]