History of AI

 [1943-1955] The gestation of artificial intelligence 

○ McCulloch & Pitts: model of artificial neurons 

○ Hebb: a simple updating rule for modifying the connection strengths between neurons (Hebbian learning); a minimal sketch appears after this list. 

○ Two undergraduate students at Harvard, Marvin Minsky and Dean Edmonds, built the first neural network computer in 1950. The SNARC, as it was called, used 3000 vacuum tubes and a surplus automatic pilot mechanism from a B-24 bomber to simulate a network of 40 neurons. 

○ Alan Turing gave lectures on the topic as early as 1947 at the London Mathematical Society and articulated a persuasive agenda in his 1950 article "Computing Machinery and Intelligence." Therein, he introduced the Turing Test, machine learning, genetic algorithms, and reinforcement learning. He proposed the Child Programme idea, asking, "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?"
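
A minimal sketch, in Python, of the Hebbian rule mentioned above: the strength of a connection grows in proportion to the product of the two units' activations. The learning rate and the toy activation values are illustrative assumptions, not Hebb's original formulation.

    def hebbian_update(w, pre, post, lr=0.1):
        """Return the new connection strength: w + lr * pre * post."""
        return w + lr * pre * post

    w = 0.0
    for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # toy activation pairs
        w = hebbian_update(w, pre, post)
    print(w)  # only the two co-active trials strengthen the connection -> 0.2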


 [1956] The birth of artificial intelligence 

○ John McCarthy moved to Dartmouth College. He convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. 

○ “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” 

○ Newell and Simon's Logic Theorist reasoning program was able to prove most of the theorems in Chapter 2 of Russell and Whitehead's Principia Mathematica.


[1952-1969] Early enthusiasm, great expectations 

○ General Problem Solver (GPS). This program was designed from the start to imitate human problem-solving protocols. Within the limited class of puzzles it could handle, it turned out that the order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. 

○ Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky. 

○ Arthur Samuel's series of programs for checkers (draughts) eventually learned to play at a strong amateur level. 

○ McCarthy defined the high-level language Lisp, which was to become the dominant AI programming language for the next 30 years. 

○ Minsky: microworlds, limited domain problems (calculus integration, geometric analogy problems, blocks world). 

○ “Machines will be capable, within twenty years, of doing any work that a man can do.” (Herbert Simon, 1965)



 [1966-1973] A dose of reality 

○ Simon stated that within 10 years a computer would be chess champion, and a significant mathematical theorem would be proved by machine. These predictions came true (or approximately true) within 40 years rather than 10. 

○ (1957) Early machine translation efforts. It was thought initially that simple syntactic transformations based on the grammars of Russian and English, and word replacement from an electronic dictionary, would suffice to preserve the exact meanings of sentences. 

○ "the spirit is willing but the flesh is weak" translated as “водка хорошая, но мясо протухло" (“The vodka is good but the meat is rotten") 

○ The new back-propagation learning algorithms for multilayer networks that were to cause an enormous resurgence in neural-net research in the late 1980s were actually first discovered in 1969 (Bryson and Ho).


 [1974-1980] First AI winter 

○ (1966) A report by an advisory committee (ALPAC) found that "there has been no machine translation of general scientific text, and none is in immediate prospect." All U.S. government funding for academic translation projects was canceled. 

○ (1969) Minsky and Papert's book Perceptrons proved that single-layer perceptrons could represent very little; for example, a perceptron cannot represent the XOR function (see the sketch after this list). Although their results did not apply to more complex, multilayer networks, funding for neural-net research soon dwindled to almost nothing. 

○ (1973) Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives" and concluded that nothing being done in AI could not be done in other sciences. He specifically mentioned the problem of "combinatorial explosion", or intractability, which implied that many of AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions of them. The British government ended support for AI research in all but two universities, and research would not revive on a large scale until 1983.
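
The perceptron limitation noted above can be seen in a small sketch. The following Python code (an illustrative toy, not Minsky and Papert's actual proof) trains a single-layer perceptron on the XOR truth table; because XOR is not linearly separable, the error count never reaches zero.

    def predict(w, b, x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
    w, b = [0.0, 0.0], 0.0
    for _ in range(100):                      # perceptron learning rule
        for x, target in data:
            err = target - predict(w, b, x)
            w = [w[i] + 0.1 * err * x[i] for i in range(2)]
            b += 0.1 * err
    errors = sum(predict(w, b, x) != t for x, t in data)
    print(errors)  # stays above 0: no line separates XOR's two classes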


[1969-1979] Knowledge-based systems 

○ Use domain-specific knowledge instead of a general-purpose search mechanism (the "weak method") that tries to string together elementary reasoning steps to find complete solutions. Domain-specific knowledge allows larger reasoning steps and more easily handles the typically occurring cases in narrow areas of expertise. 

○ The DENDRAL program solved the problem of inferring molecular structure from the information provided by a mass spectrometer. 

○ Expert systems proliferate. 

○ MYCIN diagnosed blood infections. With about 450 rules, MYCIN was able to perform as well as some experts, and considerably better than junior doctors.
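
As a toy illustration of the rule-based style these systems used, here is a minimal forward-chaining engine in Python. The rules and facts are invented for illustration; they are not MYCIN's actual rules, which also attached certainty factors to conclusions.

    # Invented toy rules: (set of required facts, derived conclusion).
    rules = [
        ({"gram_negative", "rod_shaped"}, "enterobacteriaceae"),
        ({"enterobacteriaceae", "lactose_fermenting"}, "e_coli_suspected"),
    ]
    facts = {"gram_negative", "rod_shaped", "lactose_fermenting"}

    changed = True
    while changed:                # fire rules until no new facts appear
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    print(sorted(facts))          # now includes both derived conclusions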


 [1980-present] AI becomes an industry 

○ The first successful commercial expert system, R1, began operation at the Digital Equipment Corporation (McDermott, 1982). The program helped configure orders for new computer systems; by 1986, it was saving the company an estimated $40 million a year. By 1988, DEC's AI group had 40 expert systems deployed, with more on the way. 

○ DuPont had 100 in use and 500 in development, saving an estimated $10 million a year. 

○ Nearly every major U.S. corporation had its own AI group and was either using or investigating expert systems. 

○ In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog. 

○ In response, the United States formed the Microelectronics and Computer Technology Corporation (MCC) as a research consortium designed to assure national competitiveness. In both cases, AI was part of a broad effort, including chip design and human-interface research. 

○ In Britain, the Alvey report reinstated the funding that was cut by the Lighthill report.


 [1986-present] The return of neural networks 

○ In the mid-1980s, at least four different groups reinvented the back-propagation learning algorithm first discovered in 1969; a minimal sketch of the algorithm appears after this list. 

○ These so-called connectionist models of intelligent systems were seen by some as direct competitors both to the symbolic models promoted by Newell and Simon and to the logicist approach of McCarthy and others. 

○ The current view is that connectionist and symbolic approaches are complementary, not competing. As occurred with the separation of AI and cognitive science, modern neural network research has bifurcated into two fields, one concerned with creating effective network architectures and algorithms and understanding their mathematical properties, the other concerned with careful modeling of the empirical properties of actual neurons and ensembles of neurons.
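
For concreteness, here is a minimal Python sketch of back-propagation training a tiny two-layer network on XOR, the function a single-layer perceptron cannot represent. The architecture, initialization, and learning rate are illustrative choices, not a reconstruction of any historical system.

    import math, random

    random.seed(1)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    H = 3   # hidden units; 2 suffice in principle, 3 trains more reliably
    w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
    b1 = [random.uniform(-1, 1) for _ in range(H)]
    w2 = [random.uniform(-1, 1) for _ in range(H)]
    b2 = 0.0
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
    lr = 0.5

    for _ in range(20000):
        for x, t in data:
            # forward pass
            h = [sigmoid(w1[j][0]*x[0] + w1[j][1]*x[1] + b1[j]) for j in range(H)]
            y = sigmoid(sum(w2[j]*h[j] for j in range(H)) + b2)
            # backward pass: propagate the error signal layer by layer
            dy = (y - t) * y * (1 - y)
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
            for j in range(H):
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh[j] * x[0]
                w1[j][1] -= lr * dh[j] * x[1]
                b1[j] -= lr * dh[j]
            b2 -= lr * dy

    for x, t in data:
        h = [sigmoid(w1[j][0]*x[0] + w1[j][1]*x[1] + b1[j]) for j in range(H)]
        print(x, t, round(sigmoid(sum(w2[j]*h[j] for j in range(H)) + b2), 2))
        # outputs should end up close to the XOR targets 0, 1, 1, 0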


[1987-early 1990s] Second AI winter 

○ The collapse of the Lisp machine market in 1987. Specialized computers, called Lisp machines, were optimized to run the programming language Lisp, the preferred language for AI. Workstations from companies like Sun Microsystems offered a powerful alternative to Lisp machines, and companies like Lucid offered a Lisp environment for this new class of workstations. 

○ By the early 1990s, the earliest successful expert systems had proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to difficulties such as the qualification problem. Expert systems proved useful, but only in a few special contexts. 

○ By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation computer project had not been met. Funding for similar American projects was cut as well.


[1987-present] AI adopts the scientific method 

○ AI's earlier isolationism is being abandoned. There is a recognition that machine learning should not be isolated from information theory, that uncertain reasoning should not be isolated from stochastic modeling, that search should not be isolated from classical optimization and control, and that automated reasoning should not be isolated from formal methods and static analysis. 

○ To be accepted, hypotheses must be subjected to rigorous empirical experiments, and the results must be analyzed statistically for their significance (in contrast to earlier ad hoc methods). It is now possible to replicate experiments by using shared repositories of test data and code. 

○ Machine learning and data mining emerged as separate fields.


[1995-present] The emergence of intelligent agents 

○ One of the most important environments for intelligent agents is the Internet. AI systems have become so common in Web-based applications that the "-bot" suffix has entered everyday language. 

○ Realization that the previously isolated subfields of AI might need to be reorganized somewhat when their results are to be tied together. AI has been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents. 

○ Artificial General Intelligence (AGI) looks for a universal algorithm for learning and acting in any environment.

[2001-present] The availability of very large data sets 

○ Throughout the 60-year history of computer science, the emphasis has been on the algorithm as the main subject of study. But some recent work in AI suggests that for many problems, it makes more sense to worry about the data and be less picky about which algorithm to apply. 

○ More data can give a significant improvement in quality with the same algorithm. This is one of the rationales behind the big data trend. 

○ Many works suggest that the "knowledge bottleneck" in AI (the problem of how to express all the knowledge that a system needs) may be solved in many applications by learning methods rather than hand-coded knowledge engineering, provided the learning algorithms have enough data to go on.


[2010-present] Deep-learning revolution 

○ Significant advances in machine learning, especially deep learning (neural networks) 

○ Speech recognition and computer vision are dominated by deep learning. 

○ More details in the next section.
