Dreams of an artificial intelligence spring

In the fall of 2022, the editors of Naked Science decided it was time to launch a series of articles about artificial intelligence. We begin with a long article whose subtitle could well be “a brief introduction to the history of AI: from medieval monks to deep learning.”

Artificial intelligence (AI): almost everyone knows these two words, yet few can give a clear definition or explain what the term actually means. Our ideas about AI are shaped more by Hollywood cinema than by any real understanding of the technology behind the word. Cinema, like any art, works on its audience through emotions, and the one that sells best is fear.

In “2001: A Space Odyssey,” the HAL 9000 computer seizes control of an interplanetary spacecraft. In “The Terminator,” the T-800 cyborg travels back in time to kill Sarah Connor. Among more recent examples: a neural implant subjugates the mind of its host in “Upgrade,” and in “Ex Machina” the gynoid Ava effortlessly manipulates a programmer brought in to run a reverse Turing test, kills her creator, and escapes into the outside world. The examples are countless. Reality, however, is very far from these cinematic images.

Humanity has traveled a long road: some 80 years of searching, mistakes, and dead ends, each ending in a “winter of artificial intelligence,” a period of disappointment in the technology’s capabilities and potential. Since the early 2010s, however, the field has been enjoying another “warming.” So while cinemas try to frighten the layman, large corporations and the governments of leading countries are investing billions in AI development, because right now it is changing everything, from scientific research to everyday life.

Yet even the most advanced modern systems are far from anything directors and screenwriters have dreamed up. As usual, ideas about the future may coincide with reality in the details, but never in the essentials. Let’s figure out what artificial intelligence is and briefly trace the main milestones in the history of its development.

Inventing the concept

The American mathematician John McCarthy (1927–2011) first became interested in computers in 1948, when he began attending a seminar on “Cerebral Mechanisms in Behavior,” where, among other things, participants discussed whether computers could ever think like humans. The topic fascinated him so much that much later, in the summer of 1956, he organized a two-month workshop at Dartmouth College (a private research university in New Hampshire, USA) with funding from the Rockefeller Foundation.

In the grant application, McCarthy formulated the workshop’s objectives:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The team McCarthy assembled was truly impressive. It included Claude Shannon, the father of information theory; Marvin Minsky, a future star of the mathematical foundations of artificial intelligence and the creator of frame theory; cognitive psychologist Allen Newell, a co-author of the Logic Theorist program, whose later versions learned to play chess and solve puzzles; and many, many other talented people.