A Brief History of Artificial Intelligence

History of Artificial Intelligence

The history of artificial intelligence dates back to antiquity, when philosophers contemplated the idea of artificial beings, mechanical men, and other automata that might one day exist.

The creation of intelligent machines has fascinated humanity for centuries, and its evolution has been nothing short of remarkable. From early theoretical reasoning to modern advancements in machine learning and robotics, the quest for creating intelligent machines has been a driving force in the world of technology.

Thanks to these early thinkers, the idea of artificial intelligence became increasingly tangible from the 18th century onward, as philosophers considered how human thought might be mechanized and reproduced by non-human machines.

The line of thinking that fueled interest in AI began when classical philosophers, mathematicians, and logicians considered the mechanical manipulation of symbols, which eventually led to the first electronic digital computers of the 1940s, such as the Atanasoff-Berry Computer (ABC).

These machines inspired scientists to pursue the idea of creating an "electronic brain": an artificially intelligent being.

It took nearly another decade, however, before AI began to take shape as a field resembling the one we know today.

In 1950, the mathematician Alan Turing proposed a test of whether a machine could imitate human behavior in conversation so well as to be indistinguishable from a person. Later that decade, the field of AI research was founded at a summer workshop held at Dartmouth College in 1956, where the computer and cognitive scientist John McCarthy coined the term "artificial intelligence."

From the 1950s to the present day, countless scientists, programmers, logicians, and theorists have contributed to solidifying our modern understanding of artificial intelligence.

Each decade has seen innovations and revolutionary discoveries that have radically changed our conception of AI, transforming it from a distant fantasy into a tangible reality for present and future generations.

In broad strokes, the evolution of AI has unfolded as follows:

The Birth of Theoretical Artificial Intelligence (1940 onwards)

The period between 1940 and 1960 was strongly marked by the conjunction of technological developments (of which World War II was an accelerator) and the desire to understand how the functioning of machines and living organisms could be brought together.

For Norbert Wiener, a pioneer of cybernetics, the goal was to unify mathematical theory, electronics, and automation as "a comprehensive theory of control and communication, both in animals and machines."

Shortly before, in 1943, Warren McCulloch and Walter Pitts had developed the first mathematical and computational model of the biological neuron, the formal neuron.

In the early 1950s, John von Neumann and Alan Turing became the founding fathers of the technology behind AI, moving from the 19th-century decimal logic of calculating machines (which handled values from 0 to 9) to the binary logic of modern computers (based on Boolean algebra, handling 0 or 1).

The two researchers thus formalized the architecture of our contemporary computers and demonstrated that it was a universal machine capable of executing whatever it is programmed to do. Turing, for his part, first raised the question of a machine's possible intelligence in his famous 1950 article "Computing Machinery and Intelligence," describing an "imitation game" in which a human must determine, through a text-only (teletype) dialogue, whether they are conversing with a man or a machine.

Despite the controversy surrounding this article (for many experts, the "Turing test" does not qualify as a true measure of machine intelligence), it is often cited as the origin of the debate on the boundary between humans and machines.

The term "AI" is attributed to John McCarthy of MIT (Massachusetts Institute of Technology), while Marvin Minsky, also of MIT, defined it as "the creation of computer programs that perform tasks currently done more satisfactorily by humans because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning."

The summer 1956 conference at Dartmouth College is considered the founding event of the discipline. Anecdotally, what is remembered as a great success was not really a conference but rather a workshop: only six people, including McCarthy and Minsky, remained consistently present throughout the work, which relied essentially on formal logic.

Although the technology remained fascinating and promising, its popularity declined in the early 1960s: machines had very little memory, which made it difficult to use a computer language.

Back in 1957, the renowned economist and sociologist Herbert Simon had prophesied that artificial intelligence would be able to beat a human at chess within the next 10 years.

However, AI then entered a long winter, and Simon had to wait 40 years before his vision came true.

Artificial Intelligence Becomes a Reality (1970 onwards)

In 1968, Stanley Kubrick directed the film "2001: A Space Odyssey," in which a computer - HAL 9000 - embodies all the ethical questions raised by AI: would such a highly sophisticated machine be a benefit to humanity or a danger?

The film's impact was not scientific, of course, but it helped popularize the theme, as did the science fiction author Philip K. Dick, who never stopped wondering whether machines would one day experience emotions.

It was with the advent of the first microprocessors in the late 1970s that AI took off again and entered the golden age of expert systems.

The path was effectively opened at Stanford University in 1965 with DENDRAL (an expert system specialized in molecular chemistry) and, again at Stanford, in 1972 with MYCIN (a system specialized in diagnosing blood infections and prescribing drugs). These systems were based on an "inference engine" programmed to be a logical mirror of human reasoning: given input facts, the engine applied its rules to produce answers with a high level of expertise.
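
To give a feel for how such a rule-based system works, here is a minimal forward-chaining inference engine sketched in Python. The rules and facts are invented for illustration only and are not taken from DENDRAL or MYCIN; real expert systems held hundreds of hand-crafted rules and far richer matching logic.

```python
# Minimal forward-chaining inference engine (illustrative sketch only;
# the rules below are invented and not taken from DENDRAL or MYCIN).

# Each rule maps a set of required facts to a conclusion.
RULES = [
    ({"fever", "low_blood_pressure"}, "suspect_sepsis"),
    ({"suspect_sepsis", "gram_negative_culture"}, "recommend_antibiotic_A"),
]

def infer(facts: set[str]) -> set[str]:
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "low_blood_pressure", "gram_negative_culture"}))
# The engine derives 'suspect_sepsis' and then 'recommend_antibiotic_A'.
```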

The promise foretold massive development, but the craze died down again in the late 1980s and early 1990s. Programming such knowledge required a great deal of effort, and beyond 200 to 300 rules a "black box" effect set in: it was no longer clear how the machine reasoned.

Development and maintenance thus became extremely problematic and, above all, there were many other less complex and less costly ways of achieving results faster. It should be noted that by the 1990s the term "artificial intelligence" had almost become taboo, and more modest variations, such as "advanced computing," were preferred.

In May 1997, the victory of Deep Blue (IBM's expert system) over Garry Kasparov at chess fulfilled Herbert Simon's 1957 prophecy 40 years later, but it did not lead to further funding and development for this form of AI.

Deep Blue's operation was based on a systematic brute force algorithm, where all possible moves were evaluated and weighted.
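
As an illustration of this idea of exhaustive evaluation, here is a toy minimax search in Python applied to a simple take-1-or-2-stones game. This is only a sketch of the general brute-force principle; Deep Blue itself relied on specialized chess hardware, alpha-beta search, and a hand-tuned evaluation function.

```python
# Toy minimax search: exhaustively evaluates every possible line of play.
# Illustrative sketch only; not Deep Blue's actual algorithm or hardware.

def minimax(pile: int, maximizing: bool) -> int:
    """Toy game: players alternately remove 1 or 2 stones; whoever takes
    the last stone wins. Returns +1 if the player to move (the maximizer
    when `maximizing` is True) can force a win, -1 otherwise."""
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

print(minimax(3, True))   # -1: with 3 stones, the player to move loses
print(minimax(4, True))   # +1: with 4 stones, the player to move can win
```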

The defeat of a human being remained highly symbolic, but Deep Blue had in reality only mastered a very limited problem space, far from being able to model the complexity of the world.

[Image: Machine vs. Human Chess]

Artificial Intelligence and Its Splendor in the Third Millennium (2000 onwards)

The discipline's exciting new boom around 2010 can be explained by two key factors.

First, access to massive volumes of data. In the past, using algorithms for classification and image recognition required collecting samples by hand; today, a simple Google search can surface millions of data points in seconds.

Second, the discovery of the extraordinary efficiency of computer graphics cards (GPUs) in accelerating learning algorithms. Before 2010, processing an entire dataset could take weeks because of the iterative nature of the computation.

Thanks to the computing power of these cards (capable of performing over a trillion operations per second), remarkable progress has been made at a modest financial cost.

Recent technological advances have led to significant public successes and increased funding: in 2011, IBM's AI, Watson, beat two champions of the quiz show Jeopardy!; in 2012, Google X (Google's research lab) succeeded in getting its algorithms to recognize a cat in a video. Over 16,000 processors were used for the task, but the potential is extraordinary: a machine learned to recognize something on its own.

In 2015 and 2016, AlphaGo (Google DeepMind's AI specialized in the game of Go) defeated the European champion Fan Hui and then the world champion Lee Sedol, before surpassing itself with AlphaGo Zero. It should be emphasized that Go is combinatorially far more complex than chess (the number of possible positions exceeds the number of particles in the universe), so such results cannot be achieved through raw brute-force power, as Deep Blue's were in 1997.

Where did this miracle come from? A complete revolution of the expert system paradigm.

The approach has become inductive: instead of encoding rules by hand, as in expert systems, computers are left to discover them on their own through correlation and classification, based on large amounts of data.
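
As a toy illustration of this inductive approach (assuming scikit-learn is installed; the data below are invented), a decision tree can be given labelled examples and left to induce its own classification rules, which can then be printed rather than written by hand:

```python
# Inductive learning in miniature: instead of hand-coding rules,
# we give the model labelled examples and let it derive rules itself.
# Invented toy data; assumes scikit-learn is installed.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example: [weight_in_grams, surface_smoothness_0_to_1]
X = [[150, 0.9], [170, 0.8], [140, 0.95],   # apples
     [300, 0.3], [320, 0.2], [280, 0.35]]   # pineapples
y = ["apple", "apple", "apple", "pineapple", "pineapple", "pineapple"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The "rules" were never written by hand -- the tree induced them:
print(export_text(model, feature_names=["weight", "smoothness"]))
print(model.predict([[160, 0.85]]))   # -> ['apple']
```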

Among machine learning techniques, deep learning seems the most promising for numerous applications, including speech and image recognition. In 2003, Geoffrey Hinton of the University of Toronto, Yoshua Bengio of the University of Montreal, and Yann LeCun of New York University decided to launch a research program to bring neural networks up to date.

Experiments conducted in parallel at Microsoft, Google, and IBM produced remarkable results: thanks to deep learning, error rates for speech recognition were cut in half.

The Future

The relentless pursuit of progress in this field has led to a profound transformation of our collective consciousness, and we continue to witness a paradigm shift in how we interact with technology.

Looking to the future, the possibilities for AI seem endless, and we can only imagine the incredible advancements that await us.
