
A short history of Artificial Intelligence

A short history of AI from 1950 to today.

History of AI

The idea of AI can be traced back to ancient mythology, where stories about creatures made of clay or bronze that were brought to life by the gods were told. However, the actual development of AI as we know it today started in the 1950s.

In 1950, Alan Turing proposed the Turing Test in his seminal paper "Computing Machinery and Intelligence," published in the journal Mind. The test was introduced as part of his discussion of the question "Can machines think?" and was designed to provide a practical criterion for determining whether a machine can exhibit intelligent behaviour indistinguishable from that of a human. A machine could be considered intelligent, he suggested, if it could engage in a conversation indistinguishable from that of a human.

However, it was John McCarthy, a pioneering computer scientist and cognitive scientist, who is credited with coining the term "artificial intelligence" in 1955. He played a key role in organising the 1956 Dartmouth Conference, which is considered the birthplace of AI as a field. The goal of the conference was to bring together researchers interested in developing machines that could think and reason like humans. McCarthy's contributions to AI include the development of the Lisp programming language, which became crucial for AI research, and his work on time-sharing, among other foundational work in computer science.

In the following years, there were many exciting developments in AI research, including the creation of the first AI programs, such as the Logic Theorist and the General Problem Solver, and the first chatbot, ELIZA. However, progress was slow, and by the 1970s interest in AI had waned due to a lack of significant breakthroughs.

In the 1980s, AI experienced a resurgence due to new techniques and approaches, such as expert systems and rule-based systems, which proved to be successful in solving practical problems. However, by the end of the decade, these approaches were found to have limitations and were not suitable for more complex tasks.

In the 1990s, the development of machine learning algorithms and neural networks led to significant progress in AI research. The emergence of the internet and the availability of large datasets also contributed to the growth of AI. In the following years, AI systems were developed that could perform tasks such as speech recognition, image classification, and natural language processing.

In the 2010s, the field of AI exploded, with breakthroughs in deep learning, reinforcement learning, and generative models, leading to the development of sophisticated AI systems that could learn and make decisions autonomously. Today, AI is used in a wide range of industries, from healthcare to finance to transportation, and is transforming the way we live and work.

The 2020s have brought significant developments such as the refinement of generative AI models like GPT-4, which have revolutionised natural language processing and content creation. Despite these advances, the decade has also seen ongoing debates around ethical considerations, data privacy, and the societal impact of AI, emphasising the need for robust governance frameworks to ensure the responsible development and deployment of AI technologies.

Key Milestones

1955 - Logic Theorist

Developed by Allen Newell, Herbert A. Simon, and Cliff Shaw in 1955, the Logic Theorist is often considered the first artificial intelligence program. It was designed to mimic human problem-solving skills by proving mathematical theorems. The program used a method called heuristic search to solve problems, representing one of the first applications of heuristics in computing. The Logic Theorist was capable of proving many of the theorems in Whitehead and Russell's Principia Mathematica, and in some cases, it even found more elegant proofs than those originally published. Its success demonstrated the potential of machines to perform tasks that require intelligence, fundamentally challenging the prevailing views of the time about the capabilities of computers.

1957 - General Problem Solver

The General Problem Solver (GPS) was developed by Allen Newell and Herbert A. Simon in 1957 as an extension of the ideas introduced by the Logic Theorist. It was designed to be a more general problem-solving machine, capable of solving a wide range of problems, not just mathematical theorems. GPS attempted to model the general thought processes humans use when they face a problem, breaking problems down into smaller, more manageable sub-problems. It applied rules and heuristics to solve these sub-problems, aiming to mimic human problem-solving techniques. Despite its limitations and the simplicity of the problems it could solve, GPS was a groundbreaking step towards developing AI systems that could address complex tasks across different domains.

1964 - ELIZA

Developed by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT), ELIZA was one of the first chatbots, simulating conversation by mirroring user input with scripted responses. Its most famous script, DOCTOR, emulated a Rogerian psychotherapist, engaging users in a dialogue that many found surprisingly human-like. Weizenbaum recounted a particularly telling interaction between the chatbot and his secretary. After being introduced to ELIZA, particularly its DOCTOR script, she began conversing with the program. Intrigued by its human-like responses, she soon asked Weizenbaum to leave the room, wanting privacy to continue her conversation with ELIZA. ELIZA's ability to pass as a human conversational partner, despite lacking any understanding of the content it processed, sparked significant discussions about the nature of intelligence and the possibilities of machine-human interaction.

1997 - IBM's Deep Blue Beats Kasparov

Deep Blue, a chess-playing computer developed by IBM, defeated the reigning world champion, Garry Kasparov, in a highly publicised match. This event marked the first time a computer had beaten a world champion in a match under standard chess tournament conditions. The victory of Deep Blue was a watershed moment for AI, showcasing the potential of machines to perform tasks previously thought to require human intelligence and intuition.

1990s - Kismet and Emotional Interactions

Developed in the late 1990s at MIT's Artificial Intelligence Laboratory by Dr. Cynthia Breazeal, Kismet was a robot designed to engage in social interactions with humans. It could simulate emotions through facial expressions, vocalisations, and movement, reacting to human speech and touch. Kismet's design aimed to explore the interaction between humans and robots, demonstrating the potential for machines to participate in social and emotional exchanges.

2011 - Watson's Jeopardy! Victory

IBM's Watson competed on the quiz show "Jeopardy!" against two of the show's greatest champions, Brad Rutter and Ken Jennings. Watson's victory was significant, not just for its ability to understand and process natural language questions but also for its capacity to deal with the nuances, puns, and complexities of human language in a competitive setting.

2014 - The Turing Test Passed

A chatbot named "Eugene Goostman," simulating a 13-year-old Ukrainian boy, was reported to have passed the Turing Test by convincing 33% of human judges of its humanity during a competition at the Royal Society in London. This event sparked debate over the nature of intelligence and the capabilities of AI, though some critics argued that the test's criteria and implementation were flawed.

2016 - AlphaGo Victory

The ancient Chinese board game Go has long been considered a grand challenge due to its profound complexity and strategic depth. A pivotal moment in AI history occurred in 2016 when AlphaGo, a program developed by Google DeepMind, faced off against Lee Sedol, one of the world's top Go players, in a five-game match. Go, with its near-infinite number of possible positions, had resisted conquest by computers for decades, making this event a landmark in the demonstration of AI's capabilities.

AlphaGo's victory over Lee Sedol, winning four out of five games, was a breakthrough that resonated beyond the confines of the Go community, signalling a new era in artificial intelligence. Unlike previous AI systems that relied heavily on brute force computation, AlphaGo combined advanced machine learning techniques, including deep neural networks and reinforcement learning, to evaluate and predict the outcomes of complex Go positions. This approach allowed AlphaGo to not just mimic human intuition but also to devise novel strategies that had never been seen in the game's thousands of years of history.

Read the full story in this article on how AlphaGo beat the Go world champion.

2020 - GPT-3

OpenAI unveiled GPT-3 (Generative Pre-trained Transformer 3), the third iteration of its state-of-the-art language processing AI model. With 175 billion parameters, GPT-3 demonstrated an unprecedented ability to generate human-like text, perform language translation, compose poetry, write code, and even create educational tutorials, among other tasks. Its release sparked a wave of innovation in AI applications.
