History of Artificial Intelligence (AI)




Never give up on a dream just because it will take time to accomplish it. The time will pass anyway.



In AI, there are four generations.

  1. First Generation AI - "Good Old-Fashioned AI": you handcraft everything and learn nothing. These were simple programs that could only do one task really well - little robots programmed to do a specific thing, like adding numbers or sorting data.
  2. Second Generation AI - shallow learning: you handcraft the features and learn a classifier on top of them. This was when people started teaching computers to learn by giving them lots of data and letting them figure out patterns on their own. These "machine learning" programs could do things like recognize images or translate languages.
  3. Third Generation AI - deep learning: you handcraft the algorithm, but the features and the predictions are learned end to end. This is when computers started to get really good at things only humans used to be able to do, like understanding language and making decisions based on what they know. These programs are called "neural networks" because they are loosely modeled on the way our brains work. (A minimal sketch contrasting the second and third generations follows this list.)
  4. Fourth Generation AI - "learning-to-learn": the most advanced kind of AI so far. These programs learn from experience and get better at things over time, much as we do. The ambition, sometimes called "artificial general intelligence", is systems that approach human breadth in thinking and learning.
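
To make the contrast between the second and third generations concrete, here is a minimal sketch. It assumes Python with scikit-learn and its bundled 8x8 digits dataset; the row/column intensity features are an arbitrary illustration, not a canonical choice. The first model sees only handcrafted features, while the second gets raw pixels and must learn its own representation.

 # (a) Second generation: handcraft features, learn only a classifier.
 # (b) Third generation: raw pixels in, features learned by the network.
 import numpy as np
 from sklearn.datasets import load_digits
 from sklearn.linear_model import LogisticRegression
 from sklearn.model_selection import train_test_split
 from sklearn.neural_network import MLPClassifier
 digits = load_digits()                    # 1797 8x8 grayscale digit images
 X_raw, y = digits.images, digits.target
 # (a) Handcrafted features: mean intensity of each row and each column.
 X_hand = np.concatenate([X_raw.mean(axis=2), X_raw.mean(axis=1)], axis=1)
 # (b) Raw pixels, flattened; the network must learn its own features.
 X_pix = X_raw.reshape(len(X_raw), -1)
 Xh_tr, Xh_te, Xp_tr, Xp_te, y_tr, y_te = train_test_split(
     X_hand, X_pix, y, random_state=0)
 shallow = LogisticRegression(max_iter=1000).fit(Xh_tr, y_tr)
 deep = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                      random_state=0).fit(Xp_tr, y_tr)
 print("handcrafted features + linear classifier:", shallow.score(Xh_te, y_te))
 print("raw pixels + learned features (MLP):     ", deep.score(Xp_te, y_te))

Both toy models score well on this easy dataset; the point is the division of labor. In (a) a human decides what the model is allowed to look at, while in (b) the network discovers its own features, which is what let third-generation systems scale to vision and language.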


Full interview: "Godfather of artificial intelligence" talks impact and potential of AI
Geoffrey Hinton is considered a godfather of artificial intelligence, having championed machine learning decades before it became mainstream. As chatbots like ChatGPT bring his work to widespread attention, we spoke to Hinton about the past, present and future of AI. CBS Saturday Morning's Brook Silva-Braga interviewed him at the Vector Institute in Toronto on March 1, 2023

The history and future of AI

  • 3:30 What killed neural network research for decades
  • 6:30 The holy trinity of AI/ML
  • 07:00 Overview of all modern ML/deep learning
  • 10:00 Why agent modelling is so powerful
  • 15:00 About Transformers
  • 17:20 Modern breakthroughs in conversational models
  • 23:00 Autonomous driving, not limited to L3
  • 26:00 How a less specialized, "weaker" general deep learning system beat a "stronger" specialized chess AI; AlphaGo
  • 32:00 AlphaFold (predicting protein structures)
  • 35:00 Next-gen ML models: Multitask Unified Model (MUM)
  • 37:00 Q&A: political and technical questions from Central Asia developers to Murat


Who Invented A.I.? - The Pioneers of Our Future
ColdFusion is an Australian-based online media company independently run by Dagogo Altraide since 2009. Topics cover anything in science, technology, history and business in a calm and relaxed environment.

You and AI – the history, capabilities and frontiers of AI
Demis Hassabis, world-renowned British neuroscientist, artificial intelligence (AI) researcher and the co-founder and CEO of DeepMind, explores the groundbreaking research driving the application of AI to scientific discovery. The talk launches the Royal Society’s 2018 series: You and AI, a collaborative effort to help people understand what machine learning and AI are, how these technologies work and the ways they may affect our lives. Supported by DeepMind. For more information on the event series: https://ow.ly/PKug30jWEYV

History of the entire AI field, i guess
The Brief History of Artificial Intelligence, which we commonly abbreviate as "AI". Here I am, bothered enough to make another 20+ mins video again. Well actually, my editors are probably more bothered.

A Brief History of Artificial Intelligence
While everyone seems to be talking about artificial intelligence these days, it’s good to remember that this is not something new!

History of Artificial Intelligence | Evolution Of AI | The Age Of A.I | Science Knowledge Facts
History of Artificial Intelligence | Evolution Of AI | The Age Of A.I | Science Knowledge Facts Video by Knowleseum.

The Epistemology of Deep Learning - Yann LeCun
Deep Learning: Alchemy or Science? Topic: The Epistemology of Deep Learning Speaker: Yann LeCun Affiliation: Facebook AI Research/New York University Date: February 22, 2019

A Brief History of AI
AI is now a mainstream topic reaching a broader business audience. Every executive who wants their company to be a leader in their industry should be asking themselves two questions. First, is the potential of AI real? Second, how do I apply it to my business? This Dreamtalk will dispel the 5 common myths about AI and replace them with a framework for executives to apply to their business.

Short History Of Artificial Intelligence (AI)
This is the audio version of Forbes which can be found here: https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/#4219d7f26fba

The Future of Artificial Intelligence: Views from History
On the evening of 29th November 2018, the Leverhulme Centre for the Future of Intelligence (CFI) will host an expert panel on ‘The Future of Artificial Intelligence: Views from History’ featuring:

Prof. Simon Schaffer (University of Cambridge), Prof. Murray Shanahan (DeepMind, Imperial), Prof. Margaret A. Boden OBE ScD FBA (University of Sussex), Prof. Nathan Ensmenger (Indiana University), and Pamela McCorduck (author of Machines Who Think, an authoritative history of AI),

and chaired by Dr Sarah Dillon (Leverhulme Centre for the Future of Intelligence).

Speakers will interrogate the past, present, and future of intelligent systems for a general audience, with an interest towards the nuanced power dynamics that have operated around such systems throughout the ages.

This event commemorates the 60th anniversary of the landmark 1958 'Mechanisation of Thought Conference' held in Teddington, England, an event that served to establish artificial intelligence as a standalone field in the UK.

The History of Artificial Intelligence [Documentary]
Futurology

The Man who forever changed Artificial Intelligence
History of Artificial Intelligence: The success of Neural Networks has sparked the AI revolution in the last 10 years. From Atari Games to Go, to Dota and to Starcraft. What many people don't know - the basic idea of Neural Networks has been around since the late 1950s. My name is Sebastian Schuchmann and I hope you enjoy watching! Support me on Patreon: https://www.patreon.com/user?u=25285137 Keep in touch: https://twitter.com/SebastianSchuc7

The Year Artificial Intelligence changed forever
Sebastian Schuchmann AI History: In 1986 the World of Neural Networks was about to change. After decades of silence, finally, a method to efficiently compute the weights in multi-layer Neural Networks was invented. The stage was set for a revolution. Learn more about A.I. History on my Medium: https://medium.com/@schuchmannsebastian Support me on Patreon: https://www.patreon.com/user?u=25285137


The Turk


Mechanical Marvels—Automaton: The Chess Player "Android," 1769
Touted as an android that could defeat chess masters, Wolfgang von Kempelen's famed illusion debuted at the court of Empress Maria Theresa during wedding celebrations for her daughter in 1769. Over the course of the eighteenth century, the chess player (known in its time as The Turk for its costume) won games against Catherine the Great and Benjamin Franklin. When Napoléon Bonaparte tried to cheat, it wiped all the pieces from the board. The mysterious machine sparked discussions of the possibilities and limits of artificial intelligence, and it inspired the development of the power loom, the telephone, and the computer. The original and its secrets were destroyed in a fire in 1854. The subject of more than eight hundred publications attempting to uncover its secrets, Kempelen's illusion also inspired a 1927 silent movie, The Chess Player, directed by Raymond Bernard. In the sequence shown here, the inventor presents his creation at court. The year of its release, this early science-fiction drama attracted more attention than Fritz Lang's Metropolis, a now-legendary film that also involves an android. Featured Artwork: The Chess Player (The Turk), Original ca. 1769. Wolfgang von Kempelen (1734–1804). Austrian, Vienna. Wood, brass, fabric, steel. Collection of Mr. John Gaughan, Los Angeles

Modern Timeline - to July 2023

* [https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence Timeline of artificial intelligence | Wikipedia]

Here is a timeline of the history of Artificial Intelligence (AI):

  • Antiquity: Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata (such as Talos) and artificial beings (such as Galatea and Pandora).
  • 10th century BC: According to legend, Yan Shi presented King Mu of Zhou with mechanical men capable of moving their bodies independently.
  • 384 BC–322 BC: Aristotle described the syllogism, a method of formal, mechanical thought, and a theory of knowledge in the Organon.
  • 3rd century BC: Ctesibius invented a mechanical water clock with an alarm, an early example of a feedback mechanism.
  • 1st century: Hero of Alexandria created mechanical men and other automatons. He produced what may have been "the world's first practical programmable machine": an automatic theatre.
  • 1943: Warren McCulloch and Walter Pitts propose a computational model for neural networks, which lays the foundation for artificial neural networks.
  • 1950: Alan Turing introduces the "Turing Test," a criterion to determine a machine's ability to exhibit intelligent behavior, marking an important milestone in AI.
  • 1956: John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organize the Dartmouth Conference, where McCarthy coins the term "artificial intelligence"; the conference is widely considered the birth of AI as a field of study.
  • 1956-1974: This period is known as the "Golden Age" of AI research. Researchers develop early AI programs, including the Logic Theorist, General Problem Solver, and ELIZA.
  • 1958: John McCarthy invents the programming language LISP, which becomes a popular language for AI research and development.
  • 1965: Joseph Weizenbaum creates ELIZA, a computer program that simulates conversation, and it becomes one of the first examples of natural language processing.
  • 1966: Ray Solomonoff introduces algorithmic probability, a foundational concept in machine learning and AI.
  • 1969: The Stanford Research Institute develops Shakey, a mobile robot capable of reasoning and problem-solving, considered a significant advancement in robotics and AI.
  • 1973: The Lighthill Report is published, criticizing the progress in AI research and leading to a decrease in funding, contributing to the onset of the first AI winter.
  • 1980s-1990s: Expert systems gain prominence, focusing on capturing and replicating human expertise in narrow domains. Symbolic AI approaches become popular during this time.
  • 1986: Geoffrey Hinton, David Rumelhart, and Ronald Williams publish a paper on backpropagation, which greatly advances the training of artificial neural networks.
  • 1997: IBM's Deep Blue defeats Garry Kasparov, the world chess champion, marking a significant milestone in AI and demonstrating the potential of machine intelligence.
  • 2000s: The focus shifts from symbolic AI to statistical approaches, and machine learning algorithms, particularly neural networks, gain popularity.
  • 2011: IBM's Watson defeats human champions in the quiz show Jeopardy!, demonstrating advancements in natural language processing and question-answering systems.
  • 2012: AlexNet, a deep convolutional neural network, achieves a breakthrough in image classification, leading to a surge in deep learning research and applications.
  • 2011-2014: Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) are smartphone apps that use natural language to answer questions, make recommendations and perform actions.
  • 2014: The attention mechanism is introduced for neural machine translation; it later leads to the Transformer architecture (2017). A toy sketch of attention appears after this timeline.
  • 2015: Google releases TensorFlow, an open-source software library for machine learning.
  • 2016: DeepMind's AlphaGo defeats the world champion Go player, Lee Sedol, showcasing the power of deep reinforcement learning in complex games.
  • 2017: The term "deepfake" emerges, referring to the use of AI to create realistic but fake audio and video content, raising concerns about misinformation and privacy.
  • 2018: OpenAI introduces GPT (Generative Pre-trained Transformer), a language model that generates human-like text and significantly advances natural language processing.
  • 2018: The General Data Protection Regulation (GDPR) takes effect in the European Union, introducing data-privacy rules, including provisions on automated decision-making, that shape the ethical use of AI.
  • 2019 March: A team of researchers at the University of California, San Francisco (UCSF) implants a brain-computer interface (BCI) in a human patient; the BCI allows the patient to control a cursor on a computer screen simply by thinking about it, building on earlier human BCI research.
  • 2019: OpenAI releases GPT-2, a large-scale language model that can generate coherent and diverse texts on various topics.
  • 2020: GPT-3, an even more advanced version of the language model with 175 billion parameters, is released, demonstrating unprecedented capabilities in generating coherent and contextually relevant text.
  • 2020: Microsoft's Turing-NLG (Natural Language Generation) model, a neural network with 17 billion parameters, generates realistic and fluent texts across domains such as news, reviews, and fiction; it can also answer questions, summarize texts, and rewrite sentences.
  • 2020: Baidu releases the LinearFold AI algorithm to help medical and scientific teams develop a vaccine during the early stages of the SARS-CoV-2 (COVID-19) pandemic.
  • 2020: DeepMind's AlphaFold 2 achieves a breakthrough in protein structure prediction, far surpassing competing methods in the CASP14 competition. The model uses deep learning to predict the three-dimensional shape of proteins from their amino acid sequences with unprecedented accuracy and speed, with major implications for drug discovery, biotechnology, and the understanding of disease.
  • 2021: OpenAI releases DALL-E, a generative model that can create images from text descriptions, such as "an armchair in the shape of an avocado".
  • 2021: Google's LaMDA (Language Model for Dialogue Applications) demonstrates natural and engaging conversations on various topics, such as Pluto and paper airplanes, generating responses that are relevant, specific, and informative, and sometimes humorous and surprising.
  • 2022 April: OpenAI releases DALL-E 2, a successor to DALL-E that generates far more realistic and detailed images from text descriptions and can also edit images and produce variations of them.
  • 2022: HyperTree Proof Search (HTPS), from researchers at Meta AI, is introduced as a new algorithm for automated theorem proving, a challenging task in mathematics and logic. The algorithm searches over hypertrees to find proofs of mathematical statements efficiently, outperforming existing methods on several benchmarks; it could potentially help discover new mathematical results and verify complex systems.
  • 2022 November: OpenAI releases ChatGPT, a conversational interface to its GPT-3.5 language models. It reaches an estimated 100 million users within about two months, bringing large language models to mainstream public attention.
  • 2023 February: Microsoft integrates OpenAI's models into its Bing search engine, and Meta releases the LLaMA family of large language models to the research community.
  • 2023 March: OpenAI releases GPT-4, a large multimodal model that accepts both image and text inputs and shows markedly improved reasoning and benchmark performance. Google opens public access to Bard, its conversational AI based on LaMDA.
  • 2023 May: Geoffrey Hinton leaves Google in order to speak freely about the risks of advanced AI; later that month, hundreds of researchers and industry leaders sign a one-sentence statement urging that mitigating the risk of extinction from AI be treated as a global priority.
  • 2023 June: The European Parliament adopts its negotiating position on the EU AI Act, a step toward the first comprehensive legal framework for AI systems.
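
As noted in the 2014 entry above, here is a toy sketch of attention, in Python with NumPy; all sizes and values are arbitrary. The original 2014 mechanism was an additive attention for machine translation, while the sketch below shows the scaled dot-product variant that the Transformer later standardized; the core idea is the same: each query takes a weighted average of the values, with the weights given by query-key similarity.

 # Toy scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
 import numpy as np
 def softmax(x, axis=-1):
     e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
     return e / e.sum(axis=axis, keepdims=True)
 def attention(Q, K, V):
     d_k = Q.shape[-1]
     scores = Q @ K.T / np.sqrt(d_k)     # how well each query matches each key
     weights = softmax(scores, axis=-1)  # each row sums to 1
     return weights @ V                  # weighted average of values per query
 rng = np.random.default_rng(0)
 Q = rng.normal(size=(3, 4))  # 3 queries of dimension 4
 K = rng.normal(size=(5, 4))  # 5 keys
 V = rng.normal(size=(5, 4))  # 5 values, one per key
 print(attention(Q, K, V).shape)  # (3, 4): one blended value per query

Stacking this operation, with learned projections producing Q, K, and V from the input, is essentially what the Transformer architecture does.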