Moonshots

Revision as of 06:20, 25 September 2020 by BPeat (talk | contribs)

Youtube search... ...Google search

The “Sputnik” moment for China came a year ago when a Google computer program, AlphaGo, beat the world’s top master of the ancient board game of Go. Now, China is racing to become the world leader in artificial intelligence. In this context, what would a "Moonshot" response look like?

______________________________________________________________________________________

The 'moonshot' milestones along the road to Artificial General Intelligence (AGI)...

______________________________________________________________________________________

Can Conjure & Ask Questions


Able to Predict the Future


Able to 'Learn' the World Wide Web


Autonomous Vehicles


Emergence from Analogies


Principles of analogical reasoning have recently been applied in the context of machine learning, for example to develop new methods for classification and preference learning. In this paper, we argue that, while analogical reasoning is certainly useful for constructing new learning algorithms with high predictive accuracy, it is arguably no less interesting from an interpretability and explainability point of view. More specifically, we take the view that an analogy-based approach is a viable alternative to existing approaches in the realm of explainable AI and interpretable machine learning, and that analogy-based explanations of the predictions produced by a machine learning algorithm can complement similarity-based explanations in a meaningful way. Towards Analogy-Based Explanations in Machine Learning | Eyke Hüllermeier
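The classification-by-analogy idea described above can be sketched for Boolean features using the standard Boolean analogical proportion ("a is to b as c is to d"). This is a minimal illustrative sketch, not the method of the cited paper: the function names, toy data, and tie-breaking rule are all assumptions introduced here.

```python
from itertools import permutations

def bool_proportion(a, b, c, d):
    # Boolean analogical proportion a : b :: c : d --
    # a differs from b in exactly the way c differs from d.
    return ((a and not b) == (c and not d)) and ((not a and b) == (not c and d))

def vec_proportion(va, vb, vc, vd):
    # Componentwise proportion over Boolean feature vectors.
    return all(bool_proportion(a, b, c, d) for a, b, c, d in zip(va, vb, vc, vd))

def solve_label(la, lb, lc):
    # Solve la : lb :: lc : x for a Boolean label, if solvable.
    for x in (False, True):
        if bool_proportion(la, lb, lc, x):
            return x
    return None  # e.g. False : True :: True : x has no solution

def analogy_classify(query, examples):
    # Vote over label equations induced by triples of labeled examples
    # whose feature vectors stand in analogical proportion with the query.
    # Returns (predicted_label, witness_triple); the witness triple is
    # the analogy-based explanation of the prediction.
    votes = {False: 0, True: 0}
    witness = {}
    for (va, la), (vb, lb), (vc, lc) in permutations(examples, 3):
        if vec_proportion(va, vb, vc, query):
            x = solve_label(la, lb, lc)
            if x is not None:
                votes[x] += 1
                witness.setdefault(x, ((va, la), (vb, lb), (vc, lc)))
    if votes[False] == 0 and votes[True] == 0:
        return None, None  # no analogical triple applies to this query
    label = votes[True] >= votes[False]  # ties resolved toward True (arbitrary)
    return label, witness[label]

# Toy usage (hypothetical data): the label equals the first feature bit.
examples = [((False, False), False),
            ((False, True), False),
            ((True, False), True)]
label, because = analogy_classify((True, True), examples)  # predicts True
```

The point of the sketch is the return value: alongside the prediction, the classifier hands back a concrete triple of training examples ("these three cases relate to your case the way their labels relate to the predicted label"), which is exactly the kind of analogy-based explanation the paper contrasts with similarity-based ones.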

Analogies
This video is part of the Udacity course "Deep Learning". Watch the full course at http://www.udacity.com/course/ud730

Complexity Concepts, Abstraction, & Analogy in Natural and Artificial Intelligence
A talk by Melanie Mitchell at the GoodAI Meta-Learning & Multi-Agent Learning Workshop. See other talks from the workshop

Conceptual Abstraction and Analogy in Natural and Artificial Intelligence
Melanie Mitchell, Santa Fe Institute; Portland State University

While AI has made dramatic progress over the last decade in areas such as vision, natural language processing, and game-playing, current AI systems still wholly lack the abilities to create humanlike conceptual abstractions and analogies. It can be argued that the lack of humanlike concepts in AI systems is the cause of their brittleness—the inability to reliably transfer knowledge to new situations—as well as their vulnerability to adversarial attacks. Much AI research on conceptual abstraction and analogy has used visual-IQ-like tests or other idealized domains as arenas for developing and evaluating AI systems, and in several of these tasks AI systems have performed surprisingly well, in some cases outperforming humans.

In this talk I will review some very recent (and some much older) work along these lines, and discuss the following questions: Do these domains actually require abilities that will transfer and scale to real-world tasks? And what are the systems that succeed on these idealized domains actually learning?

Melanie Mitchell: "Can Analogy Unlock AI’s Barrier of Meaning?"
UCSB College of Engineering

Speaker Bio: Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans.

Abstract: In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning.” Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: humans are able to “actually understand” the situations they encounter, whereas it can be argued that AI systems (at least current ones) do not possess such understanding. Some cognitive scientists have proposed that analogy-making is a central mechanism for concept formation and concept understanding in humans. Douglas Hofstadter called analogy-making “the core of cognition”, and Hofstadter and co-author Emmanuel Sander noted, “Without concepts there can be no thought, and without analogies there can be no concepts.” In this talk I will reflect on the role played by analogy-making at all levels of intelligence, and on how analogy-making abilities will be central in developing AI systems with humanlike intelligence.