Gaming
Youtube search... ... Quora ...Google search ...Google News ...Bing News
- Gaming ... Game-Based Learning (GBL) ... Security ... Generative AI ... Games - Metaverse ... Quantum ... Game Theory ... Design
- Case Studies
- Development ... Notebooks ... AI Pair Programming ... Codeless ... Hugging Face ... AIOps/MLOps ... AIaaS/MLaaS
- Minecraft: Voyager ... an AI agent powered by a Large Language Model (LLM) that has been introduced to the world of Minecraft
- Python ... GenAI w/ Python ... JavaScript ... GenAI w/ JavaScript ... TensorFlow ... PyTorch
- Roblox ... building tools to allow creators to develop integrated 3D objects that come with behaviour built in.
- Games to Learn JavaScript and CSS
- Games to Learn Python
- Immersive Reality ... Metaverse ... Omniverse ... Transhumanism ... Religion
- Autonomous Drones Racing
- Artificial Intelligence (AI) ... Machine Learning (ML) ... Deep Learning ... Neural Network ... Reinforcement ... Learning Techniques
- Q Learning
- Competitions
- Blockchain
- Bayesian Game
- Analytics ... Visualization ... Graphical Tools ... Diagrams & Business Analysis ... Requirements ... Loop ... Bayes ... Network Pattern
- GameGAN
- Quantum Chess
- Video/Image ... Vision ... Colorize ... Image/Video Transfer Learning
- Policy ... Policy vs Plan ... Constitutional AI ... Trust Region Policy Optimization (TRPO) ... Policy Gradient (PG) ... Proximal Policy Optimization (PPO)
- Deepindex.org list
- Unity Core Platform
- 101+ Free Python Books | Christian
- AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat
- Inside the LARPs (live-action role-playing games) that let Human Players Experience AI Life | Tasha Robinson
- An introduction to Deep Q-Learning: let’s play Doom
- AI and Games Series; an Informed Overview | Dr Tommy Thompson
- Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan
- Artificial Intelligence in Video Games | Wikipedia
- Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity ...Agents
- This AI Robot Will Beat You at Jenga | Jesus Diaz
- In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism
- You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon To play Jupyter-notebook based game click...
- Writing Board Game AI Bots – The Good, The Bad, and The Ugly | Tomasz Zielinski - PGS Software
- Intrinsic Algorithm | Dave Mark reducing the world to mathematical equations
- Future AI toys could be smarter than parents, but a lot less protective | Mikaela Cohen - CNBC Evolve
- This AI Resurrects Ancient Board Games—and Lets You Play Them; What tabletop games did our ancestors play in 1000 BC? A new research project wants to find out, and make them playable online too. | Samantha Huioi Yow - Wired ...Digital Ludeme Project; Modelling the Evolution of Traditional Games
- The Generative AI Revolution in Games | James Gwertzman and Jack Soslow - Andreessen Horowitz
- Modeling Games with Markov Chains | Kairo Morton - Towards Data Science ... Exploring Probabilistic Modeling using “Shut the Box”
- Google:
- AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
- Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray
- DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review ...Agent57 | DeepMind ...Agents
- OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind
- How to Use ChatGPT's "My GPT" Bots to Learn Board Games, Create Images, and Much More | Dreamchild Obari - Make Use Of ... Game Time ... Do you have a board game somewhere at home that you don't know how to play? Game Time comes in clutch and can explain cards and board games to you. You can also upload images if you don't know what the game is called but have the instructions or an idea of what it is.
- AI in Gaming | 5 Biggest Innovations (+40 AI Games) | Jeremy DSouza - engati ... benefits, game types, innovations, popular games, & future of AI in gaming
- AI Dungeon 2 ... uses OpenAI's GPT LLM to allow players to engage in text-based adventures where the possibilities are virtually limitless
- Code Combat ... innovative game-based learning technology
- Screeps ... MMO sandbox game for programmers
Gaming Evolution
Meta: Diplomacy 2022
- Agents ... Robotic Process Automation ... Assistants ... Personal Companions ... Productivity ... Email ... Negotiation ... LangChain
- Meta's CICERO AI Wins Online Diplomacy Tournament | Anthony Alford - InfoQ ... Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players.
Cicero has demonstrated the ability to play the strategy game Diplomacy at a level that rivals human performance. It can hold in-game conversations and negotiations without most human players realizing they are interacting with a machine: over the course of an online league, Cicero sent more than 5,000 messages to human players, and its identity as an AI went undetected. Its results were equally impressive, ranking in the top 10% of players. Cicero shows that machines can effectively mimic human negotiation tactics and strategic thinking, and its achievements in Diplomacy point to the potential of AI in complex human interactions. As the technology matures, similar systems may offer new tools and methods to support real-world negotiation and diplomacy.
NVIDIA: 40 Years on, PAC-MAN ...2020
- GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
OpenAI: Hide and Seek ... 2019
- Emergent Tool Use from Multi-Agent Interaction | OpenAI
- Emergent Tool Use from Multi-Agent Autocurricula | B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch
Meta: Brown & Sandholm: 6-player Poker ...2019
- Occlusions
- Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge
Google DeepMind AlphaStar: StarCraft II ... 2019
OpenAI: Dota 2 ...2018
Google DeepMind AlphaGo Zero: Go ...2016
- AlphaGo Zero: Starting from scratch | DeepMind
- China's 'Sputnik Moment' and the Sino-American Battle for AI Supremacy | Kai-Fu Lee - Asia Society
- Move 37, or how AI can change the world | George Zarkadakis - HuffPost
- Was AlphaGo's Move 37 Inevitable? | Katherine Bailey
- Man beats machine at Go in human victory over AI | Richard Waters - Ars Technica ... Amateur exploited weakness in systems that have otherwise dominated grandmasters.
AlphaGo is a computer program developed by Google DeepMind that uses artificial intelligence (AI) to play the board game Go. In 2016, AlphaGo made history by becoming the first computer program to defeat a professional Go player, Lee Sedol, in a five-game match.
During the second game of the match, AlphaGo made a surprising move, known as Move 37, which stunned the Go community and left Lee Sedol speechless. The move involved placing a stone in an unexpected location on the board, which initially appeared to be a mistake. However, as the game progressed, it became clear that the move was part of a complex strategy that allowed AlphaGo to gain an advantage over Lee Sedol. Move 37 is significant because it demonstrated the power of AlphaGo's AI algorithms and its ability to think creatively and strategically. The move was not based on any known human strategy or prior knowledge of the game, but rather on AlphaGo's own analysis and evaluation of the board position.
What would have happened with human-in-the-loop on Move 37?
The move highlighted the limitations of human intuition and the potential for AI to uncover new insights and strategies in complex domains. If a human expert had been involved in the decision-making process for Move 37, they might have questioned AlphaGo's choice and suggested a more conventional move. This could have prevented AlphaGo from making the unexpected and seemingly risky move that ultimately led to its victory.
Andrew Jackson & Josh Hoak: Minigo ...2018
Minigo is an open source, unofficial implementation of AlphaGo Zero. Reinforcement Learning (RL) approaches of this kind can be massively parallelized, so Kubernetes is a natural fit, since Kubernetes is all about reducing the overhead of managing applications. However, it can be daunting to wade into Kubernetes and Machine Learning, especially once hardware accelerators like GPUs or TPUs are added. This talk breaks down how to use Kubernetes and TensorFlow to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. It relies on GPUs, TPUs, TensorFlow, Kubeflow, and large-scale Kubernetes Engine clusters. Minigo uses self-play with Monte Carlo Tree Search, refining the policy/value network along the way.
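As a rough illustration of that self-play loop (not Minigo's actual code), the Python sketch below shows how AlphaZero-style training turns Monte Carlo Tree Search statistics into policy and value targets; the visit counts, move count, and game outcome are invented for the example.

```python
import numpy as np

def targets_from_self_play(visit_counts, outcome, temperature=1.0):
    """Convert the MCTS statistics at one position into training targets.

    visit_counts: number of MCTS visits given to each candidate move.
    outcome: final game result from the mover's perspective (+1 win, 0 draw, -1 loss).
    """
    counts = np.asarray(visit_counts, dtype=float) ** (1.0 / temperature)
    policy_target = counts / counts.sum()  # play probabilities proportional to visit counts
    value_target = float(outcome)          # the eventual result labels every position in the game
    return policy_target, value_target

# Example: 9 candidate moves at a toy position, searched 400 times in total,
# in a game the mover eventually won.
visits = [10, 250, 5, 40, 60, 5, 10, 15, 5]
pi, z = targets_from_self_play(visits, outcome=+1)
print(np.round(pi, 3), z)
```

The policy/value network is then trained toward these targets, and the improved network guides the next round of tree search, which is the refinement loop described above.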
Google DeepMind: Atari video games ...2015
IBM: Watson: Jeopardy ...2011
IBM: Deep Blue: Chess ...1997
John Conway: The Game of Life (GoL) ...1970
- Artificial General Intelligence (AGI) to Singularity ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- Game of Life
- Life Engine
- MATHEMATICAL GAMES: The fantastic combinations of John Conway's new solitaire game "life" | Martin Gardner - Scientific American 223 (October 1970): 120-123.
- Wikipedia
- Evolving Game of Life: Neural Networks, Chaos, and Complexity | Tom Grek - Medium
The Rules (see the code sketch after this list)
- For a space that is 'populated':
- Each cell with one or no neighbors dies, as if by solitude.
- Each cell with four or more neighbors dies, as if by overpopulation.
- Each cell with two or three neighbors survives.
- For a space that is 'empty' or 'unpopulated':
- Each cell with three neighbors becomes populated.
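The rules above translate directly into a few lines of code. Below is a minimal sketch in plain Python (no particular Game of Life library assumed), representing the board as a set of live (row, column) cells.

```python
from collections import Counter

def step(live_cells):
    """Apply one generation of Conway's Game of Life to a set of live (row, col) cells."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3; everything else dies
    # (solitude with 0-1 neighbours, overpopulation with 4 or more).
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# Example: a "blinker" oscillates between a horizontal and a vertical line of three cells.
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker)))        # [(0, 1), (1, 1), (2, 1)]
print(sorted(step(step(blinker))))  # back to [(1, 0), (1, 1), (1, 2)]
```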
Donald Waterman: Draw Poker ...1968
Martin Gardner: Hexapawn ...1962
A simple game on a 3x3 grid, where each side has 3 chess pawns. The objective is to get a pawn to the other side of the board, or leave the opponent unable to move. Normal chess rules apply except that the pawns are not allowed a double move from their starting position. Not really intended as a two-player game, it was designed to demonstrate an artificial intelligence learning technique by using beads in matchboxes. (Old enough to remember matchboxes?) Twenty-four matchboxes were used to represent the possible moves. Essentially, there were two phases. The first phase was to "teach" the matchbox computer to play the game, then a second phase allowed the matchbox computer to play other opponents. The learning speed depended on the skill of the opponent in the teaching phase. Martin Gardner first published this in his Mathematical Games column in March 1962, and subsequently in his book, "The Unexpected Hanging". Board Game Geek
Donald Michie: Noughts and Crosses ...1960
- Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters | Donald Michie
- Play against the online version of MENACE | Matt Scroggs
- Playing Noughts and Crosses using MENACE | Richard Bowles
MENACE (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, each time refining its strategy until after having played a certain number of games it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning (RL). Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust
MENACE makes a move when the human player randomly picks a bead out of the box that represents the game's current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, beads only represented broader categories of move such as a side, the centre, or a corner. The human player chooses the beads at random, just as a neural network's weights are random at the start. Also like weights, the beads are adjusted after failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the colour used on each individual turn are added to their respective boxes, and if the game results in a draw, one bead is added. How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)
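A minimal sketch of that bead-and-matchbox update rule in Python is given below; the board encoding and the starting count of three beads per square are illustrative assumptions, while the win/draw/loss adjustments follow the description above.

```python
import random

boxes = {}  # one "matchbox" per board state: bead counts for each empty square

def menace_move(board):
    """board: list of 9 cells, each 'X', 'O' or ' '. Returns the square index MENACE plays."""
    state = tuple(board)
    beads = boxes.setdefault(state, {i: 3 for i, c in enumerate(board) if c == " "})
    squares, counts = zip(*beads.items())
    return random.choices(squares, weights=counts)[0]  # drawing a bead at random

def reinforce(history, result):
    """history: (state, move) pairs MENACE played this game; result: 'win', 'draw' or 'loss'."""
    delta = {"win": 3, "draw": 1, "loss": -1}[result]
    for state, move in history:
        beads = boxes[state]
        # Beads are removed on a loss, floored at zero (an empty box means MENACE resigns).
        beads[move] = max(beads[move] + delta, 0)

# One (pretend) game: MENACE opens, the move is recorded, then rewarded as a win.
board = [" "] * 9
move = menace_move(board)
reinforce([(tuple(board), move)], "win")
print(move, boxes[tuple(board)][move])  # the chosen square now holds three extra beads
```

Moves that lead to wins accumulate beads and become more probable, while moves that lead to losses eventually disappear from their boxes, which is the Reinforcement Learning (RL) behaviour described above.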
Arthur Samuel: Checkers ...1950s
More...
Books
- Invent Your Own Computer Games with Python | Al Sweigart
- Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson