|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial intelligence, machine learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, sentience, AGI, emergence, explainable, TensorFlow, OpenAI, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, prompt engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-4GCWLBVJ7T');
</script>
}}
[https://www.youtube.com/results?search_query=game+gaming+artificial+intelligence+ai Youtube search...]
[https://www.quora.com/search?q=AI%20game%20gaming ... Quora]
[https://www.google.com/search?q=game+gaming+artificial+intelligence+ai ...Google search]
[https://news.google.com/search?q=game+gaming+artificial+intelligence+ai ...Google News]
[https://www.bing.com/news/search?q=game+gaming+artificial+intelligence+ai&qft=interval%3d%228%22 ...Bing News]
* [[Gaming]] ... [[Game-Based Learning (GBL)]] ... [[Games - Security|Security]] ... [[Game Development with Generative AI|Generative AI]] ... [[Metaverse#Games - Metaverse|Games - Metaverse]] ... [[Games - Quantum Theme|Quantum]] ... [[Game Theory]] ... [[Game Design | Design]]
* [[Case Studies]]
** [[Sports]]
** [[Toys]]
** [[Education]]
* [[Development]] ... [[Notebooks]] ... [[Development#AI Pair Programming Tools|AI Pair Programming]] ... [[Codeless Options, Code Generators, Drag n' Drop|Codeless]] ... [[Hugging Face]] ... [[Algorithm Administration#AIOps/MLOps|AIOps/MLOps]] ... [[Platforms: AI/Machine Learning as a Service (AIaaS/MLaaS)|AIaaS/MLaaS]]
* [[Minecraft]]: [[Minecraft#Voyager|Voyager]] ... an AI agent powered by a [[Large Language Model (LLM)]] that has been introduced to the world of [[Minecraft]]
* [[Python]] ... [[Generative AI with Python|GenAI w/ Python]] ... [[JavaScript]] ... [[Generative AI with JavaScript|GenAI w/ JavaScript]] ... [[TensorFlow]] ... [[PyTorch]]
** [[Game Development with Generative AI#Roblox | Roblox]] ... building tools to allow creators to develop integrated 3D objects that come with behaviour built in.
** [[JavaScript#Games_to_Learn|Games to Learn JavaScript and CSS]]
** [[Python#Games_to_Learn_Python | Games to Learn Python]]
* [[Immersive Reality]] ... [[Metaverse]] ... [[Omniverse]] ... [[Transhumanism]] ... [[Religion]]
** [[Metaverse#Flight Simulator 2020| Flight Simulator 2020]]
** [[Metaverse#Fortnite| Fortnite]]
* [[Autonomous Drones]] Racing
* [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
* [[Q Learning]]
** [[Deep Q Network (DQN)]]
* [[Competitions]]
* [[Blockchain]]
* [[Bayes#Bayesian_Game|Bayesian Game]]
* [[Analytics]] ... [[Visualization]] ... [[Graphical Tools for Modeling AI Components|Graphical Tools]] ... [[Diagrams for Business Analysis|Diagrams]] & [[Generative AI for Business Analysis|Business Analysis]] ... [[Requirements Management|Requirements]] ... [[Loop]] ... [[Bayes]] ... [[Network Pattern]]
* [[GameGAN]]
* [[Quantum#Quantum Chess|Quantum Chess]]
* [[Video/Image]] ... [[Vision]] ... [[Colorize]] ... [[Image/Video Transfer Learning]]
* [[Policy]] ... [[Policy vs Plan]] ... [[Constitutional AI]] ... [[Trust Region Policy Optimization (TRPO)]] ... [[Policy Gradient (PG)]] ... [[Proximal Policy Optimization (PPO)]]
* [https://deepindex.org/#Games Deepindex.org list]
* [https://unity.com/solutions/game Unity] Core Platform
* [https://blog.finxter.com/free-python-books/ 101+ Free Python Books | Christian]
** [https://inventwithpython.com/inventwithpython_3rd.pdf Making Games with Python & Pygame 3rd Edition 2015 | Al Sweigart - Invent with Python] - 11 games
* [https://venturebeat.com/2019/05/09/ai-is-becoming-esports-secret-weapon/ AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat]
* [https://www.theverge.com/2019/2/1/18185945/live-action-roleplaying-larp-game-design-artificial-intelligence-ethics-issues Inside the LARPs (live-action role-playing games) that let Human Players Experience AI Life | Tasha Robinson]
* [https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8 An introduction to Deep Q-Learning: let’s play Doom]
* [https://www.youtube.com/user/tthompso AI and Games Series; an Informed Overview | Dr Tommy Thompson]
* [https://www.amazon.com/gp/product/9056918184 Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan]
* [https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games Artificial Intelligence in Video Games | Wikipedia]
* [https://blogs.unity3d.com/2017/12/11/using-machine-learning-agents-in-a-real-game-a-beginners-guide/ Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity] ...[[Agents]]
* [https://www.tomsguide.com/us/mit-jenga-robot,news-29290.html This AI Robot Will Beat You at Jenga | Jesus Diaz]
* [https://futurism.com/the-byte/browser-game-opponents-neural-networks In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism]
* [https://www.polygon.com/2019/12/6/20998745/ai-dungeon-2-text-adventure-openai-how-to-play-nick-walton You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon] To play the [[Jupyter]]-notebook based game click...
* [https://www.pgs-soft.com/blog/writing-board-game-ai-bots-the-good-the-bad-and-the-ugly/ Writing Board Game AI Bots – The Good, The Bad, and The Ugly | Tomasz Zielinski - PGS Software]
* [https://www.intrinsicalgorithm.com/media.php Intrinsic Algorithm | Dave Mark] ... reducing the world to mathematical equations
* [https://www.cnbc.com/2021/07/11/future-ai-toys-may-be-smarter-than-parents-and-less-protective.html Future AI toys could be smarter than parents, but a lot less protective | Mikaela Cohen - CNBC Evolve]
* [https://www.wired.com/story/this-ai-resurrects-ancient-board-games-lets-you-play-them/ This AI Resurrects Ancient Board Games—and Lets You Play Them; What tabletop games did our ancestors play in 1000 BC? A new research project wants to find out, and make them playable online too. | Samantha Huioi Yow - Wired] ...[https://ludeme.eu/ Digital Ludeme Project; Modelling the Evolution of Traditional Games]
* [https://a16z.com/2022/11/17/the-generative-ai-revolution-in-games/ The Generative AI Revolution in Games | James Gwertzman and Jack Soslow - Andreessen Horowitz]
* [https://towardsdatascience.com/modeling-games-with-markov-chains-c7b614731a7f Modeling Games with Markov Chains | Kairo Morton - Towards Data Science] ... exploring probabilistic modeling using “Shut the Box”
* [[Google]]:
** [https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii AlphaStar: Mastering the Real-Time Strategy Game StarCraft II]
** [https://www.zdnet.com/article/googles-ai-surfs-the-gamescape-to-conquer-game-theory/ Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray]
** [https://www.technologyreview.com/f/615429/deepminds-ai-57-atari-games-but-its-still-not-versatile-enough/ DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review] ...[https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark Agent57 | DeepMind] ...[[Agents]]
** [https://arxiv.org/pdf/1908.09453.pdf OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind]
*** [https://github.com/deepmind/open_spiel/blob/master/docs/intro.md OpenSpiel | GitHub]
*** [https://venturebeat.com/2019/08/27/deepmind-details-openspiel-a-collection-of-ai-training-tools-for-video-games/ DeepMind details OpenSpiel, a collection of AI training tools for video games | Kyle Wiggers - VentureBeat]
* [https://www.makeuseof.com/how-use-chatgpt-my-gpt-bots/ How to Use ChatGPT's "My GPT" Bots to Learn Board Games, Create Images, and Much More | Dreamchild Obari - Make Use Of] ... Game Time ... Have a board game at home that you don't know how to play? Game Time can explain card and board games to you; you can also upload images if you don't know what a game is called but have the instructions or an idea of what it is.
* [https://www.engati.com/blog/ai-in-gaming AI in Gaming | 5 Biggest Innovations (+40 AI Games) | Jeremy DSouza - engati] ... benefits, game types, innovations, popular games, & future of AI in gaming
** [https://colab.research.google.com/github/nickwalton/AIDungeon/blob/master/AIDungeon_2.ipynb AI Dungeon 2] ... uses [[OpenAI]]'s GPT LLM to allow players to engage in text-based adventures where the possibilities are virtually limitless
** [https://codecombat.com/ Code Combat] ... innovative game-based learning technology
** [https://screeps.com/ Screeps] ... MMO sandbox game for programmers
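Several of the topics linked above ([[Q Learning]], [[Deep Q Network (DQN)]], the Doom tutorial) revolve around the same Bellman update rule. A minimal tabular sketch on a toy problem (the corridor environment and all parameters here are illustrative assumptions, not taken from any linked article):

```python
import random
from collections import defaultdict

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: states 0..4, actions
    -1 (left) / +1 (right); reward 1 for reaching state 4, else 0."""
    random.seed(seed)
    Q = defaultdict(float)                      # Q[(state, action)]
    actions = (-1, 1)
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy: explore occasionally (and on ties)
            if random.random() < eps or Q[(s, 1)] == Q[(s, -1)]:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a: Q[(s, a)])
            s2 = min(4, max(0, s + a))          # clamp to the corridor
            r = 1.0 if s2 == 4 else 0.0
            best_next = 0.0 if s2 == 4 else max(Q[(s2, b)] for b in actions)
            # Bellman update: nudge Q toward reward + discounted future value
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
print([round(Q[(s, 1)], 2) for s in range(4)])  # values grow toward the goal
```

A [[Deep Q Network (DQN)]] replaces the table `Q` with a neural network, but the update target is the same quantity.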
= Gaming Evolution =
== [[Meta]]: Diplomacy 2022 ==
* [[Agents]] ... [[Robotic Process Automation (RPA)|Robotic Process Automation]] ... [[Assistants]] ... [[Personal Companions]] ... [[Personal Productivity|Productivity]] ... [[Email]] ... [[Negotiation]] ... [[LangChain]]
* [https://www.infoq.com/news/2022/12/meta-diplomacy-cicero/ Meta's CICERO AI Wins Online Diplomacy Tournament | Anthony Alford - InfoQ] ... Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players.

Cicero has demonstrated the ability to play the strategy game Diplomacy at a level that rivals human performance, engaging in in-game conversations and negotiations without most human players realizing they are interacting with a machine. During play in an online league, Cicero sent over 5,000 messages to human players while its identity as an AI went undetected, and it ranked in the top 10% of players. Its performance shows that machines can effectively mimic human negotiation tactics and strategic thinking, and points to the potential of AI in complex human interactions such as negotiation and diplomacy.

<youtube>lNtBiZaLA0k</youtube>
<youtube>u5192bvUS7k</youtube>
== [[NVIDIA]]: [https://blogs.nvidia.com/blog/2020/05/22/gamegan-research-pacman-anniversary/ 40 Years on, PAC-MAN] ...2020 ==
* [[GameGAN]], a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
== [[OpenAI]]: Hide and Seek ... 2019 ==
* [https://openai.com/blog/emergent-tool-use/ Emergent Tool Use from Multi-Agent Interaction |] [[OpenAI]]
* [https://d4mucfpksywv.cloudfront.net/emergent-tool-use/paper/Multi_Agent_Emergence_2019.pdf Emergent Tool Use from Multi-Agent Autocurricula | B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch]
<youtube>Lu56xVlZ40M</youtube>
<youtube>n6nF9WfpPrA</youtube>
== [[Meta]]: [https://ai.facebook.com/blog/pluribus-first-ai-to-beat-pros-in-6-player-poker/ Brown & Sandholm]: 6-player Poker ...2019 ==
* [[Occlusions]]
* [https://www.theverge.com/2019/7/11/20690078/ai-poker-pluribus-facebook-cmu-texas-hold-em-six-player-no-limit [[Meta|Facebook]] and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge]
<youtube>u90TbxK7VEA</youtube>
== [[Google DeepMind AlphaGo Zero]]: Go ...2016 ==
* [https://deepmind.com/blog/article/alphago-zero-starting-scratch AlphaGo Zero: Starting from scratch | DeepMind]
* [https://asiasociety.org/blog/asia/chinas-sputnik-moment-and-sino-american-battle-ai-supremacy [[Government Services#China|China]]'s 'Sputnik Moment' and the Sino-American Battle for AI Supremacy | ][[Creatives#Kai-Fu Lee |Kai-Fu Lee]] - Asia Society
* [https://www.huffpost.com/entry/move-37-or-how-ai-can-change-the-world_b_58399703e4b0a79f7433b675 Move 37, or how AI can change the world | George Zarkadakis - HuffPost]
* [https://katbailey.github.io/post/was-alphagos-move-37-inevitable/ Was AlphaGo's Move 37 Inevitable? | Katherine Bailey]
* [https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/ Man beats machine at Go in human victory over AI | Richard Waters - Ars Technica] ... Amateur exploited weakness in systems that have otherwise dominated grandmasters.

AlphaGo is a computer program developed by Google DeepMind that uses artificial intelligence (AI) to play the board game Go. In 2016, AlphaGo made history by becoming the first computer program to defeat a professional Go player, Lee Sedol, in a five-game match.

During the second game of the match, AlphaGo made a surprising move, known as Move 37, which stunned the Go community and left Lee Sedol speechless. The move involved placing a stone in an unexpected location on the board, which initially appeared to be a mistake. However, as the game progressed, it became clear that the move was part of a complex strategy that allowed AlphaGo to gain an advantage over Lee Sedol. Move 37 is significant because it demonstrated the power of AlphaGo's algorithms and their ability to play creatively and strategically: the move was not based on any known human strategy or prior knowledge of the game, but on AlphaGo's own analysis and evaluation of the board position.


<hr><center><b><i>

What would have happened with human-in-the-loop on Move 37?

</i></b></center><hr>


The move highlighted the limitations of human intuition and the potential for AI to uncover new insights and strategies in complex domains. If a human expert had been involved in the decision-making process for Move 37, they might have questioned AlphaGo's choice and suggested a more conventional move, which could have prevented the unexpected and seemingly risky move that ultimately led to AlphaGo's victory.

<youtube>WXuK6gekU1Y</youtube>
=== <span id="Minigo"></span>[[Creatives#Andrew Jackson |Andrew Jackson]] & Josh Hoak: Minigo ...2018 ===
* [https://github.com/tensorflow/minigo Minigo - GitHub]
Minigo is an open source, unofficial implementation of AlphaGo Zero. [[Reinforcement Learning (RL)]] approaches can be massively parallelized, so [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] seems like a natural fit, as [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] is all about reducing the overhead of managing applications. However, it can be daunting to wade into [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] and Machine Learning, especially when you add in hardware accelerators like [[Processing Units - CPU, GPU, APU, TPU, VPU, FPGA, QPU |GPUs or TPUs]]! This talk breaks down how you can use [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] and [[TensorFlow]] to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on [[Processing Units - CPU, GPU, APU, TPU, VPU, FPGA, QPU | GPUs, TPUs]], [[TensorFlow]], [[Kubeflow Pipelines|KubeFlow]], and large-scale [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] Engine clusters. Minigo uses self-play with [[Monte Carlo Tree Search]], refining the [[Policy vs Plan | Policy/Value]] along the way.
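At the heart of that self-play loop, AlphaZero-style search repeatedly descends the tree by balancing the learned value of a move against the policy network's prior for it. A minimal Python sketch of that selection rule (PUCT); the node layout and the exploration constant here are illustrative assumptions, not Minigo's exact code:

```python
import math

def select_child(node, c_puct=1.5):
    """AlphaZero-style PUCT selection: prefer children with high mean
    value (Q = W/N) plus an exploration bonus that favours moves the
    policy network rated highly (prior P) but that have few visits (N)."""
    total = sum(child["N"] for child in node["children"].values())
    def score(child):
        q = child["W"] / child["N"] if child["N"] else 0.0
        u = c_puct * child["P"] * math.sqrt(total) / (1 + child["N"])
        return q + u
    move, _ = max(node["children"].items(), key=lambda kv: score(kv[1]))
    return move

# A toy node: one well-visited child and one unvisited but promising one.
root = {"children": {
    "a": {"P": 0.6, "N": 10, "W": 5.0},   # Q = 0.5, already explored
    "b": {"P": 0.4, "N": 0,  "W": 0.0},   # unvisited: exploration bonus wins
}}
print(select_child(root))  # b
```

In the full loop, the selected leaf is expanded, evaluated by the policy/value network, and the `W` and `N` statistics are backed up along the path; the resulting visit counts become the training targets for the next generation of the network.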
<youtube>Qra8Aqxu_fo</youtube>
== Google [https://deepmind.com/ DeepMind]: Atari video games ...2015 ==
<youtube>Ih8EfvOzBOY</youtube>
<youtube>EfGD2qveGdQ</youtube>
== [[IBM]]: Watson: Jeopardy ...2011 ==
* [https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/ IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next | Jo Best - TechRepublic]
<youtube>7rIf2Njye5k</youtube>
<youtube>4svcCJJ6ciw</youtube>
<youtube>2Xhd2KNNs-c</youtube>
== [[Creatives#John Conway |John Conway]]: [https://playgameoflife.com/ The Game of Life (GoL)] ...1970 ==
* [[Artificial General Intelligence (AGI) to Singularity]] ... [[Inside Out - Curious Optimistic Reasoning| Curious Reasoning]] ... [[Emergence]] ... [[Moonshots]] ... [[Explainable / Interpretable AI|Explainable AI]] ... [[Algorithm Administration#Automated Learning|Automated Learning]]
* [https://playgameoflife.com/ Game of Life]
* [https://thelifeengine.net/ Life Engine]
* [https://www.ibiblio.org/lifepatterns/october1970.html Mathematical Games: The fantastic combinations of John Conway's new solitaire game "life" | Martin Gardner - ] [https://www.scientificamerican.com/ Scientific American 223 (October 1970): 120-123]
* [https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life Wikipedia]
* [https://medium.com/@tomgrek/evolving-game-of-life-neural-networks-chaos-and-complexity-94b509bc7aa8 Evolving Game of Life: Neural Networks, Chaos, and Complexity | Tom Grek - Medium]

https://upload.wikimedia.org/wikipedia/commons/e/e5/Gospers_glider_gun.gif

The Rules
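Conway's rules are simple: a live cell survives with two or three live neighbours, a dead cell comes alive with exactly three, and every other cell dies or stays dead. One generation can be sketched in a few lines of Python (the sparse set-of-cells representation is just one convenient choice):

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the vertical bar {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # back to the original horizontal bar
```

Despite these three rules, patterns such as the glider gun pictured above produce unbounded, machine-like behaviour, which is why GoL remains a standard example of emergence.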
<youtube>FWSR_7kZuYg</youtube>
<youtube>Gbvy6gY5Ev4</youtube>
<youtube>np6ZVZIs7f8</youtube>
== [https://books.google.com/books?id=bz_dgCLhhkUC&pg=PA405&lpg=PA405&dq=Donald+Waterman+Draw+Poker Donald Waterman]: Draw Poker ...1968 ==
* [https://www.researchgate.net/scientific-contributions/2041781574_Donald_A_Waterman Donald Waterman publications - production systems]
== Martin Gardner: Hexapawn ...1962 ==
* [https://www.cs.williams.edu/~freund/cs136-073/GardnerHexapawn.pdf How to build a game-learning machine and then teach it to play and to win |] [https://en.wikipedia.org/wiki/Martin_Gardner Martin Gardner]

A simple game on a 3x3 grid, where each side has 3 chess pawns. The objective is to get a pawn to the other side of the board, or leave the opponent unable to move. Normal chess rules apply except that the pawns are not allowed a double move from their starting position. Not really intended as a two-player game, it was designed to demonstrate an artificial intelligence learning technique by using beads in matchboxes. (Old enough to remember matchboxes?) Twenty-four matchboxes were used to represent the possible moves. Essentially, there were two phases: the first phase was to "teach" the matchbox computer to play the game, then a second phase allowed the matchbox computer to play other opponents. The learning speed depended on the skill of the opponent in the teaching phase. Martin Gardner first published this in his Mathematical Games column in March 1962, and subsequently in his book, "The Unexpected Hanging". [https://www.boardgamegeek.com/boardgame/33379/hexapawn Board Game Geek]
<youtube>FFk8S66d8_E</youtube>
== [https://en.wikipedia.org/wiki/Donald_Michie Donald Michie]: Noughts and Crosses ...1960 ==
* [https://www.dropbox.com/s/ycsycu0l01g9643/DonaldMichie.pdf?dl=0 Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters | Donald Michie]
* [https://www.mscroggs.co.uk/menace/ Play against the online version of MENACE | Matt Scroggs]
* [https://www.richardbowles.co.uk/ai_with_js/code1/ Playing Noughts and Crosses using MENACE | Richard Bowles]

<b>MENACE</b> (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, each time refining its strategy until, after having played a certain number of games, it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called [[Reinforcement Learning (RL)]]. [https://chalkdustmagazine.com/features/menace-machine-educable-noughts-crosses-engine/ Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust]

MENACE makes a move when the human player randomly picks a bead out of the box that represents the game’s current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, there were beads that only represented more blatant moves such as the side, centre, or corner. The human player chooses the beads at random, just like a neural network’s weights are random at the start. Also like weights, the beads are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the same colour as the one used during each individual turn are added to their respective boxes; if the game resulted in a draw, one bead is added. [https://medium.com/@ODSC/how-300-matchboxes-learned-to-play-tic-tac-toe-using-menace-35e0e4c29fc How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)]
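The bead mechanics described above translate almost directly into code. A minimal sketch (the class layout, initial bead count, and the floor of one bead are illustrative assumptions; the rewards follow the text: +3 beads for a win, +1 for a draw, -1 for a loss):

```python
import random
from collections import defaultdict

class Menace:
    """Matchbox learner: one 'box' of coloured beads per board state."""
    def __init__(self, initial_beads=4):
        self.initial = initial_beads
        self.boxes = defaultdict(dict)  # state -> {move: bead count}
        self.history = []               # (state, move) pairs used this game

    def choose(self, state, legal_moves):
        box = self.boxes[state]
        for m in legal_moves:           # seed unseen moves with beads
            box.setdefault(m, self.initial)
        # Draw a bead at random: probability proportional to bead count.
        move = random.choices(list(box), weights=box.values())[0]
        self.history.append((state, move))
        return move

    def learn(self, result):
        """Reinforce every (state, move) used this game."""
        delta = {"win": 3, "draw": 1, "loss": -1}[result]
        for state, move in self.history:
            # Unlike the original, floor at one bead so a box never empties.
            self.boxes[state][move] = max(1, self.boxes[state][move] + delta)
        self.history.clear()
```

Over many games, losing lines shed beads while winning lines accumulate them, so good moves become ever more likely to be drawn: the same reinforcement-learning loop as today's systems, with matchboxes in place of a value table.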
https://i1.wp.com/chalkdustmagazine.com/wp-content/uploads/2016/03/img3.jpg
<img src="https://i1.wp.com/chalkdustmagazine.com/wp-content/uploads/2016/03/menace.jpg" width="600" height="300">
<youtube>R9c-_neaxeU</youtube>
== [https://en.wikipedia.org/wiki/Arthur_Samuel Arthur Samuel]: Checkers ...1950s ==
* [https://infolab.stanford.edu/pub/voy/museum/samuel.html Arthur Samuel - heuristics]
* [https://www.wired.com/2007/07/the-game-of-che/ The Game of Checkers: Solved], 2007
<youtube>ipNT1QZV7Ag</youtube>
<youtube>jSVqCsinQLM</youtube>
= More... = | = More... = | ||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
<youtube>HnkVoOdTiSo</youtube> | <youtube>HnkVoOdTiSo</youtube> | ||
<youtube>Q8BX0nXfPjY</youtube> | <youtube>Q8BX0nXfPjY</youtube> | ||
Line 389: | Line 263: | ||
= Books = | = Books = | ||
− | [ | + | [https://www.amazon.com/Invent-Your-Computer-Games-Python/dp/1593277954/ref=tmm_pap_swatch_0 Invent Your Own Computer Games with Python | Al Sweigart] |
https://images-na.ssl-images-amazon.com/images/I/51mpkckeu4L._SX376_BO1,204,203,200_.jpg | https://images-na.ssl-images-amazon.com/images/I/51mpkckeu4L._SX376_BO1,204,203,200_.jpg | ||
− | [ | + | [https://www.amazon.com/Deep-Learning-Game-Max-Pumperla/dp/1617295329 Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson] |
https://images-na.ssl-images-amazon.com/images/I/51LpAeEYhzL._SX397_BO1,204,203,200_.jpg | https://images-na.ssl-images-amazon.com/images/I/51LpAeEYhzL._SX397_BO1,204,203,200_.jpg | ||
− | [ | + | [https://www.amazon.com/Hands-Deep-Learning-Games-reinforcement/dp/1788994078 Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games | Micheal Lanham] |
https://images-na.ssl-images-amazon.com/images/I/517S9nvodoL._SX404_BO1,204,203,200_.jpg | https://images-na.ssl-images-amazon.com/images/I/517S9nvodoL._SX404_BO1,204,203,200_.jpg | ||
− | [ | + | [https://www.amazon.com/Machine-learning-Artificial-Intelligence-Data-ebook/dp/B07V6RQKYX/ref=sr_1_30 Machine learning and Artificial Intelligence 2.0 with Big Data: Building Video Games using Python 3.7 and Pygame | Narendra Mohan Mittal] |
https://images-na.ssl-images-amazon.com/images/I/41eHxTsXXgL.jpg | https://images-na.ssl-images-amazon.com/images/I/41eHxTsXXgL.jpg |
Latest revision as of 09:07, 17 November 2024
Youtube search... ... Quora ...Google search ...Google News ...Bing News
- Gaming ... Game-Based Learning (GBL) ... Security ... Generative AI ... Games - Metaverse ... Quantum ... Game Theory ... Design
- Case Studies
- Development ... Notebooks ... AI Pair Programming ... Codeless ... Hugging Face ... AIOps/MLOps ... AIaaS/MLaaS
- Minecraft: Voyager ... an AI agent powered by a Large Language Model (LLM) that has been introduced to the world of Minecraft
- Python ... GenAI w/ Python ... JavaScript ... GenAI w/ JavaScript ... TensorFlow ... PyTorch
- Roblox ... building tools to allow creators to develop integrated 3D objects that come with behaviour built in.
- Games to Learn JavaScript and CSS
- Games to Learn Python
- Immersive Reality ... Metaverse ... Omniverse ... Transhumanism ... Religion
- Autonomous Drones Racing
- Artificial Intelligence (AI) ... Machine Learning (ML) ... Deep Learning ... Neural Network ... Reinforcement ... Learning Techniques
- Q Learning
- Competitions
- Blockchain
- Bayesian_Game
- Analytics ... Visualization ... Graphical Tools ... Diagrams & Business Analysis ... Requirements ... Loop ... Bayes ... Network Pattern
- GameGAN
- Quantum Chess
- Video/Image ... Vision ... Colorize ... Image/Video Transfer Learning
- Policy ... Policy vs Plan ... Constitutional AI ... Trust Region Policy Optimization (TRPO) ... Policy Gradient (PG) ... Proximal Policy Optimization (PPO)
- Deepindex.org list
- Unity Core Platform
- 101+ Free Python Books | Christian
- AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat
- Inside the LARPs (live-action role-playing games) that let Human Players Experience AI Life | Tasha Robinson
- An introduction to Deep Q-Learning: let’s play Doom
- AI and Games Series; an Informed Overview | Dr Tommy Thompson
- Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan
- Artificial Intelligence in Video Games | Wikipedia
- Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity ...Agents
- This AI Robot Will Beat You at Jenga | Jesus Diaz
- In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism
- You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon ... To play the Jupyter-notebook-based game click...
- Writing Board Game AI Bots – The Good, The Bad, and The Ugly | Tomasz Zielinski - PGS Software
- Intrinsic Algorithm | Dave Mark reducing the world to mathematical equations
- Future AI toys could be smarter than parents, but a lot less protective | Mikaela Cohen - CNBC Evolve
- This AI Resurrects Ancient Board Games—and Lets You Play Them; What tabletop games did our ancestors play in 1000 BC? A new research project wants to find out, and make them playable online too. | Samantha Huioi Yow - Wired ...Digital Ludeme Project; Modelling the Evolution of Traditional Games
- The Generative AI Revolution in Games | James Gwertzman and Jack Soslow - Andreessen Horowitz
- Modeling Games with Markov Chains | Kairo Morton - Towards Data Science ... Exploring Probabilistic Modeling using “Shut the Box”
- Google:
- AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
- Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray
- DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review ...Agent57 | DeepMind ...Agents
- OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind
- How to Use ChatGPT's "My GPT" Bots to Learn Board Games, Create Images, and Much More | Dreamchild Obari - Make Use Of ... Game Time ... Do you have a board game somewhere at home that you don't know how to play? Game Time comes in clutch and can explain cards and board games to you. You can also upload images if you don't know what the game is called but have the instructions or an idea of what it is.
- AI in Gaming | 5 Biggest Innovations (+40 AI Games) | Jeremy DSouza - engati ... benefits, game types, innovations, popular games, & future of AI in gaming
- AI Dungeon 2 ... uses OpenAI's GPT LLM to allow players to engage in text-based adventures where the possibilities are virtually limitless
- Code Combat ... innovative game-based learning technology
- Screeps ... MMO sandbox game for programmers
Contents
- 1 Gaming Evolution
- 1.1 Meta: Diplomacy 2022
- 1.2 NVIDIA: 40 Years on, PAC-MAN ...2020
- 1.3 OpenAI: Hide and Seek ... 2019
- 1.4 Meta: Brown & Sandholm: 6-player Poker ...2019
- 1.5 Google DeepMind AlphaStar: StarCraft II ... 2019
- 1.6 OpenAI: Dota 2 ...2018
- 1.7 Google DeepMind AlphaGo Zero: Go ...2016
- 1.8 Google DeepMind: Atari video games ...2015
- 1.9 IBM: Watson: Jeopardy ...2011
- 1.10 IBM: Deep Blue: Chess ...1997
- 1.11 John Conway: The Game of Life (GoL) ...1970
- 1.12 Donald Waterman: Draw Poker ...1968
- 1.13 Martin Gardner: Hexapawn ...1962
- 1.14 Donald Michie: Noughts and Crosses ...1960
- 1.15 Arthur Samuel: Checkers ...1950s
- 2 More...
- 3 Books
Gaming Evolution
Meta: Diplomacy 2022
- Agents ... Robotic Process Automation ... Assistants ... Personal Companions ... Productivity ... Email ... Negotiation ... LangChain
- Meta's CICERO AI Wins Online Diplomacy Tournament | Anthony Alford - InfoQ ... Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players.
Cicero has demonstrated the ability to play the strategy game Diplomacy at a level that rivals human performance. Cicero can engage in game conversations and negotiations without most human players realizing they are interacting with a machine. During gameplay in an online league, Cicero sent over 5,000 messages to human players, and its identity as an AI remained undetected. Its performance was impressive, ranking in the top 10% of players. The integration of AI into the game of Diplomacy has shown that machines can effectively mimic human negotiation tactics and strategic thinking. Cicero's achievements in Diplomacy are a testament to the potential of AI in complex human interactions. As AI continues to evolve, it will undoubtedly transform the landscape of diplomacy, offering new tools and methods to support diplomatic efforts.
NVIDIA: 40 Years on, PAC-MAN ...2020
- GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
OpenAI: Hide and Seek ... 2019
- Emergent Tool Use from Multi-Agent Interaction | OpenAI
- Emergent Tool Use from Multi-Agent Autocurricula B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch
Meta: Brown & Sandholm: 6-player Poker ...2019
- Occlusions
- Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge
Google DeepMind AlphaStar: StarCraft II ... 2019
OpenAI: Dota 2 ...2018
Google DeepMind AlphaGo Zero: Go ...2016
- AlphaGo Zero: Starting from scratch | DeepMind
- China's 'Sputnik Moment' and the Sino-American Battle for AI Supremacy | Kai-Fu Lee - Asia Society
- Move 37, or how AI can change the world | George Zarkadakis - HuffPost
- Was AlphaGo's Move 37 Inevitable? | Katherine Bailey
- Man beats machine at Go in human victory over AI | Richard Waters - Ars Technica ... Amateur exploited weakness in systems that have otherwise dominated grandmasters.
AlphaGo is a computer program developed by Google DeepMind that uses artificial intelligence (AI) to play the board game Go. In 2016, AlphaGo made history by becoming the first computer program to defeat a professional Go player, Lee Sedol, in a five-game match.
During the second game of the match, AlphaGo made a surprising move, known as Move 37, which stunned the Go community and left Lee Sedol speechless. The move involved placing a stone in an unexpected location on the board, which initially appeared to be a mistake. However, as the game progressed, it became clear that the move was part of a complex strategy that allowed AlphaGo to gain an advantage over Lee Sedol. Move 37 is significant because it demonstrated the power of AlphaGo's AI algorithms and its ability to think creatively and strategically. The move was not based on any known human strategy or prior knowledge of the game, but rather on AlphaGo's own analysis and evaluation of the board position.
What would have happened with human-in-the-loop on Move 37?
The move highlighted the limitations of human intuition and the potential for AI to uncover new insights and strategies in complex domains. If a human expert had been involved in the decision-making process for Move 37, they might have questioned AlphaGo's choice and suggested a more conventional move. This could have prevented AlphaGo from making the unexpected and seemingly risky move that ultimately led to its victory.
Andrew Jackson & Josh Hoak: Minigo ...2018
Minigo is an open-source, unofficial implementation of AlphaGo Zero. Its Reinforcement Learning (RL) approach can be massively parallelized, so Kubernetes is a natural fit, since Kubernetes is all about reducing the overhead of managing applications. However, it can be daunting to wade into Kubernetes and Machine Learning, especially when you add hardware accelerators like GPUs or TPUs! This talk breaks down how you can use Kubernetes and TensorFlow to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on GPUs, TPUs, TensorFlow, KubeFlow, and large-scale Kubernetes Engine clusters. Minigo uses self-play with Monte Carlo Tree Search, refining the policy/value network along the way.
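Full Monte Carlo Tree Search with a policy/value network is beyond a short sketch, but the core idea behind it, evaluating candidate moves by playing games out to the end, can be illustrated with flat Monte Carlo rollouts on a toy game. The sketch below is an illustrative simplification, not Minigo's actual algorithm: it uses tic-tac-toe instead of Go, uniformly random playouts instead of network-guided search, and no tree at all.

```python
import random

# Winning lines on a 3x3 board indexed 0..8; empty squares are ''.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player):
    """Play uniformly random moves to the end; return the winner or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w or all(board):
            return w
        move = random.choice([i for i, c in enumerate(board) if not c])
        board[move] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, n_rollouts=200):
    """Score each legal move by its random-playout win rate, pick the best."""
    scores = {}
    opponent = 'O' if player == 'X' else 'X'
    for move in (i for i, c in enumerate(board) if not c):
        trial = board[:]
        trial[move] = player
        wins = sum(rollout(trial, opponent) == player for _ in range(n_rollouts))
        scores[move] = wins / n_rollouts
    return max(scores, key=scores.get)

random.seed(1)
# X can win immediately at square 2; the rollouts find it reliably.
board = ['X', 'X', '', 'O', 'O', '', '', '', '']
print(best_move(board, 'X'))   # square index 2 completes the top row
```

AlphaZero-style systems replace the random playouts with a learned value estimate and bias move selection with a learned policy, but the evaluate-by-simulation principle is the same.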
Google DeepMind: Atari video games ...2015
IBM: Watson: Jeopardy ...2011
IBM: Deep Blue: Chess ...1997
John Conway: The Game of Life (GoL) ...1970
- Artificial General Intelligence (AGI) to Singularity ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- Game_of_Life
- Life Engine
- MATHEMATICAL GAMES: The fantastic combinations of John Conway's new solitaire game "life" | Martin Gardner - Scientific American 223 (October 1970): 120-123.
- Wikipedia
- Evolving Game of Life: Neural Networks, Chaos, and Complexity | Tom Grek - Medium
The Rules
- For a space that is 'populated':
- Each cell with one or no neighbors dies, as if by solitude.
- Each cell with four or more neighbors dies, as if by overpopulation.
- Each cell with two or three neighbors survives.
- For a space that is 'empty' or 'unpopulated'
- Each cell with three neighbors becomes populated.
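The rules above can be expressed in a few lines of NumPy. This sketch assumes a toroidal (wrap-around) grid, a common simplification rather than part of Conway's original infinite-plane formulation:

```python
import numpy as np

def step(grid):
    """One generation of Conway's Game of Life on a 2D array of 0s and 1s."""
    # Count the eight neighbours of every cell by summing shifted copies
    # (np.roll wraps around, so the board is effectively a torus).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Survival: a live cell with two or three neighbours stays alive.
    # Birth: a dead cell with exactly three neighbours becomes populated.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "blinker": three live cells in a row oscillate with period 2.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
assert (step(step(blinker)) == blinker).all()   # back to the start after 2 steps
```

Note that both death rules (solitude and overpopulation) fall out implicitly: any live cell whose neighbour count is not two or three simply fails the survival test.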
Donald Waterman: Draw Poker ...1968
Martin Gardner: Hexapawn ...1962
A simple game on a 3x3 grid, where each side has 3 chess pawns. The objective is to get a pawn to the other side of the board, or leave the opponent unable to move. Normal chess rules apply except that the pawns are not allowed a double move from their starting position. Not really intended as a two-player game, it was designed to demonstrate an artificial intelligence learning technique by using beads in matchboxes. (Old enough to remember matchboxes?) Twenty-four matchboxes were used to represent the possible moves. Essentially, there were two phases. The first phase was to "teach" the matchbox computer to play the game, then a second phase allowed the matchbox computer to play other opponents. The learning speed depended on the skill of the opponent in the teaching phase. Martin Gardner first published this in his Mathematical Games column in March 1962, and subsequently in his book, "The Unexpected Hanging". Board Game Geek
Donald Michie: Noughts and Crosses ...1960
- Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters | Donald Michie
- Play against the online version of MENACE | Matt Scroggs
- Playing Noughts and Crosses using MENACE | Richard Bowles
MENACE (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, each time refining its strategy until after having played a certain number of games it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning (RL). Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust
MENACE makes a move when the human player randomly picks a bead out of the box that represents the game’s current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, some beads represented only the more obvious moves, such as the side, centre, or corner. The human player chooses the beads at random, just as a neural network’s weights are random at the start. Also like weights, the beads are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the same colour as the one used during each turn are added to their respective boxes. If the game resulted in a draw, one bead is added. How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)
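The bead-and-matchbox update rule is a simple form of tabular reinforcement learning, and it is small enough to simulate. The sketch below is an illustrative reconstruction rather than a faithful copy of Michie's machine: it starts every box with a flat three beads per legal move (Michie varied the initial counts by move number) and trains against a uniformly random opponent.

```python
import random

# Winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# One "matchbox" per position seen, holding a bead count per legal move.
boxes = {}

def beads_for(board):
    key = ''.join(board)
    if key not in boxes:
        boxes[key] = {i: 3 for i, c in enumerate(board) if c == ' '}
    return boxes[key]

def play_game(train=True):
    board = [' '] * 9
    menace_moves = []      # (position key, move) pairs to reinforce later
    player = 'X'           # MENACE plays X; the opponent plays O at random
    while True:
        if player == 'X':
            beads = beads_for(board)
            if sum(beads.values()) == 0:
                result = 'resign'       # box emptied out: MENACE gives up
                break
            # Draw a bead at random, weighted by the bead counts.
            move = random.choices(list(beads), weights=list(beads.values()))[0]
            menace_moves.append((''.join(board), move))
        else:
            move = random.choice([i for i, c in enumerate(board) if c == ' '])
        board[move] = player
        w = winner(board)
        if w or ' ' not in board:
            result = w if w else 'draw'
            break
        player = 'O' if player == 'X' else 'X'
    if train:
        # Reinforcement: +3 beads per move on a win, +1 on a draw,
        # -1 (the used bead is removed) on a loss or resignation.
        delta = {'X': 3, 'draw': 1}.get(result, -1)
        for key, move in menace_moves:
            boxes[key][move] = max(0, boxes[key][move] + delta)
    return result

random.seed(0)
for _ in range(2000):      # training phase against a random opponent
    play_game()
wins = sum(play_game(train=False) == 'X' for _ in range(500))
print(f"MENACE wins {wins} of 500 evaluation games after training")
```

Because winning moves gain beads and losing moves lose them, the weighted draw drifts toward good play over time, which is exactly the "punished for losing, rewarded for winning" dynamic described above.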
Arthur Samuel: Checkers ...1950s

- Arthur Samuel - heuristics
- The Game of Checkers: Solved, 2007
More...
Books
Invent Your Own Computer Games with Python | Al Sweigart
Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson
Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games | Micheal Lanham
Machine learning and Artificial Intelligence 2.0 with Big Data: Building Video Games using Python 3.7 and Pygame | Narendra Mohan Mittal