Gaming
Revision as of 21:35, 13 December 2020
- Case Studies
- Metaverse
- Autonomous Drones Racing
- Reinforcement Learning (RL)
- Q Learning
- Competitions
- Game Theory
- GameGAN
- Quantum Chess
- Deepindex.org list
- Unity Core Platform
- 101+ Free Python Books | Christian
- AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat
- Inside the LARPs (live-action role-playing games) that let Human Players Experience AI Life | Tasha Robinson
- An introduction to Deep Q-Learning: let’s play Doom
- AI and Games Series; an Informed Overview | Dr Tommy Thompson
- Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan
- Artificial Intelligence in Video Games | Wikipedia
- Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity
- Google:
- AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
- Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray
- DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review ...Agent57 | DeepMind
- OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind
- This AI Robot Will Beat You at Jenga | Jesus Diaz
- In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism
- You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon. To play the Jupyter-notebook-based game click...
- AI Dungeon 2
- Writing Board Game AI Bots – The Good, The Bad, and The Ugly | Tomasz Zielinski - PGS Software
- Intrinsic Algorithm | Dave Mark: "reducing the world to mathematical equations"
Contents
- 1 NVIDIA: 40 Years on, PAC-MAN ...2020
- 2 OpenAI: Hide and Seek ... 2019
- 3 Brown & Sandholm: 6-player Poker ...2019
- 4 Google DeepMind AlphaStar: StarCraft II ... 2019
- 5 OpenAI: Dota 2 ...2018
- 6 Google DeepMind AlphaGo Zero: Go ...2016
- 7 Google DeepMind: Atari video games ...2015
- 8 IBM: Watson: Jeopardy ...2011
- 9 IBM: Deep Blue: Chess ...1997
- 10 John Conway: The Game of Life (GoL) ...1970
- 11 Donald Waterman: Draw Poker ...1968
- 12 Donald Michie: Noughts and Crosses ...1960
- 13 Arthur Samuel: Checkers ...1950s
- 14 Cybersecurity - Gaming
- 15 Airport CEO
NVIDIA: 40 Years on, PAC-MAN ...2020
- GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
OpenAI: Hide and Seek ... 2019
- Emergent Tool Use from Multi-Agent Interaction | OpenAI
- Emergent Tool Use from Multi-Agent Autocurricula | B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch
Brown & Sandholm: 6-player Poker ...2019
- Occlusions
- Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge
Google DeepMind AlphaStar: StarCraft II ... 2019
OpenAI: Dota 2 ...2018
Google DeepMind AlphaGo Zero: Go ...2016
- AlphaGo Zero: Starting from scratch | DeepMind
- China's 'Sputnik Moment' and the Sino-American Battle for AI Supremacy | Kai-Fu Lee - Asia Society
- Move 37, or how AI can change the world | George Zarkadakis - HuffPost
- Was AlphaGo's Move 37 Inevitable? | Katherine Bailey
Andrew Jackson & Josh Hoak: Minigo ...2018
Minigo is an open-source, unofficial implementation of AlphaGo Zero. Its Reinforcement Learning (RL) approach can be massively parallelized, so Kubernetes is a natural fit, since Kubernetes is all about reducing the overhead of managing applications. Still, it can be daunting to wade into Kubernetes and Machine Learning together, especially once hardware accelerators such as GPUs or TPUs are added. This talk breaks down how Kubernetes and TensorFlow can be used to create, in relatively few lines of code, a tabula rasa AI that plays the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on GPUs, TPUs, TensorFlow, KubeFlow, and large-scale Kubernetes Engine clusters. Minigo uses self-play with Monte Carlo Tree Search, refining its Policy/Value network along the way.
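The self-play search loop described above can be sketched in miniature. The toy code below (our own illustration, not Minigo's) runs a flat, UCB-guided Monte Carlo search at the root of a tic-tac-toe position, with random rollouts standing in for the learned Policy/Value network that Minigo and AlphaZero use to guide a full tree search:

```python
import math
import random

def legal_moves(board):
    """Indices of empty squares on a 9-cell tic-tac-toe board."""
    return [i for i, c in enumerate(board) if c == " "]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def mcts_move(board, player, n_sim=2000):
    """Pick a root move by UCB1 selection plus random rollouts."""
    stats = {m: [0, 0.0] for m in legal_moves(board)}  # move -> [visits, value]
    for _ in range(n_sim):
        total = sum(v for v, _ in stats.values()) + 1
        # UCB1: unvisited moves first, then exploit mean value + explore bonus
        move = max(stats, key=lambda m: float("inf") if stats[m][0] == 0
                   else stats[m][1] / stats[m][0]
                        + math.sqrt(2 * math.log(total) / stats[m][0]))
        b = board[:]
        b[move] = player
        turn = "O" if player == "X" else "X"
        while winner(b) is None and legal_moves(b):   # random playout to the end
            b[random.choice(legal_moves(b))] = turn
            turn = "O" if turn == "X" else "X"
        w = winner(b)
        stats[move][0] += 1
        stats[move][1] += 1.0 if w == player else 0.5 if w is None else 0.0
    return max(stats, key=lambda m: stats[m][0])      # most-visited move
```

In the AlphaZero family, the random rollout is replaced by a value-network evaluation and the exploration term is biased by the policy network's move priors, which is the "refining the Policy/Value along the way" step.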
Google DeepMind: Atari video games ...2015
IBM: Watson: Jeopardy ...2011
IBM: Deep Blue: Chess ...1997
John Conway: The Game of Life (GoL) ...1970
- Game_of_Life
- MATHEMATICAL GAMES: The fantastic combinations of John Conway's new solitaire game "life" | Martin Gardner - Scientific American 223 (October 1970): 120-123.
- Wikipedia
- Evolving Game of Life: Neural Networks, Chaos, and Complexity | Tom Grek - Medium
The Rules
- For a space that is 'populated':
- Each cell with one or no neighbors dies, as if by solitude.
- Each cell with four or more neighbors dies, as if by overpopulation.
- Each cell with two or three neighbors survives.
- For a space that is 'empty' or 'unpopulated':
- Each cell with three neighbors becomes populated.
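The four rules above translate directly into a single step function. This is a minimal sketch of our own: the grid is a list of 0/1 rows, and cells beyond the edge count as empty.

```python
def life_step(grid):
    """Apply one generation of Conway's rules to a 0/1 grid."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count live cells among the up-to-8 in-bounds neighbours.
        return sum(
            grid[rr][cc]
            for rr in range(max(r - 1, 0), min(r + 2, rows))
            for cc in range(max(c - 1, 0), min(c + 2, cols))
            if (rr, cc) != (r, c)
        )

    return [
        [
            1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))  # survival
            or (not grid[r][c] and live_neighbors(r, c) == 3)     # birth
            else 0                             # solitude or overpopulation
            for c in range(cols)
        ]
        for r in range(rows)
    ]

# A "blinker" oscillates between a horizontal and a vertical bar of three:
blinker = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
```

One step turns the horizontal bar into a vertical one, and a second step restores the original, the simplest oscillator in the game.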
Donald Waterman: Draw Poker ...1968
Donald Michie: Noughts and Crosses ...1960
- Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters | Donald Michie
- Play against the online version of MENACE | Matt Scroggs
- Playing Noughts and Crosses using MENACE | Richard Bowles
MENACE (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, each time refining its strategy until after having played a certain number of games it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning (RL). Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust
MENACE makes a move when the human player randomly picks a bead out of the box that represents the game's current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, beads represented only broad classes of squares, such as side, centre, or corner. The beads are chosen at random, just as a neural network's weights are random at the start, and, like weights, the beads are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box; if MENACE wins, three beads of the colour used on each individual turn are added to their respective boxes; if the game resulted in a draw, one bead is added. How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)
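The matchbox-and-bead scheme above amounts to a few lines of code. Below is a toy MENACE-style learner of our own (an illustrative sketch, not Michie's exact design: real MENACE removes a losing bead entirely and can resign when a box empties, whereas this sketch keeps a floor of one bead for simplicity):

```python
import random
from collections import defaultdict

class Menace:
    """One 'matchbox' of beads per board state; beads weight move choice."""

    def __init__(self, initial_beads=3):
        self.initial_beads = initial_beads
        self.boxes = defaultdict(dict)  # state -> {move: bead count}
        self.history = []               # (state, move) pairs for current game

    def choose(self, state, legal_moves):
        box = self.boxes[state]
        for m in legal_moves:                      # seed the box on first visit
            box.setdefault(m, self.initial_beads)
        moves = list(box)
        # Draw a bead at random: moves with more beads are more likely.
        move = random.choices(moves, weights=[box[m] for m in moves])[0]
        self.history.append((state, move))
        return move

    def learn(self, outcome):
        # Reward/punish scheme from the text: +3 beads per used move on a
        # win, +1 on a draw, -1 on a loss (floored at one bead here).
        delta = {"win": 3, "draw": 1, "loss": -1}[outcome]
        for state, move in self.history:
            self.boxes[state][move] = max(1, self.boxes[state][move] + delta)
        self.history.clear()
```

After a win, every move MENACE played becomes three beads more likely in its box, which is exactly the reinforcement signal described above.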
Hexapawn
Arthur Samuel: Checkers ...1950s
Cybersecurity - Gaming
Airport CEO
More...
Books
Invent Your Own Computer Games with Python | Al Sweigart
Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson