Gaming
Revision as of 07:07, 8 August 2020
- Case Studies
- Deepindex.org list
- Autonomous Drones Racing
- Reinforcement Learning (RL)
- Q Learning
- Competitions
- Game Theory
- GameGAN
- Metaverse
- Unity Core Platform
- 101+ Free Python Books | Christian
- AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat
- Inside the LARP (live-action role-playing game) that let Human Players Experience AI Life | Tasha Robinson
- An introduction to Deep Q-Learning: let’s play Doom
- AI and Games Series; an Informed Overview | Dr Tommy Thompson
- Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan
- Artificial Intelligence in Video Games | Wikipedia
- Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity
- Google:
- AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
- Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray
- DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review ...Agent57 | DeepMind
- OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind
- This AI Robot Will Beat You at Jenga | Jesus Diaz
- In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism
- You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon ...to play the Jupyter Notebook-based game, click AI Dungeon 2
- Intrinsic Algorithm | Dave Mark - "reducing the world to mathematical equations"
NVIDIA: 40 Years on, PAC-MAN ...2020
- GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
OpenAI: Hide and Seek ... 2019
- Emergent Tool Use from Multi-Agent Interaction | OpenAI
- Emergent Tool Use from Multi-Agent Autocurricula | B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch
Brown & Sandholm: 6-player Poker ...2019
- Occlusions
- Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge
Google DeepMind AlphaStar: StarCraft II ... 2019
OpenAI: Dota 2 ...2018
Google DeepMind AlphaGo Zero: Go ...2016
- China's 'Sputnik Moment' and the Sino-American Battle for AI Supremacy | Kai-Fu Lee - Asia Society
- Move 37, or how AI can change the world | George Zarkadakis - HuffPost
- Was AlphaGo's Move 37 Inevitable? | Katherine Bailey
Google DeepMind: Atari video games ...2015
IBM: Watson: Jeopardy ...2011
IBM: Deep Blue: Chess ...1997
John Conway: The Game of Life ...1970
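The Game of Life is a zero-player cellular automaton: on an infinite grid, a live cell with two or three live neighbours survives, a dead cell with exactly three live neighbours becomes live, and every other cell dies or stays dead. A minimal sketch of one generation, representing the world as a set of live coordinates (the function and variable names here are illustrative, not from any particular implementation):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" (a row of three cells) oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Because only live cells and their neighbours are ever examined, this set-based approach handles an unbounded grid without allocating a fixed array.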
Donald Waterman: Draw Poker ...1968
Donald Michie: Noughts and Crosses ...1960
- Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters | Donald Michie
- Play against the online version of MENACE | Matt Scroggs
- Playing Noughts and Crosses using MENACE | Richard Bowles
MENACE (the Machine Educable Noughts And Crosses Engine) "learns" to play Noughts and Crosses by playing the game repeatedly against another player, refining its strategy each time until, after a certain number of games, it plays almost perfectly and its opponent can at best draw. The learning process "punishes" losing and "rewards" drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning (RL). Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust
MENACE makes a move when the human operator randomly picks a bead out of the box that represents the game's current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, beads represented only classes of moves, such as side, centre, or corner. The beads are drawn at random, just as a neural network's weights are random at the start; and, like weights, the bead counts are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the colour used during each turn are added to their respective boxes; if the game is a draw, one bead is added. How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)
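The bead mechanics above can be sketched in a few lines of Python. This is an illustrative simulation rather than Michie's original design: here MENACE always plays first against a random opponent, and as a simplification a move's bead count never drops below one (the physical machine could run out of beads and "resign"):

```python
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

class Menace:
    """Matchbox learner: one 'box' of beads per board state it has seen."""
    def __init__(self):
        self.boxes = defaultdict(dict)   # state -> {move: bead count}
        self.history = []                # (state, move) pairs this game

    def move(self, board):
        state = ''.join(board)
        box = self.boxes[state]
        if not box:                      # new box: 3 beads per legal move
            for i, cell in enumerate(board):
                if cell == ' ':
                    box[i] = 3
        # Draw a bead at random, weighted by how many beads each move has.
        moves, weights = zip(*box.items())
        choice = random.choices(moves, weights)[0]
        self.history.append((state, choice))
        return choice

    def learn(self, result):
        # Reward or punish every bead used this game.
        delta = {'win': 3, 'draw': 1, 'loss': -1}[result]
        for state, mv in self.history:
            self.boxes[state][mv] = max(1, self.boxes[state][mv] + delta)
        self.history.clear()

def play(menace):
    """One game: MENACE is 'X' and moves first; the opponent moves randomly."""
    board = [' '] * 9
    for turn in range(9):
        if turn % 2 == 0:
            board[menace.move(board)] = 'X'
        else:
            empty = [i for i, c in enumerate(board) if c == ' ']
            board[random.choice(empty)] = 'O'
        w = winner(board)
        if w:
            return 'win' if w == 'X' else 'loss'
    return 'draw'

menace = Menace()
for _ in range(2000):
    menace.learn(play(menace))
```

After a couple of thousand games the bead counts in each box are heavily skewed toward moves that led to wins, so losses against the random opponent become rare, which is the matchbox version of a learned policy.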
Hexapawn
Arthur Samuel: Checkers ...1950s
Fun!
Books
Invent Your Own Computer Games with Python | Al Sweigart
Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson