Revision as of 15:58, 5 July 2020
Youtube search... ...Google search
- Case Studies
- Deepindex.org list
- Autonomous Drones Racing
- Reinforcement Learning (RL)
- Deep Q Network (DQN)
- Competitions
- AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat
- Inside the LARPs (live-action role-playing games) that let Human Players Experience AI Life | Tasha Robinson
- An introduction to Deep Q-Learning: let’s play Doom
- AI and Games Series; an Informed Overview | Dr Tommy Thompson
- Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan
- Artificial Intelligence in Video Games | Wikipedia
- Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity
- Google:
- AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
- Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray
- DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review ...Agent57 | DeepMind
- OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind
- This AI Robot Will Beat You at Jenga | Jesus Diaz
- In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism
- You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon ...to play the Jupyter-notebook-based game, click... AI Dungeon 2
Contents
- 1 40 Years on, PAC-MAN Recreated with AI by NVIDIA Researchers | Nvidia ...2020
- 2 Brown & Sandholm: 6-player Poker ...2019
- 3 Google DeepMind AlphaStar: StarCraft II ... 2019
- 4 OpenAI: Dota 2 ...2018
- 5 Google DeepMind AlphaGo Zero: Go ...2016
- 6 Google DeepMind: Atari video games ...2015
- 7 IBM Watson: Jeopardy ...2011
- 8 IBM Deep Blue: Chess ...1997
- 9 Donald Waterman: Draw Poker ...1968
- 10 Donald Michie: MENACE ...1960
- 11 Arthur Samuel: Checkers ...1950s
- 12 Fun!
- 13 Books
40 Years on, PAC-MAN Recreated with AI by NVIDIA Researchers | Nvidia ...2020
- GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
Brown & Sandholm: 6-player Poker ...2019
- Occlusions
- Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge
Google DeepMind AlphaStar: StarCraft II ... 2019
OpenAI: Dota 2 ...2018
Google DeepMind AlphaGo Zero: Go ...2016
Google DeepMind: Atari video games ...2015
IBM Watson: Jeopardy ...2011
IBM Deep Blue: Chess ...1997
Donald Waterman: Draw Poker ...1968
Donald Michie: MENACE ...1960
- How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - ODSC
- Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust
MENACE “learns” to play noughts and crosses by playing the game repeatedly against another player, refining its strategy each time, until after a certain number of games it plays almost perfectly and its opponent can only draw or lose. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning.
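The matchbox scheme above can be sketched in a few lines of Python. This is a minimal illustration, not Michie's exact design: the class and function names are hypothetical, and the bead counts and reward sizes (start with 3 beads per move; +3 for a win, +1 for a draw, -1 for a loss) are illustrative choices, not the original MENACE values.

```python
import random

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

class Menace:
    def __init__(self):
        self.boxes = {}     # one "matchbox" per board state: {move: bead count}
        self.history = []   # (state, move) pairs played in the current game

    def choose(self, board):
        key = ''.join(board)
        box = self.boxes.setdefault(
            key, {i: 3 for i, c in enumerate(board) if c == ' '})
        pool = [m for m, n in box.items() for _ in range(n)]
        if not pool:                       # box emptied by punishment: restock
            for m in box:
                box[m] = 1
            pool = list(box)
        move = random.choice(pool)         # draw a bead at random
        self.history.append((key, move))
        return move

    def reinforce(self, result):
        # "reward" a win (+3 beads) or draw (+1); "punish" a loss (-1)
        delta = {'X': 3, None: 1, 'O': -1}[result]
        for key, move in self.history:
            self.boxes[key][move] = max(0, self.boxes[key][move] + delta)

def random_move(board):
    return random.choice([i for i, c in enumerate(board) if c == ' '])

def play(menace):
    """MENACE plays 'X' and moves first against a random 'O' opponent."""
    board, player = [' '] * 9, 'X'
    menace.history = []
    while True:
        m = menace.choose(board) if player == 'X' else random_move(board)
        board[m] = player
        w = winner(board)
        if w or ' ' not in board:
            return w                       # 'X', 'O', or None for a draw
        player = 'O' if player == 'X' else 'X'

random.seed(0)
fresh = Menace()                           # untrained baseline: uniform beads
fresh_score = sum(play(fresh) != 'O' for _ in range(500))

trained = Menace()
for _ in range(3000):                      # train: play a game, then reinforce
    trained.reinforce(play(trained))
trained_score = sum(play(trained) != 'O' for _ in range(500))
print(f"games not lost: untrained {fresh_score}/500, trained {trained_score}/500")
```

After a few thousand reinforced games, the trained player loses far less often than the untrained one: bead counts for losing moves shrink toward zero while winning moves accumulate beads, which is the same credit-assignment idea that modern reinforcement learning formalizes.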
Arthur Samuel: Checkers ...1950s
Fun!
Books
Invent Your Own Computer Games with Python | Al Sweigart
Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson