Gaming
Revision as of 15:58, 5 July 2020



[Image: Expanding the Gamescape | Google, 2019]


40 Years on, PAC-MAN Recreated with AI by NVIDIA Researchers | Nvidia ...2020

  • GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.

Brown & Sandholm: 6-player Poker ...2019

Google DeepMind AlphaStar: StarCraft II ... 2019

OpenAI: Dota 2 ...2018

Google DeepMind AlphaGo Zero: Go ...2016

Google DeepMind: Atari video games ...2015

IBM Watson: Jeopardy ...2011

IBM Deep Blue: Chess ...1997

Donald Waterman: Draw Poker ...1968

Donald Michie: MENACE ...1960

  • How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - ODSC: http://medium.com/@ODSC/how-300-matchboxes-learned-to-play-tic-tac-toe-using-menace-35e0e4c29fc

  • Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust: http://chalkdustmagazine.com/features/menace-machine-educable-noughts-crosses-engine/

MENACE "learns" to play noughts and crosses by playing the game repeatedly against another player, refining its strategy after each game until, after enough games, it plays almost perfectly and its opponent can at best draw. Losing moves are "punished" and drawing or winning moves are "rewarded", in much the same way that a child learns. This type of machine learning is called Reinforcement Learning.

[Image: http://i1.wp.com/chalkdustmagazine.com/wp-content/uploads/2016/03/img3.jpg]

<youtube>ipNT1QZV7Ag</youtube>

<youtube>jSVqCsinQLM</youtube>
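The matchbox scheme described above can be sketched in a few lines of Python. This is a hypothetical minimal reconstruction, not Michie's exact bead counts: each board state gets a "matchbox" holding bead counts for its legal moves, moves are drawn in proportion to beads, and after each game the beads for the moves played are topped up (win or draw) or removed (loss).

```python
import random

def winner(b):
    """Return 'X', 'O', or None for a 3x3 board given as a 9-char string."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

class Menace:
    """MENACE-style learner: one 'matchbox' of bead counts per board state."""
    def __init__(self):
        self.boxes = {}  # board string -> {move index: bead count}

    def move(self, board):
        box = self.boxes.setdefault(
            board, {i: 3 for i, c in enumerate(board) if c == ' '})
        beads = [m for m, n in box.items() for _ in range(n)]
        if not beads:  # box emptied by repeated punishment: refill it
            box.update({i: 1 for i, c in enumerate(board) if c == ' '})
            beads = list(box)
        return random.choice(beads)  # sample proportionally to bead counts

    def learn(self, history, delta):
        # Reward (positive delta) or punish (negative) every move played.
        for board, m in history:
            self.boxes[board][m] = max(0, self.boxes[board][m] + delta)

def play(menace):
    """MENACE plays X against a random O player; returns 'X', 'O', or 'draw'."""
    board, history = ' ' * 9, []
    for turn in range(9):
        if turn % 2 == 0:  # MENACE's turn
            m = menace.move(board)
            history.append((board, m))
            board = board[:m] + 'X' + board[m+1:]
        else:              # random opponent
            m = random.choice([i for i, c in enumerate(board) if c == ' '])
            board = board[:m] + 'O' + board[m+1:]
        w = winner(board)
        if w:
            menace.learn(history, +3 if w == 'X' else -1)
            return w
    menace.learn(history, +1)  # a draw earns a small reward too
    return 'draw'
```

Trained against a random opponent for a few thousand games, the learner's win rate climbs well above the roughly 58% that a randomly playing first player achieves, illustrating the reward/punish loop in miniature.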

Arthur Samuel: Checkers ...1950s

Fun!

Books

Invent Your Own Computer Games with Python | Al Sweigart


Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson


Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games | Micheal Lanham


Machine learning and Artificial Intelligence 2.0 with Big Data: Building Video Games using Python 3.7 and Pygame | Narendra Mohan Mittal
