
NVIDIA: 40 Years on, PAC-MAN ...2020

  • GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.

OpenAI: Hide and Seek ... 2019

Brown & Sandholm: 6-player Poker ...2019

Google DeepMind AlphaStar: StarCraft II ... 2019

OpenAI: Dota 2 ...2018

Google DeepMind AlphaGo Zero: Go ...2016

Andrew Jackson & Josh Hoak: Minigo ...2018

Minigo is an open-source, unofficial implementation of AlphaGo Zero. Reinforcement Learning (RL) approaches can be massively parallelized, so Kubernetes is a natural fit, as Kubernetes is all about reducing the overhead of managing applications. However, it can be daunting to wade into Kubernetes and machine learning, especially when you add in hardware accelerators like GPUs or TPUs! This talk breaks down how you can use Kubernetes and TensorFlow to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on GPUs, TPUs, TensorFlow, Kubeflow, and large-scale Kubernetes Engine clusters. Minigo uses self-play with Monte Carlo Tree Search, refining the policy/value network along the way.
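The self-play loop above rests on Monte Carlo Tree Search. A minimal UCT-style sketch for noughts and crosses is below; it uses random rollouts where AlphaZero/Minigo would instead query a learned policy/value network. All names, the 3x3 string board encoding, and the simulation count are illustrative assumptions, not Minigo's actual code.

```python
import math
import random

# Winning triples on a 3x3 board encoded as a 9-character string of "X", "O", ".".
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, s in enumerate(board) if s == "."]

def play(board, i, player):
    return board[:i] + player + board[i + 1:]

def other(player):
    return "O" if player == "X" else "X"

class Node:
    def __init__(self, board, player):
        self.board, self.player = board, player   # player = side to move here
        self.children = {}                        # move -> Node
        self.visits, self.wins = 0, 0.0           # wins from the parent's view

def score(result, player):
    return 1.0 if result == player else 0.5 if result is None else 0.0

def ucb(parent_visits, child):
    # UCB1: average reward plus an exploration bonus.
    return (child.wins / child.visits
            + math.sqrt(2 * math.log(parent_visits) / child.visits))

def rollout(board, player):
    # Play random moves to the end of the game; AlphaZero replaces this
    # with a value-network evaluation.
    while winner(board) is None and moves(board):
        board = play(board, random.choice(moves(board)), player)
        player = other(player)
    return winner(board)

def simulate(node):
    node.visits += 1
    if winner(node.board) is not None or not moves(node.board):
        return winner(node.board)                 # terminal position
    untried = [m for m in moves(node.board) if m not in node.children]
    if untried:                                   # expansion, then rollout
        m = random.choice(untried)
        child = Node(play(node.board, m, node.player), other(node.player))
        node.children[m] = child
        child.visits += 1
        result = rollout(child.board, child.player)
    else:                                         # selection by UCB1
        child = max(node.children.values(),
                    key=lambda c: ucb(node.visits, c))
        result = simulate(child)
    child.wins += score(result, node.player)      # backpropagation
    return result

def best_move(board, player, n_sim=500):
    root = Node(board, player)
    for _ in range(n_sim):
        simulate(root)
    # The most-visited child is the recommended move.
    return max(root.children, key=lambda m: root.children[m].visits)

random.seed(0)                # deterministic for this demo
# X to move with two in a row across the top; the winning square is index 2.
best = best_move("XX.OO....", "X")
```

In the full AlphaZero setup, the visit counts gathered here become the training target for the policy head, which is what "refining the policy/value along the way" refers to.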

Google DeepMind: Atari video games ...2015

IBM: Watson: Jeopardy ...2011

IBM: Deep Blue: Chess ...1997

John Conway: The Game of Life (GoL) ...1970


The Rules

  • For a space that is 'populated':
    • Each cell with one or no neighbors dies, as if by solitude.
    • Each cell with four or more neighbors dies, as if by overpopulation.
    • Each cell with two or three neighbors survives.
  • For a space that is 'empty' or 'unpopulated':
    • Each cell with three neighbors becomes populated.
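The rules above can be sketched directly as code. This is a minimal illustration (function and variable names are my own), stepping a small grid that wraps at the edges:

```python
# A minimal sketch of Conway's Game of Life on a small toroidal grid.
# Cells are 1 (populated) or 0 (empty).

def step(grid):
    """Apply one generation of the Game of Life rules."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping at the edges.
            n = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c]:
                nxt[r][c] = 1 if n in (2, 3) else 0   # survival rule
            else:
                nxt[r][c] = 1 if n == 3 else 0        # birth rule
    return nxt

# A "blinker" oscillates between a vertical and a horizontal bar.
blinker = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
after = step(blinker)
```

Running `step` twice on the blinker returns the original grid, since the blinker has period two.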

Donald Waterman: Draw Poker ...1968

Donald Michie: Noughts and Crosses ...1960

MENACE (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, refining its strategy each time until, after a certain number of games, it plays almost perfectly and its opponent can only draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning (RL). Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust

MENACE makes a move when the human player randomly draws a bead out of the box that represents the game’s current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, beads represented only broad classes of move, such as side, centre, or corner. The beads are drawn at random, just as a neural network’s weights are random at the start; and, like weights, the beads are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the same colour as the one used during each individual turn are added to their respective boxes; if the game results in a draw, one bead is added. How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)
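The matchbox-and-bead scheme translates almost line for line into code. The sketch below is illustrative, not Michie's original: class and method names are my own, and one simplification is labelled in a comment (a box is never allowed to empty completely, whereas physical MENACE could run out of beads).

```python
import random
from collections import defaultdict

# A minimal sketch of MENACE-style bead reinforcement (names are illustrative).
# Each board state is a "matchbox" holding beads, one bead colour per legal move.

class Menace:
    def __init__(self, initial_beads=4):
        self.boxes = defaultdict(dict)    # state -> {move: bead count}
        self.initial = initial_beads
        self.history = []                 # (state, move) pairs drawn this game

    def choose(self, state, legal_moves):
        box = self.boxes[state]
        for m in legal_moves:             # seed the box on its first visit
            box.setdefault(m, self.initial)
        # Draw a bead at random, weighted by how many of each colour remain.
        move = random.choices(list(box), weights=list(box.values()))[0]
        self.history.append((state, move))
        return move

    def learn(self, result):
        """Reinforce every bead drawn this game: +3 win, +1 draw, -1 loss."""
        delta = {"win": 3, "draw": 1, "loss": -1}[result]
        for state, move in self.history:
            # Simplification: keep at least one bead so a box never empties.
            self.boxes[state][move] = max(1, self.boxes[state][move] + delta)
        self.history.clear()

random.seed(1)
m = Menace()
mv = m.choose("XX.OO....", [2, 5, 6, 7, 8])   # MENACE picks a move for X
m.learn("win")                                 # reward the bead that was drawn
```

After the `learn("win")` call, the drawn bead's count rises from 4 to 7, so that move becomes more likely the next time the same state comes up; this is the whole learning mechanism.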



Arthur Samuel: Checkers ...1950s

Cybersecurity - Gaming


CSIAC Podcast - Hypergaming for Cyber - Strategy for Gaming a Wicked Problem
Cyber as a domain and battlespace coincides with the defined attributes of a “wicked problem”, with complexity and inter-domain interactions to spare. Since its elevation to domain status, cyber has continued to defy many attempts to explain its reach, importance, and fundamental definition. Corresponding to these intricacies, cyber also presents many interlaced attributes with other information-related capabilities (IRCs), namely electromagnetic warfare (EW), information operations (IO), and intelligence, surveillance, and reconnaissance (ISR), within an information warfare (IW) construct that adds to its multifaceted nature. In this cyber analysis, the concept of hypergaming is defined and discussed in reference to its potential as a way to examine cyber as a discipline and domain, and to explore how hypergaming can address cyber’s “wicked” nature from the perspectives of decision making, modeling, operational research (OR), IO, and finally IW. Finally, a cyber-centric hypergame model (CHM) is presented.

Live Project Ares Walk Through
If you are interested in playing Project Ares, please fill out this form - Project Ares Gamified Cyber Security Training from Circadence

CyberStart Game - Video1
How to quickly get up and running with the CyberStart game. This video includes an overview of the game intro, the basic layout, and the Field Manual.

TryHackMe - Beginner Learning Path
ActualTom, broadcast live on Twitch -- watch live at

Airport CEO


S1:E1 Airport CEO - Extreme Difficulty - An Aggressive Start
In this episode, we kick off a new series playing Airport CEO on extreme difficulty, showcasing a very aggressive start in which we embrace debt and expand rapidly. Airport CEO is a city-builder / tycoon game in which the player acts as CEO of an airport.

BETTER Baggage Security! | Airport CEO
Come fly with me..



Invent Your Own Computer Games with Python | Al Sweigart


Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson


Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games | Micheal Lanham


Machine learning and Artificial Intelligence 2.0 with Big Data: Building Video Games using Python 3.7 and Pygame | Narendra Mohan Mittal