  
 
= More... =
 
== <span id="Wordle"></span>Wordle ==
 
* [https://www.powerlanguage.co.uk/wordle/ Wordle]
 
* [[JavaScript#Wordle |JavaScript ~ Wordle]]
 
 
{|<!-- T -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>fRed0Xmc2Wg</youtube>
 
<b>Oh, wait, actually the best Wordle opener is not “crane”
 
</b><br>Following up on the [https://youtu.be/v68zYyaEmEA Wordle solver], discussing a minor bug and giving more details about how the best first word was chosen.
 
|}
 
|<!-- M -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>R_9qLkVim4s</youtube>
 
<b>Wordle Champion - Coding Perfect Wordle AI
 
</b><br>Hey, have you heard of Wordle yet? Stop lying - I know you have! And while it is a super fun game for humans to play, I wanted to let computers in on the action. In this video, I explain and implement a Wordle algorithm in Python which is ALWAYS able to find the secret word in 6 tries or fewer (4 tries on average).
 
 
I test the Wordle bot not only on the official Wordle website (https://www.powerlanguage.co.uk/wordle/) but also on another website that lets you play many times a day (https://octokatherine.github.io/word-...).
 
 
I even enter a Wordle Bot Competition and place in the top 10 (https://botfights.io/game/wordle).
 
 
So what is the bot's Wordle strategy? Well, we consider every possible guessing word and every possible answer. Every guess yields information (green for a correct letter in the correct location, yellow for a correct letter in the wrong location, and gray for an incorrect letter). We choose the guess that helps us narrow down the answer list the most; specifically, the guess that guarantees the most narrowing down in the worst case, a minimax criterion. I explain this algorithm in detail in the video. (A sketch of this selection rule appears after the table below.)
 
 
After that, we get the list of all possible guess and answer words from this subreddit (https://www.reddit.com/r/wordle/comme...). Then, it is just a matter of Python implementation.
 
 
The solution is 100% accurate and typically takes under 3 seconds to solve an entire puzzle. When I run it on the websites mentioned above, it is clear that it excels at its job!
 
 
Do you know how my Wordle AI can be improved? Or at least sped up? Let me know in the comments and maybe we can crack the top 5 in the botfights competition to become true Wordle hackers!
 
 
If you watched this video, you probably like code!
 
Here is the code for my wordle solver: https://github.com/techtribeyt/Wordle
 
|}
 
|}<!-- B -->
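
The minimax guess selection described in the video above is easy to sketch in Python. This is a minimal illustration under assumed word lists; the function names and structure are illustrative, not the code from the linked repository:

<syntaxhighlight lang="python">
from collections import Counter

def feedback(guess: str, answer: str) -> tuple:
    """Wordle feedback: 2 = green, 1 = yellow, 0 = gray."""
    result = [0] * 5
    unmatched = Counter()
    # First pass: mark greens and count unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = 2
        else:
            unmatched[a] += 1
    # Second pass: mark yellows, consuming unmatched letters (handles duplicates).
    for i, g in enumerate(guess):
        if result[i] == 0 and unmatched[g] > 0:
            result[i] = 1
            unmatched[g] -= 1
    return tuple(result)

def best_guess(guesses: list, answers: list) -> str:
    """Pick the guess whose worst-case feedback bucket is smallest,
    i.e. the guess that guarantees the most narrowing down."""
    def worst_case(guess):
        buckets = Counter(feedback(guess, a) for a in answers)
        return max(buckets.values())
    return min(guesses, key=worst_case)

# After each real guess, keep only the answers consistent with what was seen:
# answers = [a for a in answers if feedback(guess, a) == observed]
</syntaxhighlight>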
 
 
== <span id="Airport CEO"></span>Airport CEO ==
 
[https://www.youtube.com/results?search_query=Airport+CEO Youtube search...]
 
[https://www.google.com/search?q=Airport+CEO ...Google search]
 
 
* [https://www.airportceo.com/ Airport CEO]
 
* [[Screening; Passenger, Luggage, & Cargo]]
 
* [[Metaverse]]
 
** [[Metaverse#Flight Simulator 2020| Flight Simulator 2020]]
 
 
{|<!-- T -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>XoHA907Pcdo</youtube>
 
<b>S1:E1 Airport CEO - Extreme Difficulty - An Aggressive Start
 
</b><br>In this episode, we kick off a new series playing Airport CEO on extreme difficulty and showcasing a very aggressive start where we embrace debt and expand rapidly.
 
Airport CEO is a city-builder / tycoon game in which the player acts as the CEO of an airport.
 
|}
 
|<!-- M -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>G0m0yM40qDA</youtube>
 
<b>BETTER Baggage Security! | Airport CEO
 
</b><br>Come fly with me...
 
|}
 
|}<!-- B -->
 
  
 
= Other Videos =





Gaming Evolution

Meta: Cicero: Diplomacy ...2022

Meta's Cicero has demonstrated the ability to play the strategy game Diplomacy at a level that rivals human performance. Cicero can engage in in-game conversations and negotiations without most human players realizing they are interacting with a machine. Playing in an online league, Cicero sent over 5,000 messages to human players, and its identity as an AI went undetected; its performance was impressive, ranking in the top 10% of players. Cicero's achievements show that machines can effectively mimic human negotiation tactics and strategic thinking, and they are a testament to the potential of AI in complex human interactions. As AI continues to evolve, it may well transform the landscape of diplomacy, offering new tools and methods to support diplomatic efforts.

NVIDIA: 40 Years on, PAC-MAN ...2020

  • GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.

OpenAI: Hide and Seek ...2019

Meta: Brown & Sandholm: 6-player Poker ...2019

  • Occlusions
  • Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge

Google DeepMind AlphaStar: StarCraft II ...2019

OpenAI: Dota 2 ...2018

Google DeepMind AlphaGo: Go ...2016

AlphaGo is a computer program developed by Google DeepMind that uses artificial intelligence (AI) to play the board game Go. In 2016, AlphaGo made history by becoming the first computer program to defeat a professional Go player, Lee Sedol, in a five-game match.

During the second game of the match, AlphaGo made a surprising move, known as Move 37, which stunned the Go community and left Lee Sedol speechless. The move involved placing a stone in an unexpected location on the board, which initially appeared to be a mistake. However, as the game progressed, it became clear that the move was part of a complex strategy that allowed AlphaGo to gain an advantage over Lee Sedol. Move 37 is significant because it demonstrated the power of AlphaGo's AI algorithms and its ability to think creatively and strategically. The move was not based on any known human strategy or prior knowledge of the game, but rather on AlphaGo's own analysis and evaluation of the board position.



What would have happened with human-in-the-loop on Move 37?



The move highlighted the limitations of human intuition and the potential for AI to uncover new insights and strategies in complex domains. If a human expert had been involved in the decision-making process for Move 37, they might have questioned AlphaGo's choice and suggested a more conventional move. This could have prevented AlphaGo from making the unexpected and seemingly risky move that ultimately led to its victory.


Andrew Jackson & Josh Hoak: Minigo ...2018

Minigo is an open-source, unofficial implementation of AlphaGo Zero. Reinforcement Learning (RL) approaches can be massively parallelized, so Kubernetes is a natural fit, since Kubernetes is all about reducing the overhead of managing applications. However, it can be daunting to wade into Kubernetes and machine learning, especially when you add hardware accelerators like GPUs or TPUs! This talk breaks down how you can use Kubernetes and TensorFlow to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on GPUs, TPUs, TensorFlow, Kubeflow, and large-scale Kubernetes Engine clusters. Minigo uses self-play with Monte Carlo Tree Search, refining the policy/value network along the way; a toy version of that self-play loop is sketched below.
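
The self-play/refine loop can be shown in miniature. The toy below is runnable but deliberately simplified: AlphaZero and Minigo pair Monte Carlo Tree Search with a deep policy/value network, whereas this sketch collapses both into a tabular value function learned from self-play on tic-tac-toe. All names here are illustrative, not Minigo's API.

<syntaxhighlight lang="python">
# Toy self-play value learning on tic-tac-toe. AlphaZero/Minigo use MCTS
# plus a deep policy/value network; here both are replaced by a lookup
# table updated toward each game's outcome (a Monte Carlo backup).
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play(values, eps=0.1, alpha=0.5):
    """Play one game against itself, then refine the value table."""
    board, history, player = [' '] * 9, [], 'X'
    while True:
        moves = [i for i, v in enumerate(board) if v == ' ']
        if random.random() < eps:          # explore occasionally
            move = random.choice(moves)
        else:                              # otherwise exploit learned values
            def value_after(m):
                nxt = board[:]
                nxt[m] = player
                return values[(tuple(nxt), player)]
            move = max(moves, key=value_after)
        board[move] = player
        history.append((tuple(board), player))
        win = winner(board)
        if win or ' ' not in board:
            break
        player = 'O' if player == 'X' else 'X'
    # Monte Carlo backup: nudge each visited state toward the final outcome.
    for state, p in history:
        target = 0.0 if win is None else (1.0 if p == win else -1.0)
        values[(state, p)] += alpha * (target - values[(state, p)])

values = defaultdict(float)
for _ in range(20000):   # self-play, refining the value table along the way
    self_play(values)
</syntaxhighlight>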

Google DeepMind: Atari video games ...2015

IBM: Watson: Jeopardy ...2011

IBM: Deep Blue: Chess ...1997

John Conway: The Game of Life (GoL) ...1970

(Image: Gosper's glider gun, a Game of Life pattern)

The Rules

  • For a space that is 'populated':
    • Each cell with one or no neighbors dies, as if by solitude.
    • Each cell with four or more neighbors dies, as if by overpopulation.
    • Each cell with two or three neighbors survives.
  • For a space that is 'empty' or 'unpopulated':
    • Each cell with exactly three neighbors becomes populated.

A minimal implementation of these rules is sketched below.
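
The following sketch encodes one step of the rules above. The board representation (a set of live cells on an unbounded grid) is an assumption chosen for brevity:

<syntaxhighlight lang="python">
from collections import Counter

def step(live: set) -> set:
    """Apply Conway's rules once and return the next generation."""
    # Count how many live neighbors each candidate cell has.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        # Birth: exactly three neighbors; survival: a live cell with 2 or 3.
        if n == 3 or (n == 2 and cell in live)
    }

# Example: a "blinker" oscillates between horizontal and vertical.
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(blinker))  # {(0, 1), (1, 1), (2, 1)}
</syntaxhighlight>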

Donald Waterman: Draw Poker ...1968

Martin Gardner: Hexapawn ...1962

A simple game on a 3x3 grid, where each side has 3 chess pawns. The objective is to get a pawn to the other side of the board, or leave the opponent unable to move. Normal chess rules apply except that the pawns are not allowed a double move from their starting position. Not really intended as a two-player game, it was designed to demonstrate an artificial intelligence learning technique by using beads in matchboxes. (Old enough to remember matchboxes?) Twenty-four matchboxes were used to represent the possible moves. Essentially, there were two phases. The first phase was to "teach" the matchbox computer to play the game, then a second phase allowed the matchbox computer to play other opponents. The learning speed depended on the skill of the opponent in the teaching phase. Martin Gardner first published this in his Mathematical Games column in March 1962, and subsequently in his book, "The Unexpected Hanging". Board Game Geek

Donald Michie: Noughts and Crosses ...1960

MENACE (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, each time refining its strategy until after having played a certain number of games it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called Reinforcement Learning (RL). Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust

MENACE makes a move when the human player randomly picks a bead out of the box that represents the game's current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, boxes held beads representing only classes of moves, such as side, centre, or corner. The beads are chosen at random, just as a neural network's weights are random at the start; and like weights, the beads are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the colour used at each individual turn are added to their respective boxes. If the game resulted in a draw, one bead is added. This update rule is sketched below. How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)
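
A minimal sketch of MENACE's bead-update rule. The box and move representations here are assumptions for illustration, not the original matchbox layout:

<syntaxhighlight lang="python">
import random
from collections import defaultdict

# One "matchbox" per board state: a list of beads, one bead per legal move.
boxes = defaultdict(list)

def menace_move(state, legal_moves):
    """Pick a bead at random from the box for this state."""
    box = boxes[state]
    if not box:                      # first visit: seed with every legal move
        box.extend(legal_moves * 2)  # e.g. two beads per move
    return random.choice(box)

def reinforce(history, outcome):
    """history: [(state, move)] pairs played; outcome: 'win', 'draw', 'loss'."""
    for state, move in history:
        box = boxes[state]
        if outcome == 'win':
            box.extend([move] * 3)   # reward: add three beads of that colour
        elif outcome == 'draw':
            box.append(move)         # small reward: add one bead
        elif outcome == 'loss' and move in box:
            box.remove(move)         # punishment: remove the bead that was used
</syntaxhighlight>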





Arthur Samuel: Checkers ...1950s


Books

Invent Your Own Computer Games with Python | Al Sweigart


Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson


Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games | Micheal Lanham


Machine learning and Artificial Intelligence 2.0 with Big Data: Building Video Games using Python 3.7 and Pygame | Narendra Mohan Mittal
