Gaming
 
{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, Prompt Engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
 
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-4GCWLBVJ7T');
</script>
}}
[https://www.youtube.com/results?search_query=game+gaming+artificial+intelligence+ai Youtube search...]
[https://www.quora.com/search?q=AI%20game%20gaming ... Quora]
[https://www.google.com/search?q=game+gaming+artificial+intelligence+ai ...Google search]
[https://news.google.com/search?q=game+gaming+artificial+intelligence+ai ...Google News]
[https://www.bing.com/news/search?q=game+gaming+artificial+intelligence+ai&qft=interval%3d%228%22 ...Bing News]
* [[Gaming]] ... [[Game-Based Learning (GBL)]] ... [[Games - Security|Security]] ... [[Game Development with Generative AI|Generative AI]] ... [[Metaverse#Games - Metaverse|Games - Metaverse]] ... [[Games - Quantum Theme|Quantum]] ... [[Game Theory]] ... [[Game Design | Design]]
 
* [[Case Studies]]
** [[Sports]]
** [[Toys]]
** [[Education]]
* [[Development]] ... [[Notebooks]] ... [[Development#AI Pair Programming Tools|AI Pair Programming]] ... [[Codeless Options, Code Generators, Drag n' Drop|Codeless]] ... [[Hugging Face]] ... [[Algorithm Administration#AIOps/MLOps|AIOps/MLOps]] ... [[Platforms: AI/Machine Learning as a Service (AIaaS/MLaaS)|AIaaS/MLaaS]]
* [[Minecraft]]: [[Minecraft#Voyager|Voyager]] ... an AI agent powered by a [[Large Language Model (LLM)]] that has been introduced to the world of [[Minecraft]]
* [[Python]] ... [[Generative AI with Python|GenAI w/ Python]] ... [[JavaScript]] ... [[Generative AI with JavaScript|GenAI w/ JavaScript]] ... [[TensorFlow]] ... [[PyTorch]]
** [[Game Development with Generative AI#Roblox | Roblox]] ... building tools to allow creators to develop integrated 3D objects that come with behaviour built in.
** [[JavaScript#Games_to_Learn|Games to Learn JavaScript and CSS]]
** [[Python#Games_to_Learn_Python | Games to Learn Python]]
* [[Immersive Reality]] ... [[Metaverse]] ... [[Omniverse]] ... [[Transhumanism]] ... [[Religion]]
 
** [[Metaverse#Flight Simulator 2020| Flight Simulator 2020]]
** [[Metaverse#Fortnite| Fortnite]]
* [[Autonomous Drones]] Racing
* [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
 
* [[Q Learning]]
** [[Deep Q Network (DQN)]]
* [[Competitions]]
* [[Blockchain]]
* [[Bayes#Bayesian_Game|Bayesian_Game]]
* [[Analytics]] ... [[Visualization]] ... [[Graphical Tools for Modeling AI Components|Graphical Tools]] ... [[Diagrams for Business Analysis|Diagrams]] & [[Generative AI for Business Analysis|Business Analysis]] ... [[Requirements Management|Requirements]] ... [[Loop]] ... [[Bayes]] ... [[Network Pattern]]
 
* [[GameGAN]]
* [[Quantum#Quantum Chess|Quantum Chess]]
* [[Video/Image]] ... [[Vision]] ... [[Colorize]] ... [[Image/Video Transfer Learning]]
* [[Policy]] ... [[Policy vs Plan]] ... [[Constitutional AI]] ... [[Trust Region Policy Optimization (TRPO)]] ... [[Policy Gradient (PG)]] ... [[Proximal Policy Optimization (PPO)]]
* [https://deepindex.org/#Games Deepindex.org list]
* [https://unity.com/solutions/game Unity] Core Platform
* [https://blog.finxter.com/free-python-books/ 101+ Free Python Books | Christian]
** [https://inventwithpython.com/inventwithpython_3rd.pdf Making Games with Python & Pygame 3rd Edition 2015 | Al Sweigart - Invent with Python] - 11 games
* [https://venturebeat.com/2019/05/09/ai-is-becoming-esports-secret-weapon/ AI is becoming esports’ secret weapon | Berk Ozer - VentureBeat]
* [https://www.theverge.com/2019/2/1/18185945/live-action-roleplaying-larp-game-design-artificial-intelligence-ethics-issues Inside the LARPs (live-action role-playing games) that let Human Players Experience AI Life | Tasha Robinson]
* [https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8 An introduction to Deep Q-Learning: let’s play Doom]
* [https://www.youtube.com/user/tthompso AI and Games Series; an Informed Overview | Dr Tommy Thompson]
* [https://www.amazon.com/gp/product/9056918184 Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI | M. Sadler and N. Regan]
* [https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games Artificial Intelligence in Video Games | Wikipedia]
* [https://blogs.unity3d.com/2017/12/11/using-machine-learning-agents-in-a-real-game-a-beginners-guide/ Using Machine Learning Agents Toolkit in a real game: a beginner’s guide | Alessia Nigretti - Unity] ...[[Agents]]
* [https://www.tomsguide.com/us/mit-jenga-robot,news-29290.html This AI Robot Will Beat You at Jenga | Jesus Diaz]
* [https://futurism.com/the-byte/browser-game-opponents-neural-networks In This Browser Game, Your Opponents Are Neural Networks | Dan Robitzski - Futurism]
* [https://www.polygon.com/2019/12/6/20998745/ai-dungeon-2-text-adventure-openai-how-to-play-nick-walton You can do nearly anything you want in this incredible AI-powered game | Patricia Hernandez - Polygon]
* [https://www.pgs-soft.com/blog/writing-board-game-ai-bots-the-good-the-bad-and-the-ugly/ Writing Board Game AI Bots – The Good, The Bad, and The Ugly | Tomasz Zielinski - PGS Software]
* [https://www.intrinsicalgorithm.com/media.php Intrinsic Algorithm | Dave Mark] reducing the world to mathematical equations
* [https://www.cnbc.com/2021/07/11/future-ai-toys-may-be-smarter-than-parents-and-less-protective.html Future AI toys could be smarter than parents, but a lot less protective | Mikaela Cohen - CNBC Evolve]
* [https://www.wired.com/story/this-ai-resurrects-ancient-board-games-lets-you-play-them/ This AI Resurrects Ancient Board Games—and Lets You Play Them; What tabletop games did our ancestors play in 1000 BC? A new research project wants to find out, and make them playable online too. | Samantha Huioi Yow - Wired] ...[https://ludeme.eu/ Digital Ludeme Project; Modelling the Evolution of Traditional Games]
* [https://a16z.com/2022/11/17/the-generative-ai-revolution-in-games/ The Generative AI Revolution in Games | James Gwertzman and Jack Soslow - Andreessen Horowitz]
* [https://towardsdatascience.com/modeling-games-with-markov-chains-c7b614731a7f Modeling Games with Markov Chains | Kairo Morton - Towards Data Science] ... exploring probabilistic modeling using “Shut the Box”
* [[Google]]:
** [https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii AlphaStar: Mastering the Real-Time Strategy Game StarCraft II]
** [https://www.zdnet.com/article/googles-ai-surfs-the-gamescape-to-conquer-game-theory/ Google’s AI surfs the “gamescape” to conquer game theory | Tiernan Ray]
** [https://www.technologyreview.com/f/615429/deepminds-ai-57-atari-games-but-its-still-not-versatile-enough/ DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough | MIT Technology Review] ...[https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark Agent57 | DeepMind] ...[[Agents]]
** [https://arxiv.org/pdf/1908.09453.pdf OpenSpiel: A Framework for Reinforcement Learning in Games | M. Lanctot, E. Lockhart, J. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. De Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka and J. Ryan-Davis - DeepMind]
*** [https://github.com/deepmind/open_spiel/blob/master/docs/intro.md OpenSpiel | GitHub]
*** [https://venturebeat.com/2019/08/27/deepmind-details-openspiel-a-collection-of-ai-training-tools-for-video-games/ DeepMind details OpenSpiel, a collection of AI training tools for video games | Kyle Wiggers - VentureBeat]
* [https://www.makeuseof.com/how-use-chatgpt-my-gpt-bots/ How to Use ChatGPT's "My GPT" Bots to Learn Board Games, Create Images, and Much More | Dreamchild Obari - Make Use Of] ... Game Time ... Have a board game at home that you don't know how to play? Game Time can explain card and board games to you; you can also upload images if you don't know what the game is called but have the instructions or an idea of what it is.
* [https://www.engati.com/blog/ai-in-gaming AI in Gaming | 5 Biggest Innovations (+40 AI Games) | Jeremy DSouza - engati] ... benefits, game types, innovations, popular games, & future of AI in gaming
** [https://colab.research.google.com/github/nickwalton/AIDungeon/blob/master/AIDungeon_2.ipynb AI Dungeon 2] ... a [[Jupyter]]-notebook based game that uses [[OpenAI]]'s GPT LLM to allow players to engage in text-based adventures where the possibilities are virtually limitless
** [https://codecombat.com/ Code Combat] ... innovative game-based learning technology
** [https://screeps.com/ Screeps] ... MMO sandbox game for programmers
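The Q-learning and Deep Q Network material linked above rests on one update rule: nudge Q(s, a) toward the observed reward plus the discounted value of the best next action. A minimal tabular sketch on a made-up five-state corridor (the environment, state count, and constants here are illustrative, not taken from any linked article):

```python
import random

# Toy corridor: states 0..4; start at state 0, reward 1 for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]            # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reaching the right end pays 1 and ends the episode."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(500):                       # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should walk right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

A Deep Q Network replaces the `Q` lookup table with a neural network approximator, which is what the Doom tutorial above builds up to.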
 
= Gaming Evolution =
  
== [[Meta]]: Diplomacy ...2022 ==

* [[Agents]] ... [[Robotic Process Automation (RPA)|Robotic Process Automation]] ... [[Assistants]] ... [[Personal Companions]] ... [[Personal Productivity|Productivity]] ... [[Email]] ... [[Negotiation]] ... [[LangChain]]
* [https://www.infoq.com/news/2022/12/meta-diplomacy-cicero/ Meta's CICERO AI Wins Online Diplomacy Tournament | Anthony Alford - InfoQ] ... Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players.

Cicero has demonstrated the ability to play the strategy game Diplomacy at a level that rivals human performance. Cicero can engage in game conversations and negotiations without most human players realizing they are interacting with a machine. In an online league, Cicero sent over 5,000 messages to human players, and its identity as an AI remained undetected. Its performance was impressive, ranking in the top 10% of players. The integration of AI into Diplomacy has shown that machines can effectively mimic human negotiation tactics and strategic thinking. Cicero's achievements are a testament to the potential of AI in complex human interactions; as AI continues to evolve, it will offer new tools and methods to support diplomatic efforts.

<youtube>lNtBiZaLA0k</youtube>
<youtube>u5192bvUS7k</youtube>

== [[NVIDIA]]: [https://blogs.nvidia.com/blog/2020/05/22/gamegan-research-pacman-anniversary/ 40 Years on, PAC-MAN] ...2020 ==
  
 
* [[GameGAN]], a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.
 
== [[OpenAI]]: Hide and Seek ...2019 ==

* [https://openai.com/blog/emergent-tool-use/ Emergent Tool Use from Multi-Agent Interaction |] [[OpenAI]]
* [https://d4mucfpksywv.cloudfront.net/emergent-tool-use/paper/Multi_Agent_Emergence_2019.pdf Emergent Tool Use from Multi-Agent Autocurricula | B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch]
  
 
<youtube>Lu56xVlZ40M</youtube>
<youtube>n6nF9WfpPrA</youtube>
  
== [[Meta]]: [https://ai.facebook.com/blog/pluribus-first-ai-to-beat-pros-in-6-player-poker/ Brown & Sandholm]: 6-player Poker ...2019 ==

* [[Occlusions]]
* [https://www.theverge.com/2019/7/11/20690078/ai-poker-pluribus-facebook-cmu-texas-hold-em-six-player-no-limit [[Meta|Facebook]] and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge]

<youtube>u90TbxK7VEA</youtube>
  
 
== [[Google DeepMind AlphaGo Zero]]: Go ...2016 ==

* [https://deepmind.com/blog/article/alphago-zero-starting-scratch AlphaGo Zero: Starting from scratch | DeepMind]
* [https://asiasociety.org/blog/asia/chinas-sputnik-moment-and-sino-american-battle-ai-supremacy [[Government Services#China|China]]'s 'Sputnik Moment' and the Sino-American Battle for AI Supremacy | ][[Creatives#Kai-Fu Lee |Kai-Fu Lee]] - Asia Society
* [https://www.huffpost.com/entry/move-37-or-how-ai-can-change-the-world_b_58399703e4b0a79f7433b675 Move 37, or how AI can change the world | George Zarkadakis - HuffPost]
* [https://katbailey.github.io/post/was-alphagos-move-37-inevitable/ Was AlphaGo's Move 37 Inevitable? | Katherine Bailey]
* [https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/ Man beats machine at Go in human victory over AI | Richard Waters - Ars Technica] ... Amateur exploited weakness in systems that have otherwise dominated grandmasters.

AlphaGo is a computer program developed by Google DeepMind that uses artificial intelligence (AI) to play the board game Go. In 2016, AlphaGo made history by becoming the first computer program to defeat a top professional Go player, Lee Sedol, in a five-game match.

During the second game of the match, AlphaGo made a surprising move, known as Move 37, which stunned the Go community and left Lee Sedol speechless. The move involved placing a stone in an unexpected location on the board, which initially appeared to be a mistake. However, as the game progressed, it became clear that the move was part of a complex strategy that allowed AlphaGo to gain an advantage over Lee Sedol.

Move 37 is significant because it demonstrated the power of AlphaGo's AI algorithms and its ability to think creatively and strategically. The move was not based on any known human strategy or prior knowledge of the game, but rather on AlphaGo's own analysis and evaluation of the board position.

<hr><center><b><i>

What would have happened with human-in-the-loop on Move 37?

</i></b></center><hr>

The move highlighted the limitations of human intuition and the potential for AI to uncover new insights and strategies in complex domains. If a human expert had been involved in the decision-making process for Move 37, they might have questioned AlphaGo's choice and suggested a more conventional move. This could have prevented AlphaGo from making the unexpected and seemingly risky move that ultimately led to its victory.
  
 
<youtube>WXuK6gekU1Y</youtube>
  
 
=== <span id="Minigo"></span>[[Creatives#Andrew Jackson |Andrew Jackson]] & Josh Hoak: Minigo ...2018 ===
* [https://github.com/tensorflow/minigo Minigo - GitHub]
 
Minigo is an open-source, unofficial implementation of AlphaGo Zero. [[Reinforcement Learning (RL)]] approaches can be massively parallelized, so [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] is a natural fit: it is all about reducing the overhead of managing applications. However, it can be daunting to wade into [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] and Machine Learning, especially when you add in hardware accelerators like [[Processing Units - CPU, GPU, APU, TPU, VPU, FPGA, QPU |GPUs or TPUs]]! This talk breaks down how you can use [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] and [[TensorFlow]] to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on [[Processing Units - CPU, GPU, APU, TPU, VPU, FPGA, QPU | GPUs, TPUs]], [[TensorFlow]], [[Kubeflow Pipelines|KubeFlow]], and large-scale [[Containers; Docker, Kubernetes & Microservices | Kubernetes]] Engine clusters. Minigo uses self-play with [[Monte Carlo Tree Search]], refining the [[Policy vs Plan | Policy/Value]] network along the way.
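Stripped of the Kubernetes and TensorFlow plumbing, the search at the heart of AlphaGo Zero and Minigo can be sketched with plain UCT Monte Carlo Tree Search on a toy game. Everything below (the Nim stand-in, `uct_search`, the simulation count) is illustrative rather than Minigo code; a real system replaces the random rollout with a learned policy/value network.

```python
import math, random

# Nim: one pile of sticks; players alternately take 1, 2, or 3; taking the last stick wins.
TAKE = (1, 2, 3)

def legal(pile):
    return [t for t in TAKE if t <= pile]

def uct_search(pile, n_sim=3000, c=1.4):
    """MCTS with UCB1; statistics are kept per (pile, move) for the player to move."""
    wins, visits = {}, {}

    def pick(p):
        total = sum(visits.get((p, m), 0) for m in legal(p)) + 1
        def ucb(m):
            v = visits.get((p, m), 0)
            if v == 0:
                return float("inf")        # try every move at least once
            return wins[(p, m)] / v + c * math.sqrt(math.log(total) / v)
        return max(legal(p), key=ucb)

    for _ in range(n_sim):
        path, p = [], pile
        # selection/expansion: walk down until an unvisited (pile, move) is added
        while p > 0:
            m = pick(p)
            path.append((p, m))
            p -= m
            if visits.get(path[-1], 0) == 0:
                break
        # rollout: finish the game with uniformly random moves
        while p > 0:
            m = random.choice(legal(p))
            path.append((p, m))
            p -= m
        winner = (len(path) - 1) % 2       # parity of the move that took the last stick
        # backpropagation: credit every move made by the winning side
        for i, key in enumerate(path):
            visits[key] = visits.get(key, 0) + 1
            wins[key] = wins.get(key, 0) + (1 if i % 2 == winner else 0)

    return max(legal(pile), key=lambda m: visits.get((pile, m), 0))

random.seed(0)
print(uct_search(5))   # optimal play from a pile of 5 takes 1, leaving a losing pile of 4
```

Self-play then means both sides choose moves with `uct_search` and the finished games become training data; AlphaZero-style systems feed those results back into the network that guides the next round of search.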
 
<youtube>Qra8Aqxu_fo</youtube>
  
== Google [https://deepmind.com/ DeepMind]: Atari video games ...2015 ==
 
<youtube>Ih8EfvOzBOY</youtube>
<youtube>EfGD2qveGdQ</youtube>
  
 
== [[IBM]]: Watson: Jeopardy ...2011 ==
* [https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/ IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next | Jo Best - TechRepublic]
 
<youtube>7rIf2Njye5k</youtube>
<youtube>4svcCJJ6ciw</youtube>
<youtube>2Xhd2KNNs-c</youtube>
  
== [[Creatives#John Conway |John Conway]]: [https://playgameoflife.com/ The Game of Life (GoL)] ...1970 ==
* [[Artificial General Intelligence (AGI) to Singularity]] ... [[Inside Out - Curious Optimistic Reasoning| Curious Reasoning]] ... [[Emergence]] ... [[Moonshots]] ... [[Explainable / Interpretable AI|Explainable AI]] ... [[Algorithm Administration#Automated Learning|Automated Learning]]
* [https://playgameoflife.com/ Game_of_Life]
* [https://thelifeengine.net/ Life Engine]
* [https://www.ibiblio.org/lifepatterns/october1970.html MATHEMATICAL GAMES: The fantastic combinations of John Conway's new solitaire game "life" | Martin Gardner - ] [https://www.scientificamerican.com/ Scientific American 223 (October 1970): 120-123.]
* [https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life Wikipedia]
* [https://medium.com/@tomgrek/evolving-game-of-life-neural-networks-chaos-and-complexity-94b509bc7aa8 Evolving Game of Life: Neural Networks, Chaos, and Complexity | Tom Grek - Medium]
  
https://upload.wikimedia.org/wikipedia/commons/e/e5/Gospers_glider_gun.gif
  
 
The Rules
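Conway's standard rules fit in a few lines: a live cell with two or three live neighbours survives, a dead cell with exactly three live neighbours is born, and every other cell dies or stays dead. A minimal sketch of one generation, using a sparse set of live cells:

```python
from collections import Counter

def life_step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) live-cell coordinates."""
    # count the live neighbours of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# the "blinker" oscillates between a horizontal and a vertical bar of three cells
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))   # == {(1, 0), (1, 1), (1, 2)}
```

Running `life_step` twice returns the blinker to its starting shape, which is why it is called an oscillator; the glider gun pictured above emits gliders forever under these same two rules.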
 
<youtube>FWSR_7kZuYg</youtube>
<youtube>Gbvy6gY5Ev4</youtube>
<youtube>np6ZVZIs7f8</youtube>
  
== [https://books.google.com/books?id=bz_dgCLhhkUC&pg=PA405&lpg=PA405&dq=Donald+Waterman+Draw+Poker Donald Waterman]: Draw Poker ...1968 ==

* [https://www.researchgate.net/scientific-contributions/2041781574_Donald_A_Waterman Donald Waterman publications - production systems]
  
== Martin Gardner: Hexapawn ...1962 ==
  
* [https://www.cs.williams.edu/~freund/cs136-073/GardnerHexapawn.pdf How to build a game-learning machine and then teach it to play and to win |] [https://en.wikipedia.org/wiki/Martin_Gardner Martin Gardner]
  
 
A simple game on a 3x3 grid, where each side has 3 chess pawns. The objective is to get a pawn to the other side of the board, or leave the opponent unable to move. Normal chess rules apply except that the pawns are not allowed a double move from their starting position. Not really intended as a two-player game, it was designed to demonstrate an artificial intelligence learning technique by using beads in matchboxes. (Old enough to remember matchboxes?) Twenty-four matchboxes were used to represent the possible moves. Essentially, there were two phases. The first phase was to "teach" the matchbox computer to play the game, then a second phase allowed the matchbox computer to play other opponents. The learning speed depended on the skill of the opponent in the teaching phase. Martin Gardner first published this in his Mathematical Games column in March 1962, and subsequently in his book, "The Unexpected Hanging". [https://www.boardgamegeek.com/boardgame/33379/hexapawn Board Game Geek]
 
A simple game on a 3x3 grid, where each side has 3 chess pawns. The objective is to get a pawn to the other side of the board, or leave the opponent unable to move. Normal chess rules apply except that the pawns are not allowed a double move from their starting position. Not really intended as a two-player game, it was designed to demonstrate an artificial intelligence learning technique by using beads in matchboxes. (Old enough to remember matchboxes?) Twenty-four matchboxes were used to represent the possible moves. Essentially, there were two phases. The first phase was to "teach" the matchbox computer to play the game, then a second phase allowed the matchbox computer to play other opponents. The learning speed depended on the skill of the opponent in the teaching phase. Martin Gardner first published this in his Mathematical Games column in March 1962, and subsequently in his book, "The Unexpected Hanging". [https://www.boardgamegeek.com/boardgame/33379/hexapawn Board Game Geek]
 
<youtube>FFk8S66d8_E</youtube>
 
  
== [https://en.wikipedia.org/wiki/Donald_Michie Donald Michie]: Noughts and Crosses ...1960 ==
  
* [https://www.dropbox.com/s/ycsycu0l01g9643/DonaldMichie.pdf?dl=0 Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters | Donald Michie]
* [https://www.mscroggs.co.uk/menace/ Play against the online version of MENACE | Matt Scroggs]
* [https://www.richardbowles.co.uk/ai_with_js/code1/ Playing Noughts and Crosses using MENACE | Richard Bowles]
  
<b>MENACE</b> (the Machine Educable Noughts And Crosses Engine) “learns” to play Noughts and Crosses by playing the game repeatedly against another player, each time refining its strategy until after having played a certain number of games it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “rewarded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called [[Reinforcement Learning (RL)]]. [https://chalkdustmagazine.com/features/menace-machine-educable-noughts-crosses-engine/ Menace: the Machine Educable Noughts And Crosses Engine | Oliver Child - Chalkdust]
  
MENACE makes a move when the human player randomly picks a bead out of the box that represents the game’s current state. The colour of the bead determines where MENACE will move. In some versions of MENACE, there were beads that only represented more blatant moves such as the side, centre, or corner. The human player chooses the beads at random, just like a neural network’s weights are random at the start. Also like weights, the beads are adjusted on failure or success. At the end of each game, if MENACE loses, each bead MENACE used is removed from its box. If MENACE wins, three beads of the same colour used during each individual turn are added to their respective boxes. If the game results in a draw, one bead is added. [https://medium.com/@ODSC/how-300-matchboxes-learned-to-play-tic-tac-toe-using-menace-35e0e4c29fc How 300 Matchboxes Learned to Play Tic-Tac-Toe Using MENACE | Caspar Wylie - Open Data Science (ODSC)]
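The bead-update schedule above maps directly onto a tiny reinforcement-learning loop. A minimal Python sketch of the matchbox mechanics (the state keys, move encoding, and function names are illustrative assumptions; only the remove-1 / add-3 / add-1 schedule comes from the description):

```python
import random
from collections import defaultdict

# One "matchbox" per board state; the beads inside are candidate moves.
boxes = defaultdict(list)

def choose_move(state, legal_moves):
    """Draw a random bead; seed an empty box with one bead per legal move."""
    if not boxes[state]:
        boxes[state] = list(legal_moves)
    return random.choice(boxes[state])

def learn(history, outcome):
    """history: [(state, move), ...] for MENACE's turns in one game.
    outcome: 'win', 'draw' or 'loss' -- MENACE's reinforcement schedule."""
    for state, move in history:
        if outcome == 'win':
            boxes[state] += [move] * 3      # add three beads of that colour
        elif outcome == 'draw':
            boxes[state] += [move]          # add one bead
        elif move in boxes[state]:
            boxes[state].remove(move)       # loss: confiscate the bead used
```

Because winning moves accumulate beads, they become proportionally more likely to be drawn in later games, which is exactly the weight-adjustment analogy made above.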
  
https://i1.wp.com/chalkdustmagazine.com/wp-content/uploads/2016/03/img3.jpg
  
  
<img src="https://i1.wp.com/chalkdustmagazine.com/wp-content/uploads/2016/03/menace.jpg" width="600" height="300">
  
 
<youtube>R9c-_neaxeU</youtube>
 
  
  
== [https://en.wikipedia.org/wiki/Arthur_Samuel Arthur Samuel]: Checkers ...1950s ==
* [https://infolab.stanford.edu/pub/voy/museum/samuel.html Arthur Samuel - heuristics]
* [https://www.wired.com/2007/07/the-game-of-che/ The Game of Checkers: Solved], 2007
  
 
<youtube>ipNT1QZV7Ag</youtube>
 
 
<youtube>jSVqCsinQLM</youtube>
 
 
 
 
= <span id="Gamification"></span>Gamification =
 
[http://www.youtube.com/results?search_query=Gamification+Metaverse+blockchain+NFT Youtube search...]
 
[http://www.google.com/search?q=Gamification+Metaverse+blockchain+NFT ...Google search]
 
 
* [[Metaverse]]
 
 
Gamification is the strategic attempt to enhance systems, services, organizations, and activities by creating experiences similar to those of playing games, in order to motivate and engage users. This is generally accomplished by applying game-design elements and game principles (dynamics and mechanics) in non-game contexts. It can also be defined as a set of activities and processes that solve problems by applying the characteristics of game elements. Gamification is part of persuasive system design, and it commonly employs game design elements to improve user engagement, organizational productivity, flow, learning, crowdsourcing, knowledge retention, employee recruitment and evaluation, ease of use, usefulness of systems, physical exercise, traffic compliance, voter engagement, public attitudes about alternative energy, and more. A collection of research on gamification shows that a majority of studies find positive effects on individuals, although individual and contextual differences exist. [http://en.wikipedia.org/wiki/Gamification Wikipedia]
 
 
<youtube>WlB1QoQGW2Q</youtube>
 
<youtube>BtCSVmq46dI</youtube>
 
 
=== <span id="CryptoMetaverse"></span>Crypto Metaverse === 
 
 
[http://www.youtube.com/results?search_query=Gamification+NFT+game+Crypto+Metaverse Youtube search...]
 
[http://www.google.com/search?q=Gamification+NFT+game+Crypto+Metaverse ...Google search]
 
 
* [[Decentralized: Federated & Distributed]]
 
* [http://gamefi.org/ GameFi]
 
* [http://nftplazas.com/ NFT Plazas]
 
 
[http://www.yahoo.com/now/understanding-metaverse-relates-cryptocurrency-192229918.html Since the concept is slowly starting to become more mainstream as several big-name companies are embracing it and some analysts are calling it “the next big investment theme.”]
 
 
<b>Non-fungible Token NFT</b> - is a unique and non-interchangeable unit of data stored on a digital ledger (blockchain). NFTs can be associated with easily-reproducible items such as photos, videos, audio, and other types of digital files as unique items (analogous to a certificate of authenticity), and use blockchain technology to give the NFT a public proof of ownership. Copies of the original file are not restricted to the owner of the NFT, and can be copied and shared like any file. The lack of interchangeability (fungibility) distinguishes NFTs from blockchain cryptocurrencies, such as Bitcoin. [http://en.wikipedia.org/wiki/Non-fungible_token Wikipedia]
 
 
==== Sandbox ====
 
[http://www.youtube.com/results?search_query=Sandbox+NFT+chain+game Youtube search...]
 
[http://www.google.com/search?q=Sandbox+NFT+chain+game ...Google search]
 
 
* [http://www.sandbox.game/ Sandbox]
 
The Sandbox is a community-driven platform where creators can monetize voxel ASSETS and gaming experiences on the blockchain.
 
 
The Sandbox is a virtual gaming world where players are able to create, build, trade, own, and monetise their gaming on the Ethereum blockchain. The aim is to provide gamers with actual ownership of in-game items as NFTs (non-fungible tokens) and reward them for their playtime and participation within the game's ecosystem, contrary to existing game makers like Minecraft and Roblox. $SAND is the currency of THE SANDBOX and is used for all transactions. It allows users access to the platform, enables them to play games, stake (if desired), and earn rewards. The $SAND token is currently listed on over 20 exchanges, the best known being Crypto.com, Bittrex, and Kraken. The Sandbox is currently one of the top 5 Metaverse projects, representing approximately 7% of the sector. [http://bitcoinist.com/the-metaverse-is-coming-and-it-will-be-huge/ The METAVERSE Is Coming and It Will Be HUGE | Bitcoinist]
 
 
The Sandbox, a subsidiary of Animoca Brands, is one of the decentralized virtual worlds that has been fueling the recent growth of virtual real-estate demand, having partnered with major IPs and brands including The Walking Dead, Atari, Rollercoaster Tycoon, Care Bears, The Smurfs, Shaun the Sheep, and Binance. Building on existing The Sandbox IP that has more than 40 million global installs on mobile, The Sandbox metaverse offers players and creators a decentralized and intuitive platform to create immersive 3D worlds and game experiences and to safely store, trade, and monetize their creations. [http://medium.com/superfarm/superfarm-enters-the-sandbox-6f5c1421ec32 SuperFarm Enters the Sandbox | Elliot Wainman - Medium]
 
 
==== Axie Infinity ====
 
[http://www.youtube.com/results?search_query=AXIE+INFINITY+NFT+chain+game Youtube search...]
 
[http://www.google.com/search?q=AXIE+INFINITY+NFT+chain+game ...Google search]
 
 
* [http://axieinfinity.com Axie Infinity]
 
 
Axie was built as a fun and educational way to introduce the world to blockchain technology. Many of the original team members met playing Crypto kitties, and it was their first time ever using Blockchain for anything other than pure speculation. They soon started working on Axie to introduce the magic of Blockchain technology to billions of players.
 
The Vision
 
* We believe in a future where work and play become one.
 
* We believe in empowering our players and giving them economic opportunities.
 
* Most of all, we have a dream that battling and collecting cute creatures can change the world.
 
* Welcome to our revolution.
 
 
In short, Axie Infinity is a Pokemon-inspired game based on the blockchain, where players can battle other players and earn money. Axies are unique digital assets stored on Axie's own blockchain and owned in the form of an NFT. The most expensive Axie sold to date went for 300 ETH (Ethereum). To create a new Axie, existing Axie owners must “breed” them by spending in-game currency earned within the game or purchased from an exchange. By winning battles, or selling their Axies to another player, owners can earn the in-game currency. Any earnings can then be sold or traded on the open market for money, generating income for players. Released in March 2018, Axie was one of the first games to combine Crypto, Play To Earn, NFTs, and the METAVERSE, and it continues to grow with a total trading volume that exceeds $2.4 billion. [http://bitcoinist.com/the-metaverse-is-coming-and-it-will-be-huge/ The METAVERSE Is Coming and It Will Be HUGE | Bitcoinist]
 
 
==== JEDSTAR ====
 
[http://www.youtube.com/results?search_query=JEDSTAR+NFT+chain+game Youtube search...]
 
[http://www.google.com/search?q=JEDSTAR+NFT+chain+game ...Google search]
 
 
* [http://jedstar.app/ JEDSTAR]
 
 
JEDSTAR is a fairly new project, launching in August 2021. It is a three-token, decentralized ecosystem consisting of $JED, a DeFi token, $KRED, a GameFi token, and $ZED, a governance token. They will also launch AGORA, an NFT marketplace where players can buy, sell, and trade their in-game NFTs. The first token launched was the DeFi token $JED in August 2021; the second, $KRED, will be the in-game currency and the currency for the AGORA NFT Marketplace launching in November 2021. $ZED, the governance token, will launch sometime after that, possibly early 2022. Also in development at JEDSTAR are a DCCG (Digital Collectible Card Game), under construction at Frag Games, and a Massively Multiplayer Online Role-Playing Game (MMORPG); both games will use $KRED as the in-game currency. JEDSTAR also has a partnership with SkillGaming for the upcoming STARDOME, which will likewise use $KRED as its in-game currency. The partnership will additionally let players easily convert fiat currency into $KRED, giving people an easier way to access cryptocurrency and helping its mass adoption. [http://bitcoinist.com/the-metaverse-is-coming-and-it-will-be-huge/ The METAVERSE Is Coming and It Will Be HUGE | Bitcoinist]
 
 
==== Black Eye Galaxy ====
 
[http://www.youtube.com/results?search_query=black+eye+galaxy+NFT+chain+game Youtube search...]
 
[http://www.google.com/search?q=black+eye+galaxy+NFT+chain+game ...Google search]
 
 
* [http://www.blackeyegalaxy.space/ Black Eye Galaxy]
 
* [http://hodooi.com/ Hodooi]
 
 
Vast cross-chain VR space odyssey, Black Eye Galaxy, has partnered with NFT marketplace, [http://hodooi.com/ Hodooi], for the next step of their cosmic journey. Now, they have a versatile trading platform to match their multi-chain ambitions. Black Eye Galaxy is a massive play-to-earn blockchain game where NFTs represent spaceships and planets, thus allowing gamers to explore the vast unknown. On their journey through this most final of all frontiers, players can mine planets' resources, explore space, and develop intricate in-game economies. The [http://primo.ai/index.php?title=Immersive_Reality VR] space experience will test more than just battle prowess. Players can terraform planets, build civilizations, levy taxes, and forge alliances. The gaming mechanics even allow for custom currencies to feed homegrown economies. The result is a game that will potentially allow users to unleash their inner tyrant, and rule with an iron fist. Furthermore, Black Eye Galaxy prides itself on its cross-chain vision. Consequently, each “star cluster” involved represents a different layer 1 chain. With this in mind, Binance will kick things off, with additional blockchains arriving at a later date. Expect a land sale later in the year, and Ethereum connectivity in Q1 2022. [http://hodooi.com/ Hodooi] is set to help them on the way with their own in-built interoperability, allowing the seamless trading of in-game assets between factions. The result is an interesting experiment in multi-chain gaming. [http://nftplazas.com/black-eye-galaxy-hodooi/ Black Eye Galaxy Play-To-Earn Forms Cross-Chain Alliance with Hodooi | Russell - NFT Gaming News]
 
 
==== SuperFarm ====
 
[http://www.youtube.com/results?search_query=SuperFarm+NFT+chain+game Youtube search...]
 
[http://www.google.com/search?q=SuperFarm+NFT+chain+game ...Google search]
 
 
* [http://superfarm.com/ SuperFarm]
 
 
SuperFarm is proudly partnering with The Sandbox, a community-driven platform where creators can monetize voxel assets and gaming experiences on the blockchain.
 
The SuperFarm ecosystem is excited to announce the acquisition of an XL estate that the SuperFarm community will soon call home. This contiguous plot is a whopping 24x24 (or 576) parcels in size, allowing players to build and explore in a 3-dimensional virtual space. This collaboration between The Sandbox and SuperFarm lays the foundation for a robust player-driven ecosystem that spans the metaverse and drives even more utility to SuperFarm NFTs!  This is HUGE for SuperFarm related NFT projects which will now have a dedicated home in the Sandbox. From the EllioTrades Collection to SuperFarm Genesis cards and more, soon holders will be able to flex their incredible assets in the blockchain’s favorite voxel-verse. This partnership will be paired with a ton of community-centric initiatives to drive excitement and engagement with existing and future NFT collections.
 
 
==== MetaWars ====
 
[http://www.youtube.com/results?search_query=MetaWars+NFT+chain+game Youtube search...]
 
[http://www.google.com/search?q=MetaWars+NFT+chain+game ...Google search]
 
 
* [http://metawars.gg/ MetaWars]
 
 
MetaWars is a multiplayer strategy / roleplaying game with a vast universe powered by a growing digital economy built on blockchain technology. Choose your own path using a vast collection of NFTs and impact every major event across the Galaxy. As Battles rage and governments fall, it is up to you to earn your share of the vast fortunes that await. 
 
 
Introducing MetaWars, the newest play-to-earn NFT(non-fungible token) game. MetaWars is a multiplayer strategy and roleplaying game powered by a growing digital economy built on blockchain technology. The gameplay invites players to join a highly immersive digital metaverse game set in space, allowing users to earn cryptocurrency and NFTs as rewards. MetaWars has infinite universes where players can choose their own path using a vast collection of NFTs, and impact major events across the galaxy. The game challenges its players to earn their share of the vast fortunes as battle rages and governments fall in the MetaWars universe. [http://boxmining.com/metawars-nft/ MetaWars ($WARS, $GAM): NFT Gaming in Space | Amree Wayne]
 
 
= <span id="Cybersecurity - Gaming"></span>Cybersecurity - Gaming =
 
[http://www.youtube.com/results?search_query=~game+~hypergaming+Cyber+Cybersecurity+artificial+intelligence+deep+learning+ai Youtube search...]
 
[http://www.google.com/search?q=~game+~hypergaming+Cyber+Cybersecurity+artificial+intelligence+deep+learning+ai ...Google search]
 
 
* [[Cybersecurity]]
 
* [http://www.csiac.org/csiac-report/hypergaming-for-cyber-strategy-for-gaming-a-wicked-problem/ Hypergaming for Cyber - Strategy for Gaming a Wicked Problem]
 
* [http://www.circadence.com/ Circadence]
 
* [http://cyberstart.com/ CyberStart]
 
 
{|<!-- T -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>1IoY_RyX1-w</youtube>
 
<b>CSIAC Podcast - Hypergaming for Cyber - Strategy for Gaming a Wicked Problem
 
</b><br>CSIAC [http://www.csiac.org/podcast/hypergaming-for-cyber-strategy-for-gaming-a-wicked-problem/ Learn more]  Cyber as a domain and battlespace coincides with the defined attributes of a “wicked problem” with complexity and inter-domain interactions to spare. Since its elevation to domain status, cyber has continued to defy many attempts to explain its reach, importance, and fundamental definition. Corresponding to these intricacies, cyber also presents many interlaced attributes with other information related capabilities (IRCs), namely electromagnetic warfare (EW), information operations (IO), and intelligence, surveillance, and reconnaissance (ISR), within an information warfare (IW) construct that serves to add to its multifaceted nature. In this cyber analysis, the concept of hypergaming will be defined and discussed in reference to its potential as a way to examine cyber as a discipline and domain, and to explore how hypergaming can address cyber’s “wicked” nature from the perspectives of decision making, modeling, operational research (OR), IO, and finally IW. Finally, a cyber-centric hypergame model (CHM) will be presented.
 
|}
 
|<!-- M -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>wv7I_TjPWDE</youtube>
 
<b>Live Project Ares Walk Through
 
</b><br>If you are interested in playing Project Ares, please fill out this form - http://bit.ly/ITCQ-ARES-Q  Project Ares Gamified Cyber Security Training from Circadence  http://www.circadence.com  www.zachtalkstech.com  teespring.com/stores/it-career-questions
 
|}
 
|}<!-- B -->
 
{|<!-- T -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>ulmo00-3h7k</youtube>
 
<b>CyberStart Game - Video1
 
</b><br>How to quickly get up and running with the CyberStart game.  This video includes an overview of the game Intro, the Basic Layout and the Field Manual.
 
|}
 
|<!-- M -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>83AychLyugc</youtube>
 
<b>TryHackMe - Beginner Learning Path
 
</b><br>ActualTom  Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/actual_tom
 
|}
 
|}<!-- B -->
 
 
  
 
= More... =
 
 
== <span id="Airport CEO"></span>Airport CEO ==
 
[http://www.youtube.com/results?search_query=Airport+CEO Youtube search...]
 
[http://www.google.com/search?q=Airport+CEO ...Google search]
 
 
* [http://www.airportceo.com/ Airport CEO]
 
* [[Screening; Passenger, Luggage, & Cargo]]
 
* [[Metaverse]]
 
** [[Metaverse#Flight Simulator 2020| Flight Simulator 2020]]
 
 
{|<!-- T -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>XoHA907Pcdo</youtube>
 
<b>S1:E1 Airport CEO - Extreme Difficulty - An Aggressive Start
 
</b><br>In this episode, we kick off a new series playing Airport CEO on extreme difficulty and showcasing a very aggressive start where we embrace debt and expand rapidly.
 
Airport CEO is a city-builder / tycoon game where the player is acting as CEO of an airport.
 
|}
 
|<!-- M -->
 
| valign="top" |
 
{| class="wikitable" style="width: 550px;"
 
||
 
<youtube>G0m0yM40qDA</youtube>
 
<b>BETTER Baggage Security! | Airport CEO
 
</b><br>Come fly with me..
 
|}
 
|}<!-- B -->
 
  
  
 
<youtube>79pmNdyxEGo</youtube>
 
 
<youtube>MMLtza3CZFM</youtube>
 
 
  
 
= Books =
 
  
[https://www.amazon.com/Invent-Your-Computer-Games-Python/dp/1593277954/ref=tmm_pap_swatch_0 Invent Your Own Computer Games with Python | Al Sweigart]
  
 
https://images-na.ssl-images-amazon.com/images/I/51mpkckeu4L._SX376_BO1,204,203,200_.jpg
 
https://images-na.ssl-images-amazon.com/images/I/51mpkckeu4L._SX376_BO1,204,203,200_.jpg
  
[https://www.amazon.com/Deep-Learning-Game-Max-Pumperla/dp/1617295329 Deep Learning and the Game of Go | Max Pumperla, Kevin Ferguson]
  
 
https://images-na.ssl-images-amazon.com/images/I/51LpAeEYhzL._SX397_BO1,204,203,200_.jpg
 
https://images-na.ssl-images-amazon.com/images/I/51LpAeEYhzL._SX397_BO1,204,203,200_.jpg
  
[https://www.amazon.com/Hands-Deep-Learning-Games-reinforcement/dp/1788994078 Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games | Micheal Lanham]
  
 
https://images-na.ssl-images-amazon.com/images/I/517S9nvodoL._SX404_BO1,204,203,200_.jpg
 
https://images-na.ssl-images-amazon.com/images/I/517S9nvodoL._SX404_BO1,204,203,200_.jpg
  
[https://www.amazon.com/Machine-learning-Artificial-Intelligence-Data-ebook/dp/B07V6RQKYX/ref=sr_1_30 Machine learning and Artificial Intelligence 2.0 with Big Data: Building Video Games using Python 3.7 and Pygame | Narendra Mohan Mittal]
  
 
https://images-na.ssl-images-amazon.com/images/I/41eHxTsXXgL.jpg
 
https://images-na.ssl-images-amazon.com/images/I/41eHxTsXXgL.jpg

Latest revision as of 09:07, 17 November 2024




= Gaming Evolution =

== Meta: Diplomacy ...2022 ==

Meta's Cicero has demonstrated the ability to play the strategy game Diplomacy at a level that rivals human performance. Cicero can engage in game conversations and negotiations without most human players realizing they are interacting with a machine. During gameplay in an online league, Cicero sent over 5,000 messages to human players, and its identity as an AI remained undetected. Its performance was impressive, ranking in the top 10% of players. The integration of AI into the game of Diplomacy has shown that machines can effectively mimic human negotiation tactics and strategic thinking. Cicero's achievements in Diplomacy are a testament to the potential of AI in complex human interactions. As AI continues to evolve, it will undoubtedly transform the landscape of diplomacy, offering new tools and methods to support diplomatic efforts.

== NVIDIA: 40 Years on, PAC-MAN ...2020 ==

* GameGAN, a generative adversarial network trained on 50,000 PAC-MAN episodes, produces a fully functional version of the dot-munching classic without an underlying game engine.

== OpenAI: Hide and Seek ...2019 ==

== Meta: Brown & Sandholm: 6-player Poker ...2019 ==

* Occlusions
* Facebook and Carnegie Mellon (CMU) ‘superhuman’ poker AI beats human pros, ‘It can bluff better than any human.’ | James Vincent - The Verge

== Google DeepMind AlphaStar: StarCraft II ...2019 ==

== OpenAI: Dota 2 ...2018 ==

== Google DeepMind AlphaGo Zero: Go ...2016 ==

AlphaGo is a computer program developed by Google DeepMind that uses artificial intelligence (AI) to play the board game Go. In 2016, AlphaGo made history by becoming the first computer program to defeat a professional Go player, Lee Sedol, in a five-game match.

During the second game of the match, AlphaGo made a surprising move, known as Move 37, which stunned the Go community and left Lee Sedol speechless. The move involved placing a stone in an unexpected location on the board, which initially appeared to be a mistake. However, as the game progressed, it became clear that the move was part of a complex strategy that allowed AlphaGo to gain an advantage over Lee Sedol. Move 37 is significant because it demonstrated the power of AlphaGo's AI algorithms and its ability to think creatively and strategically. The move was not based on any known human strategy or prior knowledge of the game, but rather on AlphaGo's own analysis and evaluation of the board position.



What would have happened with human-in-the-loop on Move 37?



The move highlighted the limitations of human intuition and the potential for AI to uncover new insights and strategies in complex domains. If a human expert had been involved in the decision-making process for Move 37, they might have questioned AlphaGo's choice and suggested a more conventional move. This could have prevented AlphaGo from making the unexpected and seemingly risky move that ultimately led to its victory.


== Andrew Jackson & Josh Hoak: Minigo ...2018 ==

Minigo is an open source, unofficial implementation of AlphaGo Zero. Reinforcement Learning (RL) approaches can be massively parallelized, so Kubernetes seems like a natural fit, as Kubernetes is all about reducing the overhead of managing applications. However, it can be daunting to wade into Kubernetes and Machine Learning, especially when you add in hardware accelerators like GPUs or TPUs. This talk breaks down how you can use Kubernetes and TensorFlow to create, in relatively few lines of code, a tabula rasa AI that can play the game of Go, inspired by the AlphaZero algorithm published by DeepMind. The talk relies on GPUs, TPUs, TensorFlow, Kubeflow, and large-scale Kubernetes Engine clusters. Minigo uses self-play with Monte Carlo Tree Search, refining the policy/value network along the way.
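For a flavour of the search component: AlphaZero-style engines select moves during Monte Carlo Tree Search with a PUCT rule that balances a node's running value estimate against the policy network's prior. A minimal sketch (the function names and the constant <code>c_puct = 1.5</code> are illustrative choices, not Minigo's actual configuration):

```python
import math

def puct_score(q, prior, visits, parent_visits, c_puct=1.5):
    """PUCT selection: exploit the mean value q of a child node,
    explore in proportion to its policy prior and visit counts."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

def select_child(children, parent_visits):
    """children: list of (move, q, prior, visits); pick the max-PUCT move."""
    return max(children,
               key=lambda c: puct_score(c[1], c[2], c[3], parent_visits))[0]
```

An unvisited move with a high prior scores well immediately, which is how the policy network steers the search before any simulation results accumulate.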

== Google DeepMind: Atari video games ...2015 ==

== IBM: Watson: Jeopardy ...2011 ==

== IBM: Deep Blue: Chess ...1997 ==

== John Conway: The Game of Life (GoL) ...1970 ==

[Animated image: Gosper's glider gun]

<b>The Rules</b>

* For a space that is 'populated':
** Each cell with one or no neighbors dies, as if by solitude.
** Each cell with four or more neighbors dies, as if by overpopulation.
** Each cell with two or three neighbors survives.
* For a space that is 'empty' or 'unpopulated':
** Each cell with three neighbors becomes populated.
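These rules reduce to a short update function. A minimal Python sketch, representing the (unbounded) grid as a set of live-cell coordinates (the set-based encoding is an implementation choice, not part of Conway's formulation):

```python
from collections import Counter

def life_step(live):
    """Apply one Game of Life generation to a set of (x, y) live cells."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Only cells adjacent to a live cell can change state, so counting neighbours of live cells is enough; everything else stays empty.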

== Donald Waterman: Draw Poker ...1968 ==
