Google DeepMind AlphaGo Zero
http://cdn-images-1.medium.com/max/1000/1*0pn33bETjYOimWjlqDLLNw.png
== [[<span id="Monte Carlo Tree Search"></span>Monte Carlo Tree Search]] ==
[http://www.youtube.com/results?search_query=Monte+Carlo+Tree+Search Youtube search...]
Revision as of 20:38, 1 September 2019
- Service Capabilities
- Evolutionary Computation / Genetic Algorithms
- Architectures
- Google DeepMind AlphaFold
- Google DeepMind AlphaStar
- Minigo
- Monte Carlo Tree Search
- Mastering the game of Go with deep neural networks and tree search | D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel & D. Hassabis - Nature
- Mastering the game of Go without human knowledge | Google DeepMind: D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, & D. Hassabis - Nature
- A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play | Google DeepMind: D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, & D. Hassabis - Science
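As a rough illustration of the tree search the papers above build on, here is a minimal UCT-style Monte Carlo Tree Search sketch in Python, played on a toy subtraction game (take 1 or 2 stones; whoever takes the last stone wins). The game, class names, and constants are illustrative assumptions, not DeepMind's implementation; AlphaGo Zero replaces the random rollout step with a neural-network value estimate and uses network policy priors during selection.

```python
import math
import random

# Illustrative UCT Monte Carlo Tree Search sketch (not DeepMind's code).
# Toy game: players alternately take 1 or 2 stones; taking the last stone wins.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones            # stones remaining in this position
        self.player = player            # player to move here (1 or 2)
        self.parent = parent
        self.move = move                # move that led to this node
        self.children = []
        self.visits = 0
        self.wins = 0.0                 # wins for the player who moved INTO this node
        self.untried = [m for m in (1, 2) if m <= stones]

def uct_select(node, c=1.4):
    # Pick the child maximizing exploitation + exploration (UCB1 formula).
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones, player):
    # Play uniformly random moves to the end; return the winner.
    while True:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        if stones == 0:
            return player
        player = 3 - player

def mcts(root_stones, root_player, iterations=2000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried child, if any.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 3 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout (terminal nodes score exactly).
        if node.stones == 0:
            winner = 3 - node.player    # previous mover took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: update visit and win counts up to the root.
        while node is not None:
            node.visits += 1
            if winner != node.player:   # win for the player who moved into node
                node.wins += 1
            node = node.parent
    # Play the most-visited move, as in the AlphaGo family of programs.
    return max(root.children, key=lambda ch: ch.visits).move
```

In this game a position is lost for the side to move when the stone count is a multiple of 3, so from 4 stones the search should settle on taking 1 stone (leaving 3), and from 5 stones on taking 2.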