Advanced Actor Critic (A2C)
Latest revision as of 19:29, 27 March 2023
[YouTube search...](https://www.youtube.com/results?search_query=Advanced+A2C+Actor+Critic+Reinforcement+Machine+Learning) [...Google search](https://www.google.com/search?q=Advanced+A2C+Actor+Critic+Reinforcement+Machine+Learning)
- Reinforcement Learning (RL)
  - Monte Carlo (MC) Method - Model Free Reinforcement Learning
  - Markov Decision Process (MDP)
  - State-Action-Reward-State-Action (SARSA)
  - Q Learning
  - Deep Reinforcement Learning (DRL)
  - Distributed Deep Reinforcement Learning (DDRL)
  - Evolutionary Computation / Genetic Algorithms
  - Actor Critic
    - Asynchronous Advantage Actor Critic (A3C)
    - Advanced Actor Critic (A2C)
    - Lifelong Latent Actor-Critic (LILAC)
  - Hierarchical Reinforcement Learning (HRL)
- [Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning | Joyce Xu - Towards Data Science](https://towardsdatascience.com/advanced-reinforcement-learning-6d769f529eb3)
- Policy Gradient (PG)
- Proximal Policy Optimization (PPO)
A2C produces performance comparable to Asynchronous Advantage Actor Critic (A3C) while being more efficient. A2C is essentially A3C without the asynchronous part, i.e. a synchronous single-worker variant of A3C. [Understanding Actor Critic Methods and A2C | Chris Yoon - Towards Data Science](https://towardsdatascience.com/understanding-actor-critic-methods-931b97b6df3f)
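To make the advantage-weighted update at the heart of A2C concrete, here is a minimal single-worker sketch on a toy two-armed bandit (one-step episodes, so the return is just the immediate reward). The environment, parameter names, and learning rates are illustrative assumptions, not taken from the article or any particular library:

```python
import numpy as np

# Toy environment and hyperparameters: all illustrative assumptions.
rng = np.random.default_rng(0)
TRUE_MEANS = np.array([0.2, 0.8])  # expected reward of each action


def step(action):
    """Environment step: noisy reward around the chosen action's true mean."""
    return TRUE_MEANS[action] + 0.1 * rng.standard_normal()


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


logits = np.zeros(2)  # actor parameters (policy over the two actions)
value = 0.0           # critic parameter (state-value baseline; one state here)
actor_lr, critic_lr = 0.1, 0.1

for _ in range(2000):
    probs = softmax(logits)
    action = rng.choice(2, p=probs)
    reward = step(action)

    # Advantage: how much better the return was than the critic expected.
    advantage = reward - value

    # Critic update: move the baseline toward the observed return.
    value += critic_lr * advantage

    # Actor update: policy gradient of log pi(action), scaled by the advantage.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    logits += actor_lr * advantage * grad_log_pi

print(softmax(logits))  # the learned policy should strongly prefer action 1
```

Because the critic's baseline is subtracted from the return, actions are reinforced only when they do better than expected, which reduces the variance of the policy-gradient update; A3C runs many such workers asynchronously, while A2C performs the same update synchronously.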