
[Review] Experience Replay

Posted at 2018-06-07

Foreword

Recently I have been curious about what made DQN so successful. Since 2015, researchers have rapidly developed ways of combining DL and RL, and in my view we still need more novel techniques to support this combination, a domain usually called Deep Reinforcement Learning. As a running example, DQN uses experience replay, a technique that stores encountered transitions in memory and replays them later for more efficient learning. So in this article I review experience replay (ER) in more depth to get a better understanding of deep reinforcement learning.

Paper Information

  1. Self-Improving Reactive Agents Based On Reinforcement Learning, Planning and Teaching
  2. Experience Replay for Real-Time Reinforcement Learning Control
  3. Real-Time Reinforcement Learning by Sequential Actor-Critics and Experience Replay
  4. SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY
  5. Hindsight Experience Replay
  6. Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning
  7. PRIORITIZED EXPERIENCE REPLAY
  8. A Deeper Look at Experience Replay
  9. DISTRIBUTED PRIORITIZED EXPERIENCE REPLAY

1. Self-Improving Reactive Agents Based On Reinforcement Learning, Planning and Teaching

Author: Long-Ji Lin
Published Year: 1992
Notes:

Issue: RL agents need a huge amount of time to learn.
Approach: Experience replay makes learning more efficient by storing precious experiences for future reuse.
Concept: some experiences are quite rare and costly to obtain again.
If we remember them, the agent can learn from them efficiently: stored experiences are fed back to the agent as if it were experiencing them again.
Remaining Challenges: if the environment changes over time, past experiences become obsolete and can mislead the agent.
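
To make this concrete, here is a minimal sketch of a replay memory in Python; the capacity, the uniform sampling and the `Transition` fields are my own illustrative choices, not details taken from Lin's paper.

```python
import random
from collections import deque, namedtuple

# One stored experience: the agent can be fed this again later,
# as if it were encountering the situation once more.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class ReplayMemory:
    """Fixed-size buffer that stores transitions and replays them later."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are discarded first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append(Transition(state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Reuse rare and costly experiences by drawing them again for learning.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```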

2. Experience Replay for Real-Time Reinforcement Learning Control

Authors: Sander Adam, Lucian Buşoniu, and Robert Babuška
Published Year: 2012
Notes:
They applied ER to real-time robot control, which in this paper is a robot-soccer task.
As they explicitly state, their main contribution is an extensive experimental evaluation of ER-based RL for real-time robot control.


The paper is organized as follows. Section II presents the necessary background in exact and approximate RL. In Section III, we introduce and discuss our ER framework. In Section IV, the performance of the ER algorithms is evaluated in an extensive simulation and experimental study. Section V concludes the paper.

According to their analysis, ER has three benefits.

  1. Reuse of past data
  2. An effect asymptotically similar to eligibility traces, because replaying propagates information similar to what eligibility traces transmit (a small sketch of this follows below)
  3. Aggregation of information from multiple trajectories

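To make benefits 2 and 3 concrete, the sketch below replays a mini-batch of stored transitions through a plain tabular Q-learning update. The batch can aggregate transitions from several past trajectories, and replaying them repeatedly lets value information propagate backwards along those trajectories, which is the eligibility-trace-like effect described above. The tabular setting and the hyper-parameters are my own simplification; the paper works with approximate RL algorithms.

```python
import numpy as np

def replay_q_update(Q, batch, alpha=0.1, gamma=0.99):
    """One sweep of Q-learning updates over a batch of replayed transitions.

    Q     : 2-D array, Q[state, action]
    batch : iterable of (state, action, reward, next_state, done) tuples,
            possibly drawn from several different past trajectories
    """
    for s, a, r, s_next, done in batch:
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])  # move Q towards the replayed target
    return Q

# Replaying the same transitions several times lets reward information travel
# multiple steps back along a trajectory, similarly to eligibility traces.
```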

3. Real-Time Reinforcement Learning by Sequential Actor-Critics and Experience Replay

Author: Pawel Wawrzynski
Published Year: 11 May 2016
Notes:
This paper shows how Actor-Critic algorithms can be augmented with ER without degrading their convergence properties, by appropriately estimating the direction of policy change. This is achieved by truncated importance sampling applied to the recorded past experiences.
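
Concretely, a replayed transition is reweighted by the ratio between the current policy's probability of the stored action and its probability under the (older) policy that generated it, and this ratio is truncated so the variance of the estimator stays bounded. A minimal sketch, where the cap `c` and the function names are illustrative rather than taken from the paper:

```python
import numpy as np

def truncated_is_weight(pi_current, pi_behaviour, c=5.0):
    """Truncated importance-sampling weight for a replayed action.

    pi_current   : probability of the stored action under the current policy
    pi_behaviour : probability of that action under the policy that generated it
    c            : truncation level; keeps the weight (and its variance) bounded
    """
    rho = pi_current / (pi_behaviour + 1e-8)
    return min(rho, c)

def replayed_improvement_direction(grad_log_pi, td_error, pi_current, pi_behaviour):
    # Actor improvement direction estimated from a recorded transition:
    # the usual actor-critic term, reweighted by the truncated IS ratio.
    grad_log_pi = np.asarray(grad_log_pi, dtype=float)
    return truncated_is_weight(pi_current, pi_behaviour) * td_error * grad_log_pi
```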

The paper is organized as follows. In Section 2 the problem of our interest is defined along with the class of algorithms that encompasses sequential Actor-Critics. It is shown that a given Actor-Critic method defines certain improvement directions of its parameter vectors. Section 3 shows how to estimate these directions using the data from the preceding state transition and to accelerate a sequential algorithm by combining these estimators with experience replay. Conditions for asymptotic unbiasedness of these estimators are established in Section 4 that enable the algorithm with experience replay to inherit the limit properties of the original sequential method. The experimental study on the introduced methodology is presented in Section 5 and the last Section concludes.


4. SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY

Authors: Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas
Published Year: 2017 July

Issues:
The design of stable, efficient actor-critic methods that are applicable to both continuous and discrete action spaces has been a long-standing hurdle of RL.

Approaches:
By applying the following three techniques to the actor-critic framework, they obtained their algorithm, ACER.

  1. Truncated importance sampling with bias correction: marginalisation of IS from Degris et al. (2012), truncation of IS from Wawrzynski (2009); see the sketch after this list
  2. Stochastic dueling networks: Wang et al. (2016)
  3. Efficient trust region optimisation method: Schulman et al. (2015)
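
A minimal sketch of technique 1 (truncated importance sampling with bias correction) for a discrete action space follows; the Retrace target of the real algorithm is replaced by a one-step critic estimate, and all names and shapes are my own assumptions.

```python
import numpy as np

def acer_policy_gradient(a, pi, mu, q_values, value, grad_log_pi, c=10.0):
    """Sketch of ACER's truncated IS with bias correction (discrete actions).

    a           : action stored in the replay memory
    pi          : current policy probabilities over actions, shape (A,)
    mu          : behaviour-policy probabilities recorded with the transition, shape (A,)
    q_values    : critic estimates Q(x, .), shape (A,)
    value       : state value V(x), e.g. np.dot(pi, q_values)
    grad_log_pi : gradients of log pi(.|x) w.r.t. the policy parameters, shape (A, n_params)
    c           : truncation level
    """
    rho = pi / (mu + 1e-8)                      # per-action importance weights

    # 1) truncated term for the replayed action a
    g = min(rho[a], c) * (q_values[a] - value) * grad_log_pi[a]

    # 2) bias-correction term: an expectation under the *current* policy that is
    #    non-zero only for actions whose weight was truncated (rho > c)
    coeff = np.clip(1.0 - c / rho, 0.0, None)   # [(rho - c) / rho]_+
    g = g + np.sum((pi * coeff * (q_values - value))[:, None] * grad_log_pi, axis=0)
    return g
```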

Model Architectures (figures omitted):

  • Basic actor-critic algorithm
  • A3C: check the details here: https://qiita.com/Rowing0914/items/214922c600640d143ad7
  • Policy gradient with importance sampling
  • Policy gradient with marginalised importance sampling (Degris et al., 2012): the importance sampling is marginalised using an expectation, because the product of per-step importance weights blows up the variance
  • Retrace algorithm for discrete action spaces
  • Truncated importance sampling
  • ACER update
  • Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a)
  • Dueling network (Wang et al., 2016)
  • ACER

5. Hindsight Experience Replay

Authors: Marcin Andrychowicz et al.
Published Year: 2017
Concept: One ability humans have is to learn from our mistakes and adjust so that we avoid making the same mistake next time; RL does not have this ability yet. By setting multiple goals within one episode, we can re-examine a trajectory under a different goal: the trajectory may not tell us how to achieve the original goal $g$, but it definitely tells us something about how to reach the state $s_T$ it actually ended in.

Base algo: DDPG (Deep Deterministic Policy Gradient), proposed by Timothy P. Lillicrap et al.
They presented an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces.
Issues: DQN is not applicable to continuous action spaces, so it is not suitable for robotics. Their work builds on the deterministic policy gradient (DPG) algorithm (Silver et al., 2014).
Approaches: They advanced the DPG algorithm by adapting ideas from DQN, namely the replay memory and target networks.
Remaining Issues: a robust model-free approach may be an important component of larger systems which may attack these limitations (Glascher et al., 2010).
Model Architecture:
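In place of the omitted architecture figure, here is a small sketch of one of the DQN ideas DDPG borrows: the slowly-updated target network. DDPG replaces DQN's periodic hard copy with a soft "Polyak" update, which keeps the bootstrapped targets computed from replayed transitions stable; the toy weight shapes below are arbitrary.

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.001):
    """Polyak-averaged target-network update as used by DDPG.

    Instead of copying the online network into the target network every C steps
    (as DQN does), the online weights are blended in slowly, so the targets used
    for the replayed transitions change smoothly.
    """
    return [(1.0 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

# toy usage with arbitrary weight matrices
theta_online = [np.random.randn(4, 2), np.random.randn(2)]
theta_target = [p.copy() for p in theta_online]
theta_target = soft_update(theta_target, theta_online, tau=0.001)
```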

Issues:
Compared to human beings, RL needs a huge amount of time to learn tasks in continuous action spaces.

Approaches:
By combining UVFA (universal value function approximators, proposed by Schaul et al. in 2015) with DDPG (deep deterministic policy gradient, proposed by Lillicrap et al. in 2015), they store every transition of an episode together with multiple goals in the experience memory and then replay them, as sketched below.
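
A minimal sketch of the hindsight relabelling step (the "future" goal-selection strategy); the transition layout and the `reward_fn` signature are my own simplifications, and achieved states are used directly as goals.

```python
import numpy as np

def her_relabel(episode, reward_fn, k=4):
    """Hindsight relabelling of one finished episode.

    episode   : list of (state, action, next_state, goal) tuples
    reward_fn : reward_fn(next_state, goal) -> reward, e.g. 0 if the goal is
                reached and -1 otherwise (sparse reward)
    k         : number of additional hindsight goals per transition
    Returns the transitions to push into the replay memory: the originals plus
    copies whose goal is replaced by a state actually achieved later on.
    """
    relabelled = []
    T = len(episode)
    for t, (s, a, s_next, g) in enumerate(episode):
        relabelled.append((s, a, reward_fn(s_next, g), s_next, g))  # original goal
        for _ in range(k):
            future = np.random.randint(t, T)       # a later step of the same episode
            g_her = episode[future][2]             # its achieved state becomes the new goal
            relabelled.append((s, a, reward_fn(s_next, g_her), s_next, g_her))
    return relabelled
```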

Result:
For a running example, they evaluated DQN with and without HER (hindsight experience replay). (Result figure omitted.)

They also applied HER to other off-policy learning methods, such as DQN and DDPG. (Result figure omitted.)

Algo: (pseudocode figure omitted; see the HER paper)

6. Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning

Authors: Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip. H. S. Torr, Pushmeet Kohli, Shimon Whiteson
Published Year: 2018 May

Prerequisites:
check Independent Q Learning
https://qiita.com/Rowing0914/items/6467b7c82ab83eb0baae

Contents:
Issues: How to transfer the success of DL in single-agent RL to MARL settings. The currently dominant technique, Independent Q-Learning (IQL), does not combine well with deep RL methods such as DQN, because IQL treats the other agents' policies as part of the environment; this makes the environment non-stationary and dynamic, so experience replay no longer works as is.

Approaches:
They propose two ways to address this issue.

  1. Using a multi-agent variant of importance sampling to naturally decay obsolete data
  2. Conditioning each agent's value function on a fingerprint that disambiguates the age of the data sampled from the replay memory

A simple fingerprint contains information such as the episode index and the exploration rate $\epsilon$.
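
A minimal sketch of how such a fingerprint could be attached to an observation before it is written to the replay memory; the observation layout and the two-element fingerprint are my own illustrative choices.

```python
import numpy as np

def add_fingerprint(observation, episode_index, epsilon):
    """Append a simple fingerprint (training 'age' and exploration rate) to an
    agent's observation.

    Conditioning the value function on this fingerprint lets it disambiguate how
    old a sampled transition is, i.e. which stage of the other agents' (changing)
    policies generated it.
    """
    fingerprint = np.array([float(episode_index), float(epsilon)])
    return np.concatenate([np.asarray(observation, dtype=float), fingerprint])

# example: a 3-dimensional observation recorded in episode 120 with epsilon = 0.05
obs_fp = add_fingerprint([0.2, -1.0, 0.7], episode_index=120, epsilon=0.05)
```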

Task: (figure omitted)

Result: (figure omitted)

7. PRIORITIZED EXPERIENCE REPLAY

Authors: Tom Schaul, John Quan, Ioannis Antonoglou and David Silver
Published Year: 2015 November

Issues:
