Survey memo on RL and IRL methods for high-dimensional spaces (not deep)


Introduction

A brief summary of reinforcement learning (RL) and inverse reinforcement learning (IRL) methods that handle high-dimensional state spaces.

Impressions

Few methods handle continuous state spaces directly; most of the papers are about how to compute efficiently over high-dimensional discrete state spaces.
→ IRL methods that handle continuous state and action spaces directly tend to be deep-learning based. See Prof. Levine's lecture materials.

IRL methods that handle continuous spaces directly:

  • Continuous state space & continuous action space
    • Continuous Inverse Optimal Control with Locally Optimal Examples [Levine and Koltun, ICML2012]
  • Continuous state space & continuous time
    • Inverse reinforcement learning in continuous time and space [Kamalapurkar, 2018]

RL methods that handle continuous spaces directly:

  • Continuous state space
    • Value Function Approximation in Reinforcement Learning Using the Fourier Basis [Konidaris+, AAAI2011]
    • Efficient high-dimensional maximum entropy modeling via symmetric partition functions [Vernaza and Bagnell, NIPS2012]
  • Continuous state space & continuous action space
    • An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward [Hoffman+, 2009]
    • Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods [Lazaric+, NIPS2007]
    • Reinforcement Learning in Continuous State and Action Spaces [Hasselt, 2012]
  • Continuous state space & continuous time & (probably) continuous action space
    • Reinforcement Learning In Continuous Time and Space [Doya, 2000]

That is about it; the rest are presumably methods for solving high-dimensional discrete spaces efficiently.

Table of contents

The ordering carries no particular meaning.

IRL

  • Fast Inverse Reinforcement Learning with Interval Consistent Graph for Driving Behavior Prediction [Shimosaka+, AAAI2017], pdf
  • First-Person Activity Forecasting with Online Inverse Reinforcement Learning [Rhinehart and Kitani, ICCV2017], pdf
  • Continuous Inverse Optimal Control with Locally Optimal Examples [Levine and Koltun, ICML2012], pdf
  • Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions [Huang+, AAAI2015], pdf
    • Action-Reaction: Forecasting the Dynamics of Human Interaction [Huang and Kitani, ECCV 2014], pdf
  • Inverse reinforcement learning in continuous time and space [Kamalapurkar, 2018], pdf

RL

  • Kernelized Value Function Approximation for Reinforcement Learning [Taylor and Parr, ICML2009], pdf
  • Value Function Approximation in Reinforcement Learning Using the Fourier Basis [Konidaris+, AAAI2011], pdf
  • An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward [Hoffman+, 2009], pdf
  • Efficient high-dimensional maximum entropy modeling via symmetric partition functions [Vernaza and Bagnell, NIPS2012], pdf
  • Reinforcement Learning In Continuous Time and Space [Doya, 2000], pdf
  • Reinforcement Learning in Continuous State and Action Spaces [Hasselt, 2012], pdf
  • Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods [Lazaric+, NIPS2007], pdf

IRL

Fast Inverse Reinforcement Learning with Interval Consistent Graph for Driving Behavior Prediction [Shimosaka+, AAAI2017], pdf

Extends graph-based MaxEnt IRL, originally developed for robots, to general state spaces.
The figure is taken from the graph-based MaxEnt IRL paper.


  • Discretizes the state space using a graph (a sketch of the generic MaxEnt IRL objective it accelerates follows this list).
  • Keeps the computational cost down by enforcing consistency of the time intervals.
  • On driving behavior prediction on residential roads, it achieves lower error and lower computational cost than IRL with a mesh-grid state space (MG-IRL), a continuous state space (C-IRL), a graph-based formulation (GB-IRL), and Monte Carlo sampling (FQI-IRL).
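For reference, the generic MaxEnt IRL objective that graph-based variants speed up (a sketch of the standard formulation, not this paper's exact derivation): trajectories are exponentially more likely the higher their cumulative reward $\theta^T f_\tau$, and $\theta$ is fit by matching feature expectations.

$$
P(\tau\mid\theta)=\frac{\exp(\theta^T f_\tau)}{Z(\theta)},\qquad
Z(\theta)=\sum_{\tau}\exp(\theta^T f_\tau),\qquad
\nabla_\theta \log \mathcal{L}(\theta)=\tilde{f}-\mathbb{E}_{P(\tau\mid\theta)}[f_\tau]
$$

Here $\tilde{f}$ is the empirical feature expectation of the demonstrations; the expensive part is the expectation over the discretized (graph-structured) state space, which the interval consistent graph is designed to compute cheaply.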


First-Person Activity Forecasting with Online Inverse Reinforcement Learning [Rhinehart and Kitani, ICCV2017], pdf

Recognizes the scene (room) and activities from first-person video and forecasts where the person is going.
Does not seem to handle continuous spaces directly.

  • Scenes and activities are recognized with deep-learning models.
  • Proposes the DARKO algorithm.


Continuous Inverse Optimal Control with Locally Optimal Examples [Levine and Koltun, ICML2012], pdf

Handles continuous state and action spaces.

  • Relies on local optimization of the reward function (meaning globally optimal expert demonstrations are perhaps no longer needed?).
  • Proposed in two variants: a linear reward function (the usual $\theta^Tf$ form) and a nonlinear one.
  • Reduces cost by applying a Laplace approximation to the partition function (see the sketch after this list).
  • Effectiveness confirmed on robot arm control, planar navigation, and simulated driving.
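The Laplace approximation mentioned above, in its generic form (a rough sketch assuming the demonstrated action sequence $u$ is exactly a local optimum of the reward $r$; the paper also handles demonstrations that are only approximately optimal):

$$
Z=\int \exp\big(r(u')\big)\,du'
\;\approx\;\exp\big(r(u)\big)\,(2\pi)^{d/2}\,\lvert -H\rvert^{-1/2},
\qquad H=\nabla_u^2\, r(u),
$$

so the likelihood of a demonstration only requires the gradient and Hessian of the reward around that demonstration instead of an integral over all action sequences.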

Robot arm control

The proposed methods are 'linear' and 'nonlinear'.
Both converge to a smaller loss than the baselines.


Experiments with an increasing number of arm joints.
As the number of joints grows, the baselines' loss fails to converge and their computation time blows up, while the proposed method still converges.


Planar navigation

Computes the reward function from 16 locally optimal trajectories.


Computes the reward function from globally and from locally optimal trajectories.
The baselines do not converge well when given only locally optimal trajectories.


Simulated driving

Learns aggressive, evasive, and car-following driving behavior.

Video


Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions [Huang+, AAAI2015], pdf

Proposes an approximation of MaxEnt IOC that reduces the computational cost of the partition function.

  • Quantizes the state and action spaces.
  • Approximate dynamic programming (a sketch of the standard MaxEnt backward recursion it approximates follows this list).
  • Monte Carlo sampling.
  • May not work well when the number of samples is small.
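For context, the backward recursion whose cost these approximations target is the standard MaxEnt IOC soft value iteration (a generic sketch; the paper's quantization and sampling schemes approximate this, not necessarily in this exact notation):

$$
Q(s,a)=r(s,a)+\mathbb{E}_{s'\sim T(\cdot\mid s,a)}\big[V(s')\big],\qquad
V(s)=\log\sum_{a}\exp\big(Q(s,a)\big)
$$

The cost grows with the size of the quantized state-action space, which is why approximate dynamic programming and Monte Carlo sampling are used instead of an exhaustive sweep.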


Inverse reinforcement learning in continuous time and space [Kamalapurkar, 2018], pdf

Handles continuous state space and continuous time.
It is packed with equations and I could barely follow it...

  • Assumes a linear dynamical system.
  • Learns by minimizing an inverse Bellman error.

RL

Kernelized Value Function Approximation for Reinforcement Learning [Taylor and Parr, ICML2009], pdf

Unifies several kernel-based value function approximation methods (they share the kernel-expansion form sketched after the list below).

  • Kernelized LSTD (KLSTD)
  • Gaussian Process Temporal Difference learning (GPTD)
  • Gaussian Processes in Reinforcement Learning (GPRL)
  • Tunes the regularization parameters via a Bellman error decomposition.
  • Experiments varying $\Sigma_p$, which controls the effect of the Bellman error decomposition; setting it to $0.1I$ yields smoother results.
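Roughly, the shared form behind KLSTD, GPTD, and GPRL (a sketch of the common structure rather than the paper's exact unification): the value function is represented as a kernel expansion over the sampled states,

$$
\hat{V}(s)=\sum_{i=1}^{n}\alpha_i\,k(s_i,s),
$$

and the methods differ mainly in how the coefficients $\alpha$ are regularized and fit from the Bellman residual, which the Bellman error decomposition makes explicit.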


Value Function Approximation in Reinforcement Learning Using the Fourier Basis [Konidaris+, AAAI2011], pdf

Handles continuous state spaces.

  • Approximates the value function with a Fourier series (see the sketch after this list).
  • Gives better results than value function approximation with radial basis functions or a polynomial basis.
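A minimal sketch of the Fourier basis and a linear value function on top of it (the state is assumed to be pre-scaled to $[0,1]^d$; the Sarsa update and the per-feature learning-rate scaling from the paper are omitted, and the variable names are mine):

```python
import itertools
import numpy as np

def fourier_basis(order, dim):
    """Order-n Fourier basis for states scaled to [0, 1]^dim.
    Each feature is phi_i(x) = cos(pi * c_i . x) for a coefficient
    vector c_i in {0, ..., order}^dim."""
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=dim)))

    def features(x):
        x = np.asarray(x, dtype=float)   # assumed already normalized to [0, 1]
        return np.cos(np.pi * coeffs @ x)

    return features

# Linear value function approximation on top of the basis,
# e.g. for Mountain Car (2D state: position, velocity).
phi = fourier_basis(order=3, dim=2)
w = np.zeros((3 + 1) ** 2)               # one weight per basis function

def value(x):
    return float(w @ phi(x))
```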

The Swing-Up Acrobot

Trained with the Sarsa algorithm.
The approximation methods compared are:

  • The polynomial Basis
  • Radial Basis Functions (RBFs)
  • Proto-Value Functions (PVFs)
  • Fourier Basis


The Discontinuous Room

Also compared on a discrete state space.


Mountain Car


An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward [Hoffman+, 2009], pdf

Handles continuous state and action spaces.
Optimal control via an EM algorithm.

  • Probably effective when the reward can be modeled as a linear combination of Gaussians.

Efficient high-dimensional maximum entropy modeling via symmetric partition functions [Vernaza and Bagnell, NIPS2012], pdf

Handles continuous state spaces.
Makes dynamic programming in a low-dimensional space possible.

  • Seems to exploit the fact that the partition function is symmetric under rotations?


Reinforcement Learning In Continuous Time and Space [Doya, 2000], pdf

Handles continuous state space, continuous time, and probably continuous action space.
Proposes a continuous actor-critic and a value-gradient based policy.

  • Extends RL to the continuous case based on the HJB equation (sketched after this list).
  • Finds the optimal policy by way of the value function.
  • The continuous actor-critic acquires a policy in fewer steps than the discrete actor-critic.
  • The value-gradient based policy appears to work better than the continuous actor-critic.
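Roughly, the continuous-time setup (a sketch of the discounted HJB equation the paper builds on; the notation may differ slightly from Doya's):

$$
\frac{1}{\tau}V^*(x)=\max_{u}\Big[r(x,u)+\frac{\partial V^*}{\partial x}f(x,u)\Big],
\qquad \dot{x}=f(x,u),
$$

where $\tau$ is the time constant of discounting. The value-gradient based policy chooses $u$ greedily from the bracketed term using the learned $\partial V/\partial x$, without a separate actor.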

Reinforcement Learning in Continuous State and Action Spaces [Hasselt, 2012], pdf

An overview of RL methods for continuous state and action spaces.
The main focus is Cacla (continuous actor-critic learning automaton).

  • A textbook-style exposition.
  • Covers value-approximation methods and policy-approximation methods.
  • Handling both continuous state and continuous action spaces is difficult without a policy-approximation (actor-critic) method.
    • Cacla (continuous actor-critic learning automaton)
    • CMA-ES (covariance matrix adaptation evolution strategy): a gradient-free, Monte Carlo method
    • NAC (natural actor-critic)
  • Compared on the double-pole balancing problem; Cacla performs best (a sketch of its update rule follows this list).
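A minimal sketch of one Cacla step with a linear critic $V(s)=v^T\phi(s)$ and a linear deterministic actor (the feature map `phi`, the `env` interface, and the hyperparameters are placeholders of mine, not from the chapter):

```python
import numpy as np

def cacla_step(phi, v, A, s, env, alpha=0.01, beta=0.01, gamma=0.99, sigma=0.1):
    """One Cacla transition: Gaussian exploration around the actor's output,
    TD(0) critic update, and an actor update only on positive TD error."""
    feat = phi(s)
    a = A @ feat + np.random.normal(0.0, sigma, size=A.shape[0])  # explore
    s_next, r, done = env.step(a)

    # TD error of the state-value critic
    target = r if done else r + gamma * float(v @ phi(s_next))
    delta = target - float(v @ feat)
    v += alpha * delta * feat                 # critic update (TD(0))

    # Cacla's key rule: move the actor toward the explored action
    # only when the outcome was better than expected (delta > 0).
    if delta > 0:
        A += beta * np.outer(a - A @ feat, feat)

    return v, A, s_next, done
```

Updating the actor only on positive TD error is what distinguishes Cacla from a regular policy-gradient actor-critic.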

Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods [Lazaric+, NIPS2007], pdf

Handles continuous state and action spaces.
Proposes SMC-learning (Sequential Monte Carlo learning), an extension of actor-critic.

  1. Action selection
  2. Critic update (computes the action-value function)
  3. Actor update
  The method runs in these three stages (a schematic sketch follows this list).
  • Samples from a set of admissible actions (which already seems discrete at that point).
  • Actions are sampled with a Monte Carlo method.
  • Uses an on-policy update (SARSA) for the action-value function.
  • Action sampling weights are maintained via importance sampling.
  • The number of sampled actions is also updated as appropriate.
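A schematic sketch of the three-stage loop as I read it, for a single state's population of candidate actions (the weight update, jitter, and resampling rule here are simplified placeholders, not the paper's exact equations):

```python
import numpy as np

n_particles = 20
actions = np.random.uniform(-1.0, 1.0, n_particles)   # candidate actions for one state
weights = np.ones(n_particles) / n_particles           # importance weights
q_values = np.zeros(n_particles)                       # SARSA estimates of Q(s, a_i)

def select_action():
    """1. Action selection: draw a candidate action according to its weight."""
    return np.random.choice(n_particles, p=weights)

def critic_update(i, reward, i_next, alpha=0.1, gamma=0.99):
    """2. Critic update: on-policy SARSA on the sampled actions."""
    q_values[i] += alpha * (reward + gamma * q_values[i_next] - q_values[i])

def actor_update(tau=1.0):
    """3. Actor update: reweight candidates by their estimated value, then
    resample and jitter the population around the promising ones."""
    global actions, weights, q_values
    w = np.exp(q_values / tau)
    weights = w / w.sum()
    idx = np.random.choice(n_particles, size=n_particles, p=weights)
    actions = actions[idx] + np.random.normal(0.0, 0.05, n_particles)  # move step
    q_values = q_values[idx]
    weights = np.ones(n_particles) / n_particles
```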

