
Actions space: Discrete

References: Sample Efficient Actor-Critic with Experience Replay

Network Structure

[Diagram: /_static/img/design_imgs/acer.png]

Algorithm Description

Choosing an action - Discrete actions

The policy network is used to predict action probabilities. During training, an action is sampled from the categorical distribution defined by these probabilities. During testing, the action with the highest probability is chosen.
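As a rough illustration of the two modes, here is a minimal NumPy sketch; choose_action and action_probabilities are illustrative names standing in for the policy head's softmax output, not Coach identifiers::

    import numpy as np

    def choose_action(action_probabilities: np.ndarray, training: bool) -> int:
        """Select a discrete action from the policy's predicted probabilities."""
        if training:
            # Training: sample from the categorical distribution defined by the policy.
            return int(np.random.choice(len(action_probabilities), p=action_probabilities))
        # Testing: act greedily, taking the most probable action.
        return int(np.argmax(action_probabilities))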

Training the network

Each iteration performs one on-policy update with a batch of the last T_max transitions, and n (the replay ratio) off-policy updates with batches of T_max transitions sampled from the replay buffer.
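A minimal sketch of this schedule, with the three callables as hypothetical placeholders for batch collection, replay sampling and the gradient update (they are not Coach's actual API)::

    from typing import Callable, Sequence

    Batch = Sequence  # a batch of T_max transitions, in whatever form the agent uses

    def train_iteration(
        collect_on_policy_batch: Callable[[int], Batch],  # gather the last T_max transitions
        sample_replay_batch: Callable[[int], Batch],      # sample T_max transitions from the buffer
        acer_update: Callable[[Batch, bool], None],       # apply one gradient update (off-policy flag)
        t_max: int,
        replay_ratio: int,
    ) -> None:
        """One ACER iteration: a single on-policy update, then n = replay_ratio
        off-policy updates on batches sampled from the replay buffer."""
        acer_update(collect_on_policy_batch(t_max), False)
        for _ in range(replay_ratio):
            acer_update(sample_replay_batch(t_max), True)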

Each update performs the following procedure (a code sketch of these steps follows the list):

  1. Calculate state values:

    V(s_t) = \mathbb{E}_{a \sim \pi}\left[ Q(s_t, a) \right]
  2. Calculate Q-retrace:

    Q^{ret}(s_t, a_t) = r_t + \gamma \bar{\rho}_{t+1} \left[ Q^{ret}(s_{t+1}, a_{t+1}) - Q(s_{t+1}, a_{t+1}) \right] + \gamma V(s_{t+1})

    where \bar{\rho}_t = \min\{ c, \rho_t \}, \quad \rho_t = \frac{\pi(a_t \mid s_t)}{\mu(a_t \mid s_t)}
  3. Accumulate gradients:

    • Policy gradients (with bias correction):

    \hat{g}^{policy}_t = \bar{\rho}_t \, \nabla \log \pi(a_t \mid s_t) \left[ Q^{ret}(s_t, a_t) - V(s_t) \right] + \mathbb{E}_{a \sim \pi}\left( \left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+ \nabla \log \pi(a \mid s_t) \left[ Q(s_t, a) - V(s_t) \right] \right)

    • Q-Head gradients (MSE):

    \hat{g}^{Q}_t = \left( Q^{ret}(s_t, a_t) - Q(s_t, a_t) \right) \nabla Q(s_t, a_t)
  4. (Optional) Trust region update: change the policy loss gradient w.r.t. the network output:

    \hat{g}^{trust-region}_t = \hat{g}^{policy}_t - \max\left\{ 0, \frac{k^T \hat{g}^{policy}_t - \delta}{\| k \|_2^2} \right\} k

    where k = \nabla D_{KL}\left[ \pi_{avg} \,\|\, \pi \right]

    The average policy network is an exponential moving average of the policy network's parameters (\theta_{avg} = \alpha \theta_{avg} + (1 - \alpha) \theta). The goal of the trust region update is to limit the difference between the updated policy and the average policy, which helps ensure stability.
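To make the procedure concrete, here is a small NumPy sketch of the parts that are plain array arithmetic: the Q-retrace recursion (steps 1 and 2), the trust-region gradient adjustment (step 4), and the average-policy update. All names are illustrative rather than Coach internals, and the bias-corrected policy gradient itself (step 3) is omitted because it relies on the framework's automatic differentiation::

    import numpy as np

    def q_retrace(rewards, q_taken, values, rho, gamma, c, bootstrap_value):
        """Steps 1-2: compute the Q^ret targets of one trajectory, backwards in time.

        rewards          r_t for t = 0..T-1
        q_taken          Q(s_t, a_t) for the actions actually taken
        values           V(s_t) = E_{a ~ pi}[Q(s_t, a)]
        rho              importance weights pi(a_t | s_t) / mu(a_t | s_t)
        gamma, c         discount factor and truncation constant (rho_bar = min(c, rho))
        bootstrap_value  V(s_T), used to start the recursion at the end of the rollout
        """
        rho_bar = np.minimum(c, np.asarray(rho, dtype=np.float64))
        q_ret = np.zeros(len(rewards))
        # carry holds rho_bar_{t+1} * (Q^ret_{t+1} - Q_{t+1}) + V(s_{t+1});
        # at the end of the rollout it is just the bootstrapped state value.
        carry = bootstrap_value
        for t in reversed(range(len(rewards))):
            q_ret[t] = rewards[t] + gamma * carry
            carry = rho_bar[t] * (q_ret[t] - q_taken[t]) + values[t]
        return q_ret

    def trust_region_gradient(g_policy, k, delta):
        """Step 4: pull the policy gradient back along k = grad D_KL[pi_avg || pi]
        whenever the linearised KL change exceeds delta (small epsilon for safety)."""
        g_policy, k = np.asarray(g_policy), np.asarray(k)
        scale = max(0.0, (np.dot(k, g_policy) - delta) / (np.dot(k, k) + 1e-8))
        return g_policy - scale * k

    def update_average_policy(theta_avg, theta, alpha):
        """Average policy network: exponential moving average of the parameters."""
        return [alpha * avg + (1.0 - alpha) * new for avg, new in zip(theta_avg, theta)]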