ACER

Actions space: Discrete

References: Sample Efficient Actor-Critic with Experience Replay

Network Structure

[Network structure diagram: /_static/img/design_imgs/acer.png]

Algorithm Description

Choosing an action - Discrete actions

The policy network is used to predict action probabilities. During training, an action is sampled from the categorical distribution defined by these probabilities. During evaluation, the action with the highest probability is chosen.
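
As a minimal illustration of this rule, the following NumPy sketch samples during training and acts greedily otherwise (the helper name and signature are illustrative, not part of the Coach API):

import numpy as np

def choose_action(action_probabilities, training=True, rng=np.random.default_rng()):
    # During training: sample from the categorical distribution defined
    # by the policy head's output probabilities.
    if training:
        return int(rng.choice(len(action_probabilities), p=action_probabilities))
    # During evaluation: act greedily, i.e. pick the most probable action.
    return int(np.argmax(action_probabilities))

# Example with made-up probabilities for 4 discrete actions
probs = np.array([0.1, 0.6, 0.2, 0.1])
print(choose_action(probs, training=True))   # stochastic sample
print(choose_action(probs, training=False))  # always returns 1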

Training the network

Each training iteration performs one on-policy update using a batch of the last T_max transitions, and n (the replay ratio) off-policy updates using batches of T_max transitions sampled from the replay buffer.
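
A schematic sketch of this schedule is shown below; run_update, the replay-buffer handling and the replay_ratio default are hypothetical placeholders, not the Coach API:

import random

def run_update(batch, on_policy):
    # Placeholder for the actual ACER network update (hypothetical).
    print(f"{'on' if on_policy else 'off'}-policy update on {len(batch)} transitions")

def train_iteration(last_transitions, replay_buffer, replay_ratio=4):
    # One on-policy update with the batch of the last T_max transitions.
    run_update(last_transitions, on_policy=True)
    # n (replay_ratio) off-policy updates, each on a T_max-long trajectory
    # sampled from the experience replay buffer.
    for _ in range(replay_ratio):
        run_update(random.choice(replay_buffer), on_policy=False)

# Example with dummy transition batches
dummy = [[("s", "a", 0.0)] * 3 for _ in range(5)]
train_iteration(last_transitions=dummy[0], replay_buffer=dummy, replay_ratio=2)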

Each update performs the following procedure (illustrative NumPy sketches of these steps are given after the list):

  1. Calculate state values:

    V(s_t) = \mathbb{E}_{a \sim \pi}[Q(s_t, a)]
  2. Calculate Q retrace:

    Q^{ret}(s_t, a_t) = r_t + \gamma \bar{\rho}_{t+1} \left[ Q^{ret}(s_{t+1}, a_{t+1}) - Q(s_{t+1}, a_{t+1}) \right] + \gamma V(s_{t+1})
    where \bar{\rho}_t = \min\{c, \rho_t\}, \quad \rho_t = \frac{\pi(a_t \mid s_t)}{\mu(a_t \mid s_t)}
  3. Accumulate gradients:

    โ€ข Policy gradients (with bias correction):

    ฤpolicyt โ€‰=โ€‰ โ€’ฯtโˆ‡logฯ€(atโ€‰โˆฃโ€‰st)[Qret(st,โ€‰at)โ€‰โˆ’โ€‰V(st)] โ€… โ€… โ€… โ€‰+โ€‰๐”ผaโ€‰โˆผโ€‰ฯ€โŽ›โŽโŽกโŽฃ(ฯt(a)โ€‰โˆ’โ€‰c)/(ฯt(a))โŽคโŽฆโˆ‡logฯ€(aโ€‰โˆฃโ€‰st)[Q(st,โ€‰a)โ€‰โˆ’โ€‰V(st)]โŽžโŽ 

    โ€ข Q-Head gradients (MSE):

    ฤQtโ€‰=โ€‰(Qret(st,โ€‰at)โ€‰โˆ’โ€‰Q(st,โ€‰at))โˆ‡Q(st,โ€‰at) โ€…
  4. (Optional) Trust region update: change the policy loss gradient w.r.t. the network output:

    ฤtrustโ€‰โˆ’โ€‰regiontโ€‰=โ€‰ฤpolicytโ€‰โˆ’โ€‰maxโŽงโŽฉ0,โ€‰(kTฤpolicytโ€‰โˆ’โ€‰ฮด)/(โ€–kโ€–22)โŽซโŽญk
    whereโ€kโ€‰=โ€‰โˆ‡DKL[ฯ€avgโ€‰โˆฅโ€‰ฯ€]

    The average policy network is an exponential moving average of the network parameters (\theta_{avg} = \alpha \theta_{avg} + (1 - \alpha)\theta). The goal of the trust region update is to limit the difference between the updated policy and the average policy, in order to ensure stability.
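
For steps 1 and 2, the following NumPy sketch computes the Q-retrace targets with a backward pass over a single, non-terminating trajectory (variable names and the default constants are illustrative):

import numpy as np

def q_retrace(rewards, q_taken, values, rho_bar, bootstrap_value, discount=0.99):
    # rewards[t]      = r_t
    # q_taken[t]      = Q(s_t, a_t) for the action actually taken
    # values[t]       = V(s_t) = E_{a~pi}[Q(s_t, a)]              (step 1)
    # rho_bar[t]      = min(c, pi(a_t|s_t) / mu(a_t|s_t))         (truncated importance weight)
    # bootstrap_value = V(s_T) of the state following the last transition
    q_ret = np.zeros(len(rewards))
    running = bootstrap_value  # plays the role of rho_bar[Q^ret - Q] + V of the next step
    for t in reversed(range(len(rewards))):
        q_ret[t] = rewards[t] + discount * running
        running = rho_bar[t] * (q_ret[t] - q_taken[t]) + values[t]
    return q_ret

# Tiny example with made-up numbers (c is already folded into rho_bar)
print(q_retrace(rewards=np.array([1.0, 0.0]),
                q_taken=np.array([0.5, 0.4]),
                values=np.array([0.45, 0.35]),
                rho_bar=np.array([1.0, 0.8]),
                bootstrap_value=0.3))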
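
For step 3, implementations typically define scalar surrogate losses whose automatic-differentiation gradients match the expressions above (with the weights and advantages treated as constants under differentiation). A hedged NumPy sketch of those scalars for a single state, with an illustrative truncation constant c:

import numpy as np

def acer_losses(pi, mu, q_values, q_ret, action, c=10.0):
    # pi, mu   : current / behaviour action probabilities, shape [num_actions]
    # q_values : Q(s_t, a) for all actions, shape [num_actions]
    # q_ret    : retrace target Q^ret(s_t, a_t) from step 2
    # action   : the action a_t actually taken
    v = np.sum(pi * q_values)            # V(s_t) = E_{a~pi}[Q(s_t, a)]
    rho = pi / mu                        # per-action importance weights
    rho_bar = min(c, rho[action])

    # Truncated policy-gradient term: rho_bar * log pi(a_t|s_t) * (Q^ret - V)
    loss_policy = -rho_bar * np.log(pi[action]) * (q_ret - v)

    # Bias correction: E_{a~pi}[ ((rho(a)-c)/rho(a))_+ * log pi(a|s_t) * (Q(s_t,a) - V) ]
    correction = np.maximum((rho - c) / rho, 0.0)
    loss_bias = -np.sum(pi * correction * np.log(pi) * (q_values - v))

    # Q-head regression toward the retrace target (MSE)
    loss_q = 0.5 * (q_ret - q_values[action]) ** 2

    return loss_policy + loss_bias, loss_q

# Example with made-up values for a 3-action state
print(acer_losses(pi=np.array([0.2, 0.5, 0.3]),
                  mu=np.array([0.3, 0.4, 0.3]),
                  q_values=np.array([0.1, 0.6, 0.2]),
                  q_ret=0.7, action=1))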
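
For step 4, the trust-region projection reduces to a small vector operation on the gradient with respect to the network output; a sketch with an assumed delta:

import numpy as np

def trust_region_adjust(g_policy, k, delta=1.0):
    # g_policy : accumulated policy gradient w.r.t. the network output (step 3)
    # k        : gradient of KL(pi_avg || pi) w.r.t. the same output
    # delta    : trust-region threshold (assumed value)
    scale = max(0.0, (np.dot(k, g_policy) - delta) / np.dot(k, k))
    return g_policy - scale * k

# Example: the adjustment only kicks in when k^T g exceeds delta
g = np.array([2.0, -1.0, 0.5])
k = np.array([1.0, 0.0, 1.0])
print(trust_region_adjust(g, k, delta=1.0))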