mirror of https://github.com/gryf/coach.git synced 2025-12-18 03:30:19 +01:00
coach/docs_raw/source/components/agents/value_optimization/rainbow.rst
Itai Caspi 6d40ad1650 update of api docstrings across coach and tutorials [WIP] (#91)
2018-11-15 15:00:13 +02:00


Action space: Discrete

References: Rainbow: Combining Improvements in Deep Reinforcement Learning

Network Structure

[Network structure diagram: /_static/img/design_imgs/rainbow.png]

Algorithm Description

Rainbow combines 6 recent advancements in reinforcement learning:

  • N-step returns

  • Distributional state-action value learning

  • Dueling networks

  • Noisy Networks

  • Double DQN

  • Prioritized Experience Replay
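To make the first of these components concrete, here is a minimal plain-Python sketch (not Coach's implementation; the function name is illustrative) of the truncated n-step return that replaces the single-step TD target:

```python
def n_step_return(rewards, gamma, bootstrap_value):
    """Truncated n-step return:
    G = r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1} + gamma^n * V(s_{t+n}).
    `rewards` holds the n rewards observed from step t onward, and
    `bootstrap_value` is the value estimate at the n-th successor state."""
    g = bootstrap_value
    for r in reversed(rewards):  # fold the rewards in from the far end
        g = r + gamma * g
    return g
```

With `rewards=[1, 1, 1]`, `gamma=0.5`, and `bootstrap_value=8`, this yields 1 + 0.5 + 0.25 + 0.125*8 = 2.75.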

Training the network

  1. Sample a batch of transitions from the replay buffer.

  2. The Bellman update is projected onto the set of atoms representing the Q-value distribution, such that the i-th component of the projected update is calculated as follows:

    \left( \Phi \hat{Z}_{\theta}(s_t, a_t) \right)_i = \sum_{j=0}^{N-1} \left[ 1 - \frac{\left| [\hat{z}_j]_{V_{MIN}}^{V_{MAX}} - z_i \right|}{\Delta z} \right]_0^1 \, p_j(s_{t+1}, \pi(s_{t+1}))

    where [\cdot]_a^b bounds its argument in the range [a, b], and \hat{z}_j is the Bellman update for atom z_j: \hat{z}_j := r_t + \gamma r_{t+1} + \dots + \gamma^{n-1} r_{t+n-1} + \gamma^n z_j

  3. The network is trained with a cross-entropy loss between the projected target probability distribution and the network's predicted probability distribution. Only the targets of the actions that were actually taken are updated.

  4. Once every few thousand steps, the weights are copied from the online network to the target network.

  5. After every training step, the priorities of the batch transitions are updated in the prioritized replay buffer, using the KL-divergence loss returned by the network.
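The projection in step 2 distributes the probability of each Bellman-updated atom between its two nearest neighbours on the fixed support. A minimal plain-Python sketch (illustrative names, not Coach's API), assuming N atoms evenly spaced on [V_MIN, V_MAX] and a single transition with n-step reward sum g and effective discount gamma^n:

```python
def project_distribution(next_probs, g, gamma_n, v_min, v_max):
    """Project the Bellman-updated atoms hat{z}_j = g + gamma^n * z_j back
    onto the fixed support {z_i}, splitting each atom's probability mass
    linearly between its two nearest support points (step 2 above)."""
    n_atoms = len(next_probs)
    delta_z = (v_max - v_min) / (n_atoms - 1)
    z = [v_min + i * delta_z for i in range(n_atoms)]  # fixed support z_i
    projected = [0.0] * n_atoms
    for p_j, z_j in zip(next_probs, z):
        # Bellman update for atom z_j, bounded to [V_MIN, V_MAX]
        tz_j = min(max(g + gamma_n * z_j, v_min), v_max)
        b = (tz_j - v_min) / delta_z          # fractional index on the support
        l, u = int(b), min(int(b) + 1, n_atoms - 1)
        if l == u:                            # lands exactly on an atom
            projected[l] += p_j
        else:                                 # split mass between neighbours
            projected[l] += p_j * (u - b)
            projected[u] += p_j * (b - l)
    return projected
```

Because each atom's mass is only redistributed, the result is still a valid probability distribution, which is what makes the cross-entropy loss in step 3 well defined.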