
# Distributional DQN

**Action space:** Discrete

## Paper

[A Distributional Perspective on Reinforcement Learning](https://arxiv.org/abs/1707.06887)

## Network Structure

## Algorithmic Description

### Training the network

  1. Sample a batch of transitions from the replay buffer.

  2. The Bellman update is projected onto the set of atoms representing the Q values distribution, such that the i-th component of the projected update is calculated as follows (a NumPy sketch of this projection and the resulting loss appears after this list):

    $$(\Phi \hat{T} Z_{\theta}(s_t,a_t))_i=\sum_{j=0}^{N-1}\Big[1-\frac{\big|[\hat{T}_{z_{j}}]^{V_{MAX}}_{V_{MIN}}-z_i\big|}{\Delta z}\Big]^1_0 \ p_j(s_{t+1}, \pi(s_{t+1}))$$

    where:

    * $[ \cdot ]^{b}_{a}$ bounds its argument to the range $[a, b]$
    * $\hat{T}_{z_{j}}$ is the Bellman update for atom $z_j$: $\hat{T}_{z_{j}} := r+\gamma z_j$

  3. The network is trained with the cross-entropy loss between the predicted probability distribution and the projected target probability distribution. Only the targets of the actions that were actually taken are updated.

  4. Once every few thousand steps, the weights are copied from the online network to the target network.
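
The sketch below condenses steps 2 and 3 into runnable NumPy code. It is a minimal illustration under stated assumptions, not Coach's implementation: the callables `online_net` and `target_net`, the batch layout, and the hyperparameters (`N`, `V_MIN`, `V_MAX`, `GAMMA`) are placeholders chosen for the example, the greedy next action taken from the target network's expected values is one common reading of $\pi(s_{t+1})$, and the periodic weight copy of step 4 is framework-specific and omitted.

```python
# Minimal NumPy sketch of the distributional (C51) projection and loss, steps 2-3 above.
# `online_net` and `target_net` are hypothetical callables mapping a batch of states
# to per-action atom probabilities of shape (batch, num_actions, N).
import numpy as np

N = 51                        # number of atoms (assumed)
V_MIN, V_MAX = -10.0, 10.0    # support bounds (assumed)
GAMMA = 0.99                  # discount factor (assumed)
delta_z = (V_MAX - V_MIN) / (N - 1)
z = V_MIN + delta_z * np.arange(N)            # fixed atom locations z_0 .. z_{N-1}


def project_bellman_update(rewards, dones, next_probs):
    """Project the Bellman-updated atoms back onto the fixed support {z_i}.

    rewards:    (batch,)   immediate rewards r
    dones:      (batch,)   1.0 where the episode terminated (usual convention)
    next_probs: (batch, N) p_j(s_{t+1}, pi(s_{t+1}))
    returns:    (batch, N) the projected target distribution (Phi T_hat Z)_i
    """
    # Bellman update for every atom: T_hat z_j = r + gamma * z_j, clipped to [V_MIN, V_MAX]
    Tz = rewards[:, None] + GAMMA * (1.0 - dones[:, None]) * z[None, :]
    Tz = np.clip(Tz, V_MIN, V_MAX)
    # [1 - |T_hat z_j - z_i| / delta_z] clipped to [0, 1]: each updated atom
    # splits its probability mass linearly between its two nearest support atoms
    weights = np.clip(1.0 - np.abs(Tz[:, None, :] - z[None, :, None]) / delta_z, 0.0, 1.0)
    return np.einsum('bij,bj->bi', weights, next_probs)


def training_loss(batch, online_net, target_net):
    states, actions, rewards, next_states, dones = batch
    idx = np.arange(len(actions))
    # Greedy next action from the target network's expected Q values (one common choice)
    next_probs_all = target_net(next_states)               # (batch, A, N)
    next_actions = (next_probs_all * z).sum(axis=-1).argmax(axis=-1)
    next_probs = next_probs_all[idx, next_actions]         # (batch, N)
    target_dist = project_bellman_update(rewards, dones, next_probs)
    # Cross-entropy loss, computed only for the actions that were actually taken
    probs = online_net(states)[idx, actions]               # (batch, N)
    return -(target_dist * np.log(probs + 1e-8)).sum(axis=-1).mean()
```

The `weights` tensor is the triangular kernel from the projection formula: it distributes the mass of each Bellman-updated atom between its two neighbouring support atoms in proportion to distance, so the target distribution stays on the fixed support and the cross-entropy against the predicted distribution is well defined.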