Double DQN
==========
**Action space:** Discrete
**References:** `Deep Reinforcement Learning with Double Q-learning <https://arxiv.org/abs/1509.06461>`_
Network Structure
-----------------
.. image:: /_static/img/design_imgs/dqn.png
   :align: center
Algorithm Description
---------------------
Training the network
++++++++++++++++++++
1. Sample a batch of transitions from the replay buffer.

2. Using the next states from the sampled batch, run the online network to find the :math:`Q` maximizing
   action :math:`\text{argmax}_a Q(s_{t+1},a)`. For these actions, use the corresponding next states and
   run the target network to calculate :math:`Q(s_{t+1},\text{argmax}_a Q(s_{t+1},a))`.

3. To zero out the updates for the actions that were not played (the MSE loss terms for those actions are
   zeroed), run the online network on the current states from the sampled batch to get the current
   :math:`Q` value predictions, and set those values as the targets for the actions that were not
   actually played.

4. For each action that was played, calculate the network targets using the following equation:

   :math:`y_t=r(s_t,a_t)+\gamma \cdot Q(s_{t+1},\text{argmax}_a Q(s_{t+1},a))`

5. Finally, train the online network using the current states as inputs, and the targets described above.

6. Once every few thousand steps, copy the weights from the online network to the target network.
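The target calculation in steps 2-4 can be sketched as follows. This is a minimal NumPy sketch, not Coach's actual implementation; the function name ``double_dqn_targets`` and its array arguments are hypothetical, standing in for the Q-value outputs of the online and target networks:

.. code-block:: python

   import numpy as np

   def double_dqn_targets(rewards, next_q_online, next_q_target, discount=0.99):
       """Compute Double DQN targets for the actions that were played.

       rewards:        (batch,) array of r(s_t, a_t)
       next_q_online:  (batch, n_actions) Q(s_{t+1}, .) from the online network
       next_q_target:  (batch, n_actions) Q(s_{t+1}, .) from the target network
       """
       # Step 2: select the maximizing action with the online network...
       best_actions = np.argmax(next_q_online, axis=1)
       # ...but evaluate that action with the target network
       evaluated_q = next_q_target[np.arange(len(rewards)), best_actions]
       # Step 4: y_t = r + gamma * Q_target(s_{t+1}, argmax_a Q_online(s_{t+1}, a))
       return rewards + discount * evaluated_q

Decoupling action *selection* (online network) from action *evaluation* (target network) is the core difference from vanilla DQN, and is what reduces the overestimation bias of the max operator.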