Double DQN
==========

**Actions space:** Discrete

**References:** `Deep Reinforcement Learning with Double Q-learning <https://arxiv.org/abs/1509.06461.pdf>`_

Network Structure
-----------------

.. image:: /_static/img/design_imgs/dqn.png
   :align: center
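
As a rough illustration of the diagram above (this is a sketch, not Coach's actual implementation), the network is the standard DQN head: the state is embedded and mapped to one Q value per discrete action. A minimal PyTorch example, assuming a flat observation of size ``obs_dim`` and ``num_actions`` discrete actions (both names are placeholders):

.. code-block:: python

    import torch.nn as nn

    class QNetwork(nn.Module):
        """Map a state vector to one Q value per discrete action."""

        def __init__(self, obs_dim, num_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, num_actions),  # Q(s, a) for every action a
            )

        def forward(self, obs):
            return self.net(obs)

Both the online network and the target network used below share this structure; only their weights differ.
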
Algorithm Description
---------------------

Training the network
++++++++++++++++++++

1. Sample a batch of transitions from the replay buffer.
2. Using the next states from the sampled batch, run the online network in order to find the :math:`Q` maximizing
   action :math:`argmax_a Q(s_{t+1},a)`. For these actions, use the corresponding next states and run the target
   network to calculate :math:`Q(s_{t+1},argmax_a Q(s_{t+1},a))`.
3. In order to zero out the updates for the actions that were not played (their error, and hence their MSE loss,
   becomes zero), use the current states from the sampled batch and run the online network to get the current
   Q value predictions. Set those values as the targets for the actions that were not actually played.
4. For each action that was played, use the following equation for calculating the targets of the network:
   :math:`y_t=r(s_t,a_t)+\gamma \cdot Q(s_{t+1},argmax_a Q(s_{t+1},a))`
5. Finally, train the online network using the current states as inputs, and with the aforementioned targets
   (a code sketch of these steps is given after this list).
6. Once in every few thousand steps, copy the weights from the online network to the target network.
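
Putting the steps above together, here is a minimal sketch of a single Double DQN update in PyTorch. It is an illustration only, not Coach's actual code, and it assumes tensors ``states``, ``actions`` (integer indices), ``rewards`` and ``next_states`` sampled in step 1, two ``QNetwork`` instances ``online_net`` and ``target_net`` as sketched earlier, and an ``optimizer`` over the online network's parameters:

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def double_dqn_update(online_net, target_net, optimizer,
                          states, actions, rewards, next_states, gamma=0.99):
        with torch.no_grad():
            # Step 2: the online network selects the argmax action,
            # the target network evaluates it.
            best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
            next_q = target_net(next_states).gather(1, best_actions).squeeze(1)

            # Step 3: start from the online network's own predictions, so the
            # error (and hence the MSE loss) is zero for actions not played.
            targets = online_net(states).clone()

            # Step 4: y_t = r(s_t, a_t) + gamma * Q(s_{t+1}, argmax_a Q(s_{t+1}, a))
            # (in practice the bootstrap term is also masked for terminal transitions).
            targets[torch.arange(len(actions)), actions] = rewards + gamma * next_q

        # Step 5: train the online network on the current states toward these targets.
        loss = F.mse_loss(online_net(states), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Step 6, once every few thousand steps:
    # target_net.load_state_dict(online_net.state_dict())

Selecting the maximizing action with the online network while evaluating it with the target network is what distinguishes Double DQN from DQN, where the target network both selects and evaluates the action; this decoupling reduces the overestimation of Q values.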