Mirror of https://github.com/gryf/coach.git (synced 2025-12-17 19:20:19 +01:00)
ACER algorithm (#184)
* initial ACER commit
* Code cleanup + several fixes
* Q-retrace bug fix + small clean-ups
* added documentation for acer
* ACER benchmarks
* update benchmarks table
* Add nightly running of golden and trace tests. (#202) Resolves #200
* comment out nightly trace tests until values reset.
* remove redundant observe ignore (#168)
* ensure nightly test env containers exist. (#205) Also bump integration test timeout
* wxPython removal (#207) Replacing wxPython with Python's Tkinter. Also removing the option to choose multiple files as it is unused and causes errors, and fixing the load file/directory spinner.
* Create CONTRIBUTING.md (#210)
* Create CONTRIBUTING.md. Resolves #188
* run nightly golden tests sequentially. (#217) Should reduce resource requirements and potential CPU contention but increases overall execution time.
* tests: added new setup configuration + test args (#211)
  - added utils for future tests and conftest
  - added test args
* new docs build
* golden test update
@@ -190,6 +190,14 @@ The algorithms are ordered by their release date in descending order.
     learning stability and speed, both for discrete and continuous action spaces.
   </span>
 </div>
+<div class="algorithm discrete on-policy requires-multi-worker" data-year="201707">
+  <span class="badge">
+    <a href="components/agents/policy_optimization/acer.html">ACER</a>
+    <br>
+    Similar to A3C with the addition of experience replay and off-policy training. To reduce variance and
+    improve stability, it also employs bias correction and trust region optimization techniques.
+  </span>
+</div>
 <div class="algorithm continuous off-policy" data-year="201509">
   <span class="badge">
     <a href="components/agents/policy_optimization/ddpg.html">DDPG</a>
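The ACER entry added in this diff mentions off-policy training from a replay buffer with truncated importance sampling, a bias-correction term, and trust region optimization. Below is a minimal NumPy sketch of the Retrace Q target and the truncated weights with the bias-correction coefficient. It is not taken from the Coach codebase; the function and argument names are illustrative, and the truncation constants (1 for the Retrace weights, c = 10 for the policy-gradient truncation) follow common choices and may differ from Coach's defaults.

```python
# Hypothetical sketch of ACER's off-policy corrections (not the Coach implementation).
import numpy as np

def acer_corrections(rewards, q_taken, values, rho, bootstrap_value,
                     gamma=0.99, c=10.0):
    """Compute per-step Retrace targets and truncated importance weights.

    rewards         -- r_t for t = 0..T-1 from a replayed trajectory
    q_taken         -- Q(x_t, a_t) under the current critic
    values          -- V(x_t), e.g. the expectation of Q over the current policy
    rho             -- importance ratios pi(a_t | x_t) / mu(a_t | x_t)
    bootstrap_value -- V(x_T) used to bootstrap the final step
    """
    T = len(rewards)
    q_ret = np.zeros(T)
    retrace_weight = np.minimum(1.0, rho)   # Retrace truncates the ratios (here at 1)
    next_ret = bootstrap_value
    for t in reversed(range(T)):
        # Retrace recursion:
        # Q_ret(t) = r_t + gamma * [rho_bar_{t+1} * (Q_ret - Q)_{t+1} + V_{t+1}]
        q_ret[t] = rewards[t] + gamma * next_ret
        next_ret = retrace_weight[t] * (q_ret[t] - q_taken[t]) + values[t]
    # Truncated weights for the main policy-gradient term ...
    rho_bar = np.minimum(c, rho)
    # ... and the bias-correction coefficient [(rho - c) / rho]_+ applied to
    # actions sampled from the current policy pi.
    correction_coeff = np.maximum(0.0, (rho - c) / rho)
    return q_ret, rho_bar, correction_coeff
```

The policy gradient in ACER then combines the truncated term, weighted by rho_bar and the advantage Q_ret(x_t, a_t) - V(x_t), with the correction term evaluated on actions sampled from the current policy, which bounds the variance that untruncated importance sampling would introduce. The trust-region step against an averaged policy network is not shown in this sketch.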