mirror of https://github.com/gryf/coach.git synced 2025-12-17 19:20:19 +01:00

SAC algorithm (#282)

* SAC algorithm

* SAC - updated the agent (learn_from_batch), sac_head, and sac_q_head to fix a problem in the gradient calculation; the SAC agent is now able to train.
gym_environment - fixed an error in accessing gym.spaces

* Soft Actor Critic - code cleanup

* code cleanup

* V-head initialization fix

* SAC benchmarks

* SAC Documentation

* typo fix

* documentation fixes

* documentation and version update

* README typo
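
For context on the fix mentioned above: SAC trains against an entropy-regularized objective, where the soft state value is V(s) = E_a[Q(s,a) - α·log π(a|s)]. The snippet below is a minimal illustrative sketch of that target computation in NumPy — the names `soft_value_target` and `alpha` are hypothetical and this is not Coach's actual learn_from_batch code:

```python
import numpy as np

def soft_value_target(q_values, log_probs, alpha=0.2):
    """Soft state value V(s) = E_a[Q(s,a) - alpha * log pi(a|s)],
    estimated from one sampled action per state.

    q_values:  Q(s, a) estimates for the sampled actions
    log_probs: log pi(a|s) for those same actions
    alpha:     entropy temperature (hypothetical default)
    """
    return q_values - alpha * log_probs

# Toy batch of three transitions.
q = np.array([1.0, 2.0, 0.5])
logp = np.array([-1.0, -0.5, -2.0])
v = soft_value_target(q, logp)  # entropy bonus raises V where logp is low
```

Higher-entropy actions (more negative log-probabilities) receive a larger bonus, which is what drives the exploration behavior the blurb below describes.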
This commit is contained in:
guyk1971
2019-05-01 18:37:49 +03:00
committed by shadiendrawis
parent 33dc29ee99
commit 74db141d5e
92 changed files with 2812 additions and 402 deletions


@@ -198,6 +198,14 @@ The algorithms are ordered by their release date in descending order.
improve stability it also employs bias correction and trust region optimization techniques.
</span>
</div>
<div class="algorithm continuous off-policy" data-year="201808">
<span class="badge">
<a href="components/agents/policy_optimization/sac.html">SAC</a>
<br>
Soft Actor-Critic is an off-policy algorithm that optimizes a stochastic policy.
One of its key features is that it solves a maximum entropy reinforcement learning problem.
</span>
</div>
<div class="algorithm continuous off-policy" data-year="201509">
<span class="badge">
<a href="components/agents/policy_optimization/ddpg.html">DDPG</a>