mirror of
https://github.com/gryf/coach.git
synced 2025-12-17 19:20:19 +01:00
SAC algorithm (#282)
* SAC algorithm
* SAC - updates to agent (learn_from_batch), sac_head and sac_q_head to fix a problem in gradient calculation. Now the SAC agent is able to train. gym_environment - fixing an error in access to gym.spaces
* Soft Actor Critic - code cleanup
* code cleanup
* V-head initialization fix
* SAC benchmarks
* SAC documentation
* typo fix
* documentation fixes
* documentation and version update
* README typo
@@ -372,6 +372,14 @@ $(document).ready(function() {
 improve stability it also employs bias correction and trust region optimization techniques.
 </span>
 </div>
+<div class="algorithm continuous off-policy" data-year="201808">
+<span class="badge">
+<a href="components/agents/policy_optimization/sac.html">SAC</a>
+<br>
+Soft Actor-Critic is an algorithm which optimizes a stochastic policy in an off-policy way.
+One of the key features of SAC is that it solves a maximum entropy reinforcement learning problem.
+</span>
+</div>
 <div class="algorithm continuous off-policy" data-year="201509">
 <span class="badge">
 <a href="components/agents/policy_optimization/ddpg.html">DDPG</a>
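The SAC entry added by this commit says the algorithm solves a maximum entropy reinforcement learning problem: the actor is trained on an entropy-augmented objective rather than on expected return alone. A minimal sketch of that objective (this is an illustration, not Coach's actual implementation; the function name, the sampled log-probabilities, and the temperature `alpha` are all assumptions):

```python
def sac_actor_objective(log_probs, q_values, alpha=0.2):
    """Entropy-regularized actor loss: mean of alpha * log pi(a|s) - Q(s, a).

    log_probs: log pi(a|s) for actions sampled from the current policy.
    q_values:  critic estimates Q(s, a) for the same state-action pairs.
    alpha:     temperature weighting the entropy bonus (hypothetical default).
    Minimizing this loss pushes the policy toward high-Q actions while
    keeping its entropy high (low log-probabilities).
    """
    n = len(log_probs)
    return sum(alpha * lp - q for lp, q in zip(log_probs, q_values)) / n

# With equal Q-values, the higher-entropy policy (more negative log-probs)
# attains the lower loss, which is the entropy bonus at work.
loss_low_entropy = sac_actor_objective([-0.1, -0.2], [1.0, 1.0])
loss_high_entropy = sac_actor_objective([-2.0, -2.5], [1.0, 1.0])
assert loss_high_entropy < loss_low_entropy
```

This is why SAC keeps exploring off-policy: actions are not only scored by the critic, they are also rewarded for remaining stochastic.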