Agents
======

Coach supports many state-of-the-art reinforcement learning algorithms, which are separated into three main classes:
value optimization, policy optimization, and imitation learning.
A detailed description of each algorithm can be found on its respective page below.

.. image:: /_static/img/algorithms.png
   :width: 600px
   :align: center
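
As a quick, hedged illustration of how these agents are used, the sketch below wires a policy
optimization agent (Clipped PPO) into a minimal preset, following the common Coach pattern of
combining agent parameters, an environment, and a schedule in a ``BasicRLGraphManager``. Treat
the specific classes and the ``CartPole-v0`` level as example choices rather than a required setup.

.. code-block:: python

    # Minimal preset sketch: a Clipped PPO agent (policy optimization class)
    # trained on a Gym environment. Swapping the agent parameters class is how
    # a different algorithm from the list below would be selected.
    from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
    from rl_coach.environments.gym_environment import GymVectorEnvironment
    from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
    from rl_coach.graph_managers.graph_manager import SimpleSchedule

    graph_manager = BasicRLGraphManager(
        agent_params=ClippedPPOAgentParameters(),
        env_params=GymVectorEnvironment(level='CartPole-v0'),
        schedule_params=SimpleSchedule()
    )

    # Run the heatup / training / evaluation loop defined by the schedule.
    graph_manager.improve()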

.. toctree::
   :maxdepth: 1
   :caption: Agents

   policy_optimization/ac
   policy_optimization/acer
   imitation/bc
   value_optimization/bs_dqn
   value_optimization/categorical_dqn
   imitation/cil
   policy_optimization/cppo
   policy_optimization/ddpg
   policy_optimization/sac
   other/dfp
   value_optimization/double_dqn
   value_optimization/dqn
   value_optimization/dueling_dqn
   value_optimization/mmc
   value_optimization/n_step
   value_optimization/naf
   value_optimization/nec
   value_optimization/pal
   policy_optimization/pg
   policy_optimization/ppo
   value_optimization/rainbow
   value_optimization/qr_dqn


.. autoclass:: rl_coach.base_parameters.AgentParameters
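
In presets, a concrete ``AgentParameters`` subclass is usually customized attribute by attribute
before being handed to the graph manager. The sketch below shows this pattern for a DQN agent;
the attribute paths follow the ``AgentParameters`` structure (``network_wrappers``, ``algorithm``,
``exploration``), while the numeric values are purely illustrative.

.. code-block:: python

    from rl_coach.agents.dqn_agent import DQNAgentParameters
    from rl_coach.core_types import EnvironmentSteps
    from rl_coach.schedules import LinearSchedule

    # Start from the algorithm's defaults and override selected fields,
    # as presets commonly do. The values here are examples, not recommendations.
    agent_params = DQNAgentParameters()
    agent_params.network_wrappers['main'].learning_rate = 0.00025
    agent_params.algorithm.discount = 0.99
    agent_params.algorithm.num_steps_between_copying_online_weights_to_target = \
        EnvironmentSteps(1000)
    agent_params.exploration.epsilon_schedule = LinearSchedule(1.0, 0.01, 100000)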

.. autoclass:: rl_coach.agents.agent.Agent
   :members:
   :inherited-members: