Actor-Critic

Action space: Discrete | Continuous

References: Asynchronous Methods for Deep Reinforcement Learning

Network Structure

[Figure: Actor-Critic network structure (/_static/img/design_imgs/ac.png)]

Algorithm Description

Choosing an action - Discrete actions

The policy network is used to predict action probabilities. During training, an action is sampled from the categorical distribution defined by these probabilities. During evaluation, the action with the highest probability is chosen.
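
A minimal sketch of this rule, assuming the policy network has already produced a probability vector (the helper below is illustrative, not part of Coach's API):

    import numpy as np

    def choose_action(action_probs: np.ndarray, training: bool) -> int:
        """Pick a discrete action from the policy's probability vector."""
        if training:
            # Exploration: sample an action index according to the predicted
            # probabilities (categorical distribution).
            return int(np.random.choice(len(action_probs), p=action_probs))
        # Evaluation: act greedily with respect to the predicted probabilities.
        return int(np.argmax(action_probs))

    # Example with a hypothetical 3-action probability vector.
    probs = np.array([0.2, 0.5, 0.3])
    choose_action(probs, training=True)   # stochastic sample
    choose_action(probs, training=False)  # always returns 1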

Training the network

A batch of $T_{max}$ transitions is used, and the advantages are calculated over it.

Advantages can be calculated by either of the following methods (configured by the selected preset) -

  1. A_VALUE - Estimating the advantage directly: $A(s_t, a_t) = \underbrace{\sum_{i=t}^{t+k-1} \gamma^{i-t} r_i + \gamma^{k} V(s_{t+k})}_{Q(s_t, a_t)} - V(s_t)$, where $k$ is $T_{max} - \text{State\_Index}$ for each state in the batch (a minimal sketch of this estimator follows the list).

  2. GAE - By following the Generalized Advantage Estimation paper.
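
A hedged sketch of the first (A_VALUE) estimator, assuming a rollout of $T_{max}$ rewards, the critic's value predictions for the visited states, and the bootstrap value $V(s_{T_{max}})$ (the function below is illustrative, not Coach's implementation):

    import numpy as np

    def a_value_advantages(rewards, values, bootstrap_value, gamma=0.99):
        """Compute A(s_t, a_t) for each state in a rollout of T_max transitions.

        rewards[t] is r_t, values[t] is V(s_t), and bootstrap_value is V(s_{T_max}).
        Each state uses a k-step return with k = T_max - t (its State_Index).
        """
        t_max = len(rewards)
        advantages = np.zeros(t_max)
        # Work backwards so each k-step return reuses the next one:
        # R_t = r_t + gamma * R_{t+1}, with R_{T_max} = V(s_{T_max}).
        ret = bootstrap_value
        for t in reversed(range(t_max)):
            ret = rewards[t] + gamma * ret      # k-step estimate of Q(s_t, a_t)
            advantages[t] = ret - values[t]     # A(s_t, a_t) = Q(s_t, a_t) - V(s_t)
        return advantages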

The advantages are then used to accumulate gradients according to the policy loss $L = -\mathbb{E}[\log(\pi) \cdot A]$.
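
As an illustration of this loss (again a sketch, not Coach's code), given the log-probabilities of the actions that were actually taken and their advantages, a batch estimate of $L$ could be computed as:

    import numpy as np

    def policy_loss(log_probs: np.ndarray, advantages: np.ndarray) -> float:
        """L = -E[log(pi(a|s)) * A(s, a)], estimated as a batch mean.

        The advantages are treated as constants, so when this loss is minimized
        with gradient descent, gradients flow only through log(pi).
        """
        return float(-np.mean(log_probs * advantages))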