update of api docstrings across coach and tutorials [WIP] (#91)

* updating the documentation website
* adding the built docs
* update of api docstrings across coach and tutorials 0-2
* added some missing api documentation
* New Sphinx based documentation
docs_raw/source/components/agents/policy_optimization/pg.rst (new file, 39 lines)
Policy Gradient
===============

**Action space:** Discrete | Continuous

**References:** `Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning <http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf>`_

Network Structure
-----------------

.. image:: /_static/img/design_imgs/pg.png
   :align: center

Algorithm Description
---------------------

Choosing an action - Discrete actions
+++++++++++++++++++++++++++++++++++++
Run the current states through the network and get a policy distribution over the actions.
While training, sample from the policy distribution. When testing, take the action with the highest probability.
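
As a hedged sketch of this rule (assuming the policy head outputs unnormalized action logits;
the function below is illustrative and not part of Coach's API):

.. code-block:: python

   import numpy as np

   def choose_action(logits: np.ndarray, training: bool) -> int:
       """Select a discrete action from the policy head's output logits."""
       # Softmax over the logits gives the policy distribution over actions.
       probs = np.exp(logits - logits.max())
       probs /= probs.sum()
       if training:
           # Training: sample an action from the policy distribution (exploration).
           return int(np.random.choice(len(probs), p=probs))
       # Testing: act greedily by taking the most probable action.
       return int(np.argmax(probs))

   action = choose_action(np.array([1.2, 0.3, -0.5]), training=True)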

Training the network
++++++++++++++++++++
The policy head loss is defined as :math:`L = -\log \pi \cdot PolicyGradientRescaler`.
The :code:`PolicyGradientRescaler` is used to reduce the variance of the policy gradient updates,
since noisy gradient updates might destabilize the policy's convergence.
The rescaler is a configurable parameter, and there are a few options to choose from:

* **Total Episode Return** - The sum of all the discounted rewards during the episode.
* **Future Return** - The return from each transition until the end of the episode.
* **Future Return Normalized by Episode** - The future returns, normalized by the episode's mean and standard deviation.
* **Future Return Normalized by Timestep** - The future returns, normalized using running means and standard deviations
  that are calculated separately for each timestep, across different episodes.
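
As a rough, stand-alone illustration (not Coach's implementation), the sketch below computes the
future-return rescalers for one recorded episode and plugs one of them into the loss above; the
discount factor and all names are assumptions made for the example:

.. code-block:: python

   import numpy as np

   def future_returns(rewards, discount=0.99):
       """Return from each transition until the end of the episode (discounted sum of rewards)."""
       returns = np.zeros(len(rewards))
       running = 0.0
       for t in reversed(range(len(rewards))):
           running = rewards[t] + discount * running
           returns[t] = running
       return returns

   def future_returns_normalized_by_episode(rewards, discount=0.99):
       """Future returns standardized by the episode's own mean and standard deviation."""
       returns = future_returns(rewards, discount)
       return (returns - returns.mean()) / (returns.std() + 1e-8)

   # Per-transition policy loss: L = -log(pi(a_t|s_t)) * rescaler_t
   log_probs = np.log(np.array([0.5, 0.4, 0.9]))   # log-probabilities of the actions taken
   rescaler = future_returns_normalized_by_episode([0.0, 0.0, 1.0])
   loss = -(log_probs * rescaler).mean()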

Gradients are accumulated over a number of fully played episodes. Accumulating gradients over
several episodes serves the same purpose - reducing the variance of the updates. Once gradients
have been accumulated for the chosen number of episodes, they are applied to the network.
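
A schematic, self-contained sketch of this accumulate-then-apply scheme, using a toy linear
softmax policy and randomly generated episode data; the hyper-parameter names and the
future-return rescaler choice are assumptions for illustration, not Coach's internals:

.. code-block:: python

   import numpy as np

   rng = np.random.default_rng(0)
   obs_dim, num_actions = 4, 2
   theta = np.zeros((obs_dim, num_actions))        # toy linear softmax policy parameters

   def policy_probs(obs):
       logits = obs @ theta
       exp = np.exp(logits - logits.max())
       return exp / exp.sum()

   def episode_gradient(observations, actions, rescalers):
       """Gradient of sum_t log pi(a_t|s_t) * rescaler_t for the linear softmax policy."""
       grad = np.zeros_like(theta)
       for obs, action, scale in zip(observations, actions, rescalers):
           probs = policy_probs(obs)
           one_hot = np.eye(num_actions)[action]
           grad += scale * np.outer(obs, one_hot - probs)   # d log pi(a|s) / d theta
       return grad

   episodes_per_update, learning_rate, discount = 5, 0.1, 0.99
   accumulated = np.zeros_like(theta)
   for episode_idx in range(1, 3 * episodes_per_update + 1):
       # Dummy episode data standing in for real environment interaction.
       observations = rng.normal(size=(3, obs_dim))
       actions = rng.integers(num_actions, size=3)
       rewards = rng.normal(size=3)
       # Future (discounted) return from each transition, used here as the rescaler.
       rescalers = np.array([sum(discount ** k * r for k, r in enumerate(rewards[t:]))
                             for t in range(len(rewards))])
       accumulated += episode_gradient(observations, actions, rescalers)
       if episode_idx % episodes_per_update == 0:
           theta += learning_rate * accumulated    # ascent step with the accumulated gradients
           accumulated[:] = 0.0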

.. autoclass:: rl_coach.agents.policy_gradients_agent.PolicyGradientAlgorithmParameters