Behavioral Cloning
==================
**Action space:** Discrete | Continuous

Network Structure
-----------------
.. image:: /_static/img/design_imgs/pg.png
   :align: center

Algorithm Description
---------------------
Training the network
++++++++++++++++++++
The replay buffer contains the expert demonstrations for the task.
These demonstrations are given as (state, action) tuples, with no reward.
The training goal is to reduce the difference between the actions predicted by the network and the actions taken by
the expert for each state.
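
For concreteness, here is a minimal sketch of what such a demonstration buffer
might hold. The names ``ExpertTransition`` and ``demonstrations`` are
illustrative only and are not part of Coach's API:

.. code-block:: python

    # Illustrative sketch only -- not Coach's replay buffer implementation.
    # Expert demonstrations are plain (state, action) pairs; no reward is stored.
    from typing import List, NamedTuple

    import numpy as np


    class ExpertTransition(NamedTuple):
        state: np.ndarray  # observation the expert saw
        action: int        # index of the action the expert took

    # A toy buffer with two demonstrations over a 2-dimensional state space.
    demonstrations: List[ExpertTransition] = [
        ExpertTransition(state=np.array([0.1, -0.3]), action=2),
        ExpertTransition(state=np.array([0.5, 0.7]), action=0),
    ]

Training then proceeds as follows:
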
1. Sample a batch of transitions from the replay buffer.
2. Use the current states as the network's input, and the expert actions as its targets.
3. For the network head, we use the policy head, which applies the cross-entropy loss function (see the sketch below).
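
The steps above translate into a short training loop. The following PyTorch
sketch assumes a discrete action space and a simple feed-forward policy;
``PolicyNetwork`` and ``bc_training_step`` are hypothetical names, not part of
Coach's implementation:

.. code-block:: python

    # A minimal sketch of one behavioral-cloning update, assuming a discrete
    # action space. Illustrative only -- not Coach's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class PolicyNetwork(nn.Module):
        """Hypothetical policy head: maps states to action logits."""

        def __init__(self, state_dim: int, num_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64),
                nn.ReLU(),
                nn.Linear(64, num_actions),  # logits over the discrete actions
            )

        def forward(self, states: torch.Tensor) -> torch.Tensor:
            return self.net(states)


    def bc_training_step(network, optimizer, states, expert_actions):
        """One update: push the policy's predictions toward the expert's actions.

        states:         float tensor of shape (batch, state_dim) -- step 1's batch
        expert_actions: long tensor of shape (batch,) -- the training targets
        """
        logits = network(states)                        # step 2: predict actions
        loss = F.cross_entropy(logits, expert_actions)  # step 3: cross-entropy loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In practice one would construct an optimizer such as
``torch.optim.Adam(network.parameters())`` and call ``bc_training_step`` once
per batch sampled in step 1.
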
.. autoclass:: rl_coach.agents.bc_agent.BCAlgorithmParameters