Action space: Discrete

References: Learning to Act by Predicting the Future

Network Structure

[Network structure diagram: /_static/img/design_imgs/dfp.png]

Algorithm Description

Choosing an action

  1. The current state (observations and measurements) and the corresponding goal vector are passed as input to the network. The output of the network is the predicted future measurements for time-steps t+1, t+2, t+4, t+8, t+16 and t+32, for each possible action.

  2. For each action, the predicted measurements of each time-step are multiplied by the goal vector, yielding a vector of future values per action, with one entry per predicted time-step.

  3. A weighted sum of these future values is then calculated over the predicted time-steps, resulting in a single value for each action.

  4. The action values are passed to the exploration policy to decide on the action to use (see the sketch after this list).
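
A minimal NumPy sketch of steps 2-4, assuming the network output has already been reshaped to one predicted measurement vector per action and per time-step. The shapes, function names and the epsilon-greedy policy are illustrative assumptions, not Coach's actual API::

    import numpy as np

    # Assumed shapes (for illustration only):
    #   predictions:      (num_actions, num_timesteps, num_measurements)
    #                     predicted future measurements for t+1, t+2, t+4, t+8, t+16, t+32
    #   goal_vector:      (num_measurements,)  relative importance of each measurement
    #   timestep_weights: (num_timesteps,)     weighting of the predicted horizons
    def dfp_action_values(predictions, goal_vector, timestep_weights):
        # Step 2: multiply each predicted measurement vector by the goal vector
        #         and sum over measurements -> one future value per (action, time-step)
        future_values = predictions @ goal_vector          # (num_actions, num_timesteps)
        # Step 3: weighted sum over the predicted time-steps -> one value per action
        action_values = future_values @ timestep_weights   # (num_actions,)
        return action_values

    # Step 4 (illustrative): a simple epsilon-greedy exploration policy
    def select_action(action_values, epsilon=0.1, rng=np.random.default_rng()):
        if rng.random() < epsilon:
            return int(rng.integers(len(action_values)))
        return int(np.argmax(action_values))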

Training the network

Given a batch of transitions, run them through the network to get the current predictions of the future measurements per action, and set them as the initial targets for training the network. For each transition (s_t, a_t, r_t, s_t+1) in the batch, the target of the network for the action that was taken is the actual measurements that were observed at time-steps t+1, t+2, t+4, t+8, t+16 and t+32. For the actions that were not taken, the targets stay at the current predicted values, so those actions contribute no error to the loss.
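
A minimal sketch of building these training targets, under assumed array shapes; the names and shapes are illustrative, not taken from Coach's implementation::

    import numpy as np

    # Assumed shapes (for illustration only):
    #   current_predictions:   (batch_size, num_actions, num_timesteps, num_measurements)
    #                          the network's current predicted future measurements
    #   actions:               (batch_size,) index of the action taken in each transition
    #   observed_measurements: (batch_size, num_timesteps, num_measurements)
    #                          measurements actually seen at t+1, t+2, t+4, t+8, t+16, t+32
    def build_dfp_targets(current_predictions, actions, observed_measurements):
        # Start from the network's own predictions, so the untaken actions keep
        # their current values and produce zero error.
        targets = current_predictions.copy()
        batch_indices = np.arange(len(actions))
        # For the taken action, the target is the measurements actually observed.
        targets[batch_indices, actions] = observed_measurements
        return targets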