
ACER algorithm (#184)

* initial ACER commit

* Code cleanup + several fixes

* Q-retrace bug fix + small clean-ups
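
For reference, "Q-retrace" is the Retrace return estimator ACER uses for its off-policy critic targets (Wang et al., 2017). A minimal NumPy sketch of the backward recursion, with an illustrative signature rather than Coach's actual API:

    import numpy as np

    def q_retrace(rewards, q_taken, values, rho, gamma=0.99, c=1.0, bootstrap=0.0):
        """Backward Q-retrace recursion over one trajectory.

        rewards[t] : r_t
        q_taken[t] : Q(x_t, a_t) for the action actually taken
        values[t]  : V(x_t) = E_{a ~ pi}[Q(x_t, a)]
        rho[t]     : importance ratio pi(a_t | x_t) / mu(a_t | x_t)
        bootstrap  : V of the state after the last step (0 if terminal)
        """
        targets = np.empty(len(rewards))
        q_ret = bootstrap
        for t in reversed(range(len(rewards))):
            q_ret = rewards[t] + gamma * q_ret
            targets[t] = q_ret
            # truncating the importance weight at c bounds the variance
            # of the off-policy correction
            q_ret = min(c, rho[t]) * (q_ret - q_taken[t]) + values[t]
        return targets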

* added documentation for ACER

* ACER benchmarks

* update benchmarks table

* Add nightly running of golden and trace tests. (#202)

Resolves #200

* comment out nightly trace tests until values are reset.

* remove redundant observe ignore (#168)

* ensure nightly test env containers exist. (#205)

Also bump the integration test timeout.

* wxPython removal (#207)

Replacing wxPython with Python's Tkinter.
Also removing the multiple-file selection option, which is unused and causes errors, and fixing the load file/directory spinner.
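
As a rough illustration, a Tkinter single-file/directory chooser can be as small as the sketch below (the helper name is hypothetical, not the dashboard's actual code):

    import tkinter as tk
    from tkinter import filedialog

    def ask_for_path(choose_directory=False):
        """Open a native chooser and return one selected path ('' on cancel)."""
        root = tk.Tk()
        root.withdraw()  # hide the empty root window behind the dialog
        if choose_directory:
            path = filedialog.askdirectory(title="Select a directory")
        else:
            # single file only; multi-selection was dropped as unused
            path = filedialog.askopenfilename(title="Select a file")
        root.destroy()
        return path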

* Create CONTRIBUTING.md (#210)

* Create CONTRIBUTING.md. Resolves #188

* run nightly golden tests sequentially. (#217)

Should reduce resource requirements and potential CPU contention, at the cost
of longer overall execution time.

* tests: added new setup configuration + test args (#211)

- added utils and a conftest for future tests
- added test args
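
For context, custom pytest arguments are typically registered in conftest.py via the pytest_addoption hook; the option below is hypothetical, not necessarily what #211 added:

    # conftest.py
    import pytest

    def pytest_addoption(parser):
        # expose a custom command-line flag to the test suite
        parser.addoption("--preset", action="store", default=None,
                         help="run only tests for the given preset")

    @pytest.fixture
    def preset(request):
        # tests request this fixture to read the flag's value
        return request.config.getoption("--preset")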

* new docs build

* golden test update
Author:       shadiendrawis
Date:         2019-02-20 23:52:34 +02:00
Committed by: GitHub
Parent:       7253f511ed
Commit:       2b5d1dabe6

175 changed files with 2327 additions and 664 deletions


@@ -149,7 +149,7 @@ class PolicyOptimizationAgent(Agent):
             action_probabilities = np.array(action_values).squeeze()
             action = self.exploration_policy.get_action(action_probabilities)
             action_info = ActionInfo(action=action,
-                                     action_probability=action_probabilities[action])
+                                     all_action_probabilities=action_probabilities)
             self.entropy.add_sample(-np.sum(action_probabilities * np.log(action_probabilities + eps)))
         elif isinstance(self.spaces.action, BoxActionSpace):
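
The hunk above switches ActionInfo from storing only the chosen action's probability to storing the whole vector. A plausible reason is that ACER-style updates consume the full distribution; a toy illustration with made-up values:

    import numpy as np

    probs = np.array([0.7, 0.2, 0.1])   # pi(.|x) over all discrete actions
    q = np.array([1.0, 0.5, -0.2])      # Q(x, .) from the critic head
    v = probs @ q                       # V(x) = E_{a~pi}[Q(x, a)]
    entropy = -np.sum(probs * np.log(probs + 1e-10))  # as in the diff, with eps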