
ACER algorithm (#184)

* initial ACER commit

* Code cleanup + several fixes

* Q-retrace bug fix + small clean-ups
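
  The Q-retrace fix refers to the Retrace-based return that ACER uses for its off-policy critic. A minimal sketch of that backward recursion is below; the function name, array layout, and default constants are illustrative assumptions, not the repository's code:

    import numpy as np

    def q_retrace_targets(rewards, q_taken, values, rhos, gamma=0.99, c=1.0,
                          bootstrap_value=0.0):
        """Compute Q-retrace targets for one trajectory, iterating backwards.

        rewards, q_taken (Q(s_t, a_t)), values (V(s_t)) and rhos
        (importance weights pi(a_t|s_t) / mu(a_t|s_t)) are 1-D arrays of
        equal length; bootstrap_value estimates the value of the state
        that follows the last transition.
        """
        targets = np.zeros(len(rewards))
        q_ret = bootstrap_value
        for t in reversed(range(len(rewards))):
            q_ret = rewards[t] + gamma * q_ret
            targets[t] = q_ret
            # truncate the importance weight at c so the off-policy
            # correction stays bounded before stepping further back in time
            q_ret = min(c, rhos[t]) * (q_ret - q_taken[t]) + values[t]
        return targets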

* added documentation for acer

* ACER benchmarks

* update benchmarks table

* Add nightly running of golden and trace tests. (#202)

Resolves #200

* comment out nightly trace tests until values reset.

* remove redundant observe ignore (#168)

* ensure nightly test env containers exist. (#205)

Also bump integration test timeout

* wxPython removal (#207)

Replacing wxPython with Python's Tkinter.
Also removing the option to choose multiple files, since it is unused and causes errors, and fixing the load file/directory spinner.
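
  A minimal sketch of the kind of Tkinter-based chooser this switches to; the function names and dialog titles are illustrative assumptions, not the dashboard's actual code:

    import tkinter as tk
    from tkinter import filedialog

    def choose_experiment_file():
        root = tk.Tk()
        root.withdraw()  # only the dialog is needed, hide the empty root window
        # single-file selection only, matching the removal of multi-file choice
        path = filedialog.askopenfilename(title="Select an experiment file")
        root.destroy()
        return path or None

    def choose_experiment_dir():
        root = tk.Tk()
        root.withdraw()
        path = filedialog.askdirectory(title="Select an experiment directory")
        root.destroy()
        return path or None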

* Create CONTRIBUTING.md (#210)

* Create CONTRIBUTING.md.  Resolves #188

* run nightly golden tests sequentially. (#217)

This should reduce resource requirements and potential CPU contention, but it increases overall execution time.

* tests: added new setup configuration + test args (#211)

- added utils for future tests and conftest
- added test args

* new docs build

* golden test update
Author: shadiendrawis
Date: 2019-02-20 23:52:34 +02:00
Committed by: GitHub
Parent: 7253f511ed
Commit: 2b5d1dabe6
175 changed files with 2327 additions and 664 deletions

@@ -403,7 +403,8 @@ class DiscreteActionSpace(ActionSpace):
         return np.random.choice(self.actions)

     def sample_with_info(self) -> ActionInfo:
-        return ActionInfo(self.sample(), action_probability=1. / (self.high[0] - self.low[0] + 1))
+        return ActionInfo(self.sample(),
+                          all_action_probabilities=np.full(len(self.actions), 1. / (self.high[0] - self.low[0] + 1)))

     def get_description(self, action: int) -> str:
         if type(self.descriptions) == list and 0 <= action < len(self.descriptions):
@@ -450,7 +451,7 @@ class MultiSelectActionSpace(ActionSpace):
         return random.choice(self.actions)

     def sample_with_info(self) -> ActionInfo:
-        return ActionInfo(self.sample(), action_probability=1. / len(self.actions))
+        return ActionInfo(self.sample(), all_action_probabilities=np.full(len(self.actions), 1. / len(self.actions)))

     def get_description(self, action: np.ndarray) -> str:
         if np.sum(len(np.where(action == 0)[0])) + np.sum(len(np.where(action == 1)[0])) != self.shape or \
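
  Taken together, the two hunks above switch sample_with_info from reporting only the probability of the sampled action (action_probability) to reporting the full distribution over all actions (all_action_probabilities). A schematic before/after for a uniform discrete space with four actions, shown with plain dicts rather than coach's ActionInfo class:

    import numpy as np

    num_actions = 4  # illustrative discrete space with actions 0..3

    # before: only the probability of the sampled action was carried along
    old_info = {"action_probability": 1. / num_actions}  # 0.25

    # after: the full (uniform) distribution over every action is carried along,
    # which off-policy actor-critic updates such as ACER's need
    new_info = {"all_action_probabilities": np.full(num_actions, 1. / num_actions)}  # [0.25 0.25 0.25 0.25]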