mirror of https://github.com/gryf/coach.git synced 2025-12-17 11:10:20 +01:00

RL in Large Discrete Action Spaces - Wolpertinger Agent (#394)

* Currently this is specific to the case of discretizing a continuous action space. It can easily be adapted to other cases by feeding the kNN differently and removing the discretizing output action filter.
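The kNN-over-discretized-actions idea the commit describes can be sketched roughly as follows. This is a minimal illustration, not Coach's actual implementation; the grid, `k`, and the Q-function stub are all assumptions for the example:

```python
def wolpertinger_select(proto_action, discrete_actions, q_func, k=3):
    """Pick the best of the k discrete actions nearest to a proto-action.

    proto_action: continuous action proposed by the actor.
    discrete_actions: candidate discrete actions (here: a 1-D grid).
    q_func: estimated Q-value of an action (a stub in this sketch).
    """
    # kNN step: the k discrete actions closest to the proto-action
    nearest = sorted(discrete_actions, key=lambda a: abs(a - proto_action))[:k]
    # Refinement step: evaluate Q over the k candidates, keep the argmax
    return max(nearest, key=q_func)

# Discretize the continuous range [-1, 1] into 11 evenly spaced actions
actions = [round(-1.0 + 0.2 * i, 1) for i in range(11)]
# Hypothetical Q-function that peaks at action 0.4
best = wolpertinger_select(0.23, actions, q_func=lambda a: -abs(a - 0.4))
```

Feeding the kNN a different candidate set (as the commit message suggests) would adapt this to action spaces that are discrete to begin with.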
This commit is contained in:
Gal Leibovich
2019-09-08 12:53:49 +03:00
committed by GitHub
parent fc50398544
commit 138ced23ba
46 changed files with 1193 additions and 51 deletions


@@ -62,7 +62,9 @@ class AdditiveNoise(ContinuousActionExplorationPolicy):
         self.evaluation_noise = evaluation_noise
         self.noise_as_percentage_from_action_space = noise_as_percentage_from_action_space
-        if not isinstance(action_space, BoxActionSpace):
+        if not isinstance(action_space, BoxActionSpace) and \
+                (hasattr(action_space, 'filtered_action_space') and not
+                 isinstance(action_space.filtered_action_space, BoxActionSpace)):
             raise ValueError("Additive noise exploration works only for continuous controls."
                              "The given action space is of type: {}".format(action_space.__class__.__name__))