mirror of https://github.com/gryf/coach.git synced 2025-12-17 11:10:20 +01:00
Commit Graph

67 Commits

Author SHA1 Message Date
Guy Jacob
9106b69227 Add is_on_policy property to agents (#480) 2021-05-06 18:02:02 +03:00
Gal Leibovich
138ced23ba RL in Large Discrete Action Spaces - Wolpertinger Agent (#394)
* Currently this is specific to the case of discretizing a continuous action space. It can easily be adapted to other cases by feeding the kNN differently and removing the usage of the discretizing output action filter
2019-09-08 12:53:49 +03:00
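A minimal sketch of the Wolpertinger action-selection step this commit describes; the names actor, critic_q and discrete_actions are hypothetical stand-ins, not Coach's actual classes:

    import numpy as np

    def wolpertinger_act(state, actor, critic_q, discrete_actions, k=10):
        # actor(state) -> proto-action in the continuous embedding space
        proto_action = actor(state)
        # find the k discrete actions nearest to the proto-action
        distances = np.linalg.norm(discrete_actions - proto_action, axis=1)
        knn_idx = np.argsort(distances)[:k]
        # refine the choice with the critic: pick the neighbor with the highest Q
        q_values = [critic_q(state, discrete_actions[i]) for i in knn_idx]
        return int(knn_idx[int(np.argmax(q_values))])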
Gal Leibovich
c1d1fae342 Distiller's AMC induced changes (#359)
* override episode rewards with the last transition reward

* EWMA normalization filter

* allowing control over when the pre_network filter runs
2019-08-05 10:24:58 +03:00
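A minimal sketch of an EWMA normalization filter of the kind added here; the class and parameter names are assumptions, not Coach's API. The update flag mirrors the ability to control when the pre_network filter runs:

    import numpy as np

    class EWMANormalizationFilter:
        """Normalize observations with exponentially-weighted running statistics."""
        def __init__(self, alpha=0.01, eps=1e-8):
            self.alpha, self.eps = alpha, eps
            self.mean, self.var = None, None

        def filter(self, observation, update_internal_state=True):
            obs = np.asarray(observation, dtype=np.float64)
            if self.mean is None:
                self.mean, self.var = obs.copy(), np.ones_like(obs)
            elif update_internal_state:  # can be frozen, e.g. while evaluating
                delta = obs - self.mean
                self.mean = self.mean + self.alpha * delta
                self.var = (1 - self.alpha) * (self.var + self.alpha * delta ** 2)
            return (obs - self.mean) / np.sqrt(self.var + self.eps)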
Gal Leibovich
d6795bd524 batchnorm fixes + disabling batchnorm in DDPG (#353)
Co-authored-by: James Casbon <casbon+gh@gmail.com>
2019-06-23 11:28:22 +03:00
Gal Leibovich
7eb884c5b2 TD3 (#338) 2019-06-16 11:11:21 +03:00
Gal Leibovich
a1bb8eef89 DDPG Critic Head Bug Fix (#344)
* A bug fix for DDPG, where the update to the policy network was based on the sum of the critic's Q predictions on the batch instead of their mean
2019-06-05 17:47:56 +03:00
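The bug above boils down to a single reduction op in the actor's objective. A sketch in TensorFlow terms, with q_batch as an illustrative stand-in for the critic's Q(s, actor(s)) over a batch:

    import tensorflow as tf

    # stand-in for the critic's Q(s, actor(s)) over a batch of 32 transitions
    q_batch = tf.random.normal((32, 1))

    actor_loss_buggy = -tf.reduce_sum(q_batch)   # gradients scale with batch size
    actor_loss_fixed = -tf.reduce_mean(q_batch)  # batch-size invariant, per the DDPG paper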
Gal Leibovich
4c996e147e applying filters for a csv loaded dataset + some bug-fixes in data loading (#319) 2019-05-28 15:44:55 +03:00
Gal Leibovich
9e9c4fd332 Create a dataset using an agent (#306)
Generate a dataset using an agent (allowing selection between this and a random dataset)
2019-05-28 09:34:49 +03:00
Gal Leibovich
acceb03ac0 bug fixes for OPE (#311) 2019-05-21 16:39:11 +03:00
Gal Leibovich
582921ffe3 OPE: Weighted Importance Sampling (#299) 2019-05-02 19:25:42 +03:00
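For context, a minimal sketch of the weighted importance sampling estimator this commit adds to OPE; the per-episode data layout is an assumption and discounting is omitted for brevity:

    import numpy as np

    def wis_estimate(episodes):
        """episodes: list of episodes, each a list of (pi_prob, mu_prob, reward)."""
        weights, returns = [], []
        for episode in episodes:
            # cumulative importance ratio of the evaluated policy vs. the behavior policy
            rho = np.prod([pi / mu for pi, mu, _ in episode])
            weights.append(rho)
            returns.append(sum(r for _, _, r in episode))
        weights = np.asarray(weights, dtype=np.float64)
        # WIS normalizes by the sum of the weights rather than the episode count,
        # trading a small bias for far lower variance than ordinary IS
        return float(np.dot(weights, returns) / (weights.sum() + 1e-8))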
guyk1971
74db141d5e SAC algorithm (#282)
* SAC algorithm

* SAC - updates to the agent (learn_from_batch), sac_head and sac_q_head to fix a problem in the gradient calculation. SAC agents are now able to train.
gym_environment - fixing an error in accessing gym.spaces

* Soft Actor Critic - code cleanup

* code cleanup

* V-head initialization fix

* SAC benchmarks

* SAC Documentation

* typo fix

* documentation fixes

* documentation and version update

* README typo
2019-05-01 18:37:49 +03:00
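The gradient-calculation fix mentioned in this commit lives in the SAC policy objective. A minimal sketch of the reparameterized, tanh-squashed policy loss; all names are illustrative, not Coach's internals:

    import numpy as np
    import tensorflow as tf

    def sac_policy_loss(policy, q_fn, states, alpha=0.2):
        """L_pi = E[alpha * log pi(a|s) - Q(s, a)], a ~ pi via reparameterization."""
        mean, log_std = policy(states)
        noise = tf.random.normal(tf.shape(mean))
        actions = tf.tanh(mean + tf.exp(log_std) * noise)  # reparameterized + squashed
        # Gaussian log-prob plus the tanh change-of-variables correction
        log_prob = tf.reduce_sum(
            -0.5 * noise ** 2 - log_std - 0.5 * np.log(2 * np.pi)
            - tf.math.log(1 - actions ** 2 + 1e-6), axis=-1)
        # gradients must flow through `actions` into Q for the update to be correct
        return tf.reduce_mean(alpha * log_prob - q_fn(states, actions))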
Gal Leibovich
4741b0b916 BCQ variant on top of DDQN (#276)
* kNN based model for predicting which actions to drop
* fix for seeds with batch rl
2019-04-16 17:06:23 +03:00
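A minimal sketch of the batch-constrained action choice this DDQN variant describes, with the kNN-based likelihood model abstracted into an array; the names and threshold rule are assumptions:

    import numpy as np

    def bcq_ddqn_action(q_values, action_likelihoods, threshold=0.3):
        """Restrict the greedy action to those plausibly present in the batch data.

        q_values:           (num_actions,) online-network Q estimates for one state
        action_likelihoods: (num_actions,) likelihood of each action under the
                            behavior data, e.g. from a kNN-based model
        """
        # drop actions whose likelihood is far below the most likely action's
        allowed = action_likelihoods >= threshold * action_likelihoods.max()
        return int(np.argmax(np.where(allowed, q_values, -np.inf)))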
zach dwiel
88f9c926ab update comment describing why the output filters don't modify Agent.last_action_info 2019-04-09 12:14:27 -04:00
zach dwiel
fd2c210915 rename AgentInterface.emulate_observe_on_trainer to observe_transition and call it from AgentInterface.observe 2019-04-09 12:14:27 -04:00
zach dwiel
f8741522e4 merge AgentInterface.emulate_act_on_trainer and AgentInterface.act 2019-04-09 12:14:27 -04:00
zach dwiel
f2fead57e5 change method interface: AgentInterface.emulate_act_on_trainer(transition: Transition) -> emulate_act_on_trainer(action: ActionType) 2019-04-09 12:14:27 -04:00
zach dwiel
f16cd3cb1e remove unused ActionInfo.action_intrinsic_reward 2019-04-09 12:14:27 -04:00
zach dwiel
7d79433c05 remove unused parameter scale_external_reward_by_intrinsic_reward_value 2019-04-09 12:14:27 -04:00
Gal Leibovich
310d31c227 integration test changes to reach the train part (#254)
* integration test changes to override heatup to 1000 steps + run each preset for 30 sec (to make sure we reach the train part)

* fixes to failing presets uncovered with this change + changes in the golden testing to properly test BatchRL

* fix for rainbow dqn

* fix to gym_environment (due to a change in Gym 0.12.1) + fix for rainbow DQN + a bug-fix in utils.squeeze_list

* fix for NEC agent
2019-03-27 21:14:19 +02:00
Gal Leibovich
6e08c55ad5 Enabling-more-agents-for-Batch-RL-and-cleanup (#258)
allowing for the last training batch drawn to be smaller than batch_size + adding support for more agents in BatchRL by adding softmax with temperature to the corresponding heads + adding a CartPole_QR_DQN preset with a golden test + cleanups
2019-03-21 16:10:29 +02:00
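A minimal sketch of the temperature-scaled softmax added to the heads here:

    import numpy as np

    def softmax_with_temperature(logits, temperature=1.0):
        """Low temperatures approach argmax; high temperatures flatten the distribution."""
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        scaled -= scaled.max()  # subtract the max for numerical stability
        exps = np.exp(scaled)
        return exps / exps.sum()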
Gal Leibovich
abec59f367 fixes to rainbow dqn + a cartpole based golden test (#253) 2019-03-21 12:57:56 +02:00
Gal Leibovich
e3c7e526c7 Batch RL (#238) 2019-03-19 18:07:09 +02:00
shadiendrawis
f03bd7ad93 benchmark update (#250) 2019-03-17 15:33:28 +02:00
Gal Novik
10220be9be Adding support for evaluation only mode with predefined number of steps (#225) 2019-03-03 10:03:45 +02:00
shadiendrawis
2b5d1dabe6 ACER algorithm (#184)
* initial ACER commit

* Code cleanup + several fixes

* Q-retrace bug fix + small clean-ups

* added documentation for acer

* ACER benchmarks

* update benchmarks table

* Add nightly running of golden and trace tests. (#202)

Resolves #200

* comment out nightly trace tests until values reset.

* remove redundant observe ignore (#168)

* ensure nightly test env containers exist. (#205)

Also bump integration test timeout

* wxPython removal (#207)

Replacing wxPython with Python's Tkinter.
Also removing the option to choose multiple files as it is unused and causes errors, and fixing the load file/directory spinner.

* Create CONTRIBUTING.md (#210)

* Create CONTRIBUTING.md.  Resolves #188

* run nightly golden tests sequentially. (#217)

Should reduce resource requirements and potential CPU contention but increases
overall execution time.

* tests: added new setup configuration + test args (#211)

- added utils for future tests and conftest
- added test args

* new docs build

* golden test update
2019-02-20 23:52:34 +02:00
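For context on the Q-retrace fix listed above, a minimal sketch of the retrace target computed backwards over a trajectory; array names are illustrative:

    import numpy as np

    def q_retrace(rewards, q_values, values, rhos, gamma=0.99):
        """Q_ret[t] = r_t + gamma * (min(1, rho) * (Q_ret[t+1] - Q[t+1]) + V[t+1]).

        rewards, rhos, q_values: length T; values: length T + 1 (bootstrap at index T).
        rhos are the importance ratios pi(a_t|s_t) / mu(a_t|s_t).
        """
        T = len(rewards)
        q_ret = np.empty(T)
        q_ret_next = values[T]  # bootstrap from V(s_T)
        for t in reversed(range(T)):
            q_ret[t] = rewards[t] + gamma * q_ret_next
            # the truncated ratio corrects off-policy-ness before stepping back
            q_ret_next = min(1.0, rhos[t]) * (q_ret[t] - q_values[t]) + values[t]
        return q_ret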
Cody Hsieh
bf0a65eefd remove redundant observe ignore (#168) 2019-01-17 14:08:05 -08:00
Zach Dwiel
fedb4cbd7c Cleanup and refactoring (#171) 2019-01-15 10:04:53 +02:00
Gal Leibovich
4c914c057c fix for finding the right filter checkpoint to restore + do not update internal filter state when evaluating + fix SharedRunningStats checkpoint filenames (#147) 2018-12-17 21:36:27 +02:00
Gal Leibovich
f9ee526536 Fix for issue #128 - circular DQN import (#130) 2018-12-16 16:06:44 +02:00
Gal Leibovich
f12857a8c7 Docs changes - fixing blogpost links, removing importing all exploration policies (#139)
* updated docs

* removing imports for all exploration policies in __init__ + setting the right blog-post link

* small cleanups
2018-12-05 16:16:16 -05:00
Ryan Peach
3c58ed740b 'CompositeAgent' object has no attribute 'handle_episode_ended' (#136) 2018-12-05 11:28:16 +02:00
Gal Leibovich
a1c56edd98 Fixes for having NumpySharedRunningStats syncing on multi-node (#139)
1. Use the standard checkpoint prefix so that the data store grabs it and syncs it to S3.
2. Remove the reference to Redis so that it doesn't get pickled in.
3. Enable restoring, into a single-worker run, a checkpoint that was saved by a single-node multi-worker run.
2018-11-23 16:11:47 +02:00
Sina Afrooze
87a7848b0a Moved tf.variable_scope and tf.device calls to framework-specific architecture (#136) 2018-11-22 22:52:21 +02:00
Gal Leibovich
a112ee69f6 Save filters' internal state (#127)
* save filters internal state

* moving the restore to be made from within NumpyRunningStats
2018-11-20 17:21:48 +02:00
Sina Afrooze
67eb9e4c28 Adding checkpointing framework (#74)
* Adding checkpointing framework as well as mxnet checkpointing implementation.

- MXNet checkpoint for each network is saved in a separate file.

* Adding checkpoint restore for mxnet to graph-manager

* Add unit-test for get_checkpoint_state()

* Added match.group() to fix unit-test failing on CI

* Added ONNX export support for MXNet
2018-11-19 19:45:49 +02:00
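A minimal sketch of what a get_checkpoint_state()-style helper does; the file-naming pattern here is hypothetical, but it shows the match.group() usage the commit alludes to:

    import os
    import re

    def get_latest_checkpoint(checkpoint_dir, pattern=r'^(\d+)[._]'):
        """Return the path of the checkpoint with the highest step number, or None."""
        best_step, best_path = -1, None
        for name in os.listdir(checkpoint_dir):
            match = re.match(pattern, name)
            # compare step indices numerically, not lexicographically
            if match and int(match.group(1)) > best_step:
                best_step = int(match.group(1))
                best_path = os.path.join(checkpoint_dir, name)
        return best_path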
Gal Leibovich
430e286c56 muting pygame's hello message (#116) 2018-11-18 18:02:55 +02:00
Gal Leibovich
6caf721d1c Numpy shared running stats (#97) 2018-11-18 14:46:40 +02:00
Thom Lane
a0f25034c3 Added average total reward to logging after evaluation phase completes. (#93) 2018-11-16 08:22:00 -08:00
Ajay Deshpande
fde73ced13 Simulating the act on the trainer. (#65)
* Remove the use of daemon threads for Redis subscribe.
* Emulate act and observe on trainer side to update internal vars.
2018-11-15 08:38:58 -08:00
Itai Caspi
6d40ad1650 update of api docstrings across coach and tutorials [WIP] (#91)
* updating the documentation website
* adding the built docs
* update of api docstrings across coach and tutorials 0-2
* added some missing api documentation
* New Sphinx based documentation
2018-11-15 15:00:13 +02:00
Scott Leishman
524f8436a2 create per environment Dockerfiles. (#70)
* create per environment Dockerfiles.

Adjust CI setup to better parallelize runs.
Fix a couple of issues in golden and trace tests.
Update a few of the docs.

* bugfix in mmc agent.

Also install kubectl for CI, update badge branch.

* remove integration test parallelism.
2018-11-14 07:40:22 -08:00
Balaji Subramaniam
a849c17e46 Enable distributed SharedRunningStats (#81)
- Use Redis pub/sub for updating SharedRunningStats.
2018-11-13 19:17:38 +02:00
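A minimal sketch of sharing running statistics through Redis pub/sub as this commit describes; the channel name and message format are made up for illustration:

    import json
    import redis  # redis-py client

    class SharedRunningStats:
        """Each worker publishes increments; every subscriber folds them in."""
        def __init__(self, channel='running_stats', host='localhost'):
            self.client = redis.Redis(host=host)
            self.channel = channel
            self.n, self.s, self.ss = 0, 0.0, 0.0  # count, sum, sum of squares

        def publish(self, values):
            payload = {'n': len(values), 's': float(sum(values)),
                       'ss': float(sum(v * v for v in values))}
            self.client.publish(self.channel, json.dumps(payload))

        def fold(self, message):
            update = json.loads(message['data'])
            self.n += update['n']; self.s += update['s']; self.ss += update['ss']
            mean = self.s / self.n
            return mean, self.ss / self.n - mean ** 2  # running mean and variance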
Ajay Deshpande
875d6ef017 Adding target reward and target success (#58)
* Adding target reward

* Adding target success

* Addressing comments

* Using custom_reward_threshold and target_success_rate

* Adding exit message

* Moving success rate to environment

* Making target_success_rate optional
2018-11-12 15:03:43 -08:00
Gal Leibovich
49dea39d34 N-step returns for rainbow (#67)
* n_step returns for rainbow
* Rename CartPole_PPO -> CartPole_ClippedPPO
2018-11-07 18:33:08 +02:00
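A minimal sketch of the n-step return used for the Rainbow targets; names are illustrative:

    def n_step_return(rewards, bootstrap_value, gamma=0.99):
        """R_t = sum_i gamma^i * r_{t+i} + gamma^n * V(s_{t+n}).

        rewards: the n rewards following step t; bootstrap_value: the target
        network's estimate at s_{t+n}, or 0 if the episode ended within the window.
        """
        ret = 0.0
        for i, r in enumerate(rewards):
            ret += (gamma ** i) * r
        return ret + (gamma ** len(rewards)) * bootstrap_value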
Sina Afrooze
a888226641 Move embedder, middleware, and head parameters to framework agnostic modules. (#45)
Part of #28
2018-10-29 14:46:40 -07:00
Ajay Deshpande
9a30c26469 Adding improvements 2018-10-23 19:59:02 -04:00
Zach Dwiel
9804b033a2 rename save_checkpoint_dir -> checkpoint_save_dir 2018-10-23 17:10:58 -04:00
Ajay Deshpande
b285a02023 Adding parameters, checking transitions before training 2018-10-23 16:55:37 -04:00
Ajay Deshpande
7f00235ed5 waiting for a new checkpoint if one is available 2018-10-23 16:54:43 -04:00
Ajay Deshpande
a7f5442015 Adding should_train helper and should_train in graph_manager 2018-10-23 16:54:43 -04:00