mirror of https://github.com/gryf/coach.git synced 2025-12-18 11:40:18 +01:00
Commit Graph

228 Commits

Author SHA1 Message Date
Gal Leibovich
310d31c227 integration test changes to reach the train part (#254)
* integration test changes to override heatup to 1000 steps + run each preset for 30 sec (to make sure we reach the train part)

* fixes to failing presets uncovered with this change + changes in the golden testing to properly test BatchRL

* fix for rainbow dqn

* fix to gym_environment (due to a change in Gym 0.12.1) + fix for rainbow DQN + some bug-fix in utils.squeeze_list

* fix for NEC agent
2019-03-27 21:14:19 +02:00
Gal Leibovich
6e08c55ad5 Enabling-more-agents-for-Batch-RL-and-cleanup (#258)
- allowing for the last training batch drawn to be smaller than batch_size
- adding support for more agents in Batch RL by adding softmax with temperature to the corresponding heads
- adding a CartPole_QR_DQN preset with a golden test
- cleanups
2019-03-21 16:10:29 +02:00
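The softmax-with-temperature mentioned in the commit above is a standard construction; a minimal plain-Python sketch (illustrative only, not the actual Coach head code):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw head outputs into a probability distribution.

    Higher temperature flattens the distribution toward uniform;
    temperature -> 0 approaches a one-hot argmax.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With temperature 1.0 this is ordinary softmax; raising the temperature spreads probability mass more evenly across actions, which is what makes otherwise-greedy value-based agents usable for off-policy evaluation in Batch RL.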
Gal Leibovich
abec59f367 fixes to rainbow dqn + a cartpole based golden test (#253) 2019-03-21 12:57:56 +02:00
Gal Leibovich
e3c7e526c7 Batch RL (#238) 2019-03-19 18:07:09 +02:00
anabwan
4a8451ff02 tests: added new tests + utils code improved (#221)
* tests: added new tests + utils code improved

* new tests:
- test_preset_args_combination
- test_preset_mxnet_framework

* added more flags to test_preset_args
* added validation for flags in utils

* tests: added new tests + fixed utils

* tests: added new checkpoint test

* tests: added checkpoint test + improved utils

* tests: added tests + improved validations

* bump integration CI run timeout.

* tests: improved test run time + added functional test marker
2019-03-18 11:21:43 +02:00
Gal Leibovich
d6158a5cfc restoring from a checkpoint file (#247) 2019-03-17 16:28:09 +02:00
shadiendrawis
f03bd7ad93 benchmark update (#250) 2019-03-17 15:33:28 +02:00
Gal Leibovich
c02333b1ba fix dashboard to allow connections from a remote machine. (#231) 2019-03-10 13:15:14 +02:00
Gal Leibovich
9a895a1ac7 bug-fix for l2_regularization not in use (#230)
* bug-fix for l2_regularization not in use
* removing not in use TF REGULARIZATION_LOSSES collection
2019-03-03 15:11:06 +02:00
Gal Novik
10220be9be Adding support for evaluation only mode with predefined number of steps (#225) 2019-03-03 10:03:45 +02:00
Ajay Deshpande
2c1a9dbf20 Adding framework for multinode tests (#149)
* Currently runs CartPole_ClippedPPO and Mujoco_ClippedPPO with the inverted_pendulum level.
2019-02-26 13:53:12 -08:00
shadiendrawis
2b5d1dabe6 ACER algorithm (#184)
* initial ACER commit

* Code cleanup + several fixes

* Q-retrace bug fix + small clean-ups

* added documentation for acer

* ACER benchmarks

* update benchmarks table

* Add nightly running of golden and trace tests. (#202)

Resolves #200

* comment out nightly trace tests until values reset.

* remove redundant observe ignore (#168)

* ensure nightly test env containers exist. (#205)

Also bump integration test timeout

* wxPython removal (#207)

Replacing wxPython with Python's Tkinter.
Also removing the option to choose multiple files as it is unused and causes errors, and fixing the load file/directory spinner.

* Create CONTRIBUTING.md (#210)

* Create CONTRIBUTING.md.  Resolves #188

* run nightly golden tests sequentially. (#217)

Should reduce resource requirements and potential CPU contention but increases
overall execution time.

* tests: added new setup configuration + test args (#211)

- added utils for future tests and conftest
- added test args

* new docs build

* golden test update
2019-02-20 23:52:34 +02:00
anabwan
7253f511ed tests: added new setup configuration + test args (#211)
- added utils for future tests and conftest
- added test args
2019-02-13 07:43:59 -05:00
Gal Novik
135f02fb46 wxPython removal (#207)
Replacing wxPython with Python's Tkinter.
Also removing the option to choose multiple files as it is unused and causes errors, and fixing the load file/directory spinner.
2019-01-23 20:49:37 +02:00
Cody Hsieh
bf0a65eefd remove redundant observe ignore (#168) 2019-01-17 14:08:05 -08:00
Zach Dwiel
8672f8b542 Fix golden tests (#199)
* remove unused functions utils.read_json and utils.write_json
* increase verbosity of golden tests; detect errors in golden tests
2019-01-16 17:38:11 -08:00
Zach Dwiel
fedb4cbd7c Cleanup and refactoring (#171) 2019-01-15 10:04:53 +02:00
Zach Dwiel
cd812b0d25 more clear names for methods of Space (#181)
* rename Space.val_matches_space_definition -> contains; Space.is_point_in_space_shape -> valid_index
* rename valid_index -> is_valid_index
2019-01-14 15:02:53 -05:00
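For illustration, a toy Space showing the renamed interface (only the method names come from the commit; the class body is hypothetical, not Coach's implementation):

```python
class Space:
    """Toy box space over a given shape and value range."""

    def __init__(self, shape, low, high):
        self.shape = shape            # e.g. (3, 4)
        self.low, self.high = low, high

    def contains(self, val):
        # formerly val_matches_space_definition:
        # is `val` a legal value for this space?
        return self.low <= val <= self.high

    def is_valid_index(self, index):
        # formerly is_point_in_space_shape (briefly valid_index):
        # does `index` address a cell inside the space's shape?
        return len(index) == len(self.shape) and all(
            0 <= i < d for i, d in zip(index, self.shape))
```

The rename makes the distinction explicit: `contains` asks about values, `is_valid_index` asks about positions.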
Zach Dwiel
0ccc333d77 raise value error if there is an invalid action space (#179) 2019-01-13 11:06:48 +02:00
Scott Leishman
053adf0ca9 prevent long job CI timeouts owing to lack of EKS token refresh (#183)
* add additional info during exception of eks runs.

* ensure we refresh k8s config after long calls.

The Kubernetes client on EKS has a 10-minute token time-to-live, so long
jobs will hit unauthorized errors if the token is not refreshed.
2019-01-09 15:12:00 -08:00
Gourav Roy
b1e9ea48d8 Refactored GlobalVariableSaver 2019-01-03 15:08:34 -08:00
Gourav Roy
619ea0944e Avoid Memory Leak in Rollout worker
ISSUE: When we restore checkpoints, we create new nodes in the
TensorFlow graph. This happens when we assign a new value (an op node) to a
RefVariable in GlobalVariableSaver. With every restore the TF graph
grows, since new nodes are created and old unused nodes are never
removed. This causes a memory leak in the restore_checkpoint code path.

FIX: We use a TF placeholder to update the variables, which avoids the
memory leak.
2019-01-02 23:09:09 -08:00
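Schematically, the leak and the fix differ in whether a new assign op is created on every restore. A toy stand-in for the TF graph (pure Python, not TensorFlow; in TF 1.x terms the fix builds `var.assign(placeholder)` once and feeds new values via `feed_dict` instead of calling `var.assign(value)` per restore):

```python
class ToyGraph:
    """Stand-in for a TF graph: just records op nodes."""

    def __init__(self):
        self.ops = []

    def add_op(self, name):
        self.ops.append(name)
        return name

def leaky_restore(graph, n_restores):
    # Naive pattern: assigning a constant builds a fresh op node every
    # call, so the graph grows with every checkpoint restore.
    for i in range(n_restores):
        graph.add_op(f"assign_const_{i}")

def placeholder_restore(graph, n_restores):
    # Fixed pattern: build the placeholder and assign op once, then only
    # feed new values at run time -- the graph size stays constant.
    graph.add_op("placeholder")
    assign_op = graph.add_op("assign_from_placeholder")
    for _ in range(n_restores):
        pass  # conceptually: sess.run(assign_op, feed_dict={ph: value})
    return assign_op
```

After many restores the naive graph holds one op per restore, while the placeholder graph still holds exactly two nodes.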
Gourav Roy
c377363e50 Revert "Changes to avoid memory leak in rollout worker"
This reverts commit 801aed5e10.
2019-01-02 23:09:09 -08:00
Gourav Roy
779d3694b4 Revert "comment out the part of test in 'test_basic_rl_graph_manager_with_cartpole_dqn_and_repeated_checkpoint_restore' that run in infinite loop"
This reverts commit b8d21c73bf.
2019-01-02 23:09:09 -08:00
Gourav Roy
6dd7ae2343 Revert "Avoid Memory Leak in Rollout worker"
This reverts commit c694766fad.
2019-01-02 23:09:09 -08:00
Gourav Roy
2461892c9e Revert "Updated comments"
This reverts commit 740f7937cd.
2019-01-02 23:09:09 -08:00
Gourav Roy
740f7937cd Updated comments 2018-12-25 21:52:07 -08:00
Gourav Roy
c694766fad Avoid Memory Leak in Rollout worker
ISSUE: When we restore checkpoints, we create new nodes in the
TensorFlow graph. This happens when we assign a new value (an op node) to a
RefVariable in GlobalVariableSaver. With every restore the TF graph
grows, since new nodes are created and old unused nodes are never
removed. This causes a memory leak in the restore_checkpoint code path.

FIX: We reset the TensorFlow graph and recreate the Global, Online and
Target networks on every restore. This ensures that the old unused nodes
in the TF graph are dropped.
2018-12-25 21:04:21 -08:00
x77a1
02f2db1264 Merge branch 'master' into master 2018-12-17 12:44:27 -08:00
Gal Leibovich
4c914c057c fix for finding the right filter checkpoint to restore + do not update internal filter state when evaluating + fix SharedRunningStats checkpoint filenames (#147) 2018-12-17 21:36:27 +02:00
Neta Zmora
b4bc8a476c Bug fix: when enabling 'heatup_using_network_decisions', we should add the configured noise (#162)
During heatup we may want to add agent-generated noise (i.e. not "simple" random noise).
This is enabled by setting 'heatup_using_network_decisions' to True. For example:
	agent_params = DDPGAgentParameters()
	agent_params.algorithm.heatup_using_network_decisions = True

The fix ensures that the correct noise is added not just while in the TRAINING phase, but
also during the HEATUP phase.

This problem only surfaced now because no one had enabled
'heatup_using_network_decisions' before (in my configuration I do enable it).
2018-12-17 10:08:54 +02:00
gouravr
b8d21c73bf comment out the part of test in 'test_basic_rl_graph_manager_with_cartpole_dqn_and_repeated_checkpoint_restore' that run in infinite loop 2018-12-16 10:56:40 -08:00
x77a1
1f0980c448 Merge branch 'master' into master 2018-12-16 09:37:00 -08:00
Gal Leibovich
f9ee526536 Fix for issue #128 - circular DQN import (#130) 2018-12-16 16:06:44 +02:00
gouravr
801aed5e10 Changes to avoid memory leak in rollout worker
Currently in the rollout worker we call restore_checkpoint repeatedly to load the latest model into memory. The restore_checkpoint function calls the checkpoint saver, which uses GlobalVariablesSaver; GlobalVariablesSaver does not release the references to the previous model's variables. As a result, memory keeps growing until the rollout worker crashes.

This change avoids using the checkpoint saver in the rollout worker, as I believe it is not needed in this code path.

Also added a test that easily reproduces the issue using the CartPole example. We were also seeing this issue with the AWS DeepRacer implementation, and the current change avoids the memory leak there as well.
2018-12-15 12:26:31 -08:00
zach dwiel
e08accdc22 allow case insensitive selected level name matching 2018-12-11 12:35:30 -05:00
Zach Dwiel
d0248e03c6 add meaningful error message in the event that the action space is not one that can be used (#151) 2018-12-11 09:09:24 +02:00
Gal Leibovich
f12857a8c7 Docs changes - fixing blogpost links, removing importing all exploration policies (#139)
* updated docs

* removing imports for all exploration policies in __init__ + setting the right blog-post link

* small cleanups
2018-12-05 16:16:16 -05:00
Sina Afrooze
155b78b995 Fix warning on import TF or MxNet, when only one of the frameworks is installed (#140) 2018-12-05 11:52:24 +02:00
Ryan Peach
9e66bb653e Enable creating custom tensorflow heads, embedders, and middleware. (#135)
Allowing components to have a path property.
2018-12-05 11:40:06 +02:00
Ryan Peach
3c58ed740b 'CompositeAgent' object has no attribute 'handle_episode_ended' (#136) 2018-12-05 11:28:16 +02:00
Ryan Peach
436b16016e Added num_transitions to Memory interface (#137) 2018-12-05 10:33:25 +02:00
Ryan Peach
28e5b8b612 Minor bugfix on RewardFilter in Readme (#133) 2018-11-30 16:02:08 -08:00
Gal Novik
fc6604c09c added missing license headers 2018-11-27 22:43:40 +02:00
Balaji Subramaniam
d06197f663 Add documentation on distributed Coach. (#158)
* Added documentation on distributed Coach.
2018-11-27 12:26:15 +02:00
Gal Leibovich
5674749ed5 workaround for resolving the issue of restoring a multi-node training checkpoint to single worker (#156) 2018-11-26 00:08:43 +02:00
Gal Leibovich
ab10852ad9 hacky way to resolve the checkpointing issue (#154) 2018-11-25 16:14:15 +02:00
Gal Leibovich
11170d5ba3 fix dist. tf (#153) 2018-11-25 14:02:24 +02:00
Sina Afrooze
19a68812f6 Added ONNX compatible broadcast_like function (#152)
- Also simplified the hybrid_clip implementation.
2018-11-25 11:23:18 +02:00
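One common ONNX-friendly way to express `broadcast_like` is an elementwise multiply against a ones tensor of the target shape, which lowers to primitive ops that ONNX exporters handle; a NumPy sketch (an assumed approach, not necessarily this commit's implementation):

```python
import numpy as np

def broadcast_like(x, like):
    """Broadcast `x` to the shape of `like` using only elementwise ops."""
    return x * np.ones_like(like, dtype=x.dtype)
```

For example, a column vector of shape (2, 1) multiplied by ones of shape (2, 3) yields the (2, 3) broadcast result without any shape-inspection op in the exported graph.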
Balaji Subramaniam
8df425b6e1 Update how save checkpoint secs arg is handled in distributed Coach. (#151) 2018-11-25 00:05:24 -08:00