Mirror of https://github.com/gryf/coach.git
Commit Graph

44 Commits

shadiendrawis
0896f43097 Robosuite exploration (#478)
* Add Robosuite parameters for all env types + initialize env flow

* Init flow done

* Rest of Environment API complete for RobosuiteEnvironment

* RobosuiteEnvironment changes

* Observation stacking filter
* Add proper frame_skip in addition to control_freq
* Hardcode Coach rendering to 'frontview' camera

* Robosuite_Lift_DDPG preset + Robosuite env updates

* Move observation stacking filter from env to preset
* Pre-process observation - concatenate the depth map (if present) to the
  image and the object state (if present) to the robot state
* Preset parameters based on Surreal DDPG parameters, taken from:
  https://github.com/SurrealAI/surreal/blob/master/surreal/main/ddpg_configs.py

* RobosuiteEnvironment fixes - working now with PyGame rendering

* Preset minor modifications

* ObservationStackingFilter - option to concat non-vector observations

* Consider frame skip when setting horizon in robosuite env

* Robosuite lift preset - update heatup length and training interval

* Robosuite env - change control_freq to 10 to match Surreal usage

* Robosuite clipped PPO preset

* Distribute multiple workers (-n #) over multiple GPUs
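One simple way to realize the multi-GPU distribution mentioned in the item above is to pin each worker process to a device round-robin before the framework initializes. A minimal sketch, assuming a hypothetical helper (the function name and placement are assumptions, not Coach's actual code):

```python
import os

def pin_worker_to_gpu(worker_index, num_gpus):
    # Round-robin assignment: worker i only sees GPU (i % num_gpus).
    # Must run before TensorFlow/MXNet is imported in the worker process.
    if num_gpus > 0:
        os.environ["CUDA_VISIBLE_DEVICES"] = str(worker_index % num_gpus)
    else:
        os.environ["CUDA_VISIBLE_DEVICES"] = ""  # fall back to CPU
```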

* Clipped PPO memory optimization from @shadiendrawis

* Fixes to evaluation only workers

* RoboSuite_ClippedPPO: Update training interval

* Undo last commit (update training interval)

* Fix "doube-negative" if conditions

* multi-agent single-trainer clipped ppo training with cartpole

* cleanups (not done yet) + roughly tuned hyper-params for MAST

* Switch to Robosuite v1 APIs

* Change presets to IK controller

* more cleanups + enabling evaluation worker + better logging

* RoboSuite_Lift_ClippedPPO updates

* Fix major bug in obs normalization filter setup

* Reduce coupling between Robosuite API and Coach environment

* Now only non-task-specific parameters are explicitly defined
  in Coach
* Removed a bunch of enums of Robosuite elements, using simple
  strings instead
* With this change new environments/robots/controllers in Robosuite
  can be used immediately in Coach

* MAST: better logging of actor-trainer interaction + bug fixes + performance improvements.

Still missing: fixed pubsub for obs normalization running stats + logging for trainer signals

* lstm support for ppo

* setting JOINT VELOCITY action space by default + fix for EveryNEpisodes video dump filter + new TaskIDDumpFilter + allowing OR between video dump filters

* Separate Robosuite clipped PPO preset for the non-MAST case

* Add flatten layer to architectures and use it in Robosuite presets

This is required for embedders that mix conv and dense

TODO: Add MXNet implementation

* publishing running_stats together with the published policy + hyper-param for when to publish a policy + cleanups

* bug-fix for memory leak in MAST

* Bugfix: Return value in TF BatchnormActivationDropout.to_tf_instance

* Explicit activations in embedder scheme so there's no ReLU after flatten

* Add clipped PPO heads with configurable dense layers at the beginning

* This is a workaround needed to mimic Surreal-PPO, where the CNN and
  LSTM are shared between actor and critic but the FC layers are not
  shared
* Added a "SchemeBuilder" class, currently only used for the new heads
  but we can change Middleware and Embedder implementations to use it
  as well
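As a rough illustration of the sharing pattern described above (trunk shared between actor and critic, per-head dense layers not shared), here is a minimal Keras-style sketch; the layer sizes, names, and the Keras API choice are assumptions, not Coach's actual SchemeBuilder or head implementation:

```python
import tensorflow as tf

def build_shared_trunk_actor_critic(obs_shape, num_actions, head_dense=(300, 200)):
    # Shared conv trunk (shared with an LSTM middleware in the
    # Surreal-PPO-like setup); each head then gets its own, non-shared
    # dense layers before its output.
    obs = tf.keras.Input(shape=obs_shape)
    x = tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu")(obs)
    x = tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu")(x)
    shared = tf.keras.layers.Flatten()(x)

    def head(inp, name):
        # Dense layers unique to this head (not shared across actor/critic).
        h = inp
        for i, size in enumerate(head_dense):
            h = tf.keras.layers.Dense(size, activation="relu",
                                      name="%s_dense_%d" % (name, i))(h)
        return h

    policy_logits = tf.keras.layers.Dense(num_actions, name="policy")(head(shared, "actor"))
    value = tf.keras.layers.Dense(1, name="value")(head(shared, "critic"))
    return tf.keras.Model(inputs=obs, outputs=[policy_logits, value])
```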

* Video dump setting fix in basic preset

* logging screen output to file

* coach to start the redis-server for a MAST run

* trainer drops off-policy data + old policy in ClippedPPO updates only after policy was published + logging free memory stats + actors check for a new policy only at the beginning of a new episode + fixed a bug where the trainer was logging "Training Reward = 0", causing the dashboard to incorrectly display the signal

* Add missing set_internal_state function in TFSharedRunningStats

* Robosuite preset - use SingleLevelSelect instead of hard-coded level

* policy ID published directly on Redis

* Small fix when writing to log file

* Major bugfix in Robosuite presets - pass dense sizes to heads

* RoboSuite_Lift_ClippedPPO hyper-params update

* add horizon and value bootstrap to GAE calculation, fix A3C with LSTM
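The value-bootstrap part of the item above is the standard trick of truncating the rollout at a horizon and bootstrapping with the critic's estimate of the next state instead of assuming the episode ended there. A minimal numpy sketch (function and argument names are assumptions, not Coach's exact implementation):

```python
import numpy as np

def gae_with_bootstrap(rewards, values, bootstrap_value, gamma=0.99, lam=0.95):
    # Generalized Advantage Estimation over a truncated rollout.
    # `values` holds V(s_t) for each step; `bootstrap_value` is V(s_T) for the
    # state reached after the last step (use 0 if the episode truly terminated).
    values = np.append(values, bootstrap_value)
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    returns = advantages + values[:-1]  # targets for the value head
    return advantages, returns
```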

* adam hyper-params from mujoco

* updated MAST preset with IK_POSE_POS controller

* configurable initialization for policy stdev + custom extra noise per actor + logging of policy stdev to dashboard

* value loss weighting of 0.5

* minor fixes + presets

* bug-fix for MAST where the old policy in the trainer kept updating every training iteration, while it should only update after every policy publish

* bug-fix: reset_internal_state was not called by the trainer

* bug-fixes in the LSTM flow + some hyper-param adjustments for CartPole_ClippedPPO_LSTM -> now trains and sometimes reaches 200

* adding back the horizon hyper-param - a messy commit

* another bug-fix missing from prev commit

* set control_freq=2 to match action_scale 0.125

* ClippedPPO with MAST cleanups and some preps for TD3 with MAST

* TD3 presets. RoboSuite_Lift_TD3 seems to work well with multi-process runs (-n 8)

* setting termination on collision to be on by default

* bug-fix following prev-prev commit

* initial cube exploration environment with TD3 commit

* bug fix + minor refactoring

* several parameter changes and RND debugging

* Robosuite Gym wrapper + Rename TD3_Random* -> Random*

* algorithm update

* Add RoboSuite v1 env + presets (to eventually replace non-v1 ones)

* Remove grasping presets, keep only V1 exp. presets (w/o V1 tag)

* Keep just robosuite V1 env as the 'robosuite_environment' module

* Exclude Robosuite and MAST presets from integration tests

* Exclude LSTM and MAST presets from golden tests

* Fix mistakenly removed import

* Revert debug changes in ReaderWriterLock

* Try another way to exclude LSTM/MAST golden tests

* Remove debug prints

* Remove PreDense heads, unused in the end

* Missed removing an instance of PreDense head

* Remove MAST, not required for this PR

* Undo unused concat option in ObservationStackingFilter

* Remove LSTM updates, not required in this PR

* Update README.md

* code changes for the exploration flow to work with robosuite master branch

* code cleanup + documentation

* jupyter tutorial for the goal-based exploration + scatter plot

* typo fix

* Update README.md

* separate parameter for the obs-goal observation + small fixes

* code clarity fixes

* adjustment in tutorial 5

* Update tutorial

* Update tutorial

Co-authored-by: Guy Jacob <guy.jacob@intel.com>
Co-authored-by: Gal Leibovich <gal.leibovich@intel.com>
Co-authored-by: shadi.endrawis <sendrawi@aipg-ra-skx-03.ra.intel.com>
2021-06-01 00:34:19 +03:00
Guy Jacob
a1a2e67fbd logging screen output to file (#479)
Co-authored-by: Gal Leibovich <gal.leibovich@intel.com>
2021-05-06 18:02:27 +03:00
Zach Dwiel
7b0fccb041 Add RedisDataStore (#295)
* GraphManager.set_session also sets self.sess

* make sure that GraphManager.fetch_from_worker uses training phase

* remove unnecessary phase setting in training worker

* reorganize rollout worker

* provide default name to GlobalVariableSaver.__init__ since it isn't really used anyway

* allow dividing TrainingSteps and EnvironmentSteps

* add timestamps to the log

* added redis data store

* merge conflict fix
2019-08-28 21:15:58 +03:00
shadiendrawis
8e812ef82f Coach as a library (#348)
* CoachInterface + tutorial

* Some improvements and typo fixes

* merge tutorials 0 and 4

* typo fix + additional tutorial changes

* tutorial changes

* added reading signals and experiment path argument
2019-06-19 18:05:03 +03:00
Timo Kaufmann
8df3c46756 Do not hardcode path to bash (#332) 2019-06-10 20:10:28 +03:00
Ajay Deshpande
33dc29ee99 Uploading checkpoint if crd provided (#191)
* Uploading checkpoint if crd provided
* Changing the calculation of total steps because of a recent change in core_types

Fixes #195
2019-04-26 12:27:33 -07:00
Gal Leibovich
4741b0b916 BCQ variant on top of DDQN (#276)
* kNN based model for predicting which actions to drop
* fix for seeds with batch rl
2019-04-16 17:06:23 +03:00
shadiendrawis
0b808f0794 remove -ept flag (#283) 2019-04-03 16:32:24 +03:00
Gal Leibovich
d6158a5cfc restoring from a checkpoint file (#247) 2019-03-17 16:28:09 +02:00
Gal Novik
10220be9be Adding support for evaluation only mode with predefined number of steps (#225) 2019-03-03 10:03:45 +02:00
Ajay Deshpande
2c1a9dbf20 Adding framework for multinode tests (#149)
* Currently runs CartPole_ClippedPPO and Mujoco_ClippedPPO with inverted_pendulum level.
2019-02-26 13:53:12 -08:00
anabwan
7253f511ed tests: added new setup configuration + test args (#211)
- added utils for future tests and conftest
- added test args
2019-02-13 07:43:59 -05:00
Gal Leibovich
f12857a8c7 Docs changes - fixing blogpost links, removing importing all exploration policies (#139)
* updated docs

* removing imports for all exploration policies in __init__ + setting the right blog-post link

* small cleanups
2018-12-05 16:16:16 -05:00
Balaji Subramaniam
8df425b6e1 Update how save checkpoint secs arg is handled in distributed Coach. (#151) 2018-11-25 00:05:24 -08:00
Balaji Subramaniam
13d2679af4 Sync experiment dir, videos, gifs to S3. (#147) 2018-11-23 20:52:12 -08:00
Ajay Deshpande
4a6c404070 Adding worker logs and plumbed task_parameters to distributed coach (#130) 2018-11-23 15:35:11 -08:00
Gal Leibovich
2b4c9c6774 Removing graph_manager param (#141) 2018-11-23 11:42:54 -08:00
Thom Lane
949d91321a Added explicit environment closing (#129) 2018-11-22 14:25:03 +02:00
Gal Leibovich
a112ee69f6 Save filters' internal state (#127)
* save filters internal state

* moving the restore to be made from within NumpyRunningStats
2018-11-20 17:21:48 +02:00
Thom Lane
9210909050 Added MXNet to arg docs. (#121) 2018-11-19 11:31:28 +02:00
Gal Leibovich
d4d06aaea6 remove kubernetes dependency (#117) 2018-11-18 18:10:22 +02:00
Gal Leibovich
6caf721d1c Numpy shared running stats (#97) 2018-11-18 14:46:40 +02:00
Gal Leibovich
9fd4d55623 Making stop condition optional by using a flag (#113)
* apply stop condition flag (default: ignore the stop condition)
2018-11-18 13:37:39 +02:00
Balaji Subramaniam
dea1826658 Re-enable NFS data store. (#101) 2018-11-16 13:55:33 -08:00
Ajay Deshpande
fde73ced13 Simulating the act on the trainer. (#65)
* Remove the use of daemon threads for Redis subscribe.
* Emulate act and observe on trainer side to update internal vars.
2018-11-15 08:38:58 -08:00
Itai Caspi
6d40ad1650 update of api docstrings across coach and tutorials [WIP] (#91)
* updating the documentation website
* adding the built docs
* update of api docstrings across coach and tutorials 0-2
* added some missing api documentation
* New Sphinx based documentation
2018-11-15 15:00:13 +02:00
Itai Caspi
0fe583186e fixing the coach entrypoint after adding the CoachLauncher abstraction (#92) 2018-11-12 10:26:49 -08:00
Leo Dirac
2804a7c24f Refactor launcher to be object-oriented (#63)
* Import of annoy library uses failed_import mechanism.
2018-11-10 22:10:19 +02:00
Itai Caspi
83e0b09a6a adding the missing export_onnx_graph parameter to task parameters (#73) 2018-11-08 12:52:42 +02:00
Itai Caspi
e7a91b4dc3 Fix cmd line arguments handling (#68)
* refactoring the merging of the task parameters and the command line parameters
* removing some unused command line arguments
* fix for saving checkpoints when not passing through coach.py
2018-11-07 15:47:02 +02:00
Itai Caspi
811152126c Export graph to ONNX (#61)
Implements the ONNX graph exporting feature. 
Currently does not work for NAF, C51 and A3C_LSTM due to unsupported TF layers in the tf2onnx library.
2018-11-06 10:55:21 +02:00
Leo Dirac
d75df17d97 Modifying ScreenLogger to optionally not output color codes (#56)
* Modifying ScreenLogger to not output color when configured by new CLI parameter
2018-11-05 15:25:49 -08:00
Balaji Subramaniam
7e7006305a Integrate coach.py params with distributed Coach. (#42)
* Integrate coach.py params with distributed Coach.
* Minor improvements
- Use enums instead of constants.
- Reduce code duplication.
- Ask experiment name with timeout.
2018-11-05 09:33:30 -08:00
Sina Afrooze
95b4fc6888 Added ability to switch between tensorflow and mxnet using the -f command-line argument. (#48)
NOTE: the tensorflow framework works fine if mxnet is not installed in the env, but mxnet will not work if tensorflow is not installed, because of the code in network_wrapper.
2018-10-30 15:29:34 -07:00
Thom Lane
324c67d614 Bug fix: Removed reference to args which is out of scope. Conditioning now performed one level above. (#54) 2018-10-29 22:29:22 -07:00
zach dwiel
f835ac902c fix renaming: save_checkpoint_sec -> checkpoint_save_secs 2018-10-24 10:52:18 -04:00
Zach Dwiel
700a175902 rename save_checkpoint_secs -> checkpoint_save_secs 2018-10-23 17:10:58 -04:00
Zach Dwiel
9804b033a2 rename save_checkpoint_dir -> checkpoint_save_dir 2018-10-23 17:10:58 -04:00
Zach Dwiel
6541bc76b9 working checkpoints 2018-10-23 16:41:57 -04:00
Zach Dwiel
3714d8ec80 extract functions display_all_presets_and_exit, expand_preset 2018-10-23 16:40:33 -04:00
Zach Dwiel
5758c2f23e typo; increased detail in comment 2018-10-23 16:35:06 -04:00
Shadi Endrawis
51726a5b80 network_improvements branch merge 2018-10-02 13:43:36 +03:00
itaicaspi-intel
fd2f4b0852 bug fix in HRL HER memory + some small improvements 2018-08-29 14:36:18 +03:00
Gal Novik
19ca5c24b1 pre-release 0.10.0 2018-08-13 17:11:34 +03:00