mirror of https://github.com/gryf/coach.git synced 2025-12-18 03:30:19 +01:00
Commit Graph

50 Commits

Author SHA1 Message Date
shadiendrawis
0896f43097 Robosuite exploration (#478)
* Add Robosuite parameters for all env types + initialize env flow

* Init flow done

* Rest of Environment API complete for RobosuiteEnvironment

* RobosuiteEnvironment changes

* Observation stacking filter
* Add proper frame_skip in addition to control_freq
* Hardcode Coach rendering to 'frontview' camera

* Robosuite_Lift_DDPG preset + Robosuite env updates

* Move observation stacking filter from env to preset
* Pre-process observation - concatenate depth map (if exists)
  to image and object state (if exists) to robot state
* Preset parameters based on Surreal DDPG parameters, taken from:
  https://github.com/SurrealAI/surreal/blob/master/surreal/main/ddpg_configs.py

* RobosuiteEnvironment fixes - working now with PyGame rendering

* Preset minor modifications

* ObservationStackingFilter - option to concat non-vector observations

* Consider frame skip when setting horizon in robosuite env

* Robosuite lift preset - update heatup length and training interval

* Robosuite env - change control_freq to 10 to match Surreal usage

* Robosuite clipped PPO preset

* Distribute multiple workers (-n #) over multiple GPUs

* Clipped PPO memory optimization from @shadiendrawis

* Fixes to evaluation only workers

* RoboSuite_ClippedPPO: Update training interval

* Undo last commit (update training interval)

* Fix "double-negative" if conditions

* multi-agent single-trainer clipped ppo training with cartpole

* cleanups (not done yet) + roughly tuned hyper-params for MAST

* Switch to Robosuite v1 APIs

* Change presets to IK controller

* more cleanups + enabling evaluation worker + better logging

* RoboSuite_Lift_ClippedPPO updates

* Fix major bug in obs normalization filter setup

* Reduce coupling between Robosuite API and Coach environment

* Now only non task-specific parameters are explicitly defined
  in Coach
* Removed a bunch of enums of Robosuite elements, using simple
  strings instead
* With this change new environments/robots/controllers in Robosuite
  can be used immediately in Coach

* MAST: better logging of actor-trainer interaction + bug fixes + performance improvements.

Still missing: fixed pubsub for obs normalization running stats + logging for trainer signals

* lstm support for ppo

* setting JOINT VELOCITY action space by default + fix for EveryNEpisodes video dump filter + new TaskIDDumpFilter + allowing OR between video dump filters

* Separate Robosuite clipped PPO preset for the non-MAST case

* Add flatten layer to architectures and use it in Robosuite presets

This is required for embedders that mix conv and dense

TODO: Add MXNet implementation

* publishing running_stats together with the published policy + hyper-param for when to publish a policy + cleanups

* bug-fix for memory leak in MAST

* Bugfix: Return value in TF BatchnormActivationDropout.to_tf_instance

* Explicit activations in embedder scheme so there's no ReLU after flatten

* Add clipped PPO heads with configurable dense layers at the beginning

* This is a workaround needed to mimic Surreal-PPO, where the CNN and
  LSTM are shared between actor and critic but the FC layers are not
  shared
* Added a "SchemeBuilder" class, currently only used for the new heads
  but we can change Middleware and Embedder implementations to use it
  as well

* Video dump setting fix in basic preset

* logging screen output to file

* coach to start the redis-server for a MAST run

* trainer drops off-policy data + old policy in ClippedPPO updates only after policy was published + logging free memory stats + actors check for a new policy only at the beginning of a new episode + fixed a bug where the trainer was logging "Training Reward = 0", causing dashboard to incorrectly display the signal

* Add missing set_internal_state function in TFSharedRunningStats

* Robosuite preset - use SingleLevelSelect instead of hard-coded level

* policy ID published directly on Redis

* Small fix when writing to log file

* Major bugfix in Robosuite presets - pass dense sizes to heads

* RoboSuite_Lift_ClippedPPO hyper-params update

* add horizon and value bootstrap to GAE calculation, fix A3C with LSTM

* adam hyper-params from mujoco

* updated MAST preset with IK_POSE_POS controller

* configurable initialization for policy stdev + custom extra noise per actor + logging of policy stdev to dashboard

* values loss weighting of 0.5

* minor fixes + presets

* bug-fix for MAST where the old policy in the trainer kept updating every training iteration when it should only update after each policy publish

* bug-fix: reset_internal_state was not called by the trainer

* bug-fixes in the lstm flow + some hyper-param adjustments for CartPole_ClippedPPO_LSTM -> training and sometimes reaches 200

* adding back the horizon hyper-param - a messy commit

* another bug-fix missing from prev commit

* set control_freq=2 to match action_scale 0.125

* ClippedPPO with MAST cleanups and some preps for TD3 with MAST

* TD3 presets. RoboSuite_Lift_TD3 seems to work well with multi-process runs (-n 8)

* setting termination on collision to be on by default

* bug-fix following prev-prev commit

* initial cube exploration environment with TD3 commit

* bug fix + minor refactoring

* several parameter changes and RND debugging

* Robosuite Gym wrapper + Rename TD3_Random* -> Random*

* algorithm update

* Add RoboSuite v1 env + presets (to eventually replace non-v1 ones)

* Remove grasping presets, keep only V1 exp. presets (w/o V1 tag)

* Keep just robosuite V1 env as the 'robosuite_environment' module

* Exclude Robosuite and MAST presets from integration tests

* Exclude LSTM and MAST presets from golden tests

* Fix mistakenly removed import

* Revert debug changes in ReaderWriterLock

* Try another way to exclude LSTM/MAST golden tests

* Remove debug prints

* Remove PreDense heads, unused in the end

* Missed removing an instance of PreDense head

* Remove MAST, not required for this PR

* Undo unused concat option in ObservationStackingFilter

* Remove LSTM updates, not required in this PR

* Update README.md

* code changes for the exploration flow to work with robosuite master branch

* code cleanup + documentation

* jupyter tutorial for the goal-based exploration + scatter plot

* typo fix

* Update README.md

* separate parameter for the obs-goal observation + small fixes

* code clarity fixes

* adjustment in tutorial 5

* Update tutorial

* Update tutorial

Co-authored-by: Guy Jacob <guy.jacob@intel.com>
Co-authored-by: Gal Leibovich <gal.leibovich@intel.com>
Co-authored-by: shadi.endrawis <sendrawi@aipg-ra-skx-03.ra.intel.com>
2021-06-01 00:34:19 +03:00
Gal Novik
59e08034c6 Update README.md 2020-11-09 10:25:05 +02:00
Gal Novik
57e809c094 Docs updates following github repo change 2020-11-08 11:54:38 +02:00
Brian Broll
0867d8d0fb Fixed typo: Nerual -> Neural (#425) 2019-11-16 21:13:24 +02:00
Gal Novik
92460736bc Updated tutorial and docs (#386)
Improved getting started tutorial, and updated docs to point to version 1.0.0
2019-08-05 16:46:15 +03:00
Gal Novik
2697142d5a Release 1.0.0 (#382)
* Updating README
* Shortening test cycles
2019-07-24 16:10:58 +03:00
Gal Leibovich
19ad2d60a7 Batch RL Tutorial (#372) 2019-07-14 18:43:48 +03:00
Gal Leibovich
7eb884c5b2 TD3 (#338) 2019-06-16 11:11:21 +03:00
Gal Novik
e49aac05aa Update README.md (#341)
Adding some links to the tutorials from the README
2019-06-04 11:35:34 +03:00
guyk1971
74db141d5e SAC algorithm (#282)
* SAC algorithm

* SAC - updates to agent (learn_from_batch), sac_head and sac_q_head to fix a problem in gradient calculation. Now SAC agents are able to train.
gym_environment - fixing an error in access to gym.spaces

* Soft Actor Critic - code cleanup

* code cleanup

* V-head initialization fix

* SAC benchmarks

* SAC Documentation

* typo fix

* documentation fixes

* documentation and version update

* README typo
2019-05-01 18:37:49 +03:00
Nikhil Barhate
537b549e1d fixed broken url in README (#246) 2019-03-13 22:38:33 -07:00
Scott Leishman
9c449507e0 update CARLA install docs to note python client. (#234) 2019-03-13 22:21:44 -07:00
shadiendrawis
b461a1b8ab readme fix (#228) 2019-02-24 13:46:21 +02:00
Scott Leishman
7cda5179c6 add CI status badge. 2018-12-21 10:50:28 -05:00
Gal Leibovich
f12857a8c7 Docs changes - fixing blogpost links, removing importing all exploration policies (#139)
* updated docs

* removing imports for all exploration policies in __init__ + setting the right blog-post link

* small cleanups
2018-12-05 16:16:16 -05:00
Ajay Deshpande
15fabf6ec3 Removing badge 2018-11-28 09:19:32 -08:00
Scott Leishman
3601d9bc45 CI related updates 2018-11-27 21:53:46 +00:00
Gal Novik
05c1005e94 Updated README and added .nojekyll file for github pages to work properly 2018-11-27 22:11:28 +02:00
Scott Leishman
524f8436a2 create per environment Dockerfiles. (#70)
* create per environment Dockerfiles.

Adjust CI setup to better parallelize runs.
Fix a couple of issues in golden and trace tests.
Update a few of the docs.

* bugfix in mmc agent.

Also install kubectl for CI, update badge branch.

* remove integration test parallelism.
2018-11-14 07:40:22 -08:00
Gal Leibovich
5aca3a5ed1 Update README.md 2018-08-30 23:33:44 +03:00
Itai Caspi
55c3034f4d Update README.md 2018-08-30 23:25:10 +03:00
Itai Caspi
e5526b98f8 Update README.md 2018-08-30 22:58:37 +03:00
Itai Caspi
3fd0bf4f0f Update README.md 2018-08-26 12:09:46 +03:00
Gal Leibovich
904570000a Update README.md 2018-08-20 12:04:29 +03:00
Itai Caspi
9f599f38cf Update README.md 2018-08-19 13:09:06 +03:00
Itai Caspi
1de04d6fee updated gifs in README + fix for multiworker crashes + improved Atari DQN and Dueling DDQN presets 2018-08-16 18:23:32 +03:00
Gal Leibovich
e783157b15 Update README.md 2018-08-14 16:16:41 +03:00
Itai Caspi
824fdeee59 Update README with new coach aliases 2018-08-14 14:36:41 +03:00
Gal Leibovich
7a76d63da4 Update README.md 2018-08-13 17:19:47 +03:00
Gal Novik
19ca5c24b1 pre-release 0.10.0 2018-08-13 17:11:34 +03:00
Itai Caspi
d44c329bb8 Update README.md 2018-06-25 17:46:01 +03:00
Itai Caspi
cfd4fe0faf Update README.md 2018-06-25 17:43:15 +03:00
itaicaspi-intel
5d5562bf62 moving the docs to github 2018-04-23 09:14:20 +03:00
Itai Caspi
a8d5fb7bdf Added a table of contents to the README 2018-01-27 14:31:53 +02:00
Itai Caspi
522c837e76 Update README.md 2018-01-22 12:15:23 +02:00
Itai Caspi
42f68f2e8a update the README with contact mail + small reformatting 2018-01-09 13:08:23 +02:00
Itai Caspi
b435c6d2d7 updated the links to the new Intel AI website 2018-01-09 10:25:06 +02:00
Itai Caspi
645d9d47a9 Adding bibtex to the README 2018-01-03 21:11:57 +02:00
Itai Caspi
93a54c7e8e Added a link to the 2nd blog post 2017-12-20 17:18:49 +02:00
Itai Caspi
125c7ee38d Release 0.9
Main changes are detailed below:

New features -
* CARLA 0.7 simulator integration
* Human control of the game play
* Recording of human game play and storing / loading the replay buffer
* Behavioral cloning agent and presets
* Golden tests for several presets
* Selecting between deep / shallow image embedders
* Rendering through pygame (with some boost in performance)

API changes -
* Improved environment wrapper API
* Added an evaluate flag to allow convenient evaluation of existing checkpoints
* Improve frameskip definition in Gym

Bug fixes -
* Fixed loading of checkpoints for agents with more than one network
* Fixed the N Step Q learning agent python3 compatibility
2017-12-19 19:27:16 +02:00
Miguel Morales
acd2b78a9e Update README.md
Fix algorithms list to be consistent with "<full name> (<acronym>)"
2017-11-12 16:00:00 +02:00
Itai Caspi
a8bce9828c new feature - implementation of Quantile Regression DQN (https://arxiv.org/pdf/1710.10044v1.pdf)
API change - Distributional DQN renamed to Categorical DQN
2017-11-01 15:09:07 +02:00
Gal Leibovich
eb0b57d7fa Updating PPO references per issue #11 2017-10-24 16:57:44 +03:00
Gal Leibovich
1a09b7cec3 changing python to python3 everywhere to make the supported version of python explicit 2017-10-23 13:07:54 +03:00
Gal Leibovich
c3501653f7 Update README.md 2017-10-22 09:09:37 +03:00
Itai Caspi
aacd9b5db8 Fixed link to MMC in the README 2017-10-21 20:26:45 +03:00
Gal Leibovich
79bb44d5be Update README.md with a link to Coach documentation. 2017-10-20 14:26:07 +03:00
galleibo-intel
e813eaf304 Update README.md 2017-10-19 13:19:16 +03:00
Gal Leibovich
1d4c3455e7 coach v0.8.0 2017-10-19 13:10:15 +03:00
Gal Novik
7f77813a39 Initial commit 2017-10-01 22:27:44 +03:00