mirror of https://github.com/gryf/coach.git synced 2025-12-17 11:10:20 +01:00
Commit Graph

12 Commits

Author SHA1 Message Date
Itai Caspi
d302168c8c Parallel agents fixes (#95)
* Parallel agents related bug fixes: checkpoint restore, tensorboard integration.
Added support for narrow networks.
Reference code for an unlimited number of checkpoints.
2018-05-24 14:24:19 +03:00
Itai Caspi
52eb159f69 multiple bug fixes in dealing with measurements + CartPole_DFP preset (#92) 2018-04-23 10:44:46 +03:00
Itai Caspi
a7206ed702 Multiple improvements and bug fixes (#66)
* Multiple improvements and bug fixes:

    * Use lazy stacking to save memory when using a replay buffer
    * Remove step counting for evaluation episodes
    * Reset the game between heatup and training
    * Major bug fixes in NEC (it now reproduces the paper results for Pong)
    * Image input rescaling to 0-1 is now optional
    * Change the terminal title to the experiment name
    * Observation cropping for Atari is now optional
    * Added a random number of no-op actions in Gym to match the DQN paper
    * Fixed a bug where evaluation episodes wouldn't start with the maximum possible ALE lives
    * Added a script for plotting the results of an experiment over all the Atari games
2018-02-26 12:29:07 +02:00
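The lazy stacking mentioned in the first bullet above avoids copying the full stack of frames into every stored transition. A minimal sketch of the idea (a hypothetical `LazyStack` class for illustration, not Coach's actual implementation):

```python
import numpy as np

class LazyStack:
    """Holds references to individual frames and concatenates them
    only when the stacked observation is actually requested."""

    def __init__(self, frames):
        # frames: list of np.ndarray; consecutive transitions can share
        # k-1 of their k frames, so per-step memory is one frame, not k.
        self._frames = frames

    def __array__(self, dtype=None):
        # Materialize the stack lazily, e.g. when fed to the network.
        out = np.stack(self._frames, axis=-1)
        return out.astype(dtype) if dtype is not None else out
```

Since each replay-buffer entry stores only references, `np.asarray(transition.state)` pays the stacking cost at sampling time instead of at insertion time.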
Zach Dwiel
6c79a442f2 Updated NEC and value optimization agents to work with recurrent middleware 2018-01-05 20:16:51 -05:00
Itai Caspi
125c7ee38d Release 0.9
Main changes are detailed below:

New features -
* CARLA 0.7 simulator integration
* Human control of the gameplay
* Recording of human gameplay and storing/loading the replay buffer
* Behavioral cloning agent and presets
* Golden tests for several presets
* Selecting between deep / shallow image embedders
* Rendering through pygame (with some boost in performance)

API changes -
* Improved environment wrapper API
* Added an evaluate flag to allow convenient evaluation of existing checkpoints
* Improved frameskip definition in Gym

Bug fixes -
* Fixed loading of checkpoints for agents with more than one network
* Fixed Python 3 compatibility for the N-Step Q learning agent
2017-12-19 19:27:16 +02:00
Itai Caspi
11faf19649 QR-DQN bug fix and improvements (#30)
* bug fix - QR-DQN was using the error instead of the absolute error in the quantile Huber loss

* improvement - QR-DQN now sorts the quantiles only once instead of batch_size times

* new feature - added the Breakout QRDQN preset (verified to achieve good results)
2017-11-29 14:01:59 +02:00
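The bug fixed in this commit concerns the asymmetric weight in the quantile Huber loss: the weight must use the sign of the TD error (via the absolute value |tau - 1{u<0}|), not the error itself. A minimal NumPy sketch of the correct form (an illustration of the QR-DQN loss, not Coach's actual code):

```python
import numpy as np

def huber(u, kappa=1.0):
    # Standard Huber loss: quadratic near zero, linear in the tails.
    abs_u = np.abs(u)
    return np.where(abs_u <= kappa,
                    0.5 * u ** 2,
                    kappa * (abs_u - 0.5 * kappa))

def quantile_huber(u, tau, kappa=1.0):
    # Asymmetric quantile weight |tau - 1{u < 0}| applied to the Huber
    # loss of the TD error u; the indicator uses the error's sign,
    # which is where the original bug was.
    u = np.asarray(u, dtype=float)
    return np.abs(tau - (u < 0)) * huber(u, kappa)
```

At tau = 0.5 the loss is symmetric in u; at tau = 0.9 a positive error of the same magnitude is penalized nine times more than a negative one, which is what pushes each output toward its target quantile.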
Itai Caspi
8d9ee4ea2b bug fix - fixed C51 presets hyperparameters 2017-11-10 13:22:42 +02:00
Itai Caspi
a8bce9828c new feature - implementation of Quantile Regression DQN (https://arxiv.org/pdf/1710.10044v1.pdf)
API change - Distributional DQN renamed to Categorical DQN
2017-11-01 15:09:07 +02:00
Itai Caspi
e38611b9eb bug fix - updating Doom_Health_DFP and Breakout_DQN presets 2017-10-31 10:54:14 +02:00
cxx
e33b0e8534 Fix preset mistakes. 2017-10-26 12:37:32 +03:00
Itai Caspi
43bc359166 updated atari presets with v4 environment ids 2017-10-23 14:14:09 +03:00
Gal Leibovich
1d4c3455e7 coach v0.8.0 2017-10-19 13:10:15 +03:00