mirror of https://github.com/gryf/coach.git synced 2025-12-17 11:10:20 +01:00
Commit Graph

11 Commits

Author SHA1 Message Date
Roman Dobosz
1b095aeeca Cleanup imports.
Until now, most modules imported all of another module's objects
(variables, classes, functions, even other imports) into their own
namespace, which could (and did) cause unintentional use of classes or
methods that were only imported indirectly.

With this patch, all star imports are replaced with an import of the
top-level module that provides the desired class or function.

In addition, all imports are sorted (where possible) the way PEP 8[1]
suggests: standard library imports first, then third-party imports
(numpy, tensorflow, etc.), and finally coach modules. The groups are
separated by one empty line; a sketch follows the reference below.

[1] https://www.python.org/dev/peps/pep-0008/#imports
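
A minimal sketch of the intended layout (module and class names here
are illustrative, not taken from the actual diff):

    # Standard library imports come first.
    import os

    # Third-party imports follow, after one empty line.
    import numpy as np
    import tensorflow as tf

    # Coach modules come last; the module is imported, not its contents.
    from agents import dqn_agent

    # Every use is now qualified, so the origin of each name is explicit,
    # instead of a bare DQNAgent pulled in by a star import.
    agent_class = dqn_agent.DQNAgent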
2018-04-13 09:58:40 +02:00
Itai Caspi
a7206ed702 Multiple improvements and bug fixes (#66)
* Multiple improvements and bug fixes:

    * Use lazy stacking to save memory when using a replay buffer
    * Remove step counting for evaluation episodes
    * Reset the game between heatup and training
    * Major bug fixes in NEC (it now reproduces the paper results for Pong)
    * Image input rescaling to 0-1 is now optional
    * Change the terminal title to the experiment name
    * Observation cropping for Atari is now optional
    * Added a random number of no-op actions for Gym to match the DQN paper
    * Fixed a bug where evaluation episodes wouldn't start with the maximum possible ALE lives
    * Added a script for plotting the results of an experiment over all the Atari games
2018-02-26 12:29:07 +02:00
Zach Dwiel
e1ad86417f fix n_step_q_agent 2018-02-21 10:05:57 -05:00
Zach Dwiel
39a28aba95 fix clipped ppo 2018-02-21 10:05:57 -05:00
Zach Dwiel
6c79a442f2 update nec and value optimization agents to work with recurrent middleware 2018-01-05 20:16:51 -05:00
Itai Caspi
125c7ee38d Release 0.9
Main changes are detailed below:

New features -
* CARLA 0.7 simulator integration
* Human control of gameplay
* Recording of human gameplay and storing/loading the replay buffer
* Behavioral cloning agent and presets
* Golden tests for several presets
* Selecting between deep / shallow image embedders
* Rendering through pygame (with a modest performance boost)

API changes -
* Improved environment wrapper API
* Added an evaluate flag to allow convenient evaluation of existing checkpoints
* Improved the frameskip definition in Gym

Bug fixes -
* Fixed loading of checkpoints for agents with more than one network
* Fixed the N-step Q learning agent's Python 3 compatibility
2017-12-19 19:27:16 +02:00
Itai Caspi
913ab75e8a bug fix - preventing crashes when the probability of one of the actions is 0 in the policy head 2017-10-31 10:51:48 +02:00
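
The commit message does not show the fix itself; a common guard against
this kind of crash (not necessarily the one used here) is clipping
probabilities away from zero before taking the log:

    import numpy as np

    def safe_log_prob(probs, eps=1e-10):
        # log(0) is -inf and can propagate NaNs through the policy loss;
        # clipping keeps every probability strictly positive.
        return np.log(np.clip(probs, eps, 1.0))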
cxx
f43c951c2d Unify base class using new-style (object). 2017-10-26 12:33:09 +03:00
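
For context: under Python 2 a class is only "new-style" when it
inherits from object, which is the form this commit standardizes on
(the class name below is illustrative):

    # Old-style (Python 2): class Memory: ...
    # New-style, as unified by this commit:
    class Memory(object):
        pass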
cclauss
10c139a28c Update utils.py 2017-10-22 07:42:33 +02:00
cclauss
6e9275edc3 Simplify w/ dict.get() default value, ternary if 2017-10-22 07:41:07 +02:00
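
An illustrative before/after for this kind of simplification (the names
are hypothetical, not from the diff):

    # Before: explicit membership test plus if/else
    if 'render' in params:
        render = params['render']
    else:
        render = False

    # After: dict.get() with a default, and a conditional expression
    render = params.get('render', False)
    mode = 'human' if render else 'headless'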
Gal Leibovich
1d4c3455e7 coach v0.8.0 2017-10-19 13:10:15 +03:00