* Change build_*_env jobs to pull the base image of the current "tag" instead of the "master" image
* Change the nightly flow so that build_*_env jobs are now gated by build_base (so the change in the previous bullet works in nightly)
* Bugfix in CheckpointDataStore: the call to object.__init__ was being made with parameters, which Python rejects (minimal repro below)
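The pitfall here is a Python one: object.__init__ rejects positional arguments, so a subclass whose base defines no __init__ must not forward parameters up the chain. A minimal repro with illustrative stand-in classes (not Coach's actual hierarchy):

```python
class DataStore:
    pass  # defines no __init__, so object.__init__ handles construction

class CheckpointDataStore(DataStore):
    def __init__(self, params):
        # Buggy form: super().__init__(params) reaches object.__init__,
        # which raises "TypeError: object.__init__() takes exactly one
        # argument" when given extra parameters.
        super().__init__()  # fixed: nothing forwarded
        self.params = params

store = CheckpointDataStore(params={"checkpoint_dir": "/tmp/ckpt"})
```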
* Disabling unstable Doom A3C and ACER golden tests
* Currently this is specific to the case of discretizing a continuous action space. It can easily be adapted to other cases by feeding the kNN from a different source and removing the discretizing output action filter (see the sketch below)
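A rough sketch of that discretization, assuming a 2-D Box-style action space; the grid construction and brute-force neighbour search are illustrative, not Coach's implementation:

```python
import numpy as np

# Bin a continuous action space into a grid of discrete actions, then map
# a continuous "proto action" to its k nearest grid points.
low, high, bins = np.array([-1.0, -1.0]), np.array([1.0, 1.0]), 5
axes = [np.linspace(l, h, bins) for l, h in zip(low, high)]
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 2)

def knn(proto_action, k=3):
    # brute force for clarity; a real index (e.g. a k-d tree) scales better
    dists = np.linalg.norm(grid - proto_action, axis=1)
    return grid[np.argsort(dists)[:k]]

print(knn(np.array([0.2, -0.7])))
```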
* GraphManager.set_session also sets self.sess
* make sure that GraphManager.fetch_from_worker uses training phase
* remove unnecessary phase setting in training worker
* reorganize rollout worker
* provide default name to GlobalVariableSaver.__init__ since it isn't really used anyway
* allow dividing TrainingSteps and EnvironmentSteps (see the sketch below)
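A hedged sketch of what dividing step counters could look like; Coach's real TrainingSteps/EnvironmentSteps classes carry more machinery, so treat the names and semantics here as illustrative:

```python
class Steps:
    def __init__(self, num_steps):
        self.num_steps = num_steps

    def __truediv__(self, other):
        # Steps / Steps -> plain ratio; Steps / number -> scaled counter
        if isinstance(other, Steps):
            return self.num_steps / other.num_steps
        return self.__class__(int(self.num_steps / other))

    def __repr__(self):
        return f"{type(self).__name__}({self.num_steps})"

class TrainingSteps(Steps):
    pass

class EnvironmentSteps(Steps):
    pass

print(EnvironmentSteps(10000) / 4)             # EnvironmentSteps(2500)
print(TrainingSteps(100) / TrainingSteps(25))  # 4.0
```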
* add timestamps to the log
* added a Redis data store (illustrative sketch below)
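An illustrative sketch of a Redis-backed store built on the redis-py client; the class and its save/load methods are hypothetical, not the actual Coach data-store interface:

```python
import pickle

import redis  # redis-py client

class RedisDataStore:
    """Hypothetical key/value store backed by a Redis server."""

    def __init__(self, host="localhost", port=6379):
        self.client = redis.Redis(host=host, port=port)

    def save(self, key, obj):
        self.client.set(key, pickle.dumps(obj))

    def load(self, key):
        blob = self.client.get(key)
        return pickle.loads(blob) if blob is not None else None
```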
* merge conflict fix
* SAC algorithm
* SAC - updates to the agent (learn_from_batch), sac_head and sac_q_head to fix a problem in the gradient calculation; the SAC agent is now able to train (see the sketch below).
gym_environment - fixing an error in accessing gym.spaces
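The gradient issue in SAC is usually about keeping the sampled action differentiable with respect to the policy parameters. A minimal sketch of that reparameterized path, written in PyTorch purely for brevity (Coach's heads are TensorFlow) and with a hypothetical critic stand-in:

```python
import torch
from torch.distributions import Normal

mu = torch.zeros(1, requires_grad=True)
log_std = torch.zeros(1, requires_grad=True)

dist = Normal(mu, log_std.exp())
u = dist.rsample()                 # reparameterized: u = mu + sigma * eps
a = torch.tanh(u)                  # squash into the action bounds
# tanh change-of-variables correction for the log-probability
log_prob = dist.log_prob(u) - torch.log(1 - a.pow(2) + 1e-6)

def q_value(action):               # hypothetical critic stand-in
    return -(action - 0.5).pow(2)

alpha = 0.2                        # entropy temperature
policy_loss = (alpha * log_prob - q_value(a)).mean()
policy_loss.backward()             # gradients now reach mu and log_std
print(mu.grad, log_std.grad)
```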
* Soft Actor Critic - code cleanup
* code cleanup
* V-head initialization fix
* SAC benchmarks
* SAC Documentation
* typo fix
* documentation fixes
* documentation and version update
* README typo
* introduce dockerfiles.
* ensure golden tests are run, not just collected.
* Skip CI download of dockerfiles.
* add StarCraft environment and tests.
* add StarCraft minimaps validation parameters.
* Add functional test running (from Ayoob)
* pin mujoco_py version to a 1.5 compatible release.
* fix config syntax issue.
* pin remaining mujoco_py install calls.
* Relax pin of gym version in gym Dockerfile.
* update makefile based on functional test filtering.
* integration test changes to override heatup to 1000 steps + run each preset for 30 sec (to make sure we reach the training part)
* fixes to failing presets uncovered with this change + changes in the golden testing to properly test BatchRL
* fix for Rainbow DQN
* fix to gym_environment (due to a change in Gym 0.12.1) + fix for Rainbow DQN + a bug fix in utils.squeeze_list
* fix for NEC agent
allowing the last training batch drawn to be smaller than batch_size + adding support for more agents in BatchRL by adding a softmax with temperature to the corresponding heads (see the sketch below) + adding a CartPole_QR_DQN preset with a golden test + cleanups
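The softmax-with-temperature mentioned above, in minimal numpy form (illustrative; Coach applies it inside the relevant heads):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    # temperature -> 0 approaches argmax; large values flatten the distribution
    z = logits / temperature
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

q_values = np.array([1.0, 2.0, 3.0])
print(softmax_with_temperature(q_values, temperature=0.5))
```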
* initial ACER commit
* Code cleanup + several fixes
* Q-retrace bug fix + small clean-ups (recursion sketch below)
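For reference, the Q-retrace recursion from the ACER paper, computed backwards over a trajectory. This is a hedged sketch with hypothetical argument names, not the fixed Coach code:

```python
import numpy as np

def q_retrace(rewards, q_sa, values, rhos, bootstrap_value, gamma=0.99):
    """rewards[t]; q_sa[t] = Q(s_t, a_t); values[t] = V(s_t);
    rhos[t] = pi(a_t|s_t) / mu(a_t|s_t); bootstrap_value = V of the
    state that follows the trajectory."""
    T = len(rewards)
    q_ret = np.zeros(T)
    carry = bootstrap_value
    for t in reversed(range(T)):
        q_ret[t] = rewards[t] + gamma * carry
        # the truncated importance weight propagates the correction backwards
        carry = min(rhos[t], 1.0) * (q_ret[t] - q_sa[t]) + values[t]
    return q_ret
```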
* added documentation for ACER
* ACER benchmarks
* update benchmarks table
* Add nightly running of golden and trace tests. (#202)
Resolves #200
* comment out nightly trace tests until values reset.
* remove redundant observe ignore (#168)
* ensure nightly test env containers exist. (#205)
Also bump integration test timeout
* wxPython removal (#207)
Replacing wxPython with Python's Tkinter (see the sketch below).
Also removing the option to choose multiple files, as it is unused and causes errors, and fixing the load file/directory spinner.
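A minimal sketch of the kind of Tkinter file/directory pickers that replace the wxPython dialogs (illustrative only; the dashboard's actual wiring differs):

```python
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.withdraw()  # hide the empty main window; only the dialogs are needed

path = filedialog.askopenfilename(title="Select an experiment file")
directory = filedialog.askdirectory(title="Select an experiment directory")
print(path, directory)
```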
* Create CONTRIBUTING.md (#210)
* Create CONTRIBUTING.md. Resolves #188
* run nightly golden tests sequentially. (#217)
Should reduce resource requirements and potential CPU contention but increases
overall execution time.
* tests: added new setup configuration + test args (#211)
- added utils for future tests and conftest
- added test args
* new docs build
* golden test update
* Adding mxnet components to rl_coach architectures.
- Supports PPO and DQN
- Tested with CartPole_PPO and CartPole_DQN
- Normalizing filters don't work right now (see #49) and are disabled in CartPole_PPO preset
- Checkpointing is disabled for MXNet
* Integrate coach.py params with distributed Coach.
* Minor improvements
- Use enums instead of constants.
- Reduce code duplication.
- Ask for the experiment name with a timeout.
* reordering of the episode reset operation and allowing to store episodes only when they are terminated
* revert tensorflow-gpu to 1.9.0 + bug fix in should_train()
* tests readme file and refactoring of policy optimization agent train function
* Update README.md
* Update README.md
* additional policy optimization train function simplifications
* Updated the traces after the reordering of the environment reset
* docker and jenkins files
* updated the traces to the ones from within the docker container
* updated traces and added control suite to the docker
* updated jenkins file with the intel proxy + updated doom basic a3c test params
* updated line breaks in jenkins file
* added a missing line break in jenkins file
* refining the presets ignored by the trace tests + adding a configurable beta entropy value
* switch the order of the trace and golden tests in jenkins + fix the issue of golden test processes not being killed
* updated benchmarks for dueling ddqn breakout and pong
* allowing dynamic updates to the loss weights + bug fix in episode.update_returns (see the sketch below)
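The discounted-return bookkeeping that episode.update_returns performs, as a standalone sketch (Coach's Episode class carries more state than this):

```python
import numpy as np

def update_returns(rewards, discount=0.99):
    # classic backwards pass: G_t = r_t + discount * G_{t+1}
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + discount * running
        returns[t] = running
    return returns

print(update_returns([1.0, 1.0, 1.0], discount=0.9))  # [2.71, 1.9, 1.0]
```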
* remove docker and jenkins file