1. Use the standard checkpoint prefix so that the data store can pick the checkpoint up and sync it to S3.
2. Remove the reference to Redis so that pickling will not try to serialize it (see the sketch after this list).
3. Enable restoring a checkpoint saved by a single-node, multi-worker run into a single-worker run.
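A minimal sketch of item 2, assuming a hypothetical memory class that holds the Redis handle under `redis_connection` (names are illustrative, not Coach's actual attributes):

```python
import pickle


class CheckpointableMemory(object):
    """Illustrative only: drop the live Redis handle so the object pickles cleanly."""

    def __init__(self, redis_connection=None):
        self.redis_connection = redis_connection  # hypothetical attribute name
        self.episodes = []

    def __getstate__(self):
        # Copy the state and blank out the un-picklable Redis connection.
        state = self.__dict__.copy()
        state['redis_connection'] = None
        return state


# Pickling now succeeds even while an un-picklable connection object is held.
blob = pickle.dumps(CheckpointableMemory(redis_connection=lambda: None))
```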
* Adding checkpointing framework as well as mxnet checkpointing implementation.
- An MXNet checkpoint for each network is saved in a separate file (illustrated below).
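For instance, saving each Gluon network to its own parameter file could look like the following sketch (the network names and file-name pattern are assumptions, not Coach's exact scheme):

```python
import mxnet as mx
from mxnet.gluon import nn

# Hypothetical per-network checkpoint: one .params file per network.
networks = {'online': nn.Dense(4), 'target': nn.Dense(4)}

for name, net in networks.items():
    net.initialize(mx.init.Xavier())
    net(mx.nd.zeros((1, 8)))  # forward once so deferred shapes are materialized
    net.save_parameters('0_checkpoint_{}.params'.format(name))

# Restoring mirrors the save: each network loads its own file.
networks['online'].load_parameters('0_checkpoint_online.params')
```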
* Adding checkpoint restore for mxnet to graph-manager
* Add unit-test for get_checkpoint_state()
* Added match.group() to fix a unit test failing on CI
* Added ONNX export support for MXNet (see the export sketch below)
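MXNet-to-ONNX conversion goes through `mxnet.contrib.onnx.export_model`; a sketch under the assumption that a symbol/params pair has already been checkpointed to disk (paths and input shape are placeholders, and the `onnx` package must be installed):

```python
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Paths and input shape below are placeholders.
onnx_path = onnx_mxnet.export_model(
    sym='./model-symbol.json',
    params='./model-0000.params',
    input_shape=[(1, 3, 224, 224)],
    input_type=np.float32,
    onnx_file_path='./model.onnx',
)
print('exported to', onnx_path)
```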
* Allow arbitrary-dimensional observations (neither vector nor image)
* Added creating a PlanarMapsObservationSpace in GymEnvironment when the number of channels is not 1 or 3 (see the sketch below)
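The channel-count rule can be expressed roughly as below; the class names come from `rl_coach.spaces`, but the exact constructor arguments used here are assumptions:

```python
import numpy as np
from rl_coach.spaces import ImageObservationSpace, PlanarMapsObservationSpace


def build_observation_space(shape, high=255):
    # 1- or 3-channel observations are treated as images; anything else
    # becomes a stack of planar maps.
    if len(shape) == 3 and shape[-1] in (1, 3):
        return ImageObservationSpace(shape=np.array(shape), high=high)
    return PlanarMapsObservationSpace(shape=np.array(shape), low=0, high=high)
```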
* Added NormalizedRSSInitializer, using the same method as the TensorFlow backend, but renamed since "columns" have a different meaning in a dense layer's weight matrix in MXNet (sketch below).
* Added unit test for NormalizedRSSInitializer.
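A minimal sketch of what such an initializer might look like in MXNet; the scaling follows the TensorFlow normalized-columns idea applied to rows, and is not necessarily Coach's exact implementation:

```python
import numpy as np
import mxnet as mx


@mx.init.register
class NormalizedRSSInitializer(mx.init.Initializer):
    """Scale each weight row so its sum of squares equals std**2.

    MXNet dense weights have shape (out_units, in_units), so the rows here
    play the role that columns play in the TensorFlow version.
    """

    def __init__(self, std=1.0):
        super(NormalizedRSSInitializer, self).__init__(std=std)
        self.std = std

    def _init_weight(self, name, arr):
        data = np.random.randn(*arr.shape)
        data *= self.std / np.sqrt(np.square(data).sum(axis=1, keepdims=True))
        arr[:] = data
```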
* Changes required for Continuous PPO Head with MXNet. Used in MountainCarContinuous_ClippedPPO.
* Simplified the changes for continuous PPO.
* Cleaned up to avoid duplicate code, and simplified covariance creation (see the policy head sketch below).
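A rough sketch of the resulting head: a dense layer produces the mean, and a learnable per-action log-std gives a diagonal covariance (structure assumed, not Coach's exact code):

```python
import mxnet as mx
from mxnet.gluon import nn


class ContinuousPPOHead(nn.HybridBlock):
    """Diagonal-Gaussian policy head sketch: mean from a dense layer,
    standard deviation from a learnable per-action log-std parameter."""

    def __init__(self, num_actions, **kwargs):
        super(ContinuousPPOHead, self).__init__(**kwargs)
        with self.name_scope():
            self.mean = nn.Dense(num_actions)
            self.log_std = self.params.get('log_std', shape=(num_actions,),
                                           init=mx.init.Zero())

    def hybrid_forward(self, F, x, log_std):
        policy_mean = self.mean(x)
        policy_std = F.exp(log_std)  # one value per action, shared across the batch
        return policy_mean, policy_std


head = ContinuousPPOHead(num_actions=1)  # e.g. MountainCarContinuous has one action
head.initialize()
mean, std = head(mx.nd.zeros((32, 64)))
```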
* updating the documentation website
* adding the built docs
* update of API docstrings across coach and tutorials 0-2
* added some missing api documentation
* New Sphinx based documentation
Adding mxnet components to rl_coach architectures.
- Supports PPO and DQN
- Tested with CartPole_PPO and CartPole_DQN
- Normalizing filters don't work right now (see #49) and are disabled in the CartPole_PPO preset
- Checkpointing is disabled for MXNet
* refactoring the merging of the task parameters and the command line parameters
* removing some unused command line arguments
* fix for saving checkpoints when not passing through coach.py
NOTE: the TensorFlow framework works fine if MXNet is not installed in the environment, but MXNet will not work unless TensorFlow is also installed, because of the code in network_wrapper (see the guarded-import sketch below).
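A guarded import along these lines would remove that coupling; this is a sketch, not the current network_wrapper code, and the function and dict names are illustrative:

```python
# Sketch of a guarded framework import.
try:
    import tensorflow as tf
except ImportError:
    tf = None

try:
    import mxnet as mx
except ImportError:
    mx = None


def get_framework_module(framework_name):
    """Return the requested framework module, failing only if it is truly missing."""
    available = {'tensorflow': tf, 'mxnet': mx}
    module = available.get(framework_name)
    if module is None:
        raise ImportError("framework '{}' is not installed".format(framework_name))
    return module
```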
* reordering of the episode reset operation and allowing episodes to be stored only once they are terminated
* revert tensorflow-gpu to 1.9.0 + bug fix in should_train()
* tests README file and refactoring of the policy optimization agent's train function
* Update README.md
* Update README.md
* additional policy optimization train function simplifications
* Updated the traces after the reordering of the environment reset
* docker and jenkins files
* updated the traces to the ones from within the docker container
* updated traces and added control suite to the docker
* updated jenkins file with the intel proxy + updated doom basic a3c test params
* updated line breaks in jenkins file
* added a missing line break in jenkins file
* refining trace tests ignored presets + adding a configurable beta entropy value
* switch the order of trace and golden tests in jenkins + fix the issue of golden test processes not being killed
* updated benchmarks for dueling ddqn breakout and pong
* allowing dynamic updates to the loss weights + bug fix in episode.update_returns
* remove docker and jenkins file