* GraphManager.set_session also sets self.sess
* make sure that GraphManager.fetch_from_worker uses the training phase
* remove unnecessary phase setting in training worker
* reorganize rollout worker
* provide a default name to GlobalVariableSaver.__init__ since the name isn't really used anyway
* allow dividing TrainingSteps and EnvironmentSteps (see the sketch after this list)
* add timestamps to the log
* add Redis data store
* fix merge conflict
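
For the step-division change, a minimal sketch of what dividing a step type might look like. The class internals and the `num_steps` attribute are assumptions for illustration, not the actual implementation:

```python
class StepMethod:
    """Hypothetical base class holding an integer step count."""
    def __init__(self, num_steps: int):
        self.num_steps = num_steps

    def __truediv__(self, divisor: int):
        # Return the same step type with the count divided, e.g. to
        # split a schedule evenly across workers.
        return self.__class__(self.num_steps // divisor)

class TrainingSteps(StepMethod):
    pass

class EnvironmentSteps(StepMethod):
    pass

# Usage: split 1000 environment steps across 4 rollout workers.
per_worker = EnvironmentSteps(1000) / 4  # -> EnvironmentSteps(250)
```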
ISSUE: When we restore checkpoints, we create new nodes in the
TensorFlow graph. This happens when we assign a new value (an op node)
to a RefVariable in GlobalVariableSaver. With every restore, the TF
graph grows: new assign nodes are added and the old, unused nodes are
never removed from the graph. This causes a memory leak in the
restore_checkpoint code path.
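
For illustration, a minimal sketch of the leaky pattern (variable and function names are hypothetical; the real code lives in GlobalVariableSaver):

```python
import tensorflow as tf  # TF 1.x-style graph mode

var = tf.get_variable("weights", shape=[2, 2])

def restore_checkpoint_leaky(sess, new_value):
    # BUG: tf.assign adds a brand-new assign op (plus a node holding
    # new_value) to the graph on every call; nothing is ever removed,
    # so repeated restores grow the graph and leak memory.
    sess.run(tf.assign(var, new_value))
```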
FIX: We update the variables through a TF placeholder instead. The
placeholder and its assign op are created once, at graph-construction
time, so repeated restores no longer add nodes and the memory leak is
avoided.
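
A minimal sketch of the fix, again with assumed names: the placeholder and assign op are built once, and each restore only feeds new values through them:

```python
import tensorflow as tf  # TF 1.x-style graph mode

var = tf.get_variable("weights", shape=[2, 2])

# Built exactly once, at graph-construction time.
value_ph = tf.placeholder(var.dtype.base_dtype, shape=var.get_shape())
assign_op = tf.assign(var, value_ph)

def restore_checkpoint_fixed(sess, new_value):
    # Re-running the same assign op with a feed_dict adds no new nodes,
    # so the graph size stays constant across restores.
    sess.run(assign_op, feed_dict={value_ph: new_value})
```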