mirror of
https://github.com/gryf/coach.git
synced 2025-12-17 11:10:20 +01:00
* Add Robosuite parameters for all env types + initialize env flow
* Init flow done
* Rest of Environment API complete for RobosuiteEnvironment
* RobosuiteEnvironment changes
* Observation stacking filter
* Add proper frame_skip in addition to control_freq
* Hardcode Coach rendering to 'frontview' camera
* Robosuite_Lift_DDPG preset + Robosuite env updates
* Move observation stacking filter from env to preset
* Pre-process observation: concatenate depth map (if it exists) to image, and object state (if it exists) to robot state
* Preset parameters based on Surreal DDPG parameters, taken from: https://github.com/SurrealAI/surreal/blob/master/surreal/main/ddpg_configs.py
* RobosuiteEnvironment fixes - now working with PyGame rendering
* Minor preset modifications
* ObservationStackingFilter - option to concat non-vector observations
* Consider frame skip when setting horizon in Robosuite env
* Robosuite lift preset - update heatup length and training interval
* Robosuite env - change control_freq to 10 to match Surreal usage
* Robosuite clipped PPO preset
* Distribute multiple workers (-n #) over multiple GPUs
* Clipped PPO memory optimization from @shadiendrawis
* Fixes to evaluation-only workers
* RoboSuite_ClippedPPO: update training interval
* Undo last commit (update training interval)
* Fix "double-negative" if conditions
* Multi-agent single-trainer (MAST) clipped PPO training with CartPole
* Cleanups (not done yet) + roughly tuned hyper-params for MAST
* Switch to Robosuite v1 APIs
* Change presets to IK controller
* More cleanups + enable evaluation worker + better logging
* RoboSuite_Lift_ClippedPPO updates
* Fix major bug in observation normalization filter setup
* Reduce coupling between the Robosuite API and the Coach environment:
  * Only non task-specific parameters are now explicitly defined in Coach
  * Removed a bunch of enums of Robosuite elements, using simple strings instead
  * With this change, new environments/robots/controllers in Robosuite can be used immediately in Coach
* MAST: better logging of actor-trainer interaction + bug fixes + performance improvements. Still missing: fixed pubsub for obs normalization running stats + logging for trainer signals
* LSTM support for PPO
* Set JOINT VELOCITY action space by default + fix for EveryNEpisodes video dump filter + new TaskIDDumpFilter + allow OR between video dump filters
* Separate Robosuite clipped PPO preset for the non-MAST case
* Add flatten layer to architectures and use it in Robosuite presets. This is required for embedders that mix conv and dense layers. TODO: add MXNet implementation
* Publish running_stats together with the published policy + hyper-param for when to publish a policy + cleanups
* Bug-fix for memory leak in MAST
* Bugfix: return value in TF BatchnormActivationDropout.to_tf_instance
* Explicit activations in embedder scheme so there's no ReLU after flatten
* Add clipped PPO heads with configurable dense layers at the beginning:
  * This is a workaround needed to mimic Surreal-PPO, where the CNN and LSTM are shared between actor and critic but the FC layers are not shared
  * Added a "SchemeBuilder" class, currently only used for the new heads, but we can change the Middleware and Embedder implementations to use it as well
* Video dump setting fix in basic preset
* Log screen output to file
* Coach starts the redis-server for a MAST run
* Trainer drops off-policy data + old policy in ClippedPPO updates only after the policy was published + log free memory stats + actors check for a new policy only at the beginning of a new episode + fixed a bug where the trainer was logging "Training Reward = 0", causing dashboard to incorrectly display the signal
* Add missing set_internal_state function in TFSharedRunningStats
* Robosuite preset - use SingleLevelSelect instead of a hard-coded level
* Policy ID published directly on Redis
* Small fix when writing to the log file
* Major bugfix in Robosuite presets - pass dense sizes to heads
* RoboSuite_Lift_ClippedPPO hyper-params update
* Add horizon and value bootstrap to GAE calculation, fix A3C with LSTM
* Adam hyper-params from MuJoCo
* Updated MAST preset with IK_POSE_POS controller
* Configurable initialization for policy stdev + custom extra noise per actor + logging of policy stdev to dashboard
* Value loss weighting of 0.5
* Minor fixes + presets
* Bug-fix for MAST where the old policy in the trainer kept updating every training iteration while it should only update after every policy publish
* Bug-fix: reset_internal_state was not called by the trainer
* Bug-fixes in the LSTM flow + some hyper-param adjustments for CartPole_ClippedPPO_LSTM -> training and sometimes reaches 200
* Add back the horizon hyper-param - a messy commit
* Another bug-fix missing from the previous commit
* Set control_freq=2 to match action_scale 0.125
* ClippedPPO with MAST cleanups and some preparations for TD3 with MAST
* TD3 presets. RoboSuite_Lift_TD3 seems to work well with multi-process runs (-n 8)
* Set termination on collision to be on by default
* Bug-fix following the commit before last
* Initial cube exploration environment with TD3
* Bug fix + minor refactoring
* Several parameter changes and RND debugging
* Robosuite Gym wrapper + rename TD3_Random* -> Random*
* Algorithm update
* Add Robosuite v1 env + presets (to eventually replace the non-v1 ones)
* Remove grasping presets, keep only V1 exp. presets (w/o V1 tag)
* Keep just the Robosuite v1 env as the 'robosuite_environment' module
* Exclude Robosuite and MAST presets from integration tests
* Exclude LSTM and MAST presets from golden tests
* Fix mistakenly removed import
* Revert debug changes in ReaderWriterLock
* Try another way to exclude LSTM/MAST golden tests
* Remove debug prints
* Remove PreDense heads, unused in the end
* Missed removing an instance of the PreDense head
* Remove MAST, not required for this PR
* Undo unused concat option in ObservationStackingFilter
* Remove LSTM updates, not required in this PR
* Update README.md
* Code changes for the exploration flow to work with the Robosuite master branch
* Code cleanup + documentation
* Jupyter tutorial for the goal-based exploration + scatter plot
* Typo fix
* Update README.md
* Separate parameter for the obs-goal observation + small fixes
* Code clarity fixes
* Adjustment in tutorial 5
* Update tutorial
* Update tutorial

Co-authored-by: Guy Jacob <guy.jacob@intel.com>
Co-authored-by: Gal Leibovich <gal.leibovich@intel.com>
Co-authored-by: shadi.endrawis <sendrawi@aipg-ra-skx-03.ra.intel.com>
191 lines
7.9 KiB
Python
#
# Copyright (c) 2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import numpy as np
import tensorflow as tf
from tensorflow.python.ops.losses.losses_impl import Reduction

from rl_coach.architectures.tensorflow_components.layers import Dense, convert_layer_class
from rl_coach.base_parameters import AgentParameters
from rl_coach.spaces import SpacesDefinition
from rl_coach.utils import force_list
from rl_coach.architectures.tensorflow_components.utils import squeeze_tensor

# Used to initialize weights for policy and value output layers
def normalized_columns_initializer(std=1.0):
    def _initializer(shape, dtype=None, partition_info=None):
        out = np.random.randn(*shape).astype(np.float32)
        out *= std / np.sqrt(np.square(out).sum(axis=0, keepdims=True))
        return tf.constant(out)
    return _initializer
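
# Illustrative usage (editor's note, not part of the original module): create a
# policy output layer whose weight columns are scaled to norm 0.01, a common
# choice for policy heads. 'policy_w' and 'num_actions' are hypothetical names.
#
#   policy_w = tf.get_variable('policy_w', shape=[256, num_actions],
#                              initializer=normalized_columns_initializer(0.01))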

# Used to initialize RND network parameters
class Orthogonal(tf.initializers.orthogonal):
    def __init__(self, gain=1.0):
        super().__init__(gain=gain)

    def __call__(self, shape, dtype=None, partition_info=None):
        shape = tuple(shape)
        if len(shape) == 2:
            flat_shape = shape
        elif len(shape) == 4:  # assumes NHWC
            flat_shape = (np.prod(shape[:-1]), shape[-1])
        else:
            raise NotImplementedError
        a = np.random.normal(0.0, 1.0, flat_shape)
        u, _, v = np.linalg.svd(a, full_matrices=False)
        q = u if u.shape == flat_shape else v  # pick the one with the correct shape
        q = q.reshape(shape)
        return (self.gain * q[:shape[0], :shape[1]]).astype(np.float32)

    def get_config(self):
        return {"gain": self.gain}


class Head(object):
    """
    A head is the final part of the network. It takes the embedding from the middleware embedder and passes it through
    a neural network to produce the output of the network. There can be multiple heads in a network, and each one has
    an assigned loss function. The heads are algorithm dependent.
    """
    def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
                 head_idx: int=0, loss_weight: float=1., is_local: bool=True, activation_function: str='relu',
                 dense_layer=Dense, is_training=False):
        self.head_idx = head_idx
        self.network_name = network_name
        self.network_parameters = agent_parameters.network_wrappers[self.network_name]
        self.name = "head"
        self.output = []
        self.loss = []
        self.loss_type = []
        self.regularizations = []
        self.loss_weight = tf.Variable([float(w) for w in force_list(loss_weight)],
                                       trainable=False, collections=[tf.GraphKeys.LOCAL_VARIABLES])
        self.target = []
        self.importance_weight = []
        self.input = []
        self.is_local = is_local
        self.ap = agent_parameters
        self.spaces = spaces
        self.return_type = None
        self.activation_function = activation_function
        self.dense_layer = dense_layer
        if self.dense_layer is None:
            self.dense_layer = Dense
        else:
            self.dense_layer = convert_layer_class(self.dense_layer)
        self.is_training = is_training
    def __call__(self, input_layer):
        """
        Wrapper for building the module graph including scoping and loss creation
        :param input_layer: the input to the graph
        :return: the output of the last layer and the target placeholder
        """
        with tf.variable_scope(self.get_name(), initializer=tf.contrib.layers.xavier_initializer()):
            self._build_module(squeeze_tensor(input_layer))

            self.output = force_list(self.output)
            self.target = force_list(self.target)
            self.input = force_list(self.input)
            self.loss_type = force_list(self.loss_type)
            self.loss = force_list(self.loss)
            self.regularizations = force_list(self.regularizations)
            if self.is_local:
                self.set_loss()
            self._post_build()

        if self.is_local:
            return self.output, self.target, self.input, self.importance_weight
        else:
            return self.output, self.input

    def _build_module(self, input_layer):
        """
        Builds the graph of the module
        This method is called early on from __call__. It is expected to store the graph
        in self.output.
        :param input_layer: the input to the graph
        :return: None
        """
        pass
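
    # Editor's sketch (hypothetical, not part of the original module): a minimal
    # subclass overrides _build_module, stores its layers' result in self.output
    # and declares a loss type for set_loss() to attach, e.g.:
    #
    #   class MyQHead(Head):
    #       def _build_module(self, input_layer):
    #           num_actions = len(self.spaces.action.actions)
    #           self.output = self.dense_layer(num_actions)(input_layer, name='output')
    #           self.loss_type = tf.losses.mean_squared_error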

    def _post_build(self):
        """
        Optional function that allows adding any extra definitions after the head has been fully defined
        For example, this allows doing additional calculations that are based on the loss
        :return: None
        """
        pass

    def get_name(self):
        """
        Get a formatted name for the module
        :return: the formatted name
        """
        return '{}_{}'.format(self.name, self.head_idx)

    def set_loss(self):
        """
        Creates a target placeholder and loss function for each loss_type and regularization
        :return: None
        """

        # there are heads that define the loss internally, but we need to create additional placeholders for them
        for idx in range(len(self.loss)):
            importance_weight = tf.placeholder('float',
                                               [None] + [1] * (len(self.target[idx].shape) - 1),
                                               '{}_importance_weight'.format(self.get_name()))
            self.importance_weight.append(importance_weight)

        # add losses and target placeholders
        for idx in range(len(self.loss_type)):
            # create target placeholder
            target = tf.placeholder('float', self.output[idx].shape, '{}_target'.format(self.get_name()))
            self.target.append(target)

            # create importance sampling weights placeholder
            num_target_dims = len(self.target[idx].shape)
            importance_weight = tf.placeholder('float', [None] + [1] * (num_target_dims - 1),
                                               '{}_importance_weight'.format(self.get_name()))
            self.importance_weight.append(importance_weight)

            # compute the weighted loss. importance_weight weights the samples in the batch, while self.loss_weight
            # weights the specific loss of this head against other losses in this head or in other heads
            loss_weight = self.loss_weight[idx] * importance_weight
            loss = self.loss_type[idx](self.target[-1], self.output[idx],
                                       scope=self.get_name(), reduction=Reduction.NONE, loss_collection=None)

            # the loss is first summed over each sample in the batch and then the mean over the batch is taken
            loss = tf.reduce_mean(loss_weight * tf.reduce_sum(loss, axis=list(range(1, num_target_dims))))

            # we add the loss to the losses collection and later we will extract it in general_network
            tf.losses.add_loss(loss)
            self.loss.append(loss)

        # add regularizations
        for regularization in self.regularizations:
            self.loss.append(regularization)
            tf.losses.add_loss(regularization)
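
    # Editor's note (illustrative, not from the original source): for per-element
    # losses of shape [B, D], the reduction above computes
    #   loss = mean_over_batch(loss_weight * importance_weight * sum_over_D(loss))
    # e.g. with losses [[1, 2], [3, 4]], importance weights [1, 0.5] and
    # loss_weight 1, the head loss is mean([3 * 1, 7 * 0.5]) = 3.25.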

    @classmethod
    def path(cls):
        # cls is already the class in a classmethod; cls.__class__.__name__ would
        # return the metaclass name rather than the head's class name
        return cls.__name__