Mirror of https://github.com/gryf/coach.git (synced 2026-02-14 12:55:51 +01:00)
Robosuite exploration (#478)
* Add Robosuite parameters for all env types + initialize env flow
* Init flow done
* Rest of Environment API complete for RobosuiteEnvironment
* RobosuiteEnvironment changes
* Observation stacking filter
* Add proper frame_skip in addition to control_freq
* Hardcode Coach rendering to 'frontview' camera
* Robosuite_Lift_DDPG preset + Robosuite env updates
* Move observation stacking filter from env to preset
* Pre-process observation - concatenate depth map (if exists) to image and object state (if exists) to robot state
* Preset parameters based on Surreal DDPG parameters, taken from: https://github.com/SurrealAI/surreal/blob/master/surreal/main/ddpg_configs.py
* RobosuiteEnvironment fixes - working now with PyGame rendering
* Preset minor modifications
* ObservationStackingFilter - option to concat non-vector observations
* Consider frame skip when setting horizon in robosuite env
* Robosuite lift preset - update heatup length and training interval
* Robosuite env - change control_freq to 10 to match Surreal usage
* Robosuite clipped PPO preset
* Distribute multiple workers (-n #) over multiple GPUs
* Clipped PPO memory optimization from @shadiendrawis
* Fixes to evaluation-only workers
* RoboSuite_ClippedPPO: Update training interval
* Undo last commit (update training interval)
* Fix "double-negative" if conditions
* Multi-agent single-trainer (MAST) clipped PPO training with CartPole
* Cleanups (not done yet) + ~tuned hyper-params for MAST
* Switch to Robosuite v1 APIs
* Change presets to IK controller
* More cleanups + enable evaluation worker + better logging
* RoboSuite_Lift_ClippedPPO updates
* Fix major bug in obs normalization filter setup
* Reduce coupling between Robosuite API and Coach environment: only non task-specific parameters are explicitly defined in Coach; removed a bunch of enums of Robosuite elements, using simple strings instead. With this change, new environments/robots/controllers in Robosuite can be used immediately in Coach
* MAST: better logging of actor-trainer interaction + bug fixes + performance improvements. Still missing: fixed pubsub for obs normalization running stats + logging for trainer signals
* LSTM support for PPO
* Set JOINT_VELOCITY action space by default + fix for EveryNEpisodes video dump filter + new TaskIDDumpFilter + allow OR between video dump filters
* Separate Robosuite clipped PPO preset for the non-MAST case
* Add flatten layer to architectures and use it in Robosuite presets; this is required for embedders that mix conv and dense (TODO: add MXNet implementation)
* Publish running_stats together with the published policy + hyper-param for when to publish a policy + cleanups
* Bug-fix for memory leak in MAST
* Bugfix: return value in TF BatchnormActivationDropout.to_tf_instance
* Explicit activations in embedder scheme so there's no ReLU after flatten
* Add clipped PPO heads with configurable dense layers at the beginning; this is a workaround needed to mimic Surreal-PPO, where the CNN and LSTM are shared between actor and critic but the FC layers are not shared. Added a "SchemeBuilder" class, currently only used for the new heads, but we can change Middleware and Embedder implementations to use it as well
* Video dump setting fix in basic preset
* Log screen output to file
* Coach to start the redis-server for a MAST run
* Trainer drops off-policy data + old policy in ClippedPPO updates only after policy was published + log free memory stats + actors check for a new policy only at the beginning of a new episode + fixed a bug where the trainer was logging "Training Reward = 0", causing dashboard to incorrectly display the signal
* Add missing set_internal_state function in TFSharedRunningStats
* Robosuite preset - use SingleLevelSelection instead of hard-coded level
* Policy ID published directly on Redis
* Small fix when writing to log file
* Major bugfix in Robosuite presets - pass dense sizes to heads
* RoboSuite_Lift_ClippedPPO hyper-params update
* Add horizon and value bootstrap to GAE calculation, fix A3C with LSTM
* Adam hyper-params from MuJoCo
* Updated MAST preset with IK_POSE_POS controller
* Configurable initialization for policy stdev + custom extra noise per actor + logging of policy stdev to dashboard
* Values loss weighting of 0.5
* Minor fixes + presets
* Bug-fix for MAST where the old policy in the trainer had kept updating every training iteration while it should only update after every policy publish
* Bug-fix: reset_internal_state was not called by the trainer
* Bug-fixes in the LSTM flow + some hyper-param adjustments for CartPole_ClippedPPO_LSTM -> trains and sometimes reaches 200
* Add back the horizon hyper-param - a messy commit
* Another bug-fix missing from previous commit
* Set control_freq=2 to match action_scale 0.125
* ClippedPPO with MAST cleanups and some preps for TD3 with MAST
* TD3 presets; RoboSuite_Lift_TD3 seems to work well with multi-process runs (-n 8)
* Set termination on collision to be on by default
* Bug-fix following prev-prev commit
* Initial cube exploration environment with TD3
* Bug fix + minor refactoring
* Several parameter changes and RND debugging
* Robosuite Gym wrapper + rename TD3_Random* -> Random*
* Algorithm update
* Add Robosuite v1 env + presets (to eventually replace non-v1 ones)
* Remove grasping presets, keep only v1 exploration presets (w/o V1 tag)
* Keep just the Robosuite v1 env as the 'robosuite_environment' module
* Exclude Robosuite and MAST presets from integration tests
* Exclude LSTM and MAST presets from golden tests
* Fix mistakenly removed import
* Revert debug changes in ReaderWriterLock
* Try another way to exclude LSTM/MAST golden tests
* Remove debug prints
* Remove PreDense heads, unused in the end
* Missed removing an instance of PreDense head
* Remove MAST, not required for this PR
* Undo unused concat option in ObservationStackingFilter
* Remove LSTM updates, not required in this PR
* Update README.md
* Code changes for the exploration flow to work with robosuite master branch
* Code cleanup + documentation
* Jupyter tutorial for the goal-based exploration + scatter plot
* Typo fix
* Update README.md
* Separate parameter for the obs-goal observation + small fixes
* Code clarity fixes
* Adjustment in tutorial 5
* Update tutorial
* Update tutorial

Co-authored-by: Guy Jacob <guy.jacob@intel.com>
Co-authored-by: Gal Leibovich <gal.leibovich@intel.com>
Co-authored-by: shadi.endrawis <sendrawi@aipg-ra-skx-03.ra.intel.com>
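For orientation, wiring the new environment into a Coach preset would look roughly like the sketch below. This is a hedged example rather than a preset from this PR; the 'Lift' level, 'Panda' robot and 'OSC_POSE' controller values are illustrative assumptions.

from rl_coach.environments.robosuite_environment import \
    RobosuiteEnvironmentParameters, OptionalObservations

# Environment, robot and controller are plain strings, validated against
# robosuite.ALL_ENVIRONMENTS / ALL_ROBOTS / ALL_CONTROLLERS when the env is built.
env_params = RobosuiteEnvironmentParameters(level='Lift', robot='Panda', controller='OSC_POSE')
# Request camera and object observations on top of the always-present robot state.
env_params.base_parameters.optional_observations = \
    OptionalObservations.CAMERA | OptionalObservations.OBJECT
env_params.base_parameters.horizon = 200  # illustrative value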
@@ -151,8 +151,8 @@ class DoomEnvironment(Environment):
        Each camera should be an enum from CameraTypes, and there are several options like an RGB observation,
        a depth map, a segmentation map, and a top down map of the environment.

        :param target_success_rate: (float)
            Stop experiment if given target success rate was achieved.
        :param target_success_rate: (float)
            Stop experiment if given target success rate was achieved.

        """
        super().__init__(level, seed, frame_skip, human_control, custom_reward_threshold, visualization_parameters, target_success_rate)

@@ -47,20 +47,21 @@ class LevelSelection(object):


class SingleLevelSelection(LevelSelection):
    def __init__(self, levels: Union[str, List[str], Dict[str, str]]):
    def __init__(self, levels: Union[str, List[str], Dict[str, str]], force_lower=True):
        super().__init__(None)
        self.levels = levels
        if isinstance(levels, list):
            self.levels = {level: level for level in levels}
        if isinstance(levels, str):
            self.levels = {levels: levels}
        self.force_lower = force_lower

    def __str__(self):
        if self.selected_level is None:
            logger.screen.error("No level has been selected. Please select a level using the -lvl command line flag, "
                                "or change the level in the preset. \nThe available levels are: \n{}"
                                .format(', '.join(sorted(self.levels.keys()))), crash=True)
        selected_level = self.selected_level.lower()
        selected_level = self.selected_level.lower() if self.force_lower else self.selected_level
        if selected_level not in self.levels.keys():
            logger.screen.error("The selected level ({}) is not part of the available levels ({})"
                                .format(selected_level, ', '.join(self.levels.keys())), crash=True)

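A minimal usage sketch of the new force_lower flag follows (the level names are illustrative; the import path assumes SingleLevelSelection lives next to LevelSelection in rl_coach.environments.environment):

from rl_coach.environments.environment import SingleLevelSelection

# Default behaviour is unchanged: the selected level is lower-cased before lookup.
level = SingleLevelSelection(['cartpole', 'pendulum'])
# With force_lower=False the selected level is matched case-sensitively against the keys.
level = SingleLevelSelection({'CubeExp': 'CubeExp'}, force_lower=False)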
187  rl_coach/environments/robosuite/cube_exp.py  Normal file
@@ -0,0 +1,187 @@
#
# Copyright (c) 2021 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import numpy as np

from robosuite.utils.mjcf_utils import CustomMaterial

from robosuite.environments.manipulation.single_arm_env import SingleArmEnv
from robosuite.environments.manipulation.lift import Lift

from robosuite.models.arenas import TableArena
from robosuite.models.objects import BoxObject
from robosuite.models.tasks import ManipulationTask
from robosuite.utils.placement_samplers import UniformRandomSampler

TABLE_TOP_SIZE = (0.84, 1.25, 0.05)
TABLE_OFFSET = (0, 0, 0.82)


class CubeExp(Lift):
    """
    This class corresponds to multi-colored cube exploration for a single robot arm.
    """

    def __init__(
        self,
        robots,
        table_full_size=TABLE_TOP_SIZE,
        table_offset=TABLE_OFFSET,
        placement_initializer=None,
        penalize_reward_on_collision=False,
        end_episode_on_collision=False,
        **kwargs
    ):
        """
        Args:
            robots (str or list of str): Specification for specific robot arm(s) to be instantiated within this env
                (e.g: "Sawyer" would generate one arm; ["Panda", "Panda", "Sawyer"] would generate three robot arms)
                Note: Must be a single single-arm robot!
            table_full_size (3-tuple): x, y, and z dimensions of the table.
            placement_initializer (ObjectPositionSampler instance): if provided, will
                be used to place objects on every reset, else a UniformRandomSampler
                is used by default.
            Rest of kwargs follow Lift class arguments
        """
        if placement_initializer is None:
            placement_initializer = UniformRandomSampler(
                name="ObjectSampler",
                x_range=[0.0, 0.0],
                y_range=[0.0, 0.0],
                rotation=(0.0, 0.0),
                ensure_object_boundary_in_range=False,
                ensure_valid_placement=True,
                reference_pos=table_offset,
                z_offset=0.9,
            )

        super().__init__(
            robots=robots,
            table_full_size=table_full_size,
            placement_initializer=placement_initializer,
            initialization_noise=None,
            **kwargs
        )

        self._max_episode_steps = self.horizon

    def _load_model(self):
        """
        Loads an xml model, puts it in self.model
        """
        SingleArmEnv._load_model(self)

        # Adjust base pose accordingly
        xpos = self.robots[0].robot_model.base_xpos_offset["table"](self.table_full_size[0])
        self.robots[0].robot_model.set_base_xpos(xpos)

        # load model for table top workspace
        mujoco_arena = TableArena(
            table_full_size=self.table_full_size,
            table_friction=self.table_friction,
            table_offset=self.table_offset,
        )

        # Arena always gets set to zero origin
        mujoco_arena.set_origin([0, 0, 0])

        cube_material = self._get_cube_material()
        self.cube = BoxObject(
            name="cube",
            size_min=(0.025, 0.025, 0.025),
            size_max=(0.025, 0.025, 0.025),
            rgba=[1, 0, 0, 1],
            material=cube_material,
        )

        self.placement_initializer.reset()
        self.placement_initializer.add_objects(self.cube)

        # task includes arena, robot, and objects of interest
        self.model = ManipulationTask(
            mujoco_arena=mujoco_arena,
            mujoco_robots=[robot.robot_model for robot in self.robots],
            mujoco_objects=self.cube,
        )

    @property
    def action_spec(self):
        """
        Action space (low, high) for this environment
        """
        low, high = super().action_spec
        return low[:3], high[:3]

    def _get_cube_material(self):
        from robosuite.utils.mjcf_utils import array_to_string
        rgba = (1, 0, 0, 1)
        cube_material = CustomMaterial(
            texture=rgba,
            tex_name="solid",
            mat_name="solid_mat",
        )
        cube_material.tex_attrib.pop('file')
        cube_material.tex_attrib["type"] = "cube"
        cube_material.tex_attrib["builtin"] = "flat"
        cube_material.tex_attrib["rgb1"] = array_to_string(rgba[:3])
        cube_material.tex_attrib["rgb2"] = array_to_string(rgba[:3])
        cube_material.tex_attrib["width"] = "100"
        cube_material.tex_attrib["height"] = "100"

        return cube_material

    def _reset_internal(self):
        """
        Resets simulation internal configurations.
        """
        from robosuite.utils.mjmod import Texture

        super()._reset_internal()

        self._action_dim = 3

        geom_id = self.sim.model.geom_name2id('cube_g0_vis')
        mat_id = self.sim.model.geom_matid[geom_id]
        tex_id = self.sim.model.mat_texid[mat_id]
        texture = Texture(self.sim.model, tex_id)
        bitmap_to_set = texture.bitmap
        bitmap = np.zeros_like(bitmap_to_set)
        bitmap[:100, :, :] = 255
        bitmap[100:200, :, 0] = 255
        bitmap[200:300, :, 1] = 255
        bitmap[300:400, :, 2] = 255
        bitmap[400:500, :, :2] = 255
        bitmap[500:, :, 1:] = 255
        bitmap_to_set[:] = bitmap
        for render_context in self.sim.render_contexts:
            render_context.upload_texture(texture.id)

    def _pre_action(self, action, policy_step=False):
        """ explicitly shut the gripper """
        joined_action = np.append(action, [0., 0., 0., 1.])
        self._action_dim = 7
        super()._pre_action(joined_action, policy_step)

    def _post_action(self, action):
        ret = super()._post_action(action)
        self._action_dim = 3
        return ret

    def reward(self, action=None):
        return 0

    def _check_success(self):
        return False
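For a quick sanity check outside Coach, the new environment can be driven directly through robosuite once this module is imported, assuming robosuite registers the class under its class name. A hedged sketch; the 'Panda' robot and the horizon/control_freq values are illustrative:

import numpy as np
import robosuite
import rl_coach.environments.robosuite.cube_exp  # registers CubeExp with robosuite

env = robosuite.make('CubeExp', robots='Panda', has_renderer=False,
                     has_offscreen_renderer=False, use_camera_obs=False,
                     horizon=200, control_freq=10)
obs = env.reset()
low, high = env.action_spec  # 3-DoF position action; the gripper is held shut in _pre_action
obs, reward, done, info = env.step(np.random.uniform(low, high))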
18  rl_coach/environments/robosuite/osc_pose.json  Normal file
@@ -0,0 +1,18 @@
{
    "type": "OSC_POSE",
    "input_max": 1,
    "input_min": -1,
    "output_max": [0.125, 0.125, 0.125, 0.5, 0.5, 0.5],
    "output_min": [-0.125, -0.125, -0.125, -0.5, -0.5, -0.5],
    "kp": 150,
    "damping_ratio": 1,
    "impedance_mode": "fixed",
    "kp_limits": [0, 300],
    "damping_ratio_limits": [0, 10],
    "position_limits": [[-0.22, -0.35, 0.82], [0.22, 0.35, 1.3]],
    "orientation_limits": null,
    "uncouple_pos_ori": true,
    "control_delta": true,
    "interpolation": null,
    "ramp_ratio": 0.2
}
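This config is meant to be handed to the environment through the custom_controller_config_fpath parameter introduced below; loading it directly would look roughly like this (the path is the repo-relative location of the file, adjust as needed):

import robosuite

controller_cfg = robosuite.controllers.load_controller_config(
    custom_fpath='rl_coach/environments/robosuite/osc_pose.json')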
321  rl_coach/environments/robosuite_environment.py  Normal file
@@ -0,0 +1,321 @@
#
# Copyright (c) 2020 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Union, Dict, Any
from enum import Enum, Flag, auto
from copy import deepcopy
import numpy as np
import random
from collections import namedtuple

try:
    import robosuite
    from robosuite.wrappers import Wrapper, DomainRandomizationWrapper
except ImportError:
    from rl_coach.logger import failed_imports
    failed_imports.append("Robosuite")

from rl_coach.base_parameters import Parameters, VisualizationParameters
from rl_coach.environments.environment import Environment, EnvironmentParameters, LevelSelection
from rl_coach.spaces import BoxActionSpace, VectorObservationSpace, StateSpace, PlanarMapsObservationSpace

# Importing our custom Robosuite environments here so that they are properly
# registered in Robosuite, and so recognized by 'robosuite.make()' and included
# in 'robosuite.ALL_ENVIRONMENTS'
import rl_coach.environments.robosuite.cube_exp


robosuite_environments = list(robosuite.ALL_ENVIRONMENTS)
robosuite_robots = list(robosuite.ALL_ROBOTS)
robosuite_controllers = list(robosuite.ALL_CONTROLLERS)


def get_robosuite_env_extra_parameters(env_name: str):
    import inspect
    assert env_name in robosuite_environments

    env_params = inspect.signature(robosuite.environments.REGISTERED_ENVS[env_name]).parameters
    base_params = list(RobosuiteBaseParameters().env_kwargs_dict().keys()) + ['robots', 'controller_configs']
    return {n: p.default for n, p in env_params.items() if n not in base_params}


class OptionalObservations(Flag):
    NONE = 0
    CAMERA = auto()
    OBJECT = auto()


class RobosuiteBaseParameters(Parameters):
    def __init__(self, optional_observations: OptionalObservations = OptionalObservations.NONE):
        super(RobosuiteBaseParameters, self).__init__()

        # NOTE: Attribute names should exactly match the attribute names in Robosuite

        self.horizon = 1000         # Every episode lasts for exactly horizon timesteps
        self.ignore_done = True     # True if never terminating the environment (ignore horizon)
        self.reward_shaping = True  # if True, use dense rewards.

        # How many control signals to receive in every simulated second. This sets the amount of simulation time
        # that passes between every action input (this is NOT the same as frame_skip)
        self.control_freq = 10

        # Optional observations (robot state is always returned)
        # if True, every observation includes a rendered image
        self.use_camera_obs = bool(optional_observations & OptionalObservations.CAMERA)
        # if True, include object (cube/etc.) information in the observation
        self.use_object_obs = bool(optional_observations & OptionalObservations.OBJECT)

        # Camera parameters
        self.has_renderer = False            # Set to true to use Mujoco native viewer for on-screen rendering
        self.render_camera = 'frontview'     # name of camera to use for on-screen rendering
        self.has_offscreen_renderer = self.use_camera_obs
        self.render_collision_mesh = False   # True if rendering collision meshes in camera. False otherwise
        self.render_visual_mesh = True       # True if rendering visual meshes in camera. False otherwise
        self.camera_names = 'agentview'      # name of camera for rendering camera observations
        self.camera_heights = 84             # height of camera frame.
        self.camera_widths = 84              # width of camera frame.
        self.camera_depths = False           # True if rendering RGB-D, and RGB otherwise.

        # Collision
        self.penalize_reward_on_collision = True
        self.end_episode_on_collision = False

    @property
    def optional_observations(self):
        flag = OptionalObservations.NONE
        if self.use_camera_obs:
            flag = OptionalObservations.CAMERA
            if self.use_object_obs:
                flag |= OptionalObservations.OBJECT
        elif self.use_object_obs:
            flag = OptionalObservations.OBJECT
        return flag

    @optional_observations.setter
    def optional_observations(self, value):
        self.use_camera_obs = bool(value & OptionalObservations.CAMERA)
        if self.use_camera_obs:
            self.has_offscreen_renderer = True
        self.use_object_obs = bool(value & OptionalObservations.OBJECT)

    def env_kwargs_dict(self):
        res = {k: (v.value if isinstance(v, Enum) else v) for k, v in vars(self).items()}
        return res


class RobosuiteEnvironmentParameters(EnvironmentParameters):
    def __init__(self, level, robot=None, controller=None, apply_dr: bool = False,
                 dr_every_n_steps_min: int = 10, dr_every_n_steps_max: int = 20,
                 use_joint_vel_obs=False):
        super().__init__(level=level)
        self.base_parameters = RobosuiteBaseParameters()
        self.extra_parameters = {}
        self.robot = robot
        self.controller = controller
        self.apply_dr = apply_dr
        self.dr_every_n_steps_min = dr_every_n_steps_min
        self.dr_every_n_steps_max = dr_every_n_steps_max
        self.use_joint_vel_obs = use_joint_vel_obs
        self.custom_controller_config_fpath = None

    @property
    def path(self):
        return 'rl_coach.environments.robosuite_environment:RobosuiteEnvironment'


DEFAULT_REWARD_SCALES = {
    'Lift': 2.25,
    'LiftLab': 2.25,
}


RobosuiteStepResult = namedtuple('RobosuiteStepResult', ['observation', 'reward', 'done', 'info'])


# Environment
class RobosuiteEnvironment(Environment):
    def __init__(self, level: LevelSelection,
                 seed: int, frame_skip: int, human_control: bool, custom_reward_threshold: Union[int, float, None],
                 visualization_parameters: VisualizationParameters,
                 base_parameters: RobosuiteBaseParameters,
                 extra_parameters: Dict[str, Any],
                 robot: str, controller: str,
                 target_success_rate: float = 1.0, apply_dr: bool = False,
                 dr_every_n_steps_min: int = 10, dr_every_n_steps_max: int = 20, use_joint_vel_obs=False,
                 custom_controller_config_fpath=None, **kwargs):
        super(RobosuiteEnvironment, self).__init__(level, seed, frame_skip, human_control, custom_reward_threshold,
                                                   visualization_parameters, target_success_rate)

        # Validate arguments

        self.frame_skip = max(1, self.frame_skip)

        def validate_input(input, supported, name):
            if input not in supported:
                raise ValueError("Unknown Robosuite {0} passed: '{1}' ; Supported {0}s are: {2}".format(
                    name, input, ' | '.join(supported)
                ))

        validate_input(self.env_id, robosuite_environments, 'environment')
        validate_input(robot, robosuite_robots, 'robot')
        self.robot = robot
        if controller is not None:
            validate_input(controller, robosuite_controllers, 'controller')
        self.controller = controller

        self.base_parameters = base_parameters
        self.base_parameters.has_renderer = self.is_rendered and self.native_rendering
        self.base_parameters.has_offscreen_renderer = self.base_parameters.use_camera_obs or (self.is_rendered and not
                                                                                               self.native_rendering)

        # Seed
        if self.seed is not None:
            np.random.seed(self.seed)
            random.seed(self.seed)

        # Load and initialize environment
        env_args = self.base_parameters.env_kwargs_dict()
        env_args.update(extra_parameters)

        if 'reward_scale' not in env_args and self.env_id in DEFAULT_REWARD_SCALES:
            env_args['reward_scale'] = DEFAULT_REWARD_SCALES[self.env_id]

        env_args['robots'] = self.robot
        controller_cfg = None
        if self.controller is not None:
            controller_cfg = robosuite.controllers.load_controller_config(default_controller=self.controller)
        elif custom_controller_config_fpath is not None:
            controller_cfg = robosuite.controllers.load_controller_config(custom_fpath=custom_controller_config_fpath)

        env_args['controller_configs'] = controller_cfg

        self.env = robosuite.make(self.env_id, **env_args)

        # TODO: Generalize this to filter any observation by name
        if not use_joint_vel_obs:
            self.env.modify_observable('robot0_joint_vel', 'active', False)

        # Wrap with a dummy wrapper so we get a consistent API (there are subtle changes between
        # wrappers and actual environments in Robosuite, for example action_spec as property vs. function)
        self.env = Wrapper(self.env)
        if apply_dr:
            self.env = DomainRandomizationWrapper(self.env, seed=self.seed, randomize_every_n_steps_min=dr_every_n_steps_min,
                                                  randomize_every_n_steps_max=dr_every_n_steps_max)

        # State space
        self.state_space = self._setup_state_space()

        # Action space
        low, high = self.env.unwrapped.action_spec
        self.action_space = BoxActionSpace(low.shape, low=low, high=high)

        self.reset_internal_state()

        if self.is_rendered:
            image = self.get_rendered_image()
            self.renderer.create_screen(image.shape[1], image.shape[0])
        # TODO: Other environments call rendering here, why? reset_internal_state does it

    def _setup_state_space(self):
        state_space = StateSpace({})
        dummy_obs = self._process_observation(self.env.observation_spec())

        state_space['measurements'] = VectorObservationSpace(dummy_obs['measurements'].shape[0])

        if self.base_parameters.use_camera_obs:
            state_space['camera'] = PlanarMapsObservationSpace(dummy_obs['camera'].shape, 0, 255)

        return state_space

    def _process_observation(self, raw_obs):
        new_obs = {}

        # TODO: Support multiple cameras, this assumes a single camera
        camera_name = self.base_parameters.camera_names

        camera_obs = raw_obs.get(camera_name + '_image', None)
        if camera_obs is not None:
            depth_obs = raw_obs.get(camera_name + '_depth', None)
            if depth_obs is not None:
                depth_obs = np.expand_dims(depth_obs, axis=2)
                camera_obs = np.concatenate([camera_obs, depth_obs], axis=2)
            new_obs['camera'] = camera_obs

        measurements = raw_obs['robot0_proprio-state']
        object_obs = raw_obs.get('object-state', None)
        if object_obs is not None:
            measurements = np.concatenate([measurements, object_obs])
        new_obs['measurements'] = measurements

        return new_obs

    def _take_action(self, action):
        action = self.action_space.clip_action_to_space(action)

        # We mimic the "action_repeat" mechanism of RobosuiteWrapper in Surreal.
        # Same concept as frame_skip, only returning the average reward across repeated actions instead
        # of the total reward.
        rewards = []
        for _ in range(self.frame_skip):
            obs, reward, done, info = self.env.step(action)
            rewards.append(reward)
            if done:
                break
        reward = np.mean(rewards)
        self.last_result = RobosuiteStepResult(obs, reward, done, info)

    def _update_state(self):
        obs = self._process_observation(self.last_result.observation)
        self.state = {k: obs[k] for k in self.state_space.sub_spaces}
        self.reward = self.last_result.reward or 0
        self.done = self.last_result.done
        self.info = self.last_result.info

    def _restart_environment_episode(self, force_environment_reset=False):
        reset_obs = self.env.reset()
        self.last_result = RobosuiteStepResult(reset_obs, 0.0, False, {})

    def _render(self):
        self.env.render()

    def get_rendered_image(self):
        img: np.ndarray = self.env.sim.render(camera_name=self.base_parameters.render_camera,
                                              height=512, width=512, depth=False)
        return np.flip(img, 0)

    def close(self):
        self.env.close()


class RobosuiteGoalBasedExpEnvironmentParameters(RobosuiteEnvironmentParameters):
    @property
    def path(self):
        return 'rl_coach.environments.robosuite_environment:RobosuiteGoalBasedExpEnvironment'


class RobosuiteGoalBasedExpEnvironment(RobosuiteEnvironment):
    def _process_observation(self, raw_obs):
        new_obs = super()._process_observation(raw_obs)
        new_obs['obs-goal'] = None
        return new_obs

    def _setup_state_space(self):
        state_space = super()._setup_state_space()
        goal_based_shape = list(state_space['camera'].shape)
        goal_based_shape[2] *= 2
        state_space['obs-goal'] = PlanarMapsObservationSpace(tuple(goal_based_shape), 0, 255)
        return state_space
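A short sketch of how the goal-based exploration variant might be selected from a preset (hedged; the 'CubeExp' level and 'Panda' robot are illustrative, and the controller config path points at the osc_pose.json added above):

from rl_coach.environments.robosuite_environment import \
    RobosuiteGoalBasedExpEnvironmentParameters, OptionalObservations

env_params = RobosuiteGoalBasedExpEnvironmentParameters(level='CubeExp', robot='Panda')
# Camera observations are required: the 'obs-goal' space doubles the camera channels
# so the current observation can be stacked with the goal observation.
env_params.base_parameters.optional_observations = OptionalObservations.CAMERA
env_params.custom_controller_config_fpath = 'rl_coach/environments/robosuite/osc_pose.json'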
@@ -114,7 +114,8 @@ class StarCraft2Environment(Environment):
                 observation_type: StarcraftObservationType=StarcraftObservationType.Features,
                 disable_fog: bool=False, auto_select_all_army: bool=True,
                 use_full_action_space: bool=False, **kwargs):
        super().__init__(level, seed, frame_skip, human_control, custom_reward_threshold, visualization_parameters, target_success_rate)
        super().__init__(level, seed, frame_skip, human_control, custom_reward_threshold, visualization_parameters,
                         target_success_rate)

        self.screen_size = screen_size
        self.minimap_size = minimap_size