pre-release 0.10.0
@@ -2,37 +2,67 @@
Coach's modularity makes adding an agent a simple and clean task that involves the following steps:
1. Implement your algorithm in a new file. The agent can inherit base classes such as **ValueOptimizationAgent** or
   **ActorCriticAgent**, or the more generic **Agent** base class.

    * **ValueOptimizationAgent**, **PolicyOptimizationAgent** and **Agent** are abstract classes.
      learn_from_batch() should be overridden with the desired behavior for the algorithm being implemented.
      If deciding to inherit from **Agent**, choose_action() should also be overridden.

        def learn_from_batch(self, batch) -> Tuple[float, List, List]:
            """
            Given a batch of transitions, calculates their target values and updates the network.
            :param batch: A list of transitions
            :return: The total loss of the training, the loss per head and the unclipped gradients
            """
            pass

        def choose_action(self, curr_state):
            """
            Choose an action to act with in the current episode being played. Different behavior might be
            exhibited when training or testing.

            :param curr_state: the current state to act upon.
            :return: chosen action, some action value describing the action (q-value, probability, etc.)
            """
            pass

    * Make sure to add your new agent to **agents/\_\_init\_\_.py**
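
   As an illustration, a minimal sketch of such a subclass might look like the following. It is not taken from the
   Coach sources; the import path and the exact way the batch is consumed are assumptions and may differ between
   Coach versions.

        # Hypothetical example of a new value-based agent (not part of Coach)
        from rl_coach.agents.value_optimization_agent import ValueOptimizationAgent  # assumed module path

        class MyValueAgent(ValueOptimizationAgent):
            def learn_from_batch(self, batch):
                # Compute target values for the sampled transitions and update the online
                # network here. The three return values follow the signature shown above:
                # total loss, loss per head, unclipped gradients.
                total_loss, losses, unclipped_grads = 0.0, [], []
                # ... algorithm-specific update goes here ...
                return total_loss, losses, unclipped_grads

   Because the class inherits **ValueOptimizationAgent**, choose_action() is provided by the base class and does not
   need to be overridden.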

2. Implement your agent's specific network head, if needed, at the implementation for the framework of your choice.
   For example **architectures/neon_components/heads.py**. The head will inherit the generic base class Head.
   A new output type should be added to configurations.py, and a mapping between the new head and output type should
   be defined in the get_output_head() function at **architectures/neon_components/general_network.py**.

3. Define a new parameters class that inherits AgentParameters.
   The parameters class defines all the hyperparameters for the agent, and is initialized with 4 main components:

    * **algorithm**: A class inheriting AlgorithmParameters which defines any algorithm-specific parameters.
    * **exploration**: A class inheriting ExplorationParameters which defines the exploration policy parameters.
      There are several built-in exploration policies which you can use, defined under the exploration
      sub-directory. You can also define your own custom exploration policy.
    * **memory**: A class inheriting MemoryParameters which defines the memory parameters.
      There are several built-in memory types which you can use, defined under the memories
      sub-directory. You can also define your own custom memory.
    * **networks**: A dictionary defining all the networks that will be used by the agent. The keys of the dictionary
      define the network names and will be used to access each network through the agent class.
      The dictionary values are classes inheriting NetworkParameters, which define the network structure
      and parameters.

   Additionally, set the path property to return the path to your agent class in the following format:

        <path to python module>:<name of agent class>

   For example,

        class RainbowAgentParameters(AgentParameters):
            def __init__(self):
                super().__init__(algorithm=RainbowAlgorithmParameters(),
                                 exploration=RainbowExplorationParameters(),
                                 memory=RainbowMemoryParameters(),
                                 networks={"main": RainbowNetworkParameters()})

            @property
            def path(self):
                return 'rainbow.rainbow_agent:RainbowAgent'
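
   Each of these four components is just a small class inheriting the corresponding base class. As a hedged sketch
   (assuming the base classes live in rl_coach.base_parameters, and with purely illustrative attribute names):

        # Illustrative only; the attribute names are examples, not Coach defaults
        from rl_coach.base_parameters import AlgorithmParameters  # assumed module path

        class RainbowAlgorithmParameters(AlgorithmParameters):
            def __init__(self):
                super().__init__()
                self.discount = 0.99    # example algorithm-specific hyperparameter
                self.n_step = 3         # example algorithm-specific hyperparameter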

4. (Optional) Define a preset using the new agent type with a given environment, and the hyperparameters that should
   be used for training on that environment.
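
   For reference, a preset in recent Coach versions is a Python module that builds a graph manager from the agent,
   environment and schedule parameters. The following is a hedged sketch; the imported Coach names and the
   MyAgentParameters class are assumptions for illustration only.

        # Hypothetical preset sketch
        from rl_coach.environments.gym_environment import GymVectorEnvironment          # assumed path
        from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager  # assumed path
        from rl_coach.graph_managers.graph_manager import SimpleSchedule                # assumed path

        from my_agent import MyAgentParameters  # the parameters class defined in step 3 (hypothetical)

        agent_params = MyAgentParameters()
        env_params = GymVectorEnvironment(level='CartPole-v0')

        graph_manager = BasicRLGraphManager(agent_params=agent_params,
                                            env_params=env_params,
                                            schedule_params=SimpleSchedule())

   The preset can then typically be launched from the command line, e.g. with coach -p <preset name>, though the
   exact invocation depends on the Coach version.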
@@ -1,70 +1,79 @@

Adding a new environment to Coach is as easy as solving CartPole.

There are essentially two ways to integrate new environments into Coach:
## Using the OpenAI Gym API

If your environment is already using the OpenAI Gym API, you are already good to go.
When selecting the environment parameters in the preset, use GymEnvironmentParameters(),
and pass the path to your environment source code using the level parameter.
You can specify additional parameters for your environment using the additional_simulator_parameters parameter.
Take for example the definition used in the Pendulum_HAC preset:

    env_params = GymEnvironmentParameters()
    env_params.level = "rl_coach.environments.mujoco.pendulum_with_goals:PendulumWithGoals"
    env_params.additional_simulator_parameters = {"time_limit": 1000}
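
The level parameter can typically also be set to the id of an environment that is already registered with Gym,
rather than a module path. For example (illustrative, using the standard CartPole id):

    env_params = GymEnvironmentParameters()
    env_params.level = "CartPole-v0"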
## Using the Coach API
There are a few simple steps to follow, and we will walk through them one by one.

1. Create a new class for your environment, and inherit the Environment class.

2. Coach defines a simple API for implementing a new environment, which is defined in environment/environment.py.
   There are several functions to implement, but only some of them are mandatory.

   Here are the important ones:

        def _take_action(self, action_idx: ActionType) -> None:
            """
            An environment dependent function that sends an action to the simulator.
            :param action_idx: the action to perform on the environment
            :return: None
            """
            pass

        def _preprocess_observation(self, observation):
            """
            Do initial observation preprocessing such as cropping, rgb2gray, rescale etc.
            Implementing this function is optional.
            :param observation: a raw observation from the environment
            :return: the preprocessed observation
            """
            return observation

        def _update_state(self) -> None:
            """
            Updates the state from the environment.
            Should update self.observation, self.reward, self.done, self.measurements and self.info
            :return: None
            """
            pass

        def _restart_environment_episode(self, force_environment_reset=False) -> None:
            """
            Restarts the simulator episode
            :param force_environment_reset: Force the environment to reset even if the episode is not done yet.
            :return: None
            """
            pass

        def _render(self) -> None:
            """
            Renders the environment using the native simulator renderer
            :return: None
            """
            pass

        def get_rendered_image(self) -> np.ndarray:
            """
            Return a numpy array containing the image that will be rendered to the screen.
            This can be different from the observation. For example, mujoco's observation is a measurements vector.
            :return: numpy array containing the image that will be rendered to the screen
            """
            return self.observation
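
   To make this concrete, a minimal sketch of a custom environment built on this API might look as follows. The
   import path and the base-class constructor details are assumptions and may need to be adapted to the Coach
   version in use.

        # Hypothetical toy environment (not part of Coach)
        import numpy as np

        from rl_coach.environments.environment import Environment  # assumed module path

        class MyGridEnvironment(Environment):
            def _take_action(self, action_idx):
                # forward the chosen action to the underlying simulator
                self._last_action = action_idx

            def _update_state(self):
                # populate the fields Coach reads after every step, as described above
                self.observation = np.zeros((4, 4), dtype=np.float32)
                self.reward = 0.0
                self.done = False
                self.measurements = []
                self.info = {}

            def _restart_environment_episode(self, force_environment_reset=False):
                # reset the simulator state at the beginning of an episode
                self._last_action = None

            def get_rendered_image(self):
                # reuse the observation as the rendered image for simplicity
                return self.observation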

3. Create a new parameters class for your environment, which inherits the EnvironmentParameters class.
   In the \_\_init\_\_ of your class, define all the parameters you used in your Environment class.
   Additionally, fill the path property of the class with the path to your Environment class.
   For example, take a look at the EnvironmentParameters class used for Doom:

        class DoomEnvironmentParameters(EnvironmentParameters):
            def __init__(self):
                super().__init__()
                self.default_input_filter = DoomInputFilter
                self.default_output_filter = DoomOutputFilter
                self.cameras = [DoomEnvironment.CameraTypes.OBSERVATION]

            @property
            def path(self):
                return 'rl_coach.environments.doom_environment:DoomEnvironment'

4. And that's it, you're done. Now just add a new preset with your newly created environment, and start training an agent on top of it.