mirror of
https://github.com/gryf/coach.git
synced 2025-12-18 03:30:19 +01:00
update of api docstrings across coach and tutorials [WIP] (#91)
* updating the documentation website
* adding the built docs
* update of api docstrings across coach and tutorials 0-2
* added some missing api documentation
* New Sphinx based documentation
80
docs/_sources/contributing/add_agent.rst.txt
Normal file
@@ -0,0 +1,80 @@
Adding a New Agent
==================

Coach's modularity makes adding an agent a simple and clean task.
We suggest using the following
`Jupyter notebook tutorial <https://github.com/NervanaSystems/coach/blob/master/tutorials/1.%20Implementing%20an%20Algorithm.ipynb>`_
to ramp up on this process. In general, it involves the following steps:
1. Implement your algorithm in a new file. The agent can inherit base classes such as **ValueOptimizationAgent** or
   **ActorCriticAgent**, or the more generic **Agent** base class.

   .. note::
      **ValueOptimizationAgent**, **PolicyOptimizationAgent** and **Agent** are abstract classes.
      :code:`learn_from_batch()` should be overridden with the desired behavior for the algorithm being implemented.
      If inheriting directly from **Agent**, :code:`choose_action()` should be overridden as well.

   .. code-block:: python

      def learn_from_batch(self, batch) -> Tuple[float, List, List]:
          """
          Given a batch of transitions, calculate their target values and update the network.

          :param batch: a list of transitions
          :return: the total loss of the training, the loss per head, and the unclipped gradients
          """

      def choose_action(self, curr_state):
          """
          Choose an action to act with in the current episode being played. Different behavior might be
          exhibited when training or testing.

          :param curr_state: the current state to act upon
          :return: the chosen action, and some action value describing it (a q-value, probability, etc.)
          """
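To make the two overrides concrete, here is a minimal sketch of an agent implementing them with tabular Q-learning. The :code:`Agent` base class below is a stand-in rather than Coach's actual class, and the transition format is assumed for illustration only:

```python
import random
from typing import List, Tuple


class Agent:
    """Stand-in for Coach's Agent base class (illustration only)."""
    def learn_from_batch(self, batch):
        raise NotImplementedError

    def choose_action(self, curr_state):
        raise NotImplementedError


class TabularQAgent(Agent):
    """Hypothetical tabular Q-learning agent showing the two overrides."""
    def __init__(self, num_actions: int, lr: float = 0.1,
                 discount: float = 0.99, epsilon: float = 0.1):
        self.q = {}  # state -> list of per-action values
        self.num_actions = num_actions
        self.lr, self.discount, self.epsilon = lr, discount, epsilon

    def _values(self, state):
        return self.q.setdefault(state, [0.0] * self.num_actions)

    def learn_from_batch(self, batch) -> Tuple[float, List, List]:
        # batch: assumed list of (state, action, reward, next_state, done)
        total_loss = 0.0
        for state, action, reward, next_state, done in batch:
            target = reward if done else \
                reward + self.discount * max(self._values(next_state))
            td_error = target - self._values(state)[action]
            self._values(state)[action] += self.lr * td_error
            total_loss += td_error ** 2
        # no network heads or gradients in the tabular case
        return total_loss, [], []

    def choose_action(self, curr_state):
        # epsilon-greedy: explore with probability epsilon, exploit otherwise
        if random.random() < self.epsilon:
            action = random.randrange(self.num_actions)
        else:
            values = self._values(curr_state)
            action = values.index(max(values))
        return action, self._values(curr_state)[action]
```

A real Coach agent would update a network instead of a table, but the contract is the same: :code:`learn_from_batch()` consumes transitions and returns losses, and :code:`choose_action()` returns an action with an accompanying action value.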
2. If needed, implement your agent's specific network head in the implementation for the framework of your choice,
   for example **architectures/neon_components/heads.py**. The head should inherit the generic base class **Head**.
   A new output type should be added to **configurations.py**, and a mapping between the new head and the output type
   should be defined in the :code:`get_output_head()` function at **architectures/neon_components/general_network.py**.
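As a schematic illustration of what a head computes, the sketch below mimics a dueling Q head in plain Python. A real head would subclass the framework's **Head** class and build Neon or TensorFlow ops; the base class here is a stand-in:

```python
class Head:
    """Stand-in for the framework Head base class (illustration only)."""
    def __init__(self, num_actions: int):
        self.num_actions = num_actions


class DuelingQHead(Head):
    """Hypothetical head combining a scalar state value with per-action
    advantages into Q-values, as in the dueling architecture."""
    def __call__(self, state_value: float, advantages: list) -> list:
        # Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')
        mean_adv = sum(advantages) / len(advantages)
        return [state_value + a - mean_adv for a in advantages]
```

The point of the sketch is the shape of the contract: a head takes the network's intermediate representation and produces the agent-specific output type that :code:`get_output_head()` maps it to.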
3. Define a new parameters class that inherits **AgentParameters**.
   The parameters class defines all the hyperparameters for the agent, and is initialized with 4 main components:

   * **algorithm**: a class inheriting **AlgorithmParameters**, which defines any algorithm-specific parameters.

   * **exploration**: a class inheriting **ExplorationParameters**, which defines the exploration policy parameters.
     Several common exploration policies are built in and defined under the exploration sub-directory.
     You can also define your own custom exploration policy.

   * **memory**: a class inheriting **MemoryParameters**, which defines the memory parameters.
     Several common memory types are built in and defined under the memories sub-directory.
     You can also define your own custom memory.

   * **networks**: a dictionary defining all the networks that will be used by the agent. The keys of the dictionary
     define the network names and are used to access each network through the agent class.
     The values are classes inheriting **NetworkParameters**, which define the network structure and parameters.

   Additionally, set the path property to return the path to your agent class in the following format:

   :code:`<path to python module>:<name of agent class>`

   For example,

   .. code-block:: python

      class RainbowAgentParameters(AgentParameters):
          def __init__(self):
              super().__init__(algorithm=RainbowAlgorithmParameters(),
                               exploration=RainbowExplorationParameters(),
                               memory=RainbowMemoryParameters(),
                               networks={"main": RainbowNetworkParameters()})

          @property
          def path(self):
              return 'rainbow.rainbow_agent:RainbowAgent'
4. (Optional) Define a preset using the new agent type with a given environment, and the hyperparameters that should
   be used for training on that environment.
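A preset for the hypothetical Rainbow agent above might look like the following sketch. The graph manager, schedule, and Gym environment classes are assumed from the rl_coach package, and :code:`RainbowAgentParameters` is the illustrative class from step 3; treat all names as placeholders for your own agent:

```python
# Hypothetical preset sketch (config fragment; assumes rl_coach is installed).
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import SimpleSchedule

graph_manager = BasicRLGraphManager(
    agent_params=RainbowAgentParameters(),  # the illustrative class from step 3
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=SimpleSchedule())
```

The preset ties the agent's parameters to an environment and a training schedule, so Coach can be launched with it by name.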
93
docs/_sources/contributing/add_env.rst.txt
Normal file
@@ -0,0 +1,93 @@
Adding a New Environment
========================

Adding a new environment to Coach is as easy as solving CartPole.

There are essentially two ways to integrate a new environment into Coach:

Using the OpenAI Gym API
------------------------

If your environment already uses the OpenAI Gym API, you are good to go.
When selecting the environment parameters in the preset, use :code:`GymEnvironmentParameters()`,
and pass the path to your environment's source code using the :code:`level` parameter.
You can specify additional parameters for your environment using the :code:`additional_simulator_parameters` parameter.
Take, for example, the definition used in the :code:`Pendulum_HAC` preset:

.. code-block:: python

   env_params = GymEnvironmentParameters()
   env_params.level = "rl_coach.environments.mujoco.pendulum_with_goals:PendulumWithGoals"
   env_params.additional_simulator_parameters = {"time_limit": 1000}
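If you are writing such an environment from scratch, the Gym API surface is small. The toy class below follows the classic :code:`reset()` / :code:`step()` protocol; it is a dependency-free sketch, so :code:`gym.Env`, :code:`observation_space`, and :code:`action_space` (which a real environment should define) are omitted:

```python
import random


class CoinFlipEnv:
    """Hypothetical environment following the classic Gym protocol:
    reset() -> observation, step(action) -> (observation, reward, done, info).
    The agent is rewarded for guessing the current coin state."""
    def __init__(self, episode_length: int = 10):
        self.episode_length = episode_length
        self.steps = 0
        self.state = 0

    def reset(self):
        # start a new episode and return the initial observation
        self.steps = 0
        self.state = random.randint(0, 1)
        return self.state

    def step(self, action):
        # reward a correct guess, then flip the coin for the next step
        reward = 1.0 if action == self.state else 0.0
        self.steps += 1
        self.state = random.randint(0, 1)
        done = self.steps >= self.episode_length
        return self.state, reward, done, {}
```

An environment shaped like this (once registered or pointed to via :code:`level`) is all the Gym integration path requires.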
Using the Coach API
-------------------

There are a few simple steps to follow, and we will walk through them one by one.
As an alternative, we highly recommend following the corresponding
`tutorial <https://github.com/NervanaSystems/coach/blob/master/tutorials/2.%20Adding%20an%20Environment.ipynb>`_
in the GitHub repo.
1. Create a new class for your environment, and inherit the **Environment** class.

2. Coach defines a simple API for implementing a new environment, which is defined in **environment/environment.py**.
   There are several functions to implement, but only some of them are mandatory.

   Here are the important ones:

   .. code-block:: python

      def _take_action(self, action_idx: ActionType) -> None:
          """
          An environment-dependent function that sends an action to the simulator.

          :param action_idx: the action to perform on the environment
          :return: None
          """

      def _update_state(self) -> None:
          """
          Updates the state from the environment.
          Should update self.observation, self.reward, self.done, self.measurements and self.info.

          :return: None
          """

      def _restart_environment_episode(self, force_environment_reset=False) -> None:
          """
          Restarts the simulator episode.

          :param force_environment_reset: force the environment to reset even if the episode is not done yet
          :return: None
          """

      def _render(self) -> None:
          """
          Renders the environment using the native simulator renderer.

          :return: None
          """

      def get_rendered_image(self) -> np.ndarray:
          """
          Returns a numpy array containing the image that will be rendered to the screen.
          This can be different from the observation. For example, MuJoCo's observation is a measurements vector.

          :return: a numpy array containing the image that will be rendered to the screen
          """
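A minimal, self-contained sketch of the mandatory overrides for a toy 1-D corridor is shown below. The **Environment** base class here is a stand-in that only declares the state attributes; in Coach the real base class drives these methods for you, so the usage in the note is manual only for illustration:

```python
class Environment:
    """Stand-in for rl_coach's Environment base class (illustration only):
    just declares the state attributes that _update_state() must fill in."""
    def __init__(self):
        self.observation = None
        self.reward = 0.0
        self.done = False
        self.measurements = []
        self.info = {}


class GridWalkEnvironment(Environment):
    """Hypothetical corridor of 5 cells; reaching the right end wins."""
    GOAL = 4

    def __init__(self):
        super().__init__()
        self.position = 0
        self._update_state()

    def _take_action(self, action_idx) -> None:
        # 0 = step left, 1 = step right; clipped to the corridor bounds
        delta = 1 if action_idx == 1 else -1
        self.position = min(max(self.position + delta, 0), self.GOAL)

    def _update_state(self) -> None:
        # fill in the attributes the base class exposes to the agent
        self.observation = self.position
        self.reward = 1.0 if self.position == self.GOAL else 0.0
        self.done = self.position == self.GOAL

    def _restart_environment_episode(self, force_environment_reset=False) -> None:
        self.position = 0
        self._update_state()
```

:code:`_render()` and :code:`get_rendered_image()` are omitted here because this toy environment has nothing to draw; a simulator-backed environment would implement them as well.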
3. Create a new parameters class for your environment, which inherits the **EnvironmentParameters** class.
   In the :code:`__init__` of your class, define all the parameters you used in your **Environment** class.
   Additionally, fill the path property of the class with the path to your **Environment** class.
   For example, take a look at the **EnvironmentParameters** class used for Doom:

   .. code-block:: python

      class DoomEnvironmentParameters(EnvironmentParameters):
          def __init__(self):
              super().__init__()
              self.default_input_filter = DoomInputFilter
              self.default_output_filter = DoomOutputFilter
              self.cameras = [DoomEnvironment.CameraTypes.OBSERVATION]

          @property
          def path(self):
              return 'rl_coach.environments.doom_environment:DoomEnvironment'

4. And that's it, you're done. Now just add a new preset with your newly created environment, and start training an
   agent on top of it.