mirror of https://github.com/gryf/coach.git synced 2025-12-17 19:20:19 +01:00

moving the docs to github

This commit is contained in:
itaicaspi-intel
2018-04-23 09:14:20 +03:00
parent cafa152382
commit 5d5562bf62
118 changed files with 10792 additions and 3 deletions


@@ -0,0 +1,38 @@
Coach's modularity makes adding an agent a simple and clean task that involves the following steps:
1. Implement your algorithm in a new file under the agents directory. The agent can inherit from base classes such as **ValueOptimizationAgent** or **ActorCriticAgent**, or from the more generic **Agent** base class.
    * **ValueOptimizationAgent**, **PolicyOptimizationAgent** and **Agent** are abstract classes.
    learn_from_batch() should be overridden with the desired behavior for the algorithm being implemented. When inheriting directly from **Agent**, choose_action() should be overridden as well (a complete example is sketched after this list):
```python
def learn_from_batch(self, batch):
    """
    Given a batch of transitions, calculates their target values and updates the network.
    :param batch: A list of transitions
    :return: The loss of the training
    """
    pass

def choose_action(self, curr_state, phase=RunPhase.TRAIN):
    """
    Choose an action to act with in the current episode being played. Different behavior might be
    exhibited when training or testing.
    :param curr_state: the current state to act upon.
    :param phase: the current phase: training or testing.
    :return: chosen action, some action value describing the action (q-value, probability, etc.)
    """
    pass
```
* Make sure to add your new agent to **agents/\_\_init\_\_.py**
2. Implement your agent's specific network head, if needed, in the architecture implementation for the framework of your choice, for example **architectures/neon_components/heads.py**. The head should inherit from the generic base class Head.
A new output type should be added to configurations.py, and a mapping between the new head and the output type should be defined in the get_output_head() function in **architectures/neon_components/general_network.py**.
3. Define a new configuration class in configurations.py, which sets the new agent name in the **type** field and the new output type in the **output_types** field, and assigns default values to the algorithm's hyperparameters (see the second sketch after this list).
4. (Optional) Define a preset using the new agent type with a given environment, and the hyperparameters that should be used for training on that environment.
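To make step 1 concrete, here is a rough sketch of what a minimal DQN-style value-optimization agent could look like. It is illustrative only: the helper and attribute names used below (extract_batch, main_network, tp.agent.discount) are assumptions about the surrounding framework code, not a verified API.

```python
# Illustrative sketch only -- helper and attribute names are assumptions, not a verified API.
import numpy as np
from agents.value_optimization_agent import ValueOptimizationAgent


class MyDQNAgent(ValueOptimizationAgent):
    def learn_from_batch(self, batch):
        # Unpack the sampled transitions (assumed helper on the base class).
        current_states, next_states, actions, rewards, game_overs = self.extract_batch(batch)

        # Predict Q values for the current states (online network) and the next states (target network).
        q_current = self.main_network.online_network.predict(current_states)
        q_next = self.main_network.target_network.predict(next_states)

        # TD targets: r + gamma * max_a' Q(s', a'), without bootstrapping on terminal transitions.
        targets = q_current.copy()
        for i in range(len(rewards)):
            targets[i, actions[i]] = rewards[i] + \
                (1.0 - game_overs[i]) * self.tp.agent.discount * np.max(q_next[i])

        # Train the online network towards the targets and return the training loss.
        return self.main_network.train_and_sync_networks(current_states, targets)
```

Since this agent reuses a standard Q-value head, step 2 can be skipped; it only needs to be added to **agents/\_\_init\_\_.py** and given a configuration class.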

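A matching configuration class for step 3 might then look like the following. Again, this is only a sketch: the AgentParameters base class, the OutputTypes enum value and the hyperparameter names are assumptions modeled on the fields described above.

```python
# Hypothetical entry in configurations.py -- names follow the description in step 3, not a verified API.
class MyDQN(AgentParameters):
    type = 'MyDQNAgent'              # the new agent's class name
    output_types = [OutputTypes.Q]   # the output head type the agent expects
    batch_size = 32
    learning_rate = 0.00025
```

A preset (step 4) would then pair this configuration with one of the environment configurations, such as the Doom example in the next section.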

@@ -0,0 +1,70 @@
Adding a new environment to Coach is as easy as solving CartPole.
There are a few simple steps to follow, and we will walk through them one by one.
1. Coach defines a simple API for implementing a new environment, which lives in environment/environment_wrapper.py.
There are several functions to implement, but only some of them are mandatory.
Here are the important ones (a complete minimal wrapper is sketched after this list):
```python
def _take_action(self, action_idx):
    """
    An environment dependent function that sends an action to the simulator.
    :param action_idx: the action to perform on the environment.
    :return: None
    """
    pass

def _preprocess_observation(self, observation):
    """
    Do initial observation preprocessing such as cropping, rgb2gray, rescaling, etc.
    Implementing this function is optional.
    :param observation: a raw observation from the environment
    :return: the preprocessed observation
    """
    return observation

def _update_state(self):
    """
    Updates the state from the environment.
    Should update self.observation, self.reward, self.done, self.measurements and self.info.
    :return: None
    """
    pass

def _restart_environment_episode(self, force_environment_reset=False):
    """
    Restarts the environment and starts a new episode.
    :param force_environment_reset: Force the environment to reset even if the episode is not done yet.
    :return: None
    """
    pass

def get_rendered_image(self):
    """
    Return a numpy array containing the image that will be rendered to the screen.
    This can be different from the observation. For example, Mujoco's observation is a measurements vector.
    :return: numpy array containing the image that will be rendered to the screen
    """
    return self.observation
```
2. Make sure to import the environment in environments/\_\_init\_\_.py:
    `from doom_environment_wrapper import *`
    Also, a new entry should be added to the EnvTypes enum, mapping the environment name to the wrapper's class name:
    `Doom = "DoomEnvironmentWrapper"`
3. In addition, a new configuration class defining the environment's parameters should be implemented and placed in configurations.py.
For instance, the following is used for Doom:
```python
class Doom(EnvironmentParameters):
    type = 'Doom'
    frame_skip = 4
    observation_stack_size = 3
    desired_observation_height = 60
    desired_observation_width = 76
```
4. And that's it, you're done. Now just add a new preset with your newly created environment, and start training an agent on top of it.
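For illustration, here is a minimal sketch of what such a wrapper could look like for a toy grid-world environment. The class name, the toy dynamics and the constructor are made up for this example; only the overridden method names come from the API described above.

```python
# Illustrative sketch only. The module path, base-class name and constructor signature are
# assumptions; only the overridden method names come from the API described above.
import numpy as np
from environment.environment_wrapper import EnvironmentWrapper


class GridWorldEnvironmentWrapper(EnvironmentWrapper):
    """A toy 8x8 grid world: the agent starts in one corner and must reach the opposite one."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.grid_size = 8
        self.position = np.zeros(2, dtype=int)
        self.goal = np.full(2, self.grid_size - 1)
        # action index -> movement on the grid: right, left, down, up
        self.action_deltas = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def _take_action(self, action_idx):
        # Apply the chosen movement and clip to the grid boundaries.
        delta = np.array(self.action_deltas[action_idx])
        self.position = np.clip(self.position + delta, 0, self.grid_size - 1)

    def _update_state(self):
        # Encode the agent's position as a one-hot grid observation.
        grid = np.zeros((self.grid_size, self.grid_size), dtype=np.float32)
        grid[tuple(self.position)] = 1.0
        self.observation = grid
        self.done = bool(np.array_equal(self.position, self.goal))
        self.reward = 1.0 if self.done else -0.01
        self.measurements = []
        self.info = {}

    def _restart_environment_episode(self, force_environment_reset=False):
        # Put the agent back at the starting corner.
        self.position = np.zeros(2, dtype=int)

    def get_rendered_image(self):
        # Render the observation as an 8-bit grayscale image.
        return (self.observation * 255).astype(np.uint8)
```

Following steps 2 and 3 above, the wrapper would then be imported in environments/\_\_init\_\_.py, registered in the EnvTypes enum (e.g. `GridWorld = "GridWorldEnvironmentWrapper"`), and given a `GridWorld(EnvironmentParameters)` configuration class in configurations.py.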