Batch RL Tutorial (#372)
@@ -169,8 +169,13 @@ More usage examples can be found [here](https://github.com/NervanaSystems/coach/

 ### Distributed Multi-Node Coach

-As of release 0.11 Coach supports horizontal scaling for training RL agents on multiple nodes. In release 0.11 this was tested on the ClippedPPO and DQN agents.
-For usage instructions please refer to the documentation [here](https://nervanasystems.github.io/coach/dist_usage.html)
+As of release 0.11.0, Coach supports horizontal scaling for training RL agents on multiple nodes. In release 0.11.0 this was tested on the ClippedPPO and DQN agents.
+For usage instructions please refer to the documentation [here](https://nervanasystems.github.io/coach/dist_usage.html).
+
+### Batch Reinforcement Learning
+
+Training and evaluating an agent from a dataset of experience, where no simulator is available, is supported in Coach.
+There are [example](https://github.com/NervanaSystems/coach/blob/master/rl_coach/presets/CartPole_DDQN_BatchRL.py) [presets](https://github.com/NervanaSystems/coach/blob/master/rl_coach/presets/Acrobot_DDQN_BCQ_BatchRL.py) and a [tutorial](https://github.com/NervanaSystems/coach/blob/master/tutorials/4.%20Batch%20Reinforcement%20Learning.ipynb).

 ### Running Coach Dashboard (Visualization)
 Training an agent to solve an environment can be tricky, at times.
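The new Batch Reinforcement Learning section above only links to the presets and the tutorial, so a short launch sketch may help. It follows the programmatic pattern used in the Coach tutorials; the experiment path and the exact `TaskParameters` fields used here are illustrative assumptions, not part of this commit.

```python
# Hedged sketch: run the CartPole_DDQN_BatchRL preset linked above from its stored dataset of experience.
# The experiment path below is a made-up example value.
from rl_coach.base_parameters import TaskParameters
from rl_coach.presets.CartPole_DDQN_BatchRL import graph_manager

graph_manager.create_graph(TaskParameters(experiment_path='./experiments/cartpole_batch_rl'))
graph_manager.improve()  # trains and evaluates the agent without collecting new experience from a live simulator
```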
@@ -217,9 +217,9 @@


 class DDPGCriticNetworkParameters(NetworkParameters):
-    def __init__(self):
+    def __init__(self, use_batchnorm=False):
         super().__init__()
-        self.input_embedders_parameters = {'observation': InputEmbedderParameters(batchnorm=True),
+        self.input_embedders_parameters = {'observation': InputEmbedderParameters(batchnorm=use_batchnorm),
                                            'action': InputEmbedderParameters(scheme=EmbedderScheme.Shallow)}
         self.middleware_parameters = FCMiddlewareParameters()
         self.heads_parameters = [DDPGVHeadParameters()]
@@ -236,11 +236,11 @@

 class DDPGActorNetworkParameters(NetworkParameters):
-    def __init__(self):
+    def __init__(self, use_batchnorm=False):
         super().__init__()
-        self.input_embedders_parameters = {'observation': InputEmbedderParameters(batchnorm=True)}
-        self.middleware_parameters = FCMiddlewareParameters(batchnorm=True)
-        self.heads_parameters = [DDPGActorHeadParameters()]
+        self.input_embedders_parameters = {'observation': InputEmbedderParameters(batchnorm=use_batchnorm)}
+        self.middleware_parameters = FCMiddlewareParameters(batchnorm=use_batchnorm)
+        self.heads_parameters = [DDPGActorHeadParameters(batchnorm=use_batchnorm)]
         self.optimizer_type = 'Adam'
         self.batch_size = 64
         self.adam_optimizer_beta2 = 0.999
@@ -292,12 +292,12 @@


 class DDPGAgentParameters(AgentParameters):
-    def __init__(self):
+    def __init__(self, use_batchnorm=False):
         super().__init__(algorithm=DDPGAlgorithmParameters(),
                          exploration=OUProcessParameters(),
                          memory=EpisodicExperienceReplayParameters(),
-                         networks=OrderedDict([("actor", DDPGActorNetworkParameters()),
-                                               ("critic", DDPGCriticNetworkParameters())]))
+                         networks=OrderedDict([("actor", DDPGActorNetworkParameters(use_batchnorm=use_batchnorm)),
+                                               ("critic", DDPGCriticNetworkParameters(use_batchnorm=use_batchnorm))]))

     @property
     def path(self):
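For context, a minimal sketch (not part of the commit) of how a preset could use the flag that the constructors above now accept; the single `use_batchnorm` argument is forwarded to both the actor and the critic network parameters.

```python
# Hypothetical preset snippet: batchnorm is now opt-in and disabled by default.
from rl_coach.agents.ddpg_agent import DDPGAgentParameters

agent_params = DDPGAgentParameters(use_batchnorm=True)  # forwarded to both the 'actor' and 'critic' networks
```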
@@ -353,7 +353,9 @@
         # train the critic
         critic_inputs = copy.copy(batch.states(critic_keys))
         critic_inputs['action'] = batch.actions(len(batch.actions().shape) == 1)
-        result = critic.train_and_sync_networks(critic_inputs, TD_targets)
+
+        # also need the inputs for when applying gradients so batchnorm's update of running mean and stddev will work
+        result = critic.train_and_sync_networks(critic_inputs, TD_targets, use_inputs_for_apply_gradients=True)
         total_loss, losses, unclipped_grads = result[:3]

         # apply the gradients from the critic to the actor
@@ -362,11 +364,12 @@
                                                   outputs=actor.online_network.weighted_gradients[0],
                                                   initial_feed_dict=initial_feed_dict)

+        # also need the inputs for when applying gradients so batchnorm's update of running mean and stddev will work
         if actor.has_global:
-            actor.apply_gradients_to_global_network(gradients)
+            actor.apply_gradients_to_global_network(gradients, additional_inputs=copy.copy(batch.states(critic_keys)))
             actor.update_online_network()
         else:
-            actor.apply_gradients_to_online_network(gradients)
+            actor.apply_gradients_to_online_network(gradients, additional_inputs=copy.copy(batch.states(critic_keys)))

         return total_loss, losses, unclipped_grads

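The repeated comment about batchnorm's running mean and stddev is the motivation for threading the inputs through to the gradient-application step. Below is a stand-alone TensorFlow 1.x sketch (not Coach code) of the underlying issue: the moving-average update ops created by batch normalization can only run when the original inputs are fed again.

```python
# Illustrative TF 1.x sketch of why applying gradients with batchnorm needs the batch inputs.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
h = tf.layers.batch_normalization(tf.layers.dense(x, 32), training=True)  # registers ops in UPDATE_OPS
loss = tf.reduce_mean(tf.square(h))

# Tying the update ops to the train op means running it requires feeding `x` as well.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={x: [[0.0, 0.1, 0.2, 0.3]]})  # the inputs must be supplied here too
```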
@@ -307,31 +307,37 @@
         if self.global_network:
             self.online_network.set_weights(self.global_network.get_weights(), rate)

-    def apply_gradients_to_global_network(self, gradients=None):
+    def apply_gradients_to_global_network(self, gradients=None, additional_inputs=None):
         """
         Apply gradients from the online network on the global network

         :param gradients: optional gradients that will be used instead of teh accumulated gradients
+        :param additional_inputs: optional additional inputs required for when applying the gradients (e.g. batchnorm's
+                                  update ops also requires the inputs)
         :return:
         """
         if gradients is None:
             gradients = self.online_network.accumulated_gradients
         if self.network_parameters.shared_optimizer:
-            self.global_network.apply_gradients(gradients)
+            self.global_network.apply_gradients(gradients, additional_inputs=additional_inputs)
         else:
-            self.online_network.apply_gradients(gradients)
+            self.online_network.apply_gradients(gradients, additional_inputs=additional_inputs)

-    def apply_gradients_to_online_network(self, gradients=None):
+    def apply_gradients_to_online_network(self, gradients=None, additional_inputs=None):
         """
         Apply gradients from the online network on itself
         :param gradients: optional gradients that will be used instead of teh accumulated gradients
+        :param additional_inputs: optional additional inputs required for when applying the gradients (e.g. batchnorm's
+                                  update ops also requires the inputs)

         :return:
         """
         if gradients is None:
             gradients = self.online_network.accumulated_gradients
-        self.online_network.apply_gradients(gradients)
+        self.online_network.apply_gradients(gradients, additional_inputs=additional_inputs)

-    def train_and_sync_networks(self, inputs, targets, additional_fetches=[], importance_weights=None):
+    def train_and_sync_networks(self, inputs, targets, additional_fetches=[], importance_weights=None,
+                                use_inputs_for_apply_gradients=False):
         """
         A generic training function that enables multi-threading training using a global network if necessary.
@@ -340,14 +346,20 @@
         :param additional_fetches: Any additional tensor the user wants to fetch
         :param importance_weights: A coefficient for each sample in the batch, which will be used to rescale the loss
                                    error of this sample. If it is not given, the samples losses won't be scaled
+        :param use_inputs_for_apply_gradients: Add the inputs also for when applying gradients
+                                               (e.g. for incorporating batchnorm update ops)
         :return: The loss of the training iteration
         """
         result = self.online_network.accumulate_gradients(inputs, targets, additional_fetches=additional_fetches,
                                                           importance_weights=importance_weights, no_accumulation=True)
-        self.apply_gradients_and_sync_networks(reset_gradients=False)
+        if use_inputs_for_apply_gradients:
+            self.apply_gradients_and_sync_networks(reset_gradients=False, additional_inputs=inputs)
+        else:
+            self.apply_gradients_and_sync_networks(reset_gradients=False)

         return result

-    def apply_gradients_and_sync_networks(self, reset_gradients=True):
+    def apply_gradients_and_sync_networks(self, reset_gradients=True, additional_inputs=None):
         """
         Applies the gradients accumulated in the online network to the global network or to itself and syncs the
         networks if necessary
@@ -356,17 +368,22 @@
                                 the network. this is useful when the accumulated gradients are overwritten instead
                                 if accumulated by the accumulate_gradients function. this allows reducing time
                                 complexity for this function by around 10%
+        :param additional_inputs: optional additional inputs required for when applying the gradients (e.g. batchnorm's
+                                  update ops also requires the inputs)

         """
         if self.global_network:
-            self.apply_gradients_to_global_network()
+            self.apply_gradients_to_global_network(additional_inputs=additional_inputs)
             if reset_gradients:
                 self.online_network.reset_accumulated_gradients()
             self.update_online_network()
         else:
             if reset_gradients:
-                self.online_network.apply_and_reset_gradients(self.online_network.accumulated_gradients)
+                self.online_network.apply_and_reset_gradients(self.online_network.accumulated_gradients,
+                                                              additional_inputs=additional_inputs)
             else:
-                self.online_network.apply_gradients(self.online_network.accumulated_gradients)
+                self.online_network.apply_gradients(self.online_network.accumulated_gradients,
+                                                    additional_inputs=additional_inputs)

     def parallel_prediction(self, network_input_tuples: List[Tuple]):
         """
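Taken together, the `NetworkWrapper` changes above extend three entry points with an optional way to re-feed the batch when gradients are applied. A hedged usage sketch, assuming `network` is a `NetworkWrapper` and `inputs`/`targets` a prepared batch:

```python
# Train and sync, re-feeding the batch so batchnorm's update ops can run while gradients are applied.
result = network.train_and_sync_networks(inputs, targets, use_inputs_for_apply_gradients=True)
total_loss, losses, unclipped_grads = result[:3]

# Or, when gradients were computed elsewhere, pass the batch explicitly:
network.apply_gradients_and_sync_networks(reset_gradients=False, additional_inputs=inputs)
```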
@@ -213,7 +213,7 @@
     failed_imports.append("RoboSchool")

 try:
-    from rl_coach.gym_extensions.continuous import mujoco
+    from gym_extensions.continuous import mujoco
 except:
     from rl_coach.logger import failed_imports
     failed_imports.append("GymExtensions")
@@ -575,9 +575,6 @@
         else:
             screen.error("Error: Environment {} does not support human control.".format(self.env), crash=True)

-        # initialize the state by getting a new state from the environment
-        self.reset_internal_state(True)
-
         # render
         if self.is_rendered:
             image = self.get_rendered_image()
@@ -588,7 +585,6 @@
                 self.renderer.create_screen(image.shape[1]*scale, image.shape[0]*scale)

         # the info is only updated after the first step
         self.state = self.step(self.action_space.default_action).next_state
         self.state_space['measurements'] = VectorObservationSpace(shape=len(self.info.keys()))

         if self.env.spec and custom_reward_threshold is None:
@@ -247,15 +247,14 @@

     def filter(self, reward: RewardType, update_internal_state: bool=True) -> RewardType:
         if update_internal_state:
+            if not isinstance(reward, np.ndarray) or len(reward.shape) < 2:
+                reward = np.array([[reward]])
             self.running_rewards_stats.push(reward)

-        reward = (reward - self.running_rewards_stats.mean) / \
-                 (self.running_rewards_stats.std + 1e-15)
-        reward = np.clip(reward, self.clip_min, self.clip_max)
-
-        return reward
+        return self.running_rewards_stats.normalize(reward).squeeze()

     def get_filtered_reward_space(self, input_reward_space: RewardSpace) -> RewardSpace:
+        self.running_rewards_stats.set_params(shape=(1,), clip_values=(self.clip_min, self.clip_max))
         return input_reward_space

     def save_state_to_checkpoint(self, checkpoint_dir: str, checkpoint_prefix: str):
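The removed in-line math above is, in effect, what the new `running_rewards_stats.normalize()` call is expected to perform; a stand-alone sketch of that behaviour follows (an assumption about the running-stats helper, not its actual implementation).

```python
import numpy as np

def normalize(reward, mean, std, clip_min, clip_max):
    # standardize by the running statistics, guard against a zero std, then clip to the filter's range
    reward = (reward - mean) / (std + 1e-15)
    return np.clip(reward, clip_min, clip_max)
```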
@@ -198,6 +198,8 @@
 # limitations under the License.
 #
+import ast
+
 import pickle
 from copy import deepcopy

 import math
@@ -324,14 +326,27 @@

     def shuffle_episodes(self):
         """
-        Shuffle all the episodes in the replay buffer
+        Shuffle all the complete episodes in the replay buffer, while deleting the last non-complete episode
         :return:
         """
         self.reader_writer_lock.lock_writing()

         self.assert_not_frozen()

+        # unlike the standard usage of the EpisodicExperienceReplay, where we always leave an empty episode after
+        # the last full one, so that new transitions will have where to be added, in this case we delibrately remove
+        # that empty last episode, as we are about to shuffle the memory, and we don't want it to be shuffled in
+        self.remove_last_episode(lock=False)
+
         random.shuffle(self._buffer)
         self.transitions = [t for e in self._buffer for t in e.transitions]

+        # create a new Episode for the next transitions to be placed into
+        self._buffer.append(Episode(n_step=self.n_step))
+        self._length += 1
+
         self.reader_writer_lock.release_writing()

     def get_shuffled_training_data_generator(self, size: int) -> List[Transition]:
         """
         Get an generator for iterating through the shuffled replay buffer, for processing the data in epochs.
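These memory changes exist to support epoch-style training over a fixed dataset. A hedged sketch of the intended flow, assuming `memory` is an `EpisodicExperienceReplay` already filled from a dataset and `batch_size` comes from the preset:

```python
# Shuffle once per epoch (this drops the trailing empty episode), then iterate in fixed-size batches.
memory.shuffle_episodes()
for transition_batch in memory.get_shuffled_training_data_generator(batch_size):
    pass  # hand each batch of transitions to the agent's training step here
```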
@@ -384,10 +399,10 @@
         granularity, size = self.max_size
         if granularity == MemoryGranularity.Transitions:
             while size != 0 and self.num_transitions() > size:
-                self._remove_episode(0)
+                self.remove_first_episode(lock=False)
         elif granularity == MemoryGranularity.Episodes:
             while self.length() > size:
-                self._remove_episode(0)
+                self.remove_first_episode(lock=False)

     def _update_episode(self, episode: Episode) -> None:
         episode.update_transitions_rewards_and_bootstrap_data()
@@ -504,31 +519,53 @@

     def _remove_episode(self, episode_index: int) -> None:
         """
-        Remove the episode in the given index (even if it is not complete yet)
-        :param episode_index: the index of the episode to remove
+        Remove either the first or the last index
+        :param episode_index: the index of the episode to remove (either 0 or -1)
         :return: None
         """
         self.assert_not_frozen()
+        assert episode_index == 0 or episode_index == -1, "_remove_episode only supports removing the first or the last " \
+                                                          "episode"

-        if len(self._buffer) > episode_index:
+        if len(self._buffer) > 0:
             episode_length = self._buffer[episode_index].length()
             self._length -= 1
             self._num_transitions -= episode_length
             self._num_transitions_in_complete_episodes -= episode_length
-            del self.transitions[:episode_length]
+            if episode_index == 0:
+                del self.transitions[:episode_length]
+            else:  # episode_index = -1
+                del self.transitions[-episode_length:]
             del self._buffer[episode_index]

-    def remove_episode(self, episode_index: int) -> None:
+    def remove_first_episode(self, lock: bool = True) -> None:
         """
-        Remove the episode in the given index (even if it is not complete yet)
-        :param episode_index: the index of the episode to remove
+        Remove the first episode (even if it is not complete yet)
+        :param lock: if true, will lock the readers writers lock. this can cause a deadlock if an inheriting class
+                     locks and then calls store with lock = True
         :return: None
         """
-        self.reader_writer_lock.lock_writing_and_reading()
+        if lock:
+            self.reader_writer_lock.lock_writing_and_reading()

-        self._remove_episode(episode_index)
+        self._remove_episode(0)

-        self.reader_writer_lock.release_writing_and_reading()
+        if lock:
+            self.reader_writer_lock.release_writing_and_reading()
+
+    def remove_last_episode(self, lock: bool = True) -> None:
+        """
+        Remove the last episode (even if it is not complete yet)
+        :param lock: if true, will lock the readers writers lock. this can cause a deadlock if an inheriting class
+                     locks and then calls store with lock = True
+        :return: None
+        """
+        if lock:
+            self.reader_writer_lock.lock_writing_and_reading()
+
+        self._remove_episode(-1)
+
+        if lock:
+            self.reader_writer_lock.release_writing_and_reading()

     # for API compatibility
     def get(self, episode_index: int, lock: bool = True) -> Union[None, Episode]:
@@ -555,15 +592,6 @@

         return episode

-    # for API compatibility
-    def remove(self, episode_index: int):
-        """
-        Remove the episode in the given index (even if it is not complete yet)
-        :param episode_index: the index of the episode to remove
-        :return: None
-        """
-        self.remove_episode(episode_index)
-
     def clean(self) -> None:
         """
         Clean the memory by removing all the episodes
@@ -629,7 +657,7 @@

                 transitions.append(
                     Transition(state={'observation': state},
-                               action=current_transition['action'], reward=current_transition['reward'],
+                               action=int(current_transition['action']), reward=current_transition['reward'],
                                next_state={'observation': next_state}, game_over=False,
                                info={'all_action_probabilities':
                                      ast.literal_eval(current_transition['all_action_probabilities'])}),
@@ -698,7 +726,40 @@
|
||||
<span class="n">episode_num</span><span class="p">,</span> <span class="n">episode</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">get_episode_for_transition</span><span class="p">(</span><span class="n">transition</span><span class="p">)</span>
|
||||
<span class="bp">self</span><span class="o">.</span><span class="n">last_training_set_episode_id</span> <span class="o">=</span> <span class="n">episode_num</span>
|
||||
<span class="bp">self</span><span class="o">.</span><span class="n">last_training_set_transition_id</span> <span class="o">=</span> \
|
||||
<span class="nb">len</span><span class="p">([</span><span class="n">t</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">get_all_complete_episodes_from_to</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">last_training_set_episode_id</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span> <span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="n">e</span><span class="p">])</span></div>
|
||||
<span class="nb">len</span><span class="p">([</span><span class="n">t</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">get_all_complete_episodes_from_to</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">last_training_set_episode_id</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span> <span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="n">e</span><span class="p">])</span>
|
||||
|
||||
<span class="k">def</span> <span class="nf">save</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">file_path</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-></span> <span class="kc">None</span><span class="p">:</span>
|
||||
<span class="sd">"""</span>
|
||||
<span class="sd"> Save the replay buffer contents to a pickle file</span>
|
||||
<span class="sd"> :param file_path: the path to the file that will be used to store the pickled transitions</span>
|
||||
<span class="sd"> """</span>
|
||||
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">file_path</span><span class="p">,</span> <span class="s1">'wb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">file</span><span class="p">:</span>
|
||||
<span class="n">pickle</span><span class="o">.</span><span class="n">dump</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">get_all_complete_episodes</span><span class="p">(),</span> <span class="n">file</span><span class="p">)</span>
|
||||
|
||||
<span class="k">def</span> <span class="nf">load_pickled</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">file_path</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-></span> <span class="kc">None</span><span class="p">:</span>
|
||||
<span class="sd">"""</span>
|
||||
<span class="sd"> Restore the replay buffer contents from a pickle file.</span>
|
||||
<span class="sd"> The pickle file is assumed to include a list of transitions.</span>
|
||||
<span class="sd"> :param file_path: The path to a pickle file to restore</span>
|
||||
<span class="sd"> """</span>
|
||||
<span class="bp">self</span><span class="o">.</span><span class="n">assert_not_frozen</span><span class="p">()</span>
|
||||
|
||||
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">file_path</span><span class="p">,</span> <span class="s1">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">file</span><span class="p">:</span>
|
||||
<span class="n">episodes</span> <span class="o">=</span> <span class="n">pickle</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">file</span><span class="p">)</span>
|
||||
<span class="n">num_transitions</span> <span class="o">=</span> <span class="nb">sum</span><span class="p">([</span><span class="nb">len</span><span class="p">(</span><span class="n">e</span><span class="o">.</span><span class="n">transitions</span><span class="p">)</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="n">episodes</span><span class="p">])</span>
|
||||
<span class="k">if</span> <span class="n">num_transitions</span> <span class="o">></span> <span class="bp">self</span><span class="o">.</span><span class="n">max_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]:</span>
|
||||
<span class="n">screen</span><span class="o">.</span><span class="n">warning</span><span class="p">(</span><span class="s2">"Warning! The number of transition to load into the replay buffer (</span><span class="si">{}</span><span class="s2">) is "</span>
|
||||
<span class="s2">"bigger than the max size of the replay buffer (</span><span class="si">{}</span><span class="s2">). The excessive transitions will "</span>
|
||||
<span class="s2">"not be stored."</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">num_transitions</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">max_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]))</span>
|
||||
|
||||
<span class="n">progress_bar</span> <span class="o">=</span> <span class="n">ProgressBar</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">episodes</span><span class="p">))</span>
|
||||
<span class="k">for</span> <span class="n">episode_idx</span><span class="p">,</span> <span class="n">episode</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">episodes</span><span class="p">):</span>
|
||||
<span class="bp">self</span><span class="o">.</span><span class="n">store_episode</span><span class="p">(</span><span class="n">episode</span><span class="p">)</span>
|
||||
|
||||
<span class="c1"># print progress</span>
|
||||
<span class="n">progress_bar</span><span class="o">.</span><span class="n">update</span><span class="p">(</span><span class="n">episode_idx</span><span class="p">)</span>
|
||||
|
||||
<span class="n">progress_bar</span><span class="o">.</span><span class="n">close</span><span class="p">()</span></div>
|
||||
</pre></div>
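The `save` / `load_pickled` pair above is how a collected dataset of episodes is persisted and later replayed for Batch RL training. A minimal round-trip sketch (the `EpisodicExperienceReplay` import path and default constructor are assumptions, not shown in this listing):

# Sketch only: persist a filled episodic replay buffer and reload it offline.
# Import path and constructor defaults are assumed; adapt to your Coach version.
from rl_coach.memories.episodic.episodic_experience_replay import EpisodicExperienceReplay

memory = EpisodicExperienceReplay()
# ... store episodes collected from a deployed policy into `memory` ...
memory.save('/tmp/cartpole_experience.p')                   # pickles all complete episodes

offline_memory = EpisodicExperienceReplay()
offline_memory.load_pickled('/tmp/cartpole_experience.p')   # warns if the data exceeds max_size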
|
||||
|
||||
</div>
|
||||
|
||||
@@ -381,15 +381,6 @@
|
||||
<span class="sd"> """</span>
|
||||
<span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">get_transition</span><span class="p">(</span><span class="n">transition_index</span><span class="p">,</span> <span class="n">lock</span><span class="p">)</span>
|
||||
|
||||
<span class="c1"># for API compatibility</span>
|
||||
<span class="k">def</span> <span class="nf">remove</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">transition_index</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span> <span class="n">lock</span><span class="p">:</span> <span class="nb">bool</span><span class="o">=</span><span class="kc">True</span><span class="p">):</span>
|
||||
<span class="sd">"""</span>
|
||||
<span class="sd"> Remove the transition in the given index</span>
|
||||
<span class="sd"> :param transition_index: the index of the transition to remove</span>
|
||||
<span class="sd"> :return: None</span>
|
||||
<span class="sd"> """</span>
|
||||
<span class="bp">self</span><span class="o">.</span><span class="n">remove_transition</span><span class="p">(</span><span class="n">transition_index</span><span class="p">,</span> <span class="n">lock</span><span class="p">)</span>
|
||||
|
||||
<span class="k">def</span> <span class="nf">clean</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">lock</span><span class="p">:</span> <span class="nb">bool</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="o">-></span> <span class="kc">None</span><span class="p">:</span>
|
||||
<span class="sd">"""</span>
|
||||
<span class="sd"> Clean the memory by removing all the episodes</span>
|
||||
|
||||
18
docs/_sources/features/batch_rl.rst.txt
Normal file
@@ -0,0 +1,18 @@
|
||||
Batch Reinforcement Learning
|
||||
============================
|
||||
|
||||
Coach supports Batch Reinforcement Learning, where learning is based solely on a (fixed) batch of data.
|
||||
In Batch RL, we are given a dataset of experience, which was collected using some (one or more) deployed policies, and we would
|
||||
like to use it to learn a better policy than what was used to collect the dataset.
|
||||
There is no simulator to interact with, and so we cannot collect any new data, meaning we often cannot explore the MDP any further.
|
||||
To make things even harder, we would also like to use the dataset in order to evaluate the newly learned policy
|
||||
(using off-policy evaluation), since we do not have a simulator which we can use to evaluate the policy on.
|
||||
Batch RL is also often beneficial in cases where we just want to separate the inference (data collection) from the
|
||||
training process of a new policy. This is often the case where we have a system on which we could quite easily deploy a policy
|
||||
and collect experience data, but cannot easily use that system's setup to train a new policy online (as is often the
|
||||
case with more standard RL algorithms).
|
||||
|
||||
Coach supports (almost) all of the integrated off-policy algorithms with Batch RL.
|
||||
|
||||
A lot more details and example usage can be found in the
|
||||
`tutorial <https://github.com/NervanaSystems/coach/blob/master/tutorials/4.%20Batch%20Reinforcement%20Learning.ipynb>`_.
|
||||
@@ -8,3 +8,4 @@ Features
|
||||
algorithms
|
||||
environments
|
||||
benchmarks
|
||||
batch_rl
|
||||
@@ -544,26 +544,34 @@ multi-process distributed mode. The network wrapper contains functionality for m
|
||||
between them.</p>
|
||||
<dl class="method">
|
||||
<dt id="rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_and_sync_networks">
|
||||
<code class="sig-name descname">apply_gradients_and_sync_networks</code><span class="sig-paren">(</span><em class="sig-param">reset_gradients=True</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.apply_gradients_and_sync_networks"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_and_sync_networks" title="Permalink to this definition">¶</a></dt>
|
||||
<code class="sig-name descname">apply_gradients_and_sync_networks</code><span class="sig-paren">(</span><em class="sig-param">reset_gradients=True</em>, <em class="sig-param">additional_inputs=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.apply_gradients_and_sync_networks"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_and_sync_networks" title="Permalink to this definition">¶</a></dt>
|
||||
<dd><p>Applies the gradients accumulated in the online network to the global network or to itself and syncs the
|
||||
networks if necessary</p>
|
||||
<dl class="field-list simple">
|
||||
<dt class="field-odd">Parameters</dt>
|
||||
<dd class="field-odd"><p><strong>reset_gradients</strong> – If set to True, the accumulated gradients wont be reset to 0 after applying them to
|
||||
<dd class="field-odd"><ul class="simple">
|
||||
<li><p><strong>reset_gradients</strong> – If set to True, the accumulated gradients wont be reset to 0 after applying them to
|
||||
the network. this is useful when the accumulated gradients are overwritten instead
|
||||
if accumulated by the accumulate_gradients function. this allows reducing time
|
||||
complexity for this function by around 10%</p>
|
||||
complexity for this function by around 10%</p></li>
|
||||
<li><p><strong>additional_inputs</strong> – optional additional inputs required for when applying the gradients (e.g. batchnorm’s
|
||||
update ops also requires the inputs)</p></li>
|
||||
</ul>
|
||||
</dd>
|
||||
</dl>
|
||||
</dd></dl>
|
||||
|
||||
<dl class="method">
|
||||
<dt id="rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_to_global_network">
|
||||
<code class="sig-name descname">apply_gradients_to_global_network</code><span class="sig-paren">(</span><em class="sig-param">gradients=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.apply_gradients_to_global_network"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_to_global_network" title="Permalink to this definition">¶</a></dt>
|
||||
<code class="sig-name descname">apply_gradients_to_global_network</code><span class="sig-paren">(</span><em class="sig-param">gradients=None</em>, <em class="sig-param">additional_inputs=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.apply_gradients_to_global_network"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_to_global_network" title="Permalink to this definition">¶</a></dt>
|
||||
<dd><p>Apply gradients from the online network on the global network</p>
|
||||
<dl class="field-list simple">
|
||||
<dt class="field-odd">Parameters</dt>
|
||||
<dd class="field-odd"><p><strong>gradients</strong> – optional gradients that will be used instead of teh accumulated gradients</p>
|
||||
<dd class="field-odd"><ul class="simple">
|
||||
<li><p><strong>gradients</strong> – optional gradients that will be used instead of teh accumulated gradients</p></li>
|
||||
<li><p><strong>additional_inputs</strong> – optional additional inputs required for when applying the gradients (e.g. batchnorm’s
|
||||
update ops also requires the inputs)</p></li>
|
||||
</ul>
|
||||
</dd>
|
||||
<dt class="field-even">Returns</dt>
|
||||
<dd class="field-even"><p></p>
|
||||
@@ -573,8 +581,13 @@ complexity for this function by around 10%</p>
|
||||
|
||||
<dl class="method">
|
||||
<dt id="rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_to_online_network">
|
||||
<code class="sig-name descname">apply_gradients_to_online_network</code><span class="sig-paren">(</span><em class="sig-param">gradients=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.apply_gradients_to_online_network"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_to_online_network" title="Permalink to this definition">¶</a></dt>
|
||||
<dd><p>Apply gradients from the online network on itself</p>
|
||||
<code class="sig-name descname">apply_gradients_to_online_network</code><span class="sig-paren">(</span><em class="sig-param">gradients=None</em>, <em class="sig-param">additional_inputs=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.apply_gradients_to_online_network"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.apply_gradients_to_online_network" title="Permalink to this definition">¶</a></dt>
|
||||
<dd><p>Apply gradients from the online network on itself
|
||||
:param gradients: optional gradients that will be used instead of the accumulated gradients
:param additional_inputs: optional additional inputs required when applying the gradients (e.g. batchnorm’s</p>
<blockquote>
<div><p>update ops also require the inputs)</p>
|
||||
</div></blockquote>
|
||||
<dl class="field-list simple">
|
||||
<dt class="field-odd">Returns</dt>
|
||||
<dd class="field-odd"><p></p>
|
||||
@@ -650,7 +663,7 @@ target_network or global_network) and the second element is the inputs</p>
|
||||
|
||||
<dl class="method">
|
||||
<dt id="rl_coach.architectures.network_wrapper.NetworkWrapper.train_and_sync_networks">
|
||||
<code class="sig-name descname">train_and_sync_networks</code><span class="sig-paren">(</span><em class="sig-param">inputs</em>, <em class="sig-param">targets</em>, <em class="sig-param">additional_fetches=[]</em>, <em class="sig-param">importance_weights=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.train_and_sync_networks"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.train_and_sync_networks" title="Permalink to this definition">¶</a></dt>
|
||||
<code class="sig-name descname">train_and_sync_networks</code><span class="sig-paren">(</span><em class="sig-param">inputs</em>, <em class="sig-param">targets</em>, <em class="sig-param">additional_fetches=[]</em>, <em class="sig-param">importance_weights=None</em>, <em class="sig-param">use_inputs_for_apply_gradients=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../_modules/rl_coach/architectures/network_wrapper.html#NetworkWrapper.train_and_sync_networks"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#rl_coach.architectures.network_wrapper.NetworkWrapper.train_and_sync_networks" title="Permalink to this definition">¶</a></dt>
|
||||
<dd><p>A generic training function that enables multi-threaded training using a global network if necessary.</p>
|
||||
<dl class="field-list simple">
|
||||
<dt class="field-odd">Parameters</dt>
|
||||
@@ -660,6 +673,8 @@ target_network or global_network) and the second element is the inputs</p>
|
||||
<li><p><strong>additional_fetches</strong> – Any additional tensor the user wants to fetch</p></li>
|
||||
<li><p><strong>importance_weights</strong> – A coefficient for each sample in the batch, which will be used to rescale the loss
|
||||
error of this sample. If it is not given, the samples’ losses won’t be scaled</p></li>
|
||||
<li><p><strong>use_inputs_for_apply_gradients</strong> – Also pass the inputs when applying the gradients
(e.g. for incorporating batchnorm update ops)</p></li>
|
||||
</ul>
|
||||
</dd>
|
||||
<dt class="field-even">Returns</dt>
|
||||
|
||||
261
docs/features/batch_rl.html
Normal file
@@ -0,0 +1,261 @@
|
||||
|
||||
|
||||
<!DOCTYPE html>
|
||||
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
|
||||
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
|
||||
<title>Batch Reinforcement Learning — Reinforcement Learning Coach 0.12.0 documentation</title>
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<script type="text/javascript" src="../_static/js/modernizr.min.js"></script>
|
||||
|
||||
|
||||
<script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
|
||||
<script type="text/javascript" src="../_static/jquery.js"></script>
|
||||
<script type="text/javascript" src="../_static/underscore.js"></script>
|
||||
<script type="text/javascript" src="../_static/doctools.js"></script>
|
||||
<script type="text/javascript" src="../_static/language_data.js"></script>
|
||||
<script async="async" type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
|
||||
|
||||
<script type="text/javascript" src="../_static/js/theme.js"></script>
|
||||
|
||||
|
||||
|
||||
|
||||
<link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
|
||||
<link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
|
||||
<link rel="stylesheet" href="../_static/css/custom.css" type="text/css" />
|
||||
<link rel="index" title="Index" href="../genindex.html" />
|
||||
<link rel="search" title="Search" href="../search.html" />
|
||||
<link rel="next" title="Selecting an Algorithm" href="../selecting_an_algorithm.html" />
|
||||
<link rel="prev" title="Benchmarks" href="benchmarks.html" />
|
||||
<link href="../_static/css/custom.css" rel="stylesheet" type="text/css">
|
||||
|
||||
</head>
|
||||
|
||||
<body class="wy-body-for-nav">
|
||||
|
||||
|
||||
<div class="wy-grid-for-nav">
|
||||
|
||||
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
|
||||
<div class="wy-side-scroll">
|
||||
<div class="wy-side-nav-search" >
|
||||
|
||||
|
||||
|
||||
<a href="../index.html" class="icon icon-home"> Reinforcement Learning Coach
|
||||
|
||||
|
||||
|
||||
|
||||
<img src="../_static/dark_logo.png" class="logo" alt="Logo"/>
|
||||
|
||||
</a>
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<div role="search">
|
||||
<form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
|
||||
<input type="text" name="q" placeholder="Search docs" />
|
||||
<input type="hidden" name="check_keywords" value="yes" />
|
||||
<input type="hidden" name="area" value="default" />
|
||||
</form>
|
||||
</div>
|
||||
|
||||
|
||||
</div>
|
||||
|
||||
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<p class="caption"><span class="caption-text">Intro</span></p>
|
||||
<ul class="current">
|
||||
<li class="toctree-l1"><a class="reference internal" href="../usage.html">Usage</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../dist_usage.html">Usage - Distributed Coach</a></li>
|
||||
<li class="toctree-l1 current"><a class="reference internal" href="index.html">Features</a><ul class="current">
|
||||
<li class="toctree-l2"><a class="reference internal" href="algorithms.html">Algorithms</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="environments.html">Environments</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="benchmarks.html">Benchmarks</a></li>
|
||||
<li class="toctree-l2 current"><a class="current reference internal" href="#">Batch Reinforcement Learning</a></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../selecting_an_algorithm.html">Selecting an Algorithm</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../dashboard.html">Coach Dashboard</a></li>
|
||||
</ul>
|
||||
<p class="caption"><span class="caption-text">Design</span></p>
|
||||
<ul>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../design/control_flow.html">Control Flow</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../design/network.html">Network Design</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../design/horizontal_scaling.html">Distributed Coach - Horizontal Scale-Out</a></li>
|
||||
</ul>
|
||||
<p class="caption"><span class="caption-text">Contributing</span></p>
|
||||
<ul>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../contributing/add_agent.html">Adding a New Agent</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../contributing/add_env.html">Adding a New Environment</a></li>
|
||||
</ul>
|
||||
<p class="caption"><span class="caption-text">Components</span></p>
|
||||
<ul>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/agents/index.html">Agents</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/architectures/index.html">Architectures</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/data_stores/index.html">Data Stores</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/environments/index.html">Environments</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/exploration_policies/index.html">Exploration Policies</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/filters/index.html">Filters</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/memories/index.html">Memories</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/memory_backends/index.html">Memory Backends</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/orchestrators/index.html">Orchestrators</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/core_types.html">Core Types</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/spaces.html">Spaces</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../components/additional_parameters.html">Additional Parameters</a></li>
|
||||
</ul>
|
||||
|
||||
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</nav>
|
||||
|
||||
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
|
||||
|
||||
|
||||
<nav class="wy-nav-top" aria-label="top navigation">
|
||||
|
||||
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
|
||||
<a href="../index.html">Reinforcement Learning Coach</a>
|
||||
|
||||
</nav>
|
||||
|
||||
|
||||
<div class="wy-nav-content">
|
||||
|
||||
<div class="rst-content">
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<div role="navigation" aria-label="breadcrumbs navigation">
|
||||
|
||||
<ul class="wy-breadcrumbs">
|
||||
|
||||
<li><a href="../index.html">Docs</a> »</li>
|
||||
|
||||
<li><a href="index.html">Features</a> »</li>
|
||||
|
||||
<li>Batch Reinforcement Learning</li>
|
||||
|
||||
|
||||
<li class="wy-breadcrumbs-aside">
|
||||
|
||||
|
||||
<a href="../_sources/features/batch_rl.rst.txt" rel="nofollow"> View page source</a>
|
||||
|
||||
|
||||
</li>
|
||||
|
||||
</ul>
|
||||
|
||||
|
||||
<hr/>
|
||||
</div>
|
||||
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
|
||||
<div itemprop="articleBody">
|
||||
|
||||
<div class="section" id="batch-reinforcement-learning">
|
||||
<h1>Batch Reinforcement Learning<a class="headerlink" href="#batch-reinforcement-learning" title="Permalink to this headline">¶</a></h1>
|
||||
<p>Coach supports Batch Reinforcement Learning, where learning is based solely on a (fixed) batch of data.
|
||||
In Batch RL, we are given a dataset of experience, which was collected using some (one or more) deployed policies, and we would
|
||||
like to use it to learn a better policy than what was used to collect the dataset.
|
||||
There is no simulator to interact with, and so we cannot collect any new data, meaning we often cannot explore the MDP any further.
|
||||
To make things even harder, we would also like to use the dataset in order to evaluate the newly learned policy
|
||||
(using off-policy evaluation), since we do not have a simulator which we can use to evaluate the policy on.
|
||||
Batch RL is also often beneficial in cases where we just want to separate the inference (data collection) from the
|
||||
training process of a new policy. This is often the case where we have a system on which we could quite easily deploy a policy
|
||||
and collect experience data, but cannot easily use that system’s setup to train a new policy online (as is often the
|
||||
case with more standard RL algorithms).</p>
|
||||
<p>Coach supports (almost) all of the integrated off-policy algorithms with Batch RL.</p>
|
||||
<p>A lot more details and example usage can be found in the
|
||||
<a class="reference external" href="https://github.com/NervanaSystems/coach/blob/master/tutorials/4.%20Batch%20Reinforcement%20Learning.ipynb">tutorial</a>.</p>
|
||||
</div>
|
||||
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
<footer>
|
||||
|
||||
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
|
||||
|
||||
<a href="../selecting_an_algorithm.html" class="btn btn-neutral float-right" title="Selecting an Algorithm" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
|
||||
|
||||
|
||||
<a href="benchmarks.html" class="btn btn-neutral float-left" title="Benchmarks" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
|
||||
|
||||
</div>
|
||||
|
||||
|
||||
<hr/>
|
||||
|
||||
<div role="contentinfo">
|
||||
<p>
|
||||
© Copyright 2018-2019, Intel AI Lab
|
||||
|
||||
</p>
|
||||
</div>
|
||||
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
|
||||
|
||||
</footer>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</section>
|
||||
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
<script type="text/javascript">
|
||||
jQuery(function () {
|
||||
SphinxRtdTheme.Navigation.enable(true);
|
||||
});
|
||||
</script>
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
</body>
|
||||
</html>
|
||||
@@ -95,6 +95,7 @@
|
||||
<li class="toctree-l2"><a class="reference internal" href="algorithms.html">Algorithms</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="environments.html">Environments</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="benchmarks.html">Benchmarks</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="batch_rl.html">Batch Reinforcement Learning</a></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="../selecting_an_algorithm.html">Selecting an Algorithm</a></li>
|
||||
@@ -197,6 +198,7 @@
|
||||
<li class="toctree-l1"><a class="reference internal" href="algorithms.html">Algorithms</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="environments.html">Environments</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="benchmarks.html">Benchmarks</a></li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="batch_rl.html">Batch Reinforcement Learning</a></li>
|
||||
</ul>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@@ -210,6 +210,7 @@ Coach collects statistics from the training process and supports advanced visual
|
||||
<li class="toctree-l2"><a class="reference internal" href="features/algorithms.html">Algorithms</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="features/environments.html">Environments</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="features/benchmarks.html">Benchmarks</a></li>
|
||||
<li class="toctree-l2"><a class="reference internal" href="features/batch_rl.html">Batch Reinforcement Learning</a></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li class="toctree-l1"><a class="reference internal" href="selecting_an_algorithm.html">Selecting an Algorithm</a></li>
|
||||
|
||||
BIN
docs/objects.inv
Binary file not shown.
File diff suppressed because one or more lines are too long
18
docs_raw/source/features/batch_rl.rst
Normal file
@@ -0,0 +1,18 @@
|
||||
Batch Reinforcement Learning
============================

Coach supports Batch Reinforcement Learning, where learning is based solely on a (fixed) batch of data.
In Batch RL, we are given a dataset of experience, which was collected using some (one or more) deployed policies, and we would
like to use it to learn a better policy than what was used to collect the dataset.
There is no simulator to interact with, and so we cannot collect any new data, meaning we often cannot explore the MDP any further.
To make things even harder, we would also like to use the dataset in order to evaluate the newly learned policy
(using off-policy evaluation), since we do not have a simulator which we can use to evaluate the policy on.
Batch RL is also often beneficial in cases where we just want to separate the inference (data collection) from the
training process of a new policy. This is often the case where we have a system on which we could quite easily deploy a policy
and collect experience data, but cannot easily use that system's setup to train a new policy online (as is often the
case with more standard RL algorithms).

Coach supports (almost) all of the integrated off-policy algorithms with Batch RL.

A lot more details and example usage can be found in the
`tutorial <https://github.com/NervanaSystems/coach/blob/master/tutorials/4.%20Batch%20Reinforcement%20Learning.ipynb>`_.
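A Batch RL preset can then be run programmatically like any other Coach preset. A hedged sketch (it assumes the preset module exposes a module-level ``graph_manager``, as Coach presets conventionally do; the preset name and experiment path are illustrative):

# Sketch: train an agent purely from the dataset wired into a Batch RL preset.
from rl_coach.base_parameters import TaskParameters
from rl_coach.presets.CartPole_DDQN_BatchRL import graph_manager   # example preset

graph_manager.create_graph(TaskParameters(experiment_path='./experiments/cartpole_batch_rl'))
graph_manager.improve()   # trains (and periodically evaluates) from the fixed dataset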
|
||||
@@ -8,3 +8,4 @@ Features
|
||||
algorithms
|
||||
environments
|
||||
benchmarks
|
||||
batch_rl
|
||||
@@ -50,43 +50,51 @@ class PPOHeadParameters(HeadParameters):
|
||||
class VHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='v_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None, initializer='normalized_columns'):
|
||||
loss_weight: float = 1.0, dense_layer=None, initializer='normalized_columns',
|
||||
output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="VHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.initializer = initializer
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class DDPGVHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='ddpg_v_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None, initializer='normalized_columns'):
|
||||
loss_weight: float = 1.0, dense_layer=None, initializer='normalized_columns',
|
||||
output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="DDPGVHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.initializer = initializer
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class CategoricalQHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='categorical_q_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None):
|
||||
loss_weight: float = 1.0, dense_layer=None,
|
||||
output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="CategoricalQHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class RegressionHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='q_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None, scheme=None):
|
||||
loss_weight: float = 1.0, dense_layer=None, scheme=None,
|
||||
output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="RegressionHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class DDPGActorHeadParameters(HeadParameters):
|
||||
@@ -153,21 +161,23 @@ class PolicyHeadParameters(HeadParameters):
|
||||
class PPOVHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='ppo_v_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None):
|
||||
loss_weight: float = 1.0, dense_layer=None, output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="PPOVHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class QHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='q_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None):
|
||||
loss_weight: float = 1.0, dense_layer=None, output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="QHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class ClassificationHeadParameters(HeadParameters):
|
||||
@@ -183,11 +193,12 @@ class ClassificationHeadParameters(HeadParameters):
|
||||
class QuantileRegressionQHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='quantile_regression_q_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None):
|
||||
loss_weight: float = 1.0, dense_layer=None, output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="QuantileRegressionQHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class RainbowQHeadParameters(HeadParameters):
|
||||
@@ -218,18 +229,21 @@ class SACPolicyHeadParameters(HeadParameters):
|
||||
|
||||
class SACQHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='sac_q_head_params', dense_layer=None,
|
||||
layers_sizes: tuple = (256, 256)):
|
||||
layers_sizes: tuple = (256, 256), output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name='SACQHead', activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer)
|
||||
self.network_layers_sizes = layers_sizes
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
|
||||
class TD3VHeadParameters(HeadParameters):
|
||||
def __init__(self, activation_function: str ='relu', name: str='td3_v_head_params',
|
||||
num_output_head_copies: int = 1, rescale_gradient_from_head_by_factor: float = 1.0,
|
||||
loss_weight: float = 1.0, dense_layer=None, initializer='xavier'):
|
||||
loss_weight: float = 1.0, dense_layer=None, initializer='xavier',
|
||||
output_bias_initializer=None):
|
||||
super().__init__(parameterized_class_name="TD3VHead", activation_function=activation_function, name=name,
|
||||
dense_layer=dense_layer, num_output_head_copies=num_output_head_copies,
|
||||
rescale_gradient_from_head_by_factor=rescale_gradient_from_head_by_factor,
|
||||
loss_weight=loss_weight)
|
||||
self.initializer = initializer
|
||||
self.output_bias_initializer = output_bias_initializer
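Each of these head parameter classes now forwards an ``output_bias_initializer`` to the head's output layer. A hypothetical preset-side usage (the import path and the initializer value are assumptions, not taken from this diff):

# Hypothetical: start the Q head's output bias near the dataset's typical return.
import tensorflow as tf
from rl_coach.architectures.head_parameters import QHeadParameters   # path assumed

q_head_params = QHeadParameters(
    output_bias_initializer=tf.constant_initializer(-0.5))   # any TF initializer works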
|
||||
|
||||
@@ -26,9 +26,9 @@ from rl_coach.spaces import SpacesDefinition
|
||||
class CategoricalQHead(QHead):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str ='relu',
|
||||
dense_layer=Dense):
|
||||
dense_layer=Dense, output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
dense_layer=dense_layer, output_bias_initializer=output_bias_initializer)
|
||||
self.name = 'categorical_dqn_head'
|
||||
self.num_actions = len(self.spaces.action.actions)
|
||||
self.num_atoms = agent_parameters.algorithm.atoms
|
||||
@@ -37,7 +37,8 @@ class CategoricalQHead(QHead):
|
||||
self.loss_type = []
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
values_distribution = self.dense_layer(self.num_actions * self.num_atoms)(input_layer, name='output')
|
||||
values_distribution = self.dense_layer(self.num_actions * self.num_atoms)\
|
||||
(input_layer, name='output', bias_initializer=self.output_bias_initializer)
|
||||
values_distribution = tf.reshape(values_distribution, (tf.shape(values_distribution)[0], self.num_actions,
|
||||
self.num_atoms))
|
||||
# softmax on atoms dimension
|
||||
|
||||
@@ -27,7 +27,7 @@ from rl_coach.utils import force_list
|
||||
class RegressionHead(Head):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense, scheme=[Dense(256), Dense(256)]):
|
||||
dense_layer=Dense, scheme=[Dense(256), Dense(256)], output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
self.name = 'regression_head'
|
||||
@@ -42,6 +42,7 @@ class RegressionHead(Head):
|
||||
self.loss_type = tf.losses.huber_loss
|
||||
else:
|
||||
self.loss_type = tf.losses.mean_squared_error
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
self.layers.append(input_layer)
|
||||
@@ -50,7 +51,8 @@ class RegressionHead(Head):
|
||||
layer_params(input_layer=self.layers[-1], name='{}_{}'.format(layer_params.__class__.__name__, idx))
|
||||
))
|
||||
|
||||
self.layers.append(self.dense_layer(self.num_actions)(self.layers[-1], name='output'))
|
||||
self.layers.append(self.dense_layer(self.num_actions)(self.layers[-1], name='output',
|
||||
bias_initializer=self.output_bias_initializer))
|
||||
self.output = self.layers[-1]
|
||||
|
||||
def __str__(self):
|
||||
|
||||
@@ -24,9 +24,10 @@ from rl_coach.spaces import SpacesDefinition
|
||||
class DDPGVHead(VHead):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense, initializer='normalized_columns'):
|
||||
dense_layer=Dense, initializer='normalized_columns', output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer, initializer=initializer)
|
||||
dense_layer=dense_layer, initializer=initializer,
|
||||
output_bias_initializer=output_bias_initializer)
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
super()._build_module(input_layer)
|
||||
|
||||
@@ -26,18 +26,20 @@ from rl_coach.spaces import SpacesDefinition
|
||||
class PPOVHead(Head):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense):
|
||||
dense_layer=Dense, output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
self.name = 'ppo_v_head'
|
||||
self.clip_likelihood_ratio_using_epsilon = agent_parameters.algorithm.clip_likelihood_ratio_using_epsilon
|
||||
self.return_type = ActionProbabilities
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
self.old_policy_value = tf.placeholder(tf.float32, [None], "old_policy_values")
|
||||
self.input = [self.old_policy_value]
|
||||
self.output = self.dense_layer(1)(input_layer, name='output',
|
||||
kernel_initializer=normalized_columns_initializer(1.0))
|
||||
kernel_initializer=normalized_columns_initializer(1.0),
|
||||
bias_initializer=self.output_bias_initializer)
|
||||
self.target = self.total_return = tf.placeholder(tf.float32, [None], name="total_return")
|
||||
|
||||
value_loss_1 = tf.square(self.output - self.target)
|
||||
|
||||
@@ -26,7 +26,7 @@ from rl_coach.spaces import SpacesDefinition, BoxActionSpace, DiscreteActionSpac
|
||||
class QHead(Head):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense):
|
||||
dense_layer=Dense, output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
self.name = 'q_values_head'
|
||||
@@ -46,9 +46,12 @@ class QHead(Head):
|
||||
else:
|
||||
self.loss_type = tf.losses.mean_squared_error
|
||||
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
# Standard Q Network
|
||||
self.q_values = self.output = self.dense_layer(self.num_actions)(input_layer, name='output')
|
||||
self.q_values = self.output = self.dense_layer(self.num_actions)\
|
||||
(input_layer, name='output', bias_initializer=self.output_bias_initializer)
|
||||
|
||||
# used in batch-rl to estimate a probability distribution over actions
|
||||
self.softmax = self.add_softmax_with_temperature()
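The softmax over the Q-values is what gives Batch RL its action distribution for off-policy evaluation. Conceptually it is just a temperature-scaled softmax; an illustrative NumPy sketch (not the actual add_softmax_with_temperature implementation):

# Illustrative only, not the head's TensorFlow implementation.
import numpy as np

def softmax_with_temperature(q_values, temperature=1.0):
    z = np.asarray(q_values, dtype=np.float64) / temperature
    z -= z.max()                       # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()         # probability distribution over actions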
|
||||
|
||||
@@ -25,9 +25,9 @@ from rl_coach.spaces import SpacesDefinition
|
||||
class QuantileRegressionQHead(QHead):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense):
|
||||
dense_layer=Dense, output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
dense_layer=dense_layer, output_bias_initializer=output_bias_initializer)
|
||||
self.name = 'quantile_regression_dqn_head'
|
||||
self.num_actions = len(self.spaces.action.actions)
|
||||
self.num_atoms = agent_parameters.algorithm.atoms # we use atom / quantile interchangeably
|
||||
@@ -43,7 +43,8 @@ class QuantileRegressionQHead(QHead):
|
||||
self.input = [self.actions, self.quantile_midpoints]
|
||||
|
||||
# the output of the head is the N unordered quantile locations {theta_1, ..., theta_N}
|
||||
quantiles_locations = self.dense_layer(self.num_actions * self.num_atoms)(input_layer, name='output')
|
||||
quantiles_locations = self.dense_layer(self.num_actions * self.num_atoms)\
|
||||
(input_layer, name='output', bias_initializer=self.output_bias_initializer)
|
||||
quantiles_locations = tf.reshape(quantiles_locations, (tf.shape(quantiles_locations)[0], self.num_actions, self.num_atoms))
|
||||
self.output = quantiles_locations
|
||||
|
||||
|
||||
@@ -26,7 +26,7 @@ from rl_coach.spaces import SpacesDefinition, BoxActionSpace
|
||||
class SACQHead(Head):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense):
|
||||
dense_layer=Dense, output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
self.name = 'q_values_head'
|
||||
@@ -41,6 +41,7 @@ class SACQHead(Head):
|
||||
self.return_type = QActionStateValue
|
||||
# extract the topology from the SACQHeadParameters
|
||||
self.network_layers_sizes = agent_parameters.network_wrappers['q'].heads_parameters[0].network_layers_sizes
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
# SAC Q network is basically 2 networks running in parallel on the same input (state , action)
|
||||
@@ -63,7 +64,8 @@ class SACQHead(Head):
|
||||
for layer_size in self.network_layers_sizes[1:]:
|
||||
qi_output = self.dense_layer(layer_size)(qi_output, activation=self.activation_function)
|
||||
# the output layer
|
||||
self.q1_output = self.dense_layer(1)(qi_output, name='q1_output')
|
||||
self.q1_output = self.dense_layer(1)(qi_output, name='q1_output',
|
||||
bias_initializer=self.output_bias_initializer)
|
||||
|
||||
# build q2 network head
|
||||
with tf.variable_scope("q2_head"):
|
||||
@@ -74,7 +76,8 @@ class SACQHead(Head):
|
||||
for layer_size in self.network_layers_sizes[1:]:
|
||||
qi_output = self.dense_layer(layer_size)(qi_output, activation=self.activation_function)
|
||||
# the output layer
|
||||
self.q2_output = self.dense_layer(1)(qi_output, name='q2_output')
|
||||
self.q2_output = self.dense_layer(1)(qi_output, name='q2_output',
|
||||
bias_initializer=self.output_bias_initializer)
|
||||
|
||||
# take the minimum as the network's output. this is the log_target (in the original implementation)
|
||||
self.q_output = tf.minimum(self.q1_output, self.q2_output, name='q_output')
|
||||
|
||||
@@ -26,7 +26,7 @@ from rl_coach.spaces import SpacesDefinition
|
||||
class TD3VHead(Head):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense, initializer='xavier'):
|
||||
dense_layer=Dense, initializer='xavier', output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
self.name = 'td3_v_values_head'
|
||||
@@ -35,6 +35,7 @@ class TD3VHead(Head):
|
||||
self.initializer = initializer
|
||||
self.loss = []
|
||||
self.output = []
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
# Standard V Network
|
||||
@@ -44,9 +45,11 @@ class TD3VHead(Head):
|
||||
for i in range(input_layer.shape[0]): # assuming that the actual size is 2, as there are two critic networks
|
||||
if self.initializer == 'normalized_columns':
|
||||
q_outputs.append(self.dense_layer(1)(input_layer[i], name='q_output_{}'.format(i + 1),
|
||||
kernel_initializer=normalized_columns_initializer(1.0)))
|
||||
kernel_initializer=normalized_columns_initializer(1.0),
|
||||
bias_initializer=self.output_bias_initializer),)
|
||||
elif self.initializer == 'xavier' or self.initializer is None:
|
||||
q_outputs.append(self.dense_layer(1)(input_layer[i], name='q_output_{}'.format(i + 1)))
|
||||
q_outputs.append(self.dense_layer(1)(input_layer[i], name='q_output_{}'.format(i + 1),
|
||||
bias_initializer=self.output_bias_initializer))
|
||||
|
||||
self.output.append(q_outputs[i])
|
||||
self.loss.append(tf.reduce_mean((self.target-q_outputs[i])**2))
|
||||
|
||||
@@ -26,7 +26,7 @@ from rl_coach.spaces import SpacesDefinition
|
||||
class VHead(Head):
|
||||
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
|
||||
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str='relu',
|
||||
dense_layer=Dense, initializer='normalized_columns'):
|
||||
dense_layer=Dense, initializer='normalized_columns', output_bias_initializer=None):
|
||||
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function,
|
||||
dense_layer=dense_layer)
|
||||
self.name = 'v_values_head'
|
||||
@@ -38,14 +38,17 @@ class VHead(Head):
|
||||
self.loss_type = tf.losses.mean_squared_error
|
||||
|
||||
self.initializer = initializer
|
||||
self.output_bias_initializer = output_bias_initializer
|
||||
|
||||
def _build_module(self, input_layer):
|
||||
# Standard V Network
|
||||
if self.initializer == 'normalized_columns':
|
||||
self.output = self.dense_layer(1)(input_layer, name='output',
|
||||
kernel_initializer=normalized_columns_initializer(1.0))
|
||||
kernel_initializer=normalized_columns_initializer(1.0),
|
||||
bias_initializer=self.output_bias_initializer)
|
||||
elif self.initializer == 'xavier' or self.initializer is None:
|
||||
self.output = self.dense_layer(1)(input_layer, name='output')
|
||||
self.output = self.dense_layer(1)(input_layer, name='output',
|
||||
bias_initializer=self.output_bias_initializer)
|
||||
|
||||
def __str__(self):
|
||||
result = [
|
||||
|
||||
@@ -168,15 +168,18 @@ class Dense(layers.Dense):
|
||||
def __init__(self, units: int):
|
||||
super(Dense, self).__init__(units=units)
|
||||
|
||||
def __call__(self, input_layer, name: str=None, kernel_initializer=None, activation=None, is_training=None):
|
||||
def __call__(self, input_layer, name: str=None, kernel_initializer=None, bias_initializer=None,
|
||||
activation=None, is_training=None):
|
||||
"""
|
||||
returns a tensorflow dense layer
|
||||
:param input_layer: previous layer
|
||||
:param name: layer name
|
||||
:return: dense layer
|
||||
"""
|
||||
if bias_initializer is None:
|
||||
bias_initializer = tf.zeros_initializer()
|
||||
return tf.layers.dense(input_layer, self.units, name=name, kernel_initializer=kernel_initializer,
|
||||
activation=activation)
|
||||
activation=activation, bias_initializer=bias_initializer)
|
||||
|
||||
@staticmethod
|
||||
@reg_to_tf_instance(layers.Dense)
|
||||
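The reason for threading `bias_initializer` through these layer wrappers becomes clear in the new Batch RL presets further down: the Q head's output bias is initialized to a constant roughly matching the discounted return of a random policy, which jump-starts the Q values. A minimal usage sketch (the parameter name and value below mirror the preset added in this commit):

import tensorflow as tf
from rl_coach.architectures.head_parameters import QHeadParameters

# initialize the Q head's output bias near the discounted return of a random policy
heads_parameters = [QHeadParameters(output_bias_initializer=tf.constant_initializer(-100))]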
@@ -199,7 +202,8 @@ class NoisyNetDense(layers.NoisyNetDense):
|
||||
def __init__(self, units: int):
|
||||
super(NoisyNetDense, self).__init__(units=units)
|
||||
|
||||
def __call__(self, input_layer, name: str, kernel_initializer=None, activation=None, is_training=None):
|
||||
def __call__(self, input_layer, name: str, kernel_initializer=None, activation=None, is_training=None,
|
||||
bias_initializer=None):
|
||||
"""
|
||||
returns a NoisyNet dense layer
|
||||
:param input_layer: previous layer
|
||||
@@ -233,10 +237,12 @@ class NoisyNetDense(layers.NoisyNetDense):
|
||||
kernel_stddev_initializer = tf.random_uniform_initializer(-stddev * self.sigma0, stddev * self.sigma0)
|
||||
else:
|
||||
kernel_mean_initializer = kernel_stddev_initializer = kernel_initializer
|
||||
if bias_initializer is None:
|
||||
bias_initializer = tf.zeros_initializer()
|
||||
with tf.variable_scope(None, default_name=name):
|
||||
weight_mean = tf.get_variable('weight_mean', shape=(num_inputs, num_outputs),
|
||||
initializer=kernel_mean_initializer)
|
||||
bias_mean = tf.get_variable('bias_mean', shape=(num_outputs,), initializer=tf.zeros_initializer())
|
||||
bias_mean = tf.get_variable('bias_mean', shape=(num_outputs,), initializer=bias_initializer)
|
||||
|
||||
weight_stddev = tf.get_variable('weight_stddev', shape=(num_inputs, num_outputs),
|
||||
initializer=kernel_stddev_initializer)
|
||||
|
||||
@@ -64,15 +64,14 @@ class RewardNormalizationFilter(RewardFilter):
|
||||
|
||||
def filter(self, reward: RewardType, update_internal_state: bool=True) -> RewardType:
|
||||
if update_internal_state:
|
||||
if not isinstance(reward, np.ndarray) or len(reward.shape) < 2:
|
||||
reward = np.array([[reward]])
|
||||
self.running_rewards_stats.push(reward)
|
||||
|
||||
reward = (reward - self.running_rewards_stats.mean) / \
|
||||
(self.running_rewards_stats.std + 1e-15)
|
||||
reward = np.clip(reward, self.clip_min, self.clip_max)
|
||||
|
||||
return reward
|
||||
return self.running_rewards_stats.normalize(reward).squeeze()
|
||||
|
||||
def get_filtered_reward_space(self, input_reward_space: RewardSpace) -> RewardSpace:
|
||||
self.running_rewards_stats.set_params(shape=(1,), clip_values=(self.clip_min, self.clip_max))
|
||||
return input_reward_space
|
||||
|
||||
def save_state_to_checkpoint(self, checkpoint_dir: str, checkpoint_prefix: str):
|
||||
|
||||
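For reference, the inline computation removed above (and that `running_rewards_stats.normalize` is now expected to perform) amounts to a clipped z-score against running statistics. A minimal NumPy sketch of that behavior, as an illustration only and not Coach's exact implementation:

import numpy as np

def normalize_reward(reward, running_mean, running_std, clip_min, clip_max):
    # z-score the reward using the running statistics, then clip to the configured range
    normalized = (np.asarray(reward) - running_mean) / (running_std + 1e-15)
    return np.clip(normalized, clip_min, clip_max)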
@@ -37,7 +37,6 @@ from rl_coach.memories.episodic import EpisodicExperienceReplayParameters
|
||||
from rl_coach.core_types import TimeTypes
|
||||
|
||||
|
||||
# TODO build a tutorial for batch RL
|
||||
class BatchRLGraphManager(BasicRLGraphManager):
|
||||
"""
|
||||
A batch RL graph manager creates a scenario of learning from a dataset without a simulator.
|
||||
@@ -95,6 +94,8 @@ class BatchRLGraphManager(BasicRLGraphManager):
|
||||
self.schedule_params = schedule_params
|
||||
|
||||
def _create_graph(self, task_parameters: TaskParameters) -> Tuple[List[LevelManager], List[Environment]]:
|
||||
assert self.agent_params.memory.load_memory_from_file_path or self.env_params, \
|
||||
"BatchRL requires either a dataset to train from or an environment to collect a dataset from. "
|
||||
if self.env_params:
|
||||
# environment loading
|
||||
self.env_params.seed = task_parameters.seed
|
||||
@@ -172,36 +173,38 @@ class BatchRLGraphManager(BasicRLGraphManager):
|
||||
# initialize the network parameters from the global network
|
||||
self.sync()
|
||||
|
||||
# TODO a bug in heatup where the last episode run is not fed into the ER. e.g. asked for 1024 heatup steps,
|
||||
# last ran episode ended increased the total to 1040 steps, but the ER will contain only 1014 steps.
|
||||
# The last episode is not there. Is this a bug in my changes or also on master?
|
||||
# If we have both an environment and a dataset to load from, we will use the environment only for
|
||||
# evaluating the policy, and will not run heatup. If no dataset is available to load from, we will be collecting
|
||||
# a dataset from an environment.
|
||||
if not self.agent_params.memory.load_memory_from_file_path:
|
||||
if self.is_collecting_random_dataset:
|
||||
# heatup
|
||||
if self.env_params is not None:
|
||||
screen.log_title(
|
||||
"Collecting random-action experience to use for training the actual agent in a Batch RL "
|
||||
"fashion")
|
||||
# Creating a random dataset during the heatup phase is useful mainly for tutorial and debug
|
||||
# purposes.
|
||||
self.heatup(self.heatup_steps)
|
||||
else:
|
||||
screen.log_title(
|
||||
"Starting to improve an agent collecting experience to use for training the actual agent in a "
|
||||
"Batch RL fashion")
|
||||
|
||||
# Creating a dataset during the heatup phase is useful mainly for tutorial and debug purposes. If we have both
|
||||
# an environment and a dataset to load from, we will use the environment only for evaluating the policy,
|
||||
# and will not run heatup.
|
||||
# set the experience generating agent to train
|
||||
self.level_managers[0].agents = {'experience_generating_agent': self.experience_generating_agent}
|
||||
|
||||
screen.log_title("Starting to improve an agent collecting experience to use for training the actual agent in a "
|
||||
"Batch RL fashion")
|
||||
# collect a dataset using the experience generating agent
|
||||
super().improve()
|
||||
|
||||
if self.is_collecting_random_dataset:
|
||||
# heatup
|
||||
if self.env_params is not None and not self.agent_params.memory.load_memory_from_file_path:
|
||||
self.heatup(self.heatup_steps)
|
||||
else:
|
||||
# set the experience generating agent to train
|
||||
self.level_managers[0].agents = {'experience_generating_agent': self.experience_generating_agent}
|
||||
# set the acquired experience to the actual agent that we're going to train
|
||||
self.agent.memory = self.experience_generating_agent.memory
|
||||
|
||||
# collect a dataset using the experience generating agent
|
||||
super().improve()
|
||||
# switch the graph scheduling parameters
|
||||
self.set_schedule_params(self.schedule_params)
|
||||
|
||||
# set the acquired experience to the actual agent that we're going to train
|
||||
self.agent.memory = self.experience_generating_agent.memory
|
||||
|
||||
# switch the graph scheduling parameters
|
||||
self.set_schedule_params(self.schedule_params)
|
||||
|
||||
# set the actual agent to train
|
||||
self.level_managers[0].agents = {'agent': self.agent}
|
||||
# set the actual agent to train
|
||||
self.level_managers[0].agents = {'agent': self.agent}
|
||||
|
||||
# this agent never actually plays
|
||||
self.level_managers[0].agents['agent'].ap.algorithm.num_consecutive_playing_steps = EnvironmentSteps(0)
|
||||
|
||||
@@ -15,6 +15,8 @@
|
||||
# limitations under the License.
|
||||
#
|
||||
import ast
|
||||
|
||||
import pickle
|
||||
from copy import deepcopy
|
||||
|
||||
import math
|
||||
@@ -141,14 +143,27 @@ class EpisodicExperienceReplay(Memory):
|
||||
|
||||
def shuffle_episodes(self):
|
||||
"""
|
||||
Shuffle all the episodes in the replay buffer
|
||||
Shuffle all the complete episodes in the replay buffer, while deleting the last non-complete episode
|
||||
:return:
|
||||
"""
|
||||
self.reader_writer_lock.lock_writing()
|
||||
|
||||
self.assert_not_frozen()
|
||||
|
||||
# unlike the standard usage of the EpisodicExperienceReplay, where we always leave an empty episode after
# the last full one, so that new transitions have somewhere to be added, here we deliberately remove
# that empty last episode, as we are about to shuffle the memory and we don't want it to be shuffled in
|
||||
self.remove_last_episode(lock=False)
|
||||
|
||||
random.shuffle(self._buffer)
|
||||
self.transitions = [t for e in self._buffer for t in e.transitions]
|
||||
|
||||
# create a new Episode for the next transitions to be placed into
|
||||
self._buffer.append(Episode(n_step=self.n_step))
|
||||
self._length += 1
|
||||
|
||||
self.reader_writer_lock.release_writing()
|
||||
|
||||
def get_shuffled_training_data_generator(self, size: int) -> List[Transition]:
|
||||
"""
|
||||
Get a generator for iterating through the shuffled replay buffer, for processing the data in epochs.
|
||||
@@ -201,10 +216,10 @@ class EpisodicExperienceReplay(Memory):
|
||||
granularity, size = self.max_size
|
||||
if granularity == MemoryGranularity.Transitions:
|
||||
while size != 0 and self.num_transitions() > size:
|
||||
self._remove_episode(0)
|
||||
self.remove_first_episode(lock=False)
|
||||
elif granularity == MemoryGranularity.Episodes:
|
||||
while self.length() > size:
|
||||
self._remove_episode(0)
|
||||
self.remove_first_episode(lock=False)
|
||||
|
||||
def _update_episode(self, episode: Episode) -> None:
|
||||
episode.update_transitions_rewards_and_bootstrap_data()
|
||||
@@ -321,31 +336,53 @@ class EpisodicExperienceReplay(Memory):
|
||||
|
||||
def _remove_episode(self, episode_index: int) -> None:
|
||||
"""
|
||||
Remove the episode in the given index (even if it is not complete yet)
|
||||
:param episode_index: the index of the episode to remove
|
||||
Remove either the first or the last episode
|
||||
:param episode_index: the index of the episode to remove (either 0 or -1)
|
||||
:return: None
|
||||
"""
|
||||
self.assert_not_frozen()
|
||||
assert episode_index == 0 or episode_index == -1, "_remove_episode only supports removing the first or the last " \
|
||||
"episode"
|
||||
|
||||
if len(self._buffer) > episode_index:
|
||||
if len(self._buffer) > 0:
|
||||
episode_length = self._buffer[episode_index].length()
|
||||
self._length -= 1
|
||||
self._num_transitions -= episode_length
|
||||
self._num_transitions_in_complete_episodes -= episode_length
|
||||
del self.transitions[:episode_length]
|
||||
if episode_index == 0:
|
||||
del self.transitions[:episode_length]
|
||||
else: # episode_index = -1
|
||||
del self.transitions[-episode_length:]
|
||||
del self._buffer[episode_index]
|
||||
|
||||
def remove_episode(self, episode_index: int) -> None:
|
||||
def remove_first_episode(self, lock: bool = True) -> None:
|
||||
"""
|
||||
Remove the episode in the given index (even if it is not complete yet)
|
||||
:param episode_index: the index of the episode to remove
|
||||
Remove the first episode (even if it is not complete yet)
|
||||
:param lock: if true, will lock the readers writers lock. this can cause a deadlock if an inheriting class
|
||||
locks and then calls store with lock = True
|
||||
:return: None
|
||||
"""
|
||||
self.reader_writer_lock.lock_writing_and_reading()
|
||||
if lock:
|
||||
self.reader_writer_lock.lock_writing_and_reading()
|
||||
|
||||
self._remove_episode(episode_index)
|
||||
self._remove_episode(0)
|
||||
if lock:
|
||||
self.reader_writer_lock.release_writing_and_reading()
|
||||
|
||||
self.reader_writer_lock.release_writing_and_reading()
|
||||
def remove_last_episode(self, lock: bool = True) -> None:
|
||||
"""
|
||||
Remove the last episode (even if it is not complete yet)
|
||||
:param lock: if true, will lock the readers writers lock. this can cause a deadlock if an inheriting class
|
||||
locks and then calls store with lock = True
|
||||
:return: None
|
||||
"""
|
||||
if lock:
|
||||
self.reader_writer_lock.lock_writing_and_reading()
|
||||
|
||||
self._remove_episode(-1)
|
||||
|
||||
if lock:
|
||||
self.reader_writer_lock.release_writing_and_reading()
|
||||
|
||||
# for API compatibility
|
||||
def get(self, episode_index: int, lock: bool = True) -> Union[None, Episode]:
|
||||
@@ -372,15 +409,6 @@ class EpisodicExperienceReplay(Memory):
|
||||
|
||||
return episode
|
||||
|
||||
# for API compatibility
|
||||
def remove(self, episode_index: int):
|
||||
"""
|
||||
Remove the episode in the given index (even if it is not complete yet)
|
||||
:param episode_index: the index of the episode to remove
|
||||
:return: None
|
||||
"""
|
||||
self.remove_episode(episode_index)
|
||||
|
||||
def clean(self) -> None:
|
||||
"""
|
||||
Clean the memory by removing all the episodes
|
||||
@@ -446,7 +474,7 @@ class EpisodicExperienceReplay(Memory):
|
||||
|
||||
transitions.append(
|
||||
Transition(state={'observation': state},
|
||||
action=current_transition['action'], reward=current_transition['reward'],
|
||||
action=int(current_transition['action']), reward=current_transition['reward'],
|
||||
next_state={'observation': next_state}, game_over=False,
|
||||
info={'all_action_probabilities':
|
||||
ast.literal_eval(current_transition['all_action_probabilities'])}),
|
||||
@@ -516,3 +544,36 @@ class EpisodicExperienceReplay(Memory):
|
||||
self.last_training_set_episode_id = episode_num
|
||||
self.last_training_set_transition_id = \
|
||||
len([t for e in self.get_all_complete_episodes_from_to(0, self.last_training_set_episode_id + 1) for t in e])
|
||||
|
||||
def save(self, file_path: str) -> None:
|
||||
"""
|
||||
Save the replay buffer contents to a pickle file
|
||||
:param file_path: the path to the file that will be used to store the pickled transitions
|
||||
"""
|
||||
with open(file_path, 'wb') as file:
|
||||
pickle.dump(self.get_all_complete_episodes(), file)
|
||||
|
||||
def load_pickled(self, file_path: str) -> None:
|
||||
"""
|
||||
Restore the replay buffer contents from a pickle file.
|
||||
The pickle file is assumed to include a list of transitions.
|
||||
:param file_path: The path to a pickle file to restore
|
||||
"""
|
||||
self.assert_not_frozen()
|
||||
|
||||
with open(file_path, 'rb') as file:
|
||||
episodes = pickle.load(file)
|
||||
num_transitions = sum([len(e.transitions) for e in episodes])
|
||||
if num_transitions > self.max_size[1]:
|
||||
screen.warning("Warning! The number of transition to load into the replay buffer ({}) is "
|
||||
"bigger than the max size of the replay buffer ({}). The excessive transitions will "
|
||||
"not be stored.".format(num_transitions, self.max_size[1]))
|
||||
|
||||
progress_bar = ProgressBar(len(episodes))
|
||||
for episode_idx, episode in enumerate(episodes):
|
||||
self.store_episode(episode)
|
||||
|
||||
# print progress
|
||||
progress_bar.update(episode_idx)
|
||||
|
||||
progress_bar.close()
|
||||
|
||||
@@ -58,9 +58,6 @@ class Memory(object):
|
||||
def get(self, index):
|
||||
raise NotImplementedError("")
|
||||
|
||||
def remove(self, index):
|
||||
raise NotImplementedError("")
|
||||
|
||||
def length(self):
|
||||
raise NotImplementedError("")
|
||||
|
||||
|
||||
@@ -198,15 +198,6 @@ class ExperienceReplay(Memory):
|
||||
"""
|
||||
return self.get_transition(transition_index, lock)
|
||||
|
||||
# for API compatibility
|
||||
def remove(self, transition_index: int, lock: bool=True):
|
||||
"""
|
||||
Remove the transition in the given index
|
||||
:param transition_index: the index of the transition to remove
|
||||
:return: None
|
||||
"""
|
||||
self.remove_transition(transition_index, lock)
|
||||
|
||||
def clean(self, lock: bool=True) -> None:
|
||||
"""
|
||||
Clean the memory by removing all the episodes
|
||||
|
||||
rl_coach/presets/Acrobot_DDQN_BCQ_BatchRL.py (new file, 116 lines)
@@ -0,0 +1,116 @@
|
||||
import tensorflow as tf
|
||||
|
||||
from rl_coach.agents.ddqn_agent import DDQNAgentParameters
|
||||
from rl_coach.base_parameters import VisualizationParameters, PresetValidationParameters
|
||||
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps, CsvDataset
|
||||
from rl_coach.environments.gym_environment import GymVectorEnvironment
|
||||
from rl_coach.graph_managers.batch_rl_graph_manager import BatchRLGraphManager
|
||||
from rl_coach.graph_managers.graph_manager import ScheduleParameters
|
||||
from rl_coach.memories.memory import MemoryGranularity
|
||||
from rl_coach.schedules import LinearSchedule
|
||||
from rl_coach.memories.episodic import EpisodicExperienceReplayParameters
|
||||
from rl_coach.architectures.head_parameters import QHeadParameters
|
||||
from rl_coach.agents.ddqn_bcq_agent import DDQNBCQAgentParameters
|
||||
|
||||
from rl_coach.agents.ddqn_bcq_agent import KNNParameters
|
||||
|
||||
DATASET_SIZE = 50000
|
||||
|
||||
|
||||
####################
|
||||
# Graph Scheduling #
|
||||
####################
|
||||
|
||||
schedule_params = ScheduleParameters()
|
||||
schedule_params.improve_steps = TrainingSteps(10000000000)
|
||||
schedule_params.steps_between_evaluation_periods = TrainingSteps(1)
|
||||
schedule_params.evaluation_steps = EnvironmentEpisodes(10)
|
||||
schedule_params.heatup_steps = EnvironmentSteps(DATASET_SIZE)
|
||||
|
||||
#########
|
||||
# Agent #
|
||||
#########
|
||||
|
||||
agent_params = DDQNBCQAgentParameters()
|
||||
agent_params.network_wrappers['main'].batch_size = 128
|
||||
# TODO cross-DL framework abstraction for a constant initializer?
|
||||
agent_params.network_wrappers['main'].heads_parameters = [QHeadParameters(output_bias_initializer=tf.constant_initializer(-100))]
|
||||
|
||||
agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(
|
||||
100)
|
||||
agent_params.algorithm.discount = 0.99
|
||||
|
||||
agent_params.algorithm.action_drop_method_parameters = KNNParameters()
|
||||
|
||||
# NN configuration
|
||||
agent_params.network_wrappers['main'].learning_rate = 0.0001
|
||||
agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False
|
||||
agent_params.network_wrappers['main'].softmax_temperature = 0.2
|
||||
|
||||
# ER size
|
||||
agent_params.memory = EpisodicExperienceReplayParameters()
|
||||
# DATATSET_PATH = 'acrobot.csv'
|
||||
# agent_params.memory.load_memory_from_file_path = CsvDataset(DATATSET_PATH, True)
|
||||
|
||||
# E-Greedy schedule
|
||||
agent_params.exploration.epsilon_schedule = LinearSchedule(0, 0, 10000)
|
||||
agent_params.exploration.evaluation_epsilon = 0
|
||||
|
||||
# Experience Generating Agent parameters
|
||||
experience_generating_agent_params = DDQNAgentParameters()
|
||||
|
||||
# schedule parameters
|
||||
experience_generating_schedule_params = ScheduleParameters()
|
||||
experience_generating_schedule_params.heatup_steps = EnvironmentSteps(1000)
|
||||
experience_generating_schedule_params.improve_steps = TrainingSteps(
|
||||
DATASET_SIZE - experience_generating_schedule_params.heatup_steps.num_steps)
|
||||
experience_generating_schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(10)
|
||||
experience_generating_schedule_params.evaluation_steps = EnvironmentEpisodes(1)
|
||||
|
||||
# DQN params
|
||||
experience_generating_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = EnvironmentSteps(100)
|
||||
experience_generating_agent_params.algorithm.discount = 0.99
|
||||
experience_generating_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentSteps(1)
|
||||
|
||||
# NN configuration
|
||||
experience_generating_agent_params.network_wrappers['main'].learning_rate = 0.0001
|
||||
experience_generating_agent_params.network_wrappers['main'].batch_size = 128
|
||||
experience_generating_agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False
|
||||
experience_generating_agent_params.network_wrappers['main'].heads_parameters = \
|
||||
[QHeadParameters(output_bias_initializer=tf.constant_initializer(-100))]
|
||||
|
||||
# ER size
|
||||
experience_generating_agent_params.memory = EpisodicExperienceReplayParameters()
|
||||
experience_generating_agent_params.memory.max_size = \
|
||||
(MemoryGranularity.Transitions,
|
||||
experience_generating_schedule_params.heatup_steps.num_steps +
|
||||
experience_generating_schedule_params.improve_steps.num_steps + 1)
|
||||
|
||||
# E-Greedy schedule
|
||||
experience_generating_agent_params.exploration.epsilon_schedule = LinearSchedule(1.0, 0.01, DATASET_SIZE)
|
||||
experience_generating_agent_params.exploration.evaluation_epsilon = 0
|
||||
|
||||
|
||||
################
|
||||
# Environment #
|
||||
################
|
||||
env_params = GymVectorEnvironment(level='Acrobot-v1')
|
||||
|
||||
########
|
||||
# Test #
|
||||
########
|
||||
preset_validation_params = PresetValidationParameters()
|
||||
preset_validation_params.test = True
|
||||
preset_validation_params.min_reward_threshold = 150
|
||||
preset_validation_params.max_episodes_to_achieve_reward = 50
|
||||
preset_validation_params.read_csv_tries = 500
|
||||
|
||||
graph_manager = BatchRLGraphManager(agent_params=agent_params,
|
||||
experience_generating_agent_params=experience_generating_agent_params,
|
||||
experience_generating_schedule_params=experience_generating_schedule_params,
|
||||
env_params=env_params,
|
||||
schedule_params=schedule_params,
|
||||
vis_params=VisualizationParameters(dump_signals_to_csv_every_x_episodes=1),
|
||||
preset_validation_params=preset_validation_params,
|
||||
reward_model_num_epochs=30,
|
||||
train_to_eval_ratio=0.4)
|
||||
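The preset above is normally launched through the Coach command line, but it can also be driven programmatically, just like the tutorial notebook below does. A minimal sketch, assuming the preset module is importable from an installed Coach checkout:

from rl_coach.base_parameters import TaskParameters

# the preset file above defines `graph_manager` at module level
from rl_coach.presets.Acrobot_DDQN_BCQ_BatchRL import graph_manager

graph_manager.create_graph(TaskParameters(experiment_path='./acrobot_batch_rl_experiment'))
graph_manager.improve()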
tutorials/4. Batch Reinforcement Learning.ipynb (new file, 378 lines)
@@ -0,0 +1,378 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Batch Reinforcement Learning with Coach"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In many real-world problems, a learning agent cannot interact with the real environment or with a simlulated one. This might be due to the risk of taking sub-optimal actions in the real world, or due to the complexity of creating a simluator that immitates correctly the real environment dynamics. In such cases, the learning agent is only exposed to data that was collected using some deployed policy, and we would like to use that data to learn a better policy for solving the problem. \n",
|
||||
"One such example might be developing a better drug dose or admission scheduling policy. We have data based on the policy that was used with patients so far, but cannot experiment (and explore) on patients to collect new data. \n",
|
||||
"\n",
|
||||
"But wait... If we don't have a simulator, how would we evaluate our newly learned policy and know if it is any good? Which algorithms should we be using in order to better address the problem of learning only from a batch of data? \n",
|
||||
"\n",
|
||||
"Alternatively, what do we do if we don't have a simulator, but instead we can actually deploy our policy on that real-world environment, and would just like to separate the new data collection part from the learning part (i.e. if we have a system that can quite easily run inference, but is very hard to integrate a reinforcement learning framework with, such as Coach, for learning a new policy).\n",
|
||||
"\n",
|
||||
"We will try to address these questions and more in this tutorial, demonstrating how to use [Batch Reinforcement Learning](http://tgabel.de/cms/fileadmin/user_upload/documents/Lange_Gabel_EtAl_RL-Book-12.pdf). \n",
|
||||
"\n",
|
||||
"First, let's use a simple environment to collect the data to be used for learning a policy using Batch RL. In reality, we probably would already have a dataset of transitions of the form `<current_observation, action, reward, next_state>` to be used for learning a new policy. Ideally, we would also have, for each transtion, $p(a|o)$ the probabilty of an action, given that transition's `current_observation`. "
|
||||
]
|
||||
},
|
||||
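To make the expected input concrete, here is a hypothetical record type for such logged data; the field names and example values simply mirror the CSV excerpt shown later in this tutorial:

from collections import namedtuple

# one row per environment step, ideally including the behavior policy's action probabilities
LoggedTransition = namedtuple(
    'LoggedTransition',
    ['current_observation', 'action', 'reward', 'next_state', 'all_action_probabilities'])

example = LoggedTransition(
    current_observation=[0.9969, 0.0788, 0.9976, 0.0697, -0.0785, -0.0724],
    action=0,
    reward=-1,
    next_state=[0.9976, 0.0686, 0.9998, 0.0211, -0.0227, -0.4074],
    all_action_probabilities=[0.4159, 0.2319, 0.3522])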
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Preliminaries\n",
|
||||
"First, get the required imports and other general settings we need for this notebook."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Solving Acrobot with Batch RL"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from copy import deepcopy\n",
|
||||
"import tensorflow as tf\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from rl_coach.agents.dqn_agent import DQNAgentParameters\n",
|
||||
"from rl_coach.agents.ddqn_bcq_agent import DDQNBCQAgentParameters, KNNParameters\n",
|
||||
"from rl_coach.base_parameters import VisualizationParameters\n",
|
||||
"from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps, CsvDataset\n",
|
||||
"from rl_coach.environments.gym_environment import GymVectorEnvironment\n",
|
||||
"from rl_coach.graph_managers.batch_rl_graph_manager import BatchRLGraphManager\n",
|
||||
"from rl_coach.graph_managers.graph_manager import ScheduleParameters\n",
|
||||
"from rl_coach.memories.memory import MemoryGranularity\n",
|
||||
"from rl_coach.schedules import LinearSchedule\n",
|
||||
"from rl_coach.memories.episodic import EpisodicExperienceReplayParameters\n",
|
||||
"from rl_coach.architectures.head_parameters import QHeadParameters\n",
|
||||
"from rl_coach.agents.ddqn_agent import DDQNAgentParameters\n",
|
||||
"from rl_coach.base_parameters import TaskParameters\n",
|
||||
"from rl_coach.spaces import SpacesDefinition, DiscreteActionSpace, VectorObservationSpace, StateSpace, RewardSpace\n",
|
||||
"\n",
|
||||
"# Get all the outputs of this tutorial out of the 'Resources' folder\n",
|
||||
"os.chdir('Resources')\n",
|
||||
"\n",
|
||||
"# the dataset size to collect \n",
|
||||
"DATASET_SIZE = 50000\n",
|
||||
"\n",
|
||||
"task_parameters = TaskParameters(experiment_path='.')\n",
|
||||
"\n",
|
||||
"####################\n",
|
||||
"# Graph Scheduling #\n",
|
||||
"####################\n",
|
||||
"\n",
|
||||
"schedule_params = ScheduleParameters()\n",
|
||||
"\n",
|
||||
"# 100 epochs (we run train over all the dataset, every epoch) of training\n",
|
||||
"schedule_params.improve_steps = TrainingSteps(100)\n",
|
||||
"\n",
|
||||
"# we evaluate the model every epoch\n",
|
||||
"schedule_params.steps_between_evaluation_periods = TrainingSteps(1)\n",
|
||||
"\n",
|
||||
"# only for when we have an enviroment\n",
|
||||
"schedule_params.evaluation_steps = EnvironmentEpisodes(10)\n",
|
||||
"schedule_params.heatup_steps = EnvironmentSteps(DATASET_SIZE)\n",
|
||||
"\n",
|
||||
"################\n",
|
||||
"# Environment #\n",
|
||||
"################\n",
|
||||
"env_params = GymVectorEnvironment(level='Acrobot-v1')\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's use OpenAI Gym's `Acrobot-v1` in order to collect a dataset of experience, and then use that dataset in order to learn a policy solving the environment using Batch RL. \n",
|
||||
"\n",
|
||||
"### The Preset \n",
|
||||
"\n",
|
||||
"First we will collect a dataset using a random action selecting policy. Then we will use that dataset to train an agent in a Batch RL fashion. <br>\n",
|
||||
"Let's start simple - training an agent with Double DQN. \n",
|
||||
"\n",
|
||||
" "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tf.reset_default_graph() # just to clean things up; only needed for the tutorial\n",
|
||||
"\n",
|
||||
"#########\n",
|
||||
"# Agent #\n",
|
||||
"#########\n",
|
||||
"agent_params = DDQNAgentParameters()\n",
|
||||
"agent_params.network_wrappers['main'].batch_size = 128\n",
|
||||
"agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(100)\n",
|
||||
"agent_params.algorithm.discount = 0.99\n",
|
||||
"\n",
|
||||
"# to jump start the agent's q values, and speed things up, we'll initialize the last Dense layer's bias\n",
|
||||
"# with a number in the order of the discounted reward of a random policy\n",
|
||||
"agent_params.network_wrappers['main'].heads_parameters = \\\n",
|
||||
"[QHeadParameters(output_bias_initializer=tf.constant_initializer(-100))]\n",
|
||||
"\n",
|
||||
"# NN configuration\n",
|
||||
"agent_params.network_wrappers['main'].learning_rate = 0.0001\n",
|
||||
"agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False\n",
|
||||
"\n",
|
||||
"# ER - we'll need an episodic replay buffer for off-policy evaluation\n",
|
||||
"agent_params.memory = EpisodicExperienceReplayParameters()\n",
|
||||
"\n",
|
||||
"# E-Greedy schedule - there is no exploration in Batch RL. Disabling E-Greedy. \n",
|
||||
"agent_params.exploration.epsilon_schedule = LinearSchedule(initial_value=0, final_value=0, decay_steps=1)\n",
|
||||
"agent_params.exploration.evaluation_epsilon = 0\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"graph_manager = BatchRLGraphManager(agent_params=agent_params,\n",
|
||||
" env_params=env_params,\n",
|
||||
" schedule_params=schedule_params,\n",
|
||||
" vis_params=VisualizationParameters(dump_signals_to_csv_every_x_episodes=1),\n",
|
||||
" reward_model_num_epochs=30)\n",
|
||||
"graph_manager.create_graph(task_parameters)\n",
|
||||
"graph_manager.improve()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"First we see Coach running a long heatup of 50,000 steps (as we have defined a `DATASET_SIZE` of 50,000 in the preliminaries section), in order to collect a dataset of random actions. Then we can see Coach training a supervised reward model that is needed for the `Doubly Robust` OPE (off-policy evaluation). Last, Coach starts using the collected dataset of experience to train a Double DQN agent. Since, for this environment, we actually do have a simulator, Coach will be using it to evaluate the learned policy. As you can probably see, since this is a very simple environment, a dataset of just random actions is enough to get a Double DQN agent training, and reaching rewards of less than -100 (actually solving the environment). As you can also probably notice, the learning is not very stable, and if you take a look at the Q values predicted by the agent (e.g. in Coach Dashboard; this tutorial experiment results are under the `Resources` folder), you will see them increasing unboundedly. This is caused due to the Batch RL based learning, where not interacting with the environment any further, while randomly exposing only small parts of the MDP in the dataset, makes learning even harder than standard Off-Policy RL. This phenomena is very nicely explained in [Off-Policy Deep Reinforcement Learning without Exploration](https://arxiv.org/abs/1812.02900). We have implemented a discrete-actions variant of [Batch Constrained Q-Learning](https://github.com/NervanaSystems/coach/blob/master/rl_coach/agents/ddqn_bcq_agent.py), which helps mitigating this issue. "
|
||||
]
|
||||
},
|
||||
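The core idea of that discrete-action BCQ variant can be sketched in a few lines: when computing the Bellman target, the argmax is restricted to actions that the behavior policy is estimated to take with sufficient probability, so the bootstrap never relies on Q values for actions the dataset barely covers. This is an illustration of the technique, not Coach's exact implementation (Coach's agent estimates the allowed actions with a kNN or NN model):

import numpy as np

def bcq_ddqn_target(q_online_next, q_target_next, behavior_probs, reward, discount, threshold=0.3):
    # keep only actions whose (normalized) estimated behavior probability clears the threshold
    allowed = behavior_probs / behavior_probs.max() >= threshold
    masked_q = np.where(allowed, q_online_next, -np.inf)
    best_action = int(np.argmax(masked_q))                  # Double-DQN style action selection
    return reward + discount * q_target_next[best_action]   # evaluated with the target network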
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, let's switch to a dataset containing data combined from several 'deployed' policies, as is often the case in real-world scenarios, where we already have a policy (hopefully not a random one) in-place and we want to improve it. For instance, a recommender system already using a policy for generating recommendations, and we want to use Batch RL to learn a better policy. <br>\n",
|
||||
"\n",
|
||||
"We will demonstrate that by training an agent, and using its replay buffer content as the dataset from which we will learn a new policy, without any further interaction with the environment. This should allow for both a better trained agent and for more meaningful Off-Policy Evaluation (as the more extensive your input data is, i.e. exposing more of the MDP, the better the evaluation of a new policy based on it)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tf.reset_default_graph() # just to clean things up; only needed for the tutorial\n",
|
||||
"\n",
|
||||
"# Experience Generating Agent parameters\n",
|
||||
"experience_generating_agent_params = DDQNAgentParameters()\n",
|
||||
"\n",
|
||||
"# schedule parameters\n",
|
||||
"experience_generating_schedule_params = ScheduleParameters()\n",
|
||||
"experience_generating_schedule_params.heatup_steps = EnvironmentSteps(1000)\n",
|
||||
"experience_generating_schedule_params.improve_steps = TrainingSteps(\n",
|
||||
" DATASET_SIZE - experience_generating_schedule_params.heatup_steps.num_steps)\n",
|
||||
"experience_generating_schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(10)\n",
|
||||
"experience_generating_schedule_params.evaluation_steps = EnvironmentEpisodes(1)\n",
|
||||
"\n",
|
||||
"# DQN params\n",
|
||||
"experience_generating_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = EnvironmentSteps(100)\n",
|
||||
"experience_generating_agent_params.algorithm.discount = 0.99\n",
|
||||
"experience_generating_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentSteps(1)\n",
|
||||
"\n",
|
||||
"# NN configuration\n",
|
||||
"experience_generating_agent_params.network_wrappers['main'].learning_rate = 0.0001\n",
|
||||
"experience_generating_agent_params.network_wrappers['main'].batch_size = 128\n",
|
||||
"experience_generating_agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False\n",
|
||||
"experience_generating_agent_params.network_wrappers['main'].heads_parameters = \\\n",
|
||||
"[QHeadParameters(output_bias_initializer=tf.constant_initializer(-100))]\n",
|
||||
"\n",
|
||||
"# ER size\n",
|
||||
"experience_generating_agent_params.memory = EpisodicExperienceReplayParameters()\n",
|
||||
"experience_generating_agent_params.memory.max_size = \\\n",
|
||||
" (MemoryGranularity.Transitions,\n",
|
||||
" experience_generating_schedule_params.heatup_steps.num_steps +\n",
|
||||
" experience_generating_schedule_params.improve_steps.num_steps)\n",
|
||||
"\n",
|
||||
"# E-Greedy schedule\n",
|
||||
"experience_generating_agent_params.exploration.epsilon_schedule = LinearSchedule(1.0, 0.01, DATASET_SIZE)\n",
|
||||
"experience_generating_agent_params.exploration.evaluation_epsilon = 0\n",
|
||||
"\n",
|
||||
"# 50 epochs of training (the entire dataset is used each epoch)\n",
|
||||
"schedule_params.improve_steps = TrainingSteps(50)\n",
|
||||
"\n",
|
||||
"graph_manager = BatchRLGraphManager(agent_params=agent_params,\n",
|
||||
" experience_generating_agent_params=experience_generating_agent_params,\n",
|
||||
" experience_generating_schedule_params=experience_generating_schedule_params,\n",
|
||||
" env_params=env_params,\n",
|
||||
" schedule_params=schedule_params,\n",
|
||||
" vis_params=VisualizationParameters(dump_signals_to_csv_every_x_episodes=1),\n",
|
||||
" reward_model_num_epochs=30,\n",
|
||||
" train_to_eval_ratio=0.5)\n",
|
||||
"graph_manager.create_graph(task_parameters)\n",
|
||||
"graph_manager.improve()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Off-Policy Evaluation\n",
|
||||
"As we mentioned earlier, one of the hardest problems in Batch RL is that we do not have a simulator or cannot easily deploy a trained policy on the real-world environment, in order to test its goodness. This is where OPE comes in handy. </br>\n",
|
||||
"\n",
|
||||
"Coach supports several off-policy evaluators, some are useful for bandits problems (only evaluating a single step return), and others are for full-blown Reinforcement Learning problems. The main goal of the OPEs is to help us select the best model, either for collecting more data to do another round of Batch RL on, or for actual deployment in the real-world environment. \n",
|
||||
"\n",
|
||||
"Opening the experiment that we have just ran (under the `tutorials/Resources` folder, with Coach Dashboard), you will be able to plot the actual simulator's `Evaluation Reward`. Usually, we won't have this signal available as we won't have a simulator, but since we're using a dummy environment for demonstration purposes, we can take a look and examine how the OPEs correlate with it. \n",
|
||||
"\n",
|
||||
"Here are two example plots from Dashboard showing how well the `Weighted Importance Sampling` (RL estimator) and the `Doubly Robust` (bandits estimator) each correlate with the `Evaluation Reward`. </br>\n",
|
||||
" \n",
|
||||
"</br>\n",
|
||||
" \n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
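As a rough illustration of what the RL estimator above computes, Weighted Importance Sampling re-weights each logged episode's return by how likely the evaluated policy is to have produced that episode relative to the behavior policy, and self-normalizes by the sum of the weights. A simplified, undiscounted sketch (not Coach's exact implementation):

import numpy as np

def weighted_importance_sampling(episodes):
    # episodes: list of trajectories; each step is (pi_new_prob, pi_old_prob, reward),
    # where pi_old_prob is the logged probability of the action that was actually taken
    weights, returns = [], []
    for episode in episodes:
        rho = np.prod([p_new / p_old for p_new, p_old, _ in episode])  # per-episode importance weight
        weights.append(rho)
        returns.append(sum(r for _, _, r in episode))
    weights = np.array(weights)
    return float(np.sum(weights * np.array(returns)) / np.sum(weights))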
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Using a Dataset to Feed a Batch RL Algorithm "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Ok, so we now understand how things are expected to work. But, hey... if we don't have a simulator (which we did have in this tutorial so far, and have used it to generate a training/evaluation dataset) how will we feed Coach with the dataset to train/evaluate on?\n",
|
||||
"\n",
|
||||
"### The CSV\n",
|
||||
"Coach defines a csv data format that can be used to fill its replay buffer. We have created an example csv from the same `Acrobot-v1` environment, and have placed it under the [Tutorials' Resources folder](https://github.com/NervanaSystems/coach/tree/master/tutorials/Resources).\n",
|
||||
"\n",
|
||||
"Here are the first couple of lines from it so you can get a grip of what to expect - \n",
|
||||
"\n",
|
||||
"| action | all_action_probabilities | episode_id | episode_name | reward | transition_number | state_feature_0 | state_feature_1 | state_feature_2 | state_feature_3 | state_feature_4 | state_feature_5 \n",
|
||||
"|---|---|---|---|---|---|---|---|---|---|---|---------------------------------------------------------------------------|\n",
|
||||
"|0|[0.4159157,0.23191088,0.35217342]|0|acrobot|-1|0|0.996893843|0.078757007|0.997566524|0.069721088|-0.078539907|-0.072449002 |\n",
|
||||
"|1|[0.46244532,0.22402011,0.31353462]|0|acrobot|-1|1|0.997643051|0.068617369|0.999777604|0.021088905|-0.022653483|-0.40743716|\n",
|
||||
"|0|[0.4961428,0.21575058,0.2881066]|0|acrobot|-1|2|0.997613067|0.069051922|0.996147629|-0.087692077|0.023128103|-0.662019594|\n",
|
||||
"|2|[0.49341106,0.22363988,0.28294897]|0|acrobot|-1|3|0.997141344|0.075558854|0.972780655|-0.231727853|0.035575821|-0.771402023|\n"
|
||||
]
|
||||
},
|
||||
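If your experience was logged outside of Coach, producing this CSV is mostly a matter of flattening each transition into one row with the columns above. A minimal pandas sketch, where `logged_transitions` is a hypothetical list of dicts produced by your deployed policy:

import pandas as pd

rows = []
for t in logged_transitions:
    row = {
        'action': t['action'],
        'all_action_probabilities': list(t['action_probabilities']),
        'episode_id': t['episode_id'],
        'episode_name': 'acrobot',
        'reward': t['reward'],
        'transition_number': t['step'],
    }
    # one column per observation feature: state_feature_0 .. state_feature_N
    row.update({'state_feature_{}'.format(i): v for i, v in enumerate(t['observation'])})
    rows.append(row)

pd.DataFrame(rows).to_csv('acrobot_dataset.csv', index=False)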
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### The Preset"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tf.reset_default_graph() # just to clean things up; only needed for the tutorial\n",
|
||||
"\n",
|
||||
"#########\n",
|
||||
"# Agent #\n",
|
||||
"#########\n",
|
||||
"# note that we have moved to BCQ, which will help the training to converge better and faster\n",
|
||||
"agent_params = DDQNBCQAgentParameters() \n",
|
||||
"agent_params.network_wrappers['main'].batch_size = 128\n",
|
||||
"agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(100)\n",
|
||||
"agent_params.algorithm.discount = 0.99\n",
|
||||
"\n",
|
||||
"# to jump start the agent's q values, and speed things up, we'll initialize the last Dense layer\n",
|
||||
"# with something in the order of the discounted reward of a random policy\n",
|
||||
"agent_params.network_wrappers['main'].heads_parameters = \\\n",
|
||||
"[QHeadParameters(output_bias_initializer=tf.constant_initializer(-100))]\n",
|
||||
"\n",
|
||||
"# NN configuration\n",
|
||||
"agent_params.network_wrappers['main'].learning_rate = 0.0001\n",
|
||||
"agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False\n",
|
||||
"\n",
|
||||
"# ER - we'll be needing an episodic replay buffer for off-policy evaluation\n",
|
||||
"agent_params.memory = EpisodicExperienceReplayParameters()\n",
|
||||
"\n",
|
||||
"# E-Greedy schedule - there is no exploration in Batch RL. Disabling E-Greedy. \n",
|
||||
"agent_params.exploration.epsilon_schedule = LinearSchedule(initial_value=0, final_value=0, decay_steps=1)\n",
|
||||
"agent_params.exploration.evaluation_epsilon = 0\n",
|
||||
"\n",
|
||||
"# can use either a kNN or a NN based model for predicting which actions not to max over in the bellman equation\n",
|
||||
"agent_params.algorithm.action_drop_method_parameters = KNNParameters()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"DATATSET_PATH = 'acrobot_dataset.csv'\n",
|
||||
"agent_params.memory = EpisodicExperienceReplayParameters()\n",
|
||||
"agent_params.memory.load_memory_from_file_path = CsvDataset(DATATSET_PATH, is_episodic = True)\n",
|
||||
"\n",
|
||||
"spaces = SpacesDefinition(state=StateSpace({'observation': VectorObservationSpace(shape=6)}),\n",
|
||||
" goal=None,\n",
|
||||
" action=DiscreteActionSpace(3),\n",
|
||||
" reward=RewardSpace(1))\n",
|
||||
"\n",
|
||||
"graph_manager = BatchRLGraphManager(agent_params=agent_params,\n",
|
||||
" env_params=None,\n",
|
||||
" spaces_definition=spaces,\n",
|
||||
" schedule_params=schedule_params,\n",
|
||||
" vis_params=VisualizationParameters(dump_signals_to_csv_every_x_episodes=1),\n",
|
||||
" reward_model_num_epochs=30,\n",
|
||||
" train_to_eval_ratio=0.4)\n",
|
||||
"graph_manager.create_graph(task_parameters)\n",
|
||||
"graph_manager.improve()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Model Selection with OPE\n",
|
||||
"Running the above preset will train an agent based on the experience in the csv dataset. Note that now we are finally demonstarting the real scenario with Batch Reinforcement Learning, where we train and evaluate solely based on the recorded dataset. Coach uses the same dataset (after internally splitting it, obviously) for both training and evaluation. \n",
|
||||
"\n",
|
||||
"Now that we have ran this preset, we have 100 agents (one is saved after every training epoch), and we would have to decide which one we choose for deployment (either for running another round of experience collection and training, or for final deployment, meaning going into production). \n",
|
||||
"\n",
|
||||
"Opening the experiment csv in Dashboard and displaying the OPE signals, we can now choose a checkpoint file for deployment on the end-node. Here is an example run, where we show the `Weighted Importance Sampling` and `Sequential Doubly Robust` OPEs. \n",
|
||||
"</br>\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"Based on this plot we would probably have chosen a checkpoint from around Epoch 85. From here, if we are not satisfied with the deployed agent's performance, we can iteratively continue with data collection, policy training (maybe based on a combination of all the data collected so far), and deployment. \n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
BIN tutorials/Resources/img/dr.png (new file, 106 KiB)
BIN tutorials/Resources/img/model_selection.png (new file, 87 KiB)
BIN tutorials/Resources/img/wis.png (new file, 107 KiB)