diff --git a/README.md b/README.md
index d315bba..f8083fd 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ Basic RL components (algorithms, environments, neural network architectures, exp
Training an agent to solve an environment is as easy as running:
```bash
-python coach.py -p CartPole_DQN -r
+python3 coach.py -p CartPole_DQN -r
```

@@ -19,7 +19,7 @@ Blog post from the Intel® Nervana™ website can be found [here](https://www.in
## Installation
-Note: Coach has been tested on Ubuntu 16.04 LTS only.
+Note: Coach has only been tested on Ubuntu 16.04 LTS with Python 3.5.
Coach's installer will setup all the basics needed to get the user going with running Coach on top of [OpenAI Gym](https://github.com/openai/gym) environments. This can be done by running the following command and then following the on-screen printed instructions:
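Given that the note above now pins the tested configuration to Python 3.5, a quick pre-install check (a suggestion, not part of this patch) is to confirm which interpreter `python3` resolves to on the target machine:

```bash
# should report a Python 3.5.x interpreter to match the tested configuration
python3 --version
which python3
```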
@@ -48,7 +48,7 @@ In addition to OpenAI Gym, several other environments were tested and are suppor
Coach's installer installs [Intel-Optimized TensorFlow](https://software.intel.com/en-us/articles/intel-optimized-tensorflow-wheel-now-available), which does not support GPU, by default. In order to have Coach running with GPU, a GPU supported TensorFlow version must be installed. This can be done by overriding the TensorFlow version:
```bash
-pip install tensorflow-gpu
+pip3 install tensorflow-gpu
```
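After overriding the wheel, one way to confirm that the GPU build is actually in use is to list the devices TensorFlow can see. This is a hedged sanity check, not part of this patch, and it assumes the CUDA drivers and libraries required by `tensorflow-gpu` are already installed:

```bash
# a working GPU build should list a device of type "GPU" in addition to the CPU
python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
```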
## Running Coach
@@ -67,38 +67,38 @@ To list all the available presets use the `-l` flag.
To run a preset, use:
```bash
-python coach.py -r -p <preset_name>
+python3 coach.py -r -p <preset_name>
```
For example:
1. CartPole environment using Policy Gradients:
```bash
- python coach.py -r -p CartPole_PG
+ python3 coach.py -r -p CartPole_PG
```
2. Pendulum using Clipped PPO:
```bash
- python coach.py -r -p Pendulum_ClippedPPO -n 8
+ python3 coach.py -r -p Pendulum_ClippedPPO -n 8
```
3. MountainCar using A3C:
```bash
- python coach.py -r -p MountainCar_A3C -n 8
+ python3 coach.py -r -p MountainCar_A3C -n 8
```
-
+
4. Doom basic level using Dueling network and Double DQN algorithm:
```bash
- python coach.py -r -p Doom_Basic_Dueling_DDQN
+ python3 coach.py -r -p Doom_Basic_Dueling_DDQN
```
5. Doom health gathering level using Mixed Monte Carlo:
```bash
- python coach.py -r -p Doom_Health_MMC
+ python3 coach.py -r -p Doom_Health_MMC
```
It is easy to create new presets for different levels or environments by following the same pattern as in presets.py
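Before adding a new preset to presets.py, the `-l` flag mentioned above is a convenient way to check which preset names are already defined (shown here only as a usage reminder, not as part of the patch):

```bash
python3 coach.py -l
```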
@@ -113,7 +113,7 @@ While Coach trains an agent, a csv file containing the relevant training signals
To use it, run:
```bash
-python dashboard.py
+python3 dashboard.py
```
@@ -143,7 +143,7 @@ Once a parallelized run is started, the ```train_and_sync_networks``` API will a
Then, it merely requires running Coach with the ``` -n``` flag and with the number of workers to run with. For instance, the following command will set 16 workers to work together to train a MuJoCo Hopper:
```bash
-python coach.py -p Hopper_A3C -n 16
+python3 coach.py -p Hopper_A3C -n 16
```
diff --git a/coach.py b/coach.py
index 97565c2..bcd2089 100644
--- a/coach.py
+++ b/coach.py
@@ -273,7 +273,7 @@ if __name__ == "__main__":
set_cpu()
# create a parameter server
- Popen(["python",
+ Popen(["python3",
"./parallel_actor.py",
"--ps_hosts={}".format(ps_hosts),
"--worker_hosts={}".format(worker_hosts),
@@ -296,7 +296,7 @@ if __name__ == "__main__":
run_dict['visualization.render'] = False # #In a parallel setting, only the evaluation agent renders
json_run_dict_path = run_dict_to_json(run_dict, i)
- workers_args = ["python", "./parallel_actor.py",
+ workers_args = ["python3", "./parallel_actor.py",
"--ps_hosts={}".format(ps_hosts),
"--worker_hosts={}".format(worker_hosts),
"--job_name=worker",
diff --git a/dashboard.py b/dashboard.py
index 6d0a77f..dadb25d 100644
--- a/dashboard.py
+++ b/dashboard.py
@@ -16,7 +16,7 @@
"""
To run Coach Dashboard, run the following command:
-python dashboard.py
+python3 dashboard.py
"""
from utils import *
diff --git a/install.sh b/install.sh
index 69a485b..2c3606b 100755
--- a/install.sh
+++ b/install.sh
@@ -105,7 +105,7 @@ if [ ${GET_PREFERENCES_MANUALLY} -eq 1 ]; then
INSTALL_NEON=${retval}
fi
-IN_VIRTUAL_ENV=`python -c 'import sys; print("%i" % hasattr(sys, "real_prefix"))'`
+IN_VIRTUAL_ENV=`python3 -c 'import sys; print("%i" % hasattr(sys, "real_prefix"))'`
# basic installations
sudo -E apt-get install python3-pip cmake zlib1g-dev python3-tk python-opencv -y
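One caveat about the `IN_VIRTUAL_ENV` detection above: `sys.real_prefix` is only set by the `virtualenv` tool, so environments created with Python 3's built-in `venv` module would not be detected. A broader check, sketched here as a suggestion rather than as part of this patch, also compares `sys.base_prefix` against `sys.prefix`:

```bash
# detects both virtualenv (sys.real_prefix) and venv (sys.base_prefix != sys.prefix)
IN_VIRTUAL_ENV=`python3 -c 'import sys; print("%i" % (hasattr(sys, "real_prefix") or sys.base_prefix != sys.prefix))'`
```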
@@ -139,13 +139,13 @@ sudo -E apt-get install libboost-all-dev -y
# Coach
if [ ${INSTALL_COACH} -eq 1 ]; then
echo "Installing Coach requirements"
- pip install -r ./requirements_coach.txt
+ pip3 install -r ./requirements_coach.txt
fi
# Dashboard
if [ ${INSTALL_DASHBOARD} -eq 1 ]; then
echo "Installing Dashboard requirements"
- pip install -r ./requirements_dashboard.txt
+ pip3 install -r ./requirements_dashboard.txt
sudo -E apt-get install dpkg-dev build-essential python3.5-dev libjpeg-dev libtiff-dev libsdl1.2-dev libnotify-dev \
freeglut3 freeglut3-dev libsm-dev libgtk2.0-dev libgtk-3-dev libwebkitgtk-dev libgtk-3-dev libwebkitgtk-3.0-dev libgstreamer-plugins-base1.0-dev -y
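Since the requirements installs above now go through `pip3`, it may also be worth confirming (not part of this patch) that `pip3` itself is bound to the Python 3 interpreter on machines that carry several Python versions:

```bash
# the reported location should point at a Python 3.x site-packages directory
pip3 --version
```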
@@ -164,8 +164,8 @@ fi
if [ ${INSTALL_GYM} -eq 1 ]; then
echo "Installing Gym support"
sudo -E apt-get install libav-tools libsdl2-dev swig cmake -y
- pip install box2d # for bipedal walker etc.
- pip install gym
+ pip3 install box2d # for bipedal walker etc.
+ pip3 install gym
fi
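Once the Gym support is installed, a short smoke test (a sketch, not part of this patch; the `CartPole-v0` environment id is used here only as an illustration, matching the CartPole presets above) confirms that `gym` imports under Python 3 and can construct an environment:

```bash
python3 -c "import gym; env = gym.make('CartPole-v0'); print(env.action_space)"
```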
# NGraph and Neon