mirror of https://github.com/gryf/coach.git synced 2026-04-30 12:34:11 +02:00

Release 0.9

Main changes are detailed below:

New features -
* CARLA 0.7 simulator integration
* Human control of the game play
* Recording of human game play and storing / loading the replay buffer
* Behavioral cloning agent and presets
* Golden tests for several presets
* Selecting between deep / shallow image embedders
* Rendering through pygame (with some boost in performance)

API changes -
* Improved environment wrapper API
* Added an evaluate flag to allow convenient evaluation of existing checkpoints
* Improved frameskip definition in Gym

Bug fixes -
* Fixed loading of checkpoints for agents with more than one network
* Fixed the N Step Q learning agent python3 compatibility
Commit 125c7ee38d (parent 11faf19649) by Itai Caspi, committed via GitHub on 2017-12-19 19:27:16 +02:00.
41 changed files with 1713 additions and 260 deletions.
@@ -74,7 +74,9 @@ class EpisodicExperienceReplay(Memory):
     def sample(self, size):
         assert self.num_transitions_in_complete_episodes() > size, \
-            'There are not enough transitions in the replay buffer'
+            'There are not enough transitions in the replay buffer. ' \
+            'Available transitions: {}. Requested transitions: {}.'\
+            .format(self.num_transitions_in_complete_episodes(), size)
         batch = []
         transitions_idx = np.random.randint(self.num_transitions_in_complete_episodes(), size=size)
         for i in transitions_idx:
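
The improved assertion reports both the available and requested transition counts instead of a bare failure message. A condensed, self-contained sketch of that sampling path (a simplified stand-in class, not the full EpisodicExperienceReplay):

```python
import numpy as np


class ReplayBufferSketch:
    """Simplified stand-in for EpisodicExperienceReplay's sampling path."""

    def __init__(self, transitions):
        self.transitions = list(transitions)

    def num_transitions_in_complete_episodes(self):
        return len(self.transitions)

    def sample(self, size):
        available = self.num_transitions_in_complete_episodes()
        assert available > size, \
            'There are not enough transitions in the replay buffer. ' \
            'Available transitions: {}. Requested transitions: {}.'.format(available, size)
        # sample `size` transition indices uniformly at random (with replacement)
        transitions_idx = np.random.randint(available, size=size)
        return [self.transitions[i] for i in transitions_idx]


buffer = ReplayBufferSketch(transitions=range(100))
batch = buffer.sample(32)
```

Requesting more transitions than are available now fails with a message that includes both counts, which makes misconfigured batch sizes much easier to diagnose.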