
Release v0.8.2 #2157


Merged
merged 48 commits into master from release-v0.8.2 on Jun 24, 2019
Conversation

xiaomaogy
Contributor

No description provided.

eshvk and others added 30 commits April 4, 2019 13:27
Install dependencies for ml-agents-envs and ml-agents in Docker
Features:
- Reformat code via black.
- Add CircleCI configurations.
- Add contribution guidelines.

Steps to reproduce:
- `pip install black`
- `black <source code directory>`
… scene using environment variable (#1956)

* Added the builder script

* Removed the menu item

* Changed the brainToControl to public

* Added the scene for switching

* Modified according to the comments

* Removed the Builder and BuilderUtils scripts and moved all of the logic into Startup.cs

* Switched back to the previous way using PreExport method

* Added the return at the EOF.

* Resolved the codacy comments.

* Removed one empty line

* Resolved the second round of review comments
* Add exception for external brains and array-ify

* Fix exception message
* Fixed the formatting

* Changed the CircleCI config
> Added the no_graphics argument to the gym interface. #1413
Added a paragraph in the docs/Learning-Environment-Design-Agents.md document regarding the use of SetReward and how it is different from AddReward
* Update Learning-Environment-Create-New.md

Section: Final Editor Setup - Step 3. It says:
Drag the Brain RollerBallPlayer from the Project window to the RollerAgent Brain field.

Should say:
Drag the Brain RollerBallBrain from the Project window to the RollerAgent Brain field.

* Develop black format fix (#1998)

* Fixed the formatting

* Changed the CircleCI config

* [Gym] Added no_graphics argument (#1997)

> Added the no_graphics argument to the gym interface. #1413

* [Documentation] SetReward method (#1996)

Added a paragraph in the docs/Learning-Environment-Design-Agents.md document regarding the use of SetReward and how it is different from AddReward

* [Documentation] Added information for the environments the trainer cannot train with the default configurations (#1995)

* Format gym_unity using black
* Sanitize demo filenames so that they can't be too long, overflow the header, and corrupt demo files
* Fix issue where the first demo of each episode was always recorded as a 0 action
* Add allow_multiple_visual_obs option to the UnityEnv class

* Edit associated documentation for the `allow_multiple_visual_obs` option
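
The two gym-wrapper changes above (the `no_graphics` argument from #1997 and the new `allow_multiple_visual_obs` option on `UnityEnv`) are easiest to see in a usage sketch. The snippet below is illustrative only; the environment paths are placeholders, and it assumes a locally built Unity environment with a single external brain, per the v0.8-era gym_unity API.

```python
# Minimal usage sketch of the gym_unity options mentioned above (v0.8.2-era API).
# The environment paths are placeholders, not part of this PR.
from gym_unity.envs import UnityEnv

# Headless training: no_graphics launches the Unity player without graphics.
env = UnityEnv("./envs/MyEnvironment", worker_id=0, no_graphics=True)
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()

# Visual observations: allow_multiple_visual_obs returns all camera observations
# as a list instead of only the first one (graphics must stay enabled here).
env = UnityEnv("./envs/MyVisualEnvironment", worker_id=1, use_visual=True,
               allow_multiple_visual_obs=True)
obs_list = env.reset()
env.close()
```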
* Add GetTotalStepCount to the Academy

This will allow the RecordVideos plugin to record based on the current academy step

* fixup! Add GetTotalStepCount to the Academy

* Add the video recorder to the documentation
* Fix for recording mid-play

* Change docstring

* Add additional null check
* Update Learning-Environment-Executable.md

Fixed an issue when creating a build folder in the Assets folder. Referring to #2033

* Update Learning-Environment-Executable.md
added missing instruction at the end
When using parallel SubprocessUnityEnvironment instances along
with Academy Done(), a new step might be taken when reset should
have been called because some environments may have been done while
others were not (making "global done" less useful).

This change manages the reset on `global_done` at the level of the
environment worker, and removes the global reset from
TrainerController.
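
As a rough illustration of the behavior described above, the reset-on-`global_done` check now sits next to the environment that produced it rather than being applied globally by TrainerController. The sketch below is a simplified, hypothetical loop, not the actual SubprocessUnityEnvironment code; it assumes the v0.8-era `mlagents.envs` Python API, and `policy` is a placeholder for the trainers' real policies.

```python
# Hypothetical per-worker loop, not the actual SubprocessUnityEnvironment code.
# Assumes the v0.8-era mlagents.envs Python API (reset/step/global_done).
from mlagents.envs import UnityEnvironment

def worker_loop(env_path, worker_id, policy, num_steps=1000):
    """`policy` is a placeholder callable mapping {brain_name: BrainInfo}
    to {brain_name: actions}."""
    env = UnityEnvironment(file_name=env_path, worker_id=worker_id)
    brain_infos = env.reset(train_mode=True)
    for _ in range(num_steps):
        if env.global_done:
            # Reset is decided per environment worker, so one finished
            # environment no longer forces a global reset of all the others.
            brain_infos = env.reset(train_mode=True)
            continue
        brain_infos = env.step(vector_action=policy(brain_infos))
    env.close()
```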
The CircleCI checks have been broken because of outdated setuptools,
this change should fix the issue.
Jonathan Harper and others added 18 commits May 30, 2019 17:24
* Update Learning-Environment-Create-New.md

- Clarify that training is done in the original ml-agents project folder
- Remove typo
- In the future, it could help to show users that they can copy the config folder and run training from a new project folder, so they don't have to mix project settings into the original config folder

* Update Learning-Environment-Create-New.md

Add file paths
Previously, the mean rewards in the CSV file would lag far behind the rest of the reported statistics because this buffer was never cleared.
Run black after Barracuda 0.2 merge
Documentation change to Heuristic Brain
* Script to validate .meta files are set up correctly

* Add command to CI

* don't gitignore Gizmos, add .meta

* Move to utils directory
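
The .meta validation commit above adds a CI script to the utils directory. The sketch below is a simplified, hypothetical version of the idea (not the script added in this PR): walk the Unity asset tree, flag assets without a sibling .meta file, and flag orphaned .meta files. The `UnitySDK/Assets` default path is an assumption.

```python
# Hypothetical sketch of a .meta consistency check; the actual script added in
# this PR lives in the repository's utils directory and differs in detail.
import os
import sys

def validate_meta_files(root="UnitySDK/Assets"):
    errors = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Unity ignores hidden files/folders, so skip them here as well.
        entries = [e for e in dirnames + filenames
                   if not e.endswith(".meta") and not e.startswith(".")]
        metas = {f for f in filenames if f.endswith(".meta")}
        for entry in entries:
            if entry + ".meta" not in metas:
                errors.append("missing .meta for " + os.path.join(dirpath, entry))
        for meta in metas:
            if meta[:-len(".meta")] not in entries:
                errors.append("orphaned .meta file " + os.path.join(dirpath, meta))
    return errors

if __name__ == "__main__":
    problems = validate_meta_files()
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)
```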
* Minor basic guide fix

* Clarified the training instructions
* Fixed the import issue

* make black happy

* Use find_namespace_packages
* Ignore the .idea files

* Retrained most of the models

* Updated the remaining models
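
The "Use find_namespace_packages" commit above refers to the setuptools helper of the same name. Below is a minimal, illustrative setup.py fragment showing the idea; the package name and version come from this release, while the exclude pattern and everything else are assumptions rather than the real ml-agents setup.py.

```python
# Illustrative setup.py fragment only; the real ml-agents setup.py contains
# additional metadata and dependencies.
from setuptools import setup, find_namespace_packages

setup(
    name="mlagents",
    version="0.8.2",
    # find_namespace_packages also picks up namespace packages (directories
    # without an __init__.py) that plain find_packages would miss.
    packages=find_namespace_packages(exclude=["*.tests", "*.tests.*"]),
)
```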
@xiaomaogy requested a review from chriselion on June 24, 2019 21:51
@xiaomaogy merged commit 2befe58 into master on Jun 24, 2019
@awjuliani deleted the release-v0.8.2 branch on July 23, 2019 20:19
@github-actions bot locked as resolved and limited conversation to collaborators on May 18, 2021