Unity
Current revision as of 16:52, 20 February 2024
Installation
Download and install Unity
Installing Unity ML-Agents
In the Unity editor, go to Window->Package Manager. In the Package Manager, search for ML-Agents in the Unity Registry and install the most recent ML-Agents package. ML-Agents example projects can be downloaded from the ML-Agents Github
Setting up the Python API
pip install mlagents
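Before opening Unity, you can sanity-check the installation with a short script like the one below. This is only a sketch: it checks that the packages are importable (`mlagents` is the trainer package; `mlagents_envs` is the dependency that provides `UnityEnvironment`), nothing more.

```python
import importlib.util

def installed(module_name):
    """Return True if `module_name` can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# mlagents is the trainer package; mlagents_envs provides UnityEnvironment.
status = {name: installed(name) for name in ("mlagents", "mlagents_envs")}
print(status)
```

If either entry is False, re-run the pip install (ideally inside a fresh virtual environment).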
Using Unity ML-Agents
Running ML-Agents through the Python API
You can interact with Unity ML-Agents by [1] opening Unity from Python (env = UnityEnvironment(side_channels=[])), [2] resetting the environment (env.reset()), [3] stepping the environment forward until new input from Python is required (env.step()), [4] getting the state of the agent, including any sensory information (env.get_steps(behavior_name: str)), and [5] setting the actions for agents (env.set_actions(behavior_name: str, action: ActionTuple)). For more information, consult the guides below.

Python API
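The five steps above can be sketched as a loop. The `reset`/`get_steps`/`set_actions`/`step` calls below follow the Python API names listed above, but `FakeEnv` and the behavior name "RobotBehavior" are hypothetical stand-ins so the control flow can be shown without a running Unity build:

```python
class FakeEnv:
    """Hypothetical stand-in for UnityEnvironment (no Unity needed)."""
    def __init__(self, episode_length=5):
        self.episode_length = episode_length
        self.remaining = episode_length
        self.actions = []

    def reset(self):
        self.remaining = self.episode_length

    def get_steps(self, behavior_name):
        # The real API returns (DecisionSteps, TerminalSteps); here we
        # return a list of observation vectors and an empty terminal list.
        if self.remaining <= 0:
            return [], []
        return [[0.0, 0.0, 0.0]], []

    def set_actions(self, behavior_name, action):
        self.actions.append(action)

    def step(self):
        self.remaining -= 1

def run_episode(env, behavior_name, choose_action, max_steps=100):
    """Drive the env: read agent state, set actions, advance the simulation."""
    env.reset()
    steps_taken = 0
    while steps_taken < max_steps:
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        if len(decision_steps) == 0:  # no agents waiting for a decision
            break
        env.set_actions(behavior_name, choose_action(decision_steps))
        env.step()
        steps_taken += 1
    return steps_taken

n = run_episode(FakeEnv(), "RobotBehavior", lambda steps: [1.0])
```

With a real build you would replace FakeEnv with UnityEnvironment(side_channels=[]), which blocks until the editor's play button is pressed or a built executable connects.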
Documentation
Unity ML-Agents Github Page

Unity Documentation
Troubleshooting
Although setting up a Unity ML-Agents environment is in principle straightforward, in practice there can be a few tricky problems: Unity failing to start, version incompatibilities, or missing dependencies. If you're having specific issues with Unity and/or ML-Agents, for now, contact Frank.
Getting rid of too many log messages in the terminal
Depending on your experiments, Unity may flood the terminal with log messages. To limit the number of log messages, go to Project Settings->Player->Other Settings->Stack Trace and adjust the settings accordingly.
Notes on deterministic behavior
To ensure determinism, the physics state of a scene needs to be reset. This can be done by reloading the scene whenever you evaluate something. Note that when coupling Python and Unity in Editor Mode, experiments will not be deterministic; you have to build and run an executable of your project to get deterministic behavior. In Unity, you can subscribe a function to the Academy.Instance.OnEnvironmentReset event that loads the scene containing the robot. You can't simply reset the scene, however, or you will lose the connection between Python and ML-Agents. Instead, create a "singleton" that is not destroyed when the scene is reloaded: the idea is to have a "RobotManager" singleton that persists across reloads and registers all the side channels. The RobotManager does little other than maintain the connection; the agent script lives in another script as usual.

Note that because the scene is reloaded, a "new" agent is created each time. It behaves the same, but it gets a different id. The recommended way to retrieve the observations:
decision_steps, terminal_steps = self.env.get_steps(individual_name)
obs = decision_steps.obs  # the best way to do it when there is one agent
end_position = obs[0][0][:3]  # gets the observations of agent 0
How to declare a singleton:
using UnityEngine;

public class RobotManager : MonoBehaviour
{
    // singleton that survives scene reloads
    private static RobotManager instance;
    public static RobotManager Instance { get { return instance; } }

    void Awake()
    {
        instance = this;
        DontDestroyOnLoad(gameObject); // keep the Python connection alive across reloads
    }
}
On PhysX determinism

David's tips on getting deterministic timing