Releases: jjshoots/PyFlyt
0.15.0
PZ environments converted to parallel
```python
from PyFlyt.pz_envs import MAQuadXHoverEnv

env = MAQuadXHoverEnv(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)

env.close()
```
0.14.1
0.14.0
PettingZoo Support!
AEC ENVIRONMENTS HAVE BEEN DEPRECATED
```python
from pettingzoo.utils import wrappers

from PyFlyt.pz_envs import MAQuadXHoverEnv

env = MAQuadXHoverEnv(render_mode="human")
env = wrappers.OrderEnforcingWrapper(env)
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample()

    env.step(action)
```
0.13.0
0.12.0
Added the ability to flatten the waypoints environment's observations so it can be used with popular reinforcement learning libraries; see the sketch below.
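A minimal usage sketch, assuming the flattening is exposed as a `FlattenWaypointEnv` wrapper under `PyFlyt.gym_envs` and that the environment ID is `PyFlyt/QuadX-Waypoints-v0` (both names are assumptions rather than something these notes confirm):

```python
import gymnasium as gym

# assumed module path and wrapper name; importing PyFlyt.gym_envs also
# registers the PyFlyt environments with gymnasium
from PyFlyt.gym_envs import FlattenWaypointEnv

env = gym.make("PyFlyt/QuadX-Waypoints-v0")  # assumed environment ID

# flatten the nested waypoint observation into a single Box vector so that
# libraries expecting flat observations (e.g. Stable-Baselines3) can consume it
env = FlattenWaypointEnv(env, context_length=1)

obs, info = env.reset()
print(obs.shape)
```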
0.11.1
0.11.0
The main improvement here is the JIT compilation of various functions that can be statically typed, resulting in a speedup of more than 50% in the fixedwing environment.
See #10 for more results.
On top of that, the logic for the lifting surface has changed: `command_id` and `command_sign` are no longer used. Instead, all actuator commands are assumed to be mapped and signed within the `update_control` function of the drone itself, in a similar manner to how the motors are handled; a hypothetical sketch of this idea follows.
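As a rough illustration only (the class, attribute names, and sign conventions below are hypothetical, not PyFlyt's actual drone implementation), the idea is that each drone now performs its own mapping and signing inside `update_control`:

```python
import numpy as np


class MyFixedwing:
    """Hypothetical drone: maps a normalized setpoint to signed actuator commands."""

    def __init__(self):
        # setpoint layout and signs are illustrative only
        self.setpoint = np.zeros(6)
        self.aileron_cmd = 0.0
        self.elevator_cmd = 0.0
        self.rudder_cmd = 0.0
        self.thrust_cmd = 0.0

    def update_control(self):
        """Map and sign the raw setpoint into per-surface commands.

        Under the new scheme there is no command_id / command_sign lookup;
        each drone decides its own mapping here, like the motors already do.
        """
        left_aileron, right_aileron, elevator, rudder, _, thrust = self.setpoint
        self.aileron_cmd = 0.5 * (left_aileron - right_aileron)  # differential, signed
        self.elevator_cmd = elevator
        self.rudder_cmd = rudder
        self.thrust_cmd = np.clip(thrust, 0.0, 1.0)  # thrust is one-sided
```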
0.10.1
0.10.0
Enabled full control over actuator surfaces for the fixedwing model under control mode -1.
In this mode, the setpoint for the fixedwing is a 6-element vector, with each element corresponding to control over the following (see the sketch after this list):
- Left Aileron [-1, 1]
- Right Aileron [-1, 1]
- Horizontal Tail [-1, 1]
- Vertical Tail [-1, 1]
- Main Wing [-1, 1]
- Thrust [0, 1]
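A minimal sketch of driving this mode through PyFlyt's core `Aviary` API; the constructor arguments and `drone_type="fixedwing"` are assumptions about this version and may need adjusting for your install:

```python
import numpy as np
from PyFlyt.core import Aviary

# one drone, spawned 1 m above the ground (drone_type value is an assumption)
start_pos = np.array([[0.0, 0.0, 1.0]])
start_orn = np.array([[0.0, 0.0, 0.0]])
env = Aviary(start_pos=start_pos, start_orn=start_orn, drone_type="fixedwing")

# control mode -1: raw actuator control
env.set_mode(-1)

# [left aileron, right aileron, horizontal tail, vertical tail, main wing, thrust]
setpoint = np.array([0.0, 0.0, -0.2, 0.0, 0.0, 0.5])
env.set_setpoint(0, setpoint)

for _ in range(500):
    env.step()
```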