one_d_rpm breaks with different masses for HoverAviary #225
Yes. There is a bug in this.
I don't think the policy or the environment (reward function) is the problem, but rather the simulation settings themselves. It doesn't make sense that a reward function would suddenly fail with marginal changes in mass. I suspect the PyBullet physics implementation may be wrong. I also saw that the thrust2weight ratio is set as a constant in the URDF file, which doesn't make sense to me either.
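To see why a fixed thrust-to-weight assumption breaks when the mass changes, you can recompute the RPM each motor needs for hover. A minimal sketch, assuming the quadratic thrust model `thrust = kf * rpm**2` per motor used by gym-pybullet-drones; the `kf` value below is taken from the cf2x URDF and the masses are illustrative, so check them against your own files:

```python
import math

def hover_rpm(mass_kg, kf=3.16e-10, g=9.8, n_motors=4):
    """RPM at which n_motors together produce thrust equal to the weight:
    n_motors * kf * rpm**2 == mass_kg * g."""
    return math.sqrt(mass_kg * g / (n_motors * kf))

base = hover_rpm(0.027)            # nominal Crazyflie 2.x mass from the cf2x URDF
heavier = hover_rpm(0.027 + 0.05)  # after the 0.05 kg bump discussed in this issue
print(base, heavier, heavier / base)
```

With these numbers the required hover RPM rises by roughly 69%, so any hover RPM (or thrust-to-weight ratio) precomputed from the original URDF mass is far from the new equilibrium once the mass is changed at runtime.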
I agree with you. I am facing the same problem. I think it would be much better to build your own environment (e.g., with help from ChatGPT). I have realized that the physics and the low-level controller do not work properly, particularly when one action is
Two main clarifications:
Hope this helps. By the way, if you are looking for an environment in which to randomize mass and inertia, we created safe-control-gym specifically for that.
Hi,
I am running into an odd issue: if I change the mass by a small amount, say 0.05 kg, using the `pybullet.changeDynamics()` method inside the `_housekeeping()` method, training for `one_d_rpm` gets stuck at a very small reward and never improves. In some cases, during rendering/evaluation the drone just oscillates above and below 1 m. I have no idea why this is. `one_d_pid` does not have this issue and works well. I'm also slightly confused about the 5% parameter used when updating RPMs in relation to the hover RPM. Does anyone have any experience with this?
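Regarding the 5% parameter: in the `one_d_rpm` action space, an action in [-1, 1] is mapped to motor RPMs within roughly ±5% of the precomputed hover RPM. The sketch below (the `KF` value and the `HOVER_RPM * (1 + 0.05 * action)` mapping reflect the cf2x defaults as I understand them; verify against your version of the code) checks whether that band can still produce hover thrust after adding 0.05 kg:

```python
import math

KF = 3.16e-10  # cf2x thrust coefficient: thrust = KF * rpm**2 per motor
G = 9.8
# Hover RPM precomputed from the nominal 0.027 kg URDF mass
HOVER_RPM = math.sqrt(0.027 * G / (4 * KF))

def action_to_rpm(a, scale=0.05):
    """one_d_rpm-style mapping: action in [-1, 1] -> RPM within ±5% of hover."""
    return HOVER_RPM * (1 + scale * a)

# Maximum thrust the policy can command vs. the weight after adding 0.05 kg:
max_thrust = 4 * KF * action_to_rpm(1.0) ** 2
new_weight = (0.027 + 0.05) * G
print(max_thrust, new_weight, max_thrust >= new_weight)
```

If `HOVER_RPM` is not recomputed after `changeDynamics()` changes the mass, the saturated action (`a = 1`) gives only about 1.05² ≈ 1.10 times the original hover thrust, well below the new weight here, so the policy physically cannot reach the 1 m hover target and the reward stays stuck. That would also explain why `one_d_pid` (which commands thrust more directly) still works.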