Hi,
Thanks for your great work. I looked at the code and noticed that the implementation uses a single shared Adam optimizer for all poses in global BA.
My question is: if some poses are not selected in a global BA iteration, do they still get updated by the optimizer?
Do you think this is a valid concern?
Thanks.
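For concreteness, here is a minimal sketch of the behavior I'm asking about (variable names are illustrative, not from the repo): with a plain dense `torch.optim.Adam` over one tensor holding all poses, a pose that received a gradient in an earlier step keeps moving on later steps even when its gradient is exactly zero, because of the accumulated moment estimates.

```python
import torch

# All poses in one tensor, tracked by a single dense Adam optimizer.
poses = torch.zeros(2, 6, requires_grad=True)  # 2 frames, se(3)-like params
opt = torch.optim.Adam([poses], lr=1e-2)

# BA iteration 1: only pose 0 is selected, so only row 0 gets a gradient.
opt.zero_grad()
poses[0].sum().backward()
opt.step()

# BA iteration 2: only pose 1 is selected; pose 0's gradient is exactly zero.
opt.zero_grad()
poses[1].sum().backward()
before = poses[0].detach().clone()
opt.step()

# Pose 0 still moved: Adam's first/second moment estimates for row 0 are
# nonzero after iteration 1, so a zero gradient does not mean a zero update.
print(torch.allclose(poses[0].detach(), before))  # False
```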
It seems you are right; this is a valid concern. Adam keeps running first- and second-moment estimates for every parameter it tracks, so a pose that received a gradient in an earlier step can still be moved by a later step in which its gradient is zero. Currently, we initialize a new optimizer for each global BA and update the pose parameters only twice, so the impact should not be significant. However, I will try to run some experiments to see whether this actually has a negative effect on pose optimization when I have time. If so, I will switch to SparseAdam for global BA.
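For reference, a minimal sketch of the SparseAdam alternative (names are illustrative; it assumes the poses can be stored as rows of an `nn.Embedding` with `sparse=True`, so that indexing the selected frames yields a sparse gradient): `torch.optim.SparseAdam` only updates the rows, and the moment estimates, that actually received gradients in the current step.

```python
import torch
import torch.nn as nn

# Store all frame poses as rows of an embedding table with sparse gradients.
num_frames, pose_dim = 100, 6
poses = nn.Embedding(num_frames, pose_dim, sparse=True)
opt = torch.optim.SparseAdam(poses.parameters(), lr=1e-3)

# One global BA iteration over a subset of frames.
selected = torch.tensor([3, 17, 42])
loss = poses(selected).pow(2).sum()  # stand-in for the real BA loss
opt.zero_grad()
loss.backward()                      # poses.weight.grad is a sparse tensor
opt.step()                           # only rows 3, 17, 42 (and their moments) change
```

That said, with a freshly initialized optimizer and only two update steps per global BA, as described above, the momentum carried into a zero-gradient step is small, which is consistent with the expectation that the impact is minor.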