Extrinsic calibration from two trajectories #2

Open
sandeepnmenon opened this issue Nov 19, 2021 · 9 comments
@sandeepnmenon

Looking at the synthetic datasets, I see that the data loaded from the dataset consists of a list of transformation matrices and the extrinsic calibration.

I have a system of two non-overlapping stereo cameras, and I am running ORB-SLAM to get the trajectories for both of them. Can I use the solver in this repo to get the extrinsic calibration between the non-overlapping cameras?

@mattgiamou
Contributor

Hi Sandeep,

Yes, you should be able to get the extrinsic calibration between them if you replace T_v_rel_sub and T_c_rel_sub with your data in this code snippet from the example_hand_eye.py script you've referenced:

# Load the data into the solver
my_solver.set_T_v_rel(T_v_rel_sub)  # Load relative egomotion sensor poses
my_solver.set_T_c_rel(T_c_rel_sub, scale=scale)  # Load and scale relative camera sensor poses
# Run Solver
dual_time, dual_gt, dual_primal, dual_gap, dual_opt, dual_solution, dual_flag = my_solver.dual_solve(cons="RCH")

One thing to note is that this algorithm assumes that one of the sensors' trajectories (T_c_rel) has an incorrect scale factor. Since both of your cameras are stereo, this will not be the case for your data. This incorrect assumption might introduce some minor errors in the extrinsic calibration estimate. You'll be able to tell if

print("Estimated Scale: {}".format(1/dual_opt[3]))

is very far from 1. The MATLAB code in our repo does not make this unknown-scale assumption and will therefore be slightly more accurate for your application. We will eventually add this functionality to the Python code as well; my apologies for the inconvenience.

@sandeepnmenon
Author

@mattgiamou
Thank you so much for the insights. No inconvenience at all.
Will be on the lookout for the Python functionality.
You mentioned that the MATLAB code does not make this assumption and is therefore only "slightly" more accurate. Is there any other constraint that would prevent this solver from working for the stereo system I mentioned?
In my case, I know the distance between the two cameras. Can I use that somehow?

@mattgiamou
Contributor

You're welcome!
There should not be any other problem with using your data; just be sure that the inputs to the Python solver are relative poses, not the absolute poses of the trajectory (the papers referenced in the README have more details on this).
Other than that, as long as the camera measurements are temporally calibrated, it should work quite well.
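
Just to illustrate what "relative poses" means here, a minimal sketch of the conversion (the helper name absolute_to_relative and the array layout are made up for this example; inertial_to_relative in utility/utils.py is what I'd actually use): each relative pose is the transform between consecutive absolute poses, i.e. T_i^{-1} T_{i+1}.

import numpy as np

def absolute_to_relative(T_abs):
    # T_abs: (N, 4, 4) array of absolute poses, each expressed relative to the first frame
    # Returns a list of N-1 relative poses, T_i^{-1} @ T_{i+1}
    return [np.linalg.inv(T_abs[i]) @ T_abs[i + 1] for i in range(T_abs.shape[0] - 1)]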

The distance between the two cameras cannot be incorporated into this method: incorporating this information by constraining their squared distance would be an interesting extension of the traditional extrinsic calibration problem!

@sandeepnmenon
Author

Thank you @mattgiamou

@sandeepnmenon
Author

sandeepnmenon commented Dec 1, 2021

@mattgiamou
Reopening the issue for additional clarification.
I used SLAM to get the trajectories of two rigidly connected monocular cameras. They are temporally calibrated.
In this scenario, the translation scales of the two trajectories do not match.
As per the above discussion, the MATLAB code should take care of this.

However, I am getting a huge error.
Actual extrinsics:
roll: -0.154°
yaw: 0.808°
pitch: 0.790°

Obtained from the MATLAB code after using the relative poses:
Roll: -46.0507
Pitch: 75.7248
Yaw: 120.2224

Sharing the data I used: https://drive.google.com/drive/folders/1WxmmWQNFvussY7jqRe6WSGSLPMO7iN93?usp=sharing
camera_1_2_images.mat: relative rotations and translations
camera1.npy: absolute transformations of each frame with respect to the first frame for camera 1
camera2.npy: absolute transformations of each frame with respect to the first frame for camera 2

Some guidance would be really helpful.
Thank you

sandeepnmenon reopened this Dec 1, 2021
@mattgiamou
Contributor

It looks like I misunderstood: the MATLAB code was suggested because I thought you had two separate stereo cameras! For the case you've described, the Python code is actually appropriate, but only if you have the distance between the monocular cameras (you need to divide the translation in the extrinsic calibration by this distance to resolve the scale).
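
For illustration only, here is a sketch of how the known distance could be used to resolve the scale; this is one reading of that step and not a function from the repo. It assumes the solver's 4x4 extrinsic estimate has an arbitrarily scaled translation, and rescales that translation so its norm matches the measured camera separation.

import numpy as np

def resolve_translation_scale(T_ext, known_distance):
    # T_ext: 4x4 extrinsic estimate with an arbitrarily scaled translation
    # known_distance: measured separation between the two cameras (e.g. in metres)
    t = T_ext[0:3, 3]
    return t * (known_distance / np.linalg.norm(t))

# Example with made-up numbers: a translation of length 2 rescaled to a 0.5 m baseline
T_example = np.eye(4)
T_example[0, 3] = 2.0
print(resolve_translation_scale(T_example, 0.5))  # -> [0.5, 0.0, 0.0]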

I tried to run the Python code below on your data to make sure that we're on the same page. I'm also getting rotation extrinsics that are very different from the actual extrinsics you posted, so there is probably some other issue. It may be a problem with the code, which I will try to investigate sometime in the next week or so.

import numpy as np
from liegroups import SE3
from solver.ext_solver import solver
from utility.utils import inertial_to_relative

# Load the data
c1 = np.load("/path/to/camera1.npy")
c2 = np.load("/path/to/camera2.npy")

T_v = [SE3.from_matrix(c1[i, :, :]) for i in range(c1.shape[0])]
T_c = [SE3.from_matrix(c2[i, :, :]) for i in range(c2.shape[0])]

# Initialize solver
my_solver = solver()

# Convert inertial poses to relative
T_v_rel = inertial_to_relative(T_v)
T_c_rel = inertial_to_relative(T_c)

# Load the data into the solver
my_solver.set_T_v_rel(T_v_rel)  # Load relative egomotion sensor poses
my_solver.set_T_c_rel(T_c_rel)

# Run Solver
rel_time, rel_gt, rel_primal, rel_gap, relax_opt, relax_solution, rel_flag = my_solver.relax_solve(cons="RCH")

print("Estimated Scale: {}".format(1/relax_opt[3]))
print("Estimated Extrinsic Calibration:")
print(relax_solution)

from scipy.spatial.transform import Rotation
R = Rotation.from_matrix(relax_solution[0:3, 0:3])
print(R.as_euler('xyz', degrees=True))

From just playing with the data, it looks like the rotation estimates in camera1.npy and camera2.npy are pretty accurate, but there may be some scale drift occurring in the translation estimates. This code assumes a constant relative scale between the two monocular trajectories, so it would not be appropriate if there is significant drift. I haven't worked with ORB-SLAM enough to know whether this would be an issue, but it probably depends heavily on the camera and environment. Here's some basic evidence that this is happening:

trans_mag = [np.linalg.norm(c1[i,0:3, 3] - c2[i,0:3,3]) for i in range(c1.shape[0])]

returns

[0.4237606317888156, 2.363412869557756, 3.1313004140460583, 5.007729336744534, 6.855084146019304, 8.542333583249121, 8.513463186230169, 8.587373682816306]

which should be constant if the scale were not drifting.
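
As another rough check (a sketch of my own, not functionality from the repo), one could compare the frame-to-frame relative translation magnitudes of the two trajectories; if the relative scale between them were constant, the ratio should stay roughly the same across frames:

import numpy as np

c1 = np.load("/path/to/camera1.npy")
c2 = np.load("/path/to/camera2.npy")

ratios = []
for i in range(c1.shape[0] - 1):
    # Relative (frame-to-frame) translations for each trajectory
    t1 = (np.linalg.inv(c1[i]) @ c1[i + 1])[0:3, 3]
    t2 = (np.linalg.inv(c2[i]) @ c2[i + 1])[0:3, 3]
    ratios.append(np.linalg.norm(t1) / np.linalg.norm(t2))

print(ratios)  # roughly constant if the relative scale is not drifting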

Thanks for your patience, and I'm sorry this isn't working out of the box for you. I appreciate you sharing these problems, as I would like to improve the code and cover more use cases in the near future.

@mattgiamou
Contributor

Actually, my point about constant distance between the estimated absolute trajectories isn't true. I'll keep thinking about this some more.

@sandeepnmenon
Author

Thank you @mattgiamou for explaining the scale drift issue.
Is there something that can be done in the data collection to avoid this issue?

"Actually, my point about constant distance between the estimated absolute trajectories isn't true. I'll keep thinking about this some more."

Are you saying the scale drift is not an issue?

@sandeepnmenon
Author

@mattgiamou
Any update on this issue?
