Matlab's Camera Calibrator App #20
That is very helpful, Shawn! I think that is a good idea, since we have been having issues with the Caltech toolbox running on certain versions of MATLAB. I will add it to the list. I am actually hoping to address some issues tomorrow and will start!
Dear SRHarrison,

As I am at the starting stage of using this toolbox, I have been going through the documentation and finished the first step, movies2frame. I collected the user inputs such as GCPs in the FOV of the video and the extrinsics (X, Y, Z, azimuth, tilt, and roll of the fixed camera), but what about the intrinsics (the 11 camera parameters mentioned above)? How would I get those 11 intrinsic parameters of the camera? Please let me know. Do I need any prerequisites to get those 11 parameters? Sorry for the inconvenience with my simple questions.
Hi @sivaiahborra,

There are many ways to skin a dog, but typically we introduce people to intrinsic calibration / lens calibration with this presentation on [Intrinsic Calibration and Distortion](https://drive.google.com/file/d/19urm-rg--ufdylFKeBv-2-9MDqarARRF/view?usp=sharing) and with this hands-on [Lens Calibration Practicum](https://drive.google.com/file/d/1QIyd0wQGBVYKA9xLgK6W-sFqxOg3_C-H/view?usp=sharing).

Basically, you have to assume that your UAS camera has a fixed aperture and fixed focus (probably not true) and use it to take photos of a graduated checkerboard pattern (which must be on a flat surface, not curved). You effectively 'paint' the entire FOV of the sensor with images of the checkerboard. Then you can try to fit a camera model with it. The [Caltech toolbox](http://www.vision.caltech.edu/bouguetj/calib_doc/) is free and fairly accessible. I definitely prefer feeding the images to [Matlab's Camera Calibrator App](https://www.mathworks.com/help/vision/ug/single-camera-calibrator-app.html), but it is part of a toolbox and probably not worth the extra cost if that's all you need the toolbox for.

Structure-from-motion software, e.g. Agisoft Metashape or Pix4D Mapper, determines the intrinsic parameters in a similar way but does not require the checkerboard step. It uses images of the same object from differing views to determine the lens distortion. However, translating those parameters to the format that the CIRN toolbox expects is not always straightforward. Brittany or others might have some translation suggestions if you plan to go that route.

I suggest taking your UAS, placing it on a table, and just walking around in front of it with the checkerboard displayed. Make sure that you use the exact same settings on the camera that you used during your flight/video capture. Typically these UAS cameras use a subset of the sensor to record video, so you want to make sure to calibrate the lens for that subset. If you later decide to change the resolution used to record video, you'll need to calibrate again for those settings.
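If you have Matlab's Computer Vision Toolbox, the same checkerboard calibration the App performs can also be scripted. A minimal sketch, assuming a hypothetical folder `calibFrames` of checkerboard photos and a placeholder square size you would replace with your own measurement:

```matlab
% Script the checkerboard lens calibration that the Camera Calibrator App
% wraps. Folder name and square size below are placeholder assumptions.
imageFiles = dir(fullfile('calibFrames', '*.jpg'));
imageNames = fullfile({imageFiles.folder}, {imageFiles.name});

% Detect the checkerboard corners in every image.
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageNames);

% World coordinates of the corners; squareSize is the measured side length
% of one printed square (replace with your own value).
squareSize = 30; % millimeters
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Fit the camera model with 3 radial and 2 tangential distortion
% coefficients, matching the 11-parameter CIRN intrinsics.
I = imread(imageNames{find(imagesUsed, 1)});
params = estimateCameraParameters(imagePoints, worldPoints, ...
    'NumRadialDistortionCoefficients', 3, ...
    'EstimateTangentialDistortion', true, ...
    'ImageSize', [size(I, 1), size(I, 2)]);
```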
Dear Sir,

Thank you for a timely and concise response.

Actually, I captured a video with my camera position held constant, collected a few GCPs in the FOV along with the camera's extrinsics (X, Y, Z, azimuth, tilt, and roll, which is zero since there is no side-to-side movement, if I am not wrong), and later extracted frames from the video using movies2frame.m.

Now, what I understood from your mail is that I first have to take a few images (around 20) of the checkerboard, placed on a flat surface, at different angles, covering most of the camera's FOV. In this context, a small query: shall I only change the orientation of the checkerboard each time while keeping the FOV constant, or can I also change the camera's FOV (camera viewing angle) each time? I am dealing with a single fixed camera for now; maybe later I can go for multiple cameras once I succeed in running this toolbox for my study region.

Then I will go through the camera calibration toolbox. I have been through the documentation of toolbox_calib.zip and its examples.

Hope
Thank you.
Thanks & Regards
B. Sivaiah
Research Scholar
Dept. of Meteorology & Oceanography
Andhra University
Visakhapatnam - 530003
INDIA - 91-9676155827
Dear SRHarrison,

I have been through the documentation of toolbox_calib.zip, tried the first example, and was able to create the Calib_Results.mat file. So, shall I try the same for my camera calibration?

If yes, should I measure the length and width of the squares on my checkerboard and give those values for *dx* and *dy* instead of the default values in the example? Then I should use the Calib_Results.mat file to run formatIntrinsics.m. After that, in which .m file do I need to give my gathered GCPs and camera extrinsics (X, Y, Z, azimuth, tilt, roll)?

Sorry for the inconvenience with my simple questions; I have been learning more day by day by going through the documentation as well. Thank you for your time, and I hope I can successfully run this toolbox and carry out the work for my study area with the help of people like you.
Thanks & Regards
B. Sivaiah
Research Scholar
Dept. of Meteorology & Oceanography
Andhra University
Visakhapatnam - 530003
INDIA - 91-9676155827
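For reference, a sketch of the kind of mapping a Caltech-to-CIRN translator performs, assuming the standard Calib_Results.mat variables (fc, cc, kc, nx, ny) and an 11-element CIRN intrinsics vector ordered [NU NV c0U c0V fx fy d1 d2 d3 t1 t2]; check it against the toolbox's own caltech2CIRN.m before relying on it:

```matlab
% Hedged sketch: map Caltech toolbox Calib_Results.mat outputs to the
% assumed CIRN 11-element intrinsics vector. Verify the ordering against
% the toolbox's caltech2CIRN.m.
load('Calib_Results.mat', 'fc', 'cc', 'kc', 'nx', 'ny');

intrinsics = zeros(1, 11);
intrinsics(1)  = nx;     % NU, image width  [pixels]
intrinsics(2)  = ny;     % NV, image height [pixels]
intrinsics(3)  = cc(1);  % c0U, principal point U
intrinsics(4)  = cc(2);  % c0V, principal point V
intrinsics(5)  = fc(1);  % fx [pixels]
intrinsics(6)  = fc(2);  % fy [pixels]
intrinsics(7)  = kc(1);  % d1, 2nd-order radial distortion
intrinsics(8)  = kc(2);  % d2, 4th-order radial distortion
intrinsics(9)  = kc(5);  % d3, 6th-order radial (Caltech stores it 5th)
intrinsics(10) = kc(3);  % t1, tangential distortion
intrinsics(11) = kc(4);  % t2, tangential distortion
```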
Hi Brittany,

After some testing, I think that the Camera Calibrator App (included with Matlab's Computer Vision Toolbox) is not only convenient (no clicking), but also seems to work better in some cases at resolving the lens model than the Caltech toolbox.

For people wanting to use that, it might be worthwhile including a translator (similar to your `caltech2CIRN.m`) from that output to the CIRN intrinsics variable (and maybe call it `camcalibrator2CIRN.m`). Assuming that the user exports the camera parameters variable as `params` to the workspace from the Camera Calibrator, then the translation to `intrinsics` is:
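A minimal sketch of that translation, assuming the [NU NV c0U c0V fx fy d1 d2 d3 t1 t2] ordering used above and the documented `cameraParameters` properties:

```matlab
% Minimal sketch of a camcalibrator2CIRN-style translation. Assumes the
% CIRN intrinsics ordering [NU NV c0U c0V fx fy d1 d2 d3 t1 t2]; 'params'
% is the cameraParameters object exported from the Camera Calibrator App.
intrinsics = zeros(1, 11);
intrinsics(1)  = params.ImageSize(2);        % NU, image width  [pixels]
intrinsics(2)  = params.ImageSize(1);        % NV, image height [pixels]
intrinsics(3)  = params.PrincipalPoint(1);   % c0U, principal point U
intrinsics(4)  = params.PrincipalPoint(2);   % c0V, principal point V
intrinsics(5)  = params.FocalLength(1);      % fx [pixels]
intrinsics(6)  = params.FocalLength(2);      % fy [pixels]
intrinsics(7)  = params.RadialDistortion(1); % d1, radial distortion
intrinsics(8)  = params.RadialDistortion(2); % d2
if numel(params.RadialDistortion) == 3       % d3 exists only when three
    intrinsics(9) = params.RadialDistortion(3); % radial terms were fit
end
intrinsics(10) = params.TangentialDistortion(1); % t1, tangential
intrinsics(11) = params.TangentialDistortion(2); % t2
```

Note that `RadialDistortion` has only two elements unless three radial coefficients were requested during calibration, and `TangentialDistortion` is [0 0] unless tangential estimation was enabled in the App.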