-
Here is a temporary link to the image files used in the Python code above. ‘/left’: 140 photos [used to calculate distortion coefficients for the left camera]
-
The first workaround I can suggest is to capture images from the first, already-calibrated camera (let's call it A), undistort these images according to the manufacturer specs, and then use them in conjunction with camera B under common settings. If the manufacturer correction is almost perfect, the resulting distortion coefficients for camera A will be almost zero. The only disadvantage is a longer optimisation time for the OpenCV algorithm, but this shouldn't be an issue.
I looked at the code and I see you are loading intrinsics for both cameras. Are both of them already calibrated or only one?
This issue is purely geometric. Only minor, obvious things can be done: move the cameras, change the focal length, change the chessboard size, etc. Converting this thread to a discussion, as it is not related to an SS issue.
-
We have the unusual case where one camera in the stereo rig provides its own undistortion based on the manufacturer's own calibration. Like other vision libraries, OpenCV's ‘stereoCalibrate’ expects point correspondences taken from unprocessed (still-distorted) images, preferably from cameras with the same specs. Even if we give ‘stereoCalibrate’ zero-valued distortion coefficients for one or both cameras, it will still merrily alter both sets of coefficients as it searches for the best extrinsics estimate. We can lock the intrinsics with the ‘cv2.CALIB_FIX_INTRINSIC’ flag; however, this locks both cameras.
We are therefore calibrating our left camera in two passes using chessboardHybridStereo, while setting the right camera distortion coefficients to zero:
Pass One: We use the distortion coefficients estimated from a set of 140 images for the left camera: these photos capture a wide range of distortion across the field of view and have a mean reprojection error of 0.037811.
Pass Two: We then use the intrinsic parameters of the left camera from a separate 21-image set with a mean reprojection error of 0.017860. For this image set, both left and right cameras can see the chessboard. We set the right camera's distortion coefficients to zero, assuming that any physical lens distortion has been corrected by the camera's internal processing, which (as above) is a ‘black box’ for us since we do not have code for its methods.
Our results aren't good (see below) -- is this approach valid? Notes follow:
Because we lock the intrinsics during stereo calibration, we also perform subpixel refinement on the detected corner points and apply distortion correction to these chessboard corner points ourselves, a correction that ‘stereoCalibrate’ would normally optimise freely. Also, because both cameras' intrinsics are fixed, we must estimate the right camera's pose from the above set of 21 images ourselves, in order to pass its matrix to ‘cv.stereoCalibrate’.
To complicate things, the placement of the cameras leaves little overlap between the camera frustums, as described in
this forum post. As a result, we cannot shoot a set of calibration images in which the chessboard fills most of the frame, since the chessboard would then exit the frame in the other photo of the stereo pair.
My steps follow, with code available in this branch of my SS fork.
‘calib.py’ - Chessboard calibration and reprojection for the two sets of images
‘buildStereoRig.py’ - Calls ‘chessboardHybridStereo’ for stereo calibration (yields ~2.4 pixels net error), then runs two reprojection tests: one using the computed matrices and another calling PnP to estimate the camera pose for reprojection. The distortion coefficients are indeed different between our two sets of images:
‘display_images.py’ - Display epipolar lines on corrected images -- note they are close but not close enough for matching:
‘rectifyRig.py’
‘imageRectify.py’