Raspberry Pi camera module calibration using OpenCV

Sharad Rawat
5 min read · Feb 21, 2021


Before you begin reading this blog, if you have not checked out my previous blog about adding a speed sensor to the sensor stack and integrating it with the Raspberry Pi, then check it out.

A few months back, I integrated a camera module with the Raspberry Pi. Now is the time to make use of this powerful sensor. However, before we begin using it, we need to calibrate the camera. Below is the blog about setting up the camera module with the Raspberry Pi.

Why calibrate?

Let’s take an example: imagine the speedometer in your car shows 3 km/h even though the car is not moving. You quickly realize that whenever you are moving, your actual velocity is 3 km/h less than what the speedometer displays. Essentially, you are correcting the faulty reading from the sensor to recover the real value.
In the same way, the raw image obtained from the camera is not the view an ideal camera would capture. We need to transform the raw image into what an ideal camera would have produced. To do that, we need two sets of parameters.

1. Intrinsic Parameters

Camera Model

These parameters are internal to the camera and remain constant for a given camera: the focal lengths and the image center (principal point) in x and y.
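As a sketch of how these parameters fit together, they are conventionally packed into a 3×3 camera matrix; the numbers below are made-up placeholders, not values from my camera:

```python
import numpy as np

# Hypothetical intrinsics: fx, fy = focal lengths in pixels,
# (cx, cy) = image center (principal point) in pixels
fx, fy = 1250.0, 1250.0
cx, cy = 640.0, 360.0

# The 3x3 intrinsic (camera) matrix
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Projecting a 3D point (X, Y, Z) in the camera frame to pixel coordinates
X, Y, Z = 0.1, 0.05, 2.0
u = fx * X / Z + cx
v = fy * Y / Z + cy
```

This is the matrix that the calibration below recovers as `mtx`.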

2. Distortion Parameters

The above camera model is a pinhole camera model. However, a real camera has an important feature that this model ignores: the lens. Light rays bend a little too much at the edges of a curved lens, and this distorts the edges of the image. This distortion needs to be corrected, so we compute these parameters as well.
The distortion is a combination of radial distortion and tangential distortion. Together, both can be expressed by 5 parameters.
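As an illustrative sketch (the helper function and coefficient ordering follow OpenCV's (k1, k2, p1, p2, k3) convention, but the function itself is my own, not from this post), the 5-parameter model maps an undistorted normalized point to its distorted position:

```python
def distort_point(x, y, dist):
    """Apply the 5-parameter radial + tangential distortion model
    to a normalized image point (x, y)."""
    k1, k2, p1, p2, k3 = dist
    r2 = x * x + y * y
    # Radial term: grows with distance from the image center
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Tangential terms: account for the lens not being perfectly
    # parallel to the image plane
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all five coefficients at zero, the point passes through unchanged, which is exactly the ideal pinhole case.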

Distortion coefficients

The process:

  1. Collect 20 images of a chessboard captured by the Raspberry Pi. Why a chessboard? Because it is very easy to identify the corners of a chessboard. We pass the coordinates of these corners in the 3D world; these can be something like (1,0,0), (2,0,0) and so on. The OpenCV algorithm then computes the homography between the 3D points and the corresponding 2D image points that it detects.
Images captured by Raspberry Pi

2. Write some preparatory code.

import glob
import pickle

import cv2
import numpy as np

# Number of inner corners on the chess board
num_intersections_in_x = 7
num_intersections_in_y = 7

# Side length of one square in meters
square_size = 0.0225

# Arrays to store 3D object points and 2D image points
obj_points = []
img_points = []

# Prepare the expected 3D object points: (0,0,0), (1,0,0) ...
object_points = np.zeros((7*7, 3), np.float32)
object_points[:, :2] = np.mgrid[0:7, 0:7].T.reshape(-1, 2)
object_points = object_points*square_size

fnames = glob.glob('path/to/images/'+'*.'+'jpg')

a. I expect 7 inner corners in the x direction and 7 in the y direction.

b. Moreover, I manually measured the physical side length of a square, which is 0.0225 m in my case.

c. Create a grid of the physical 3D points using mgrid.

3. Find chess board corners.

# Find chess board corners. gray_scale is an image loaded with
# cv2.imread() and converted via cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCorners(gray_scale, (num_intersections_in_x, num_intersections_in_y), None)

This finds the corners in the image and returns ret (a bool indicating success) and a list of detected corners.

One can double check if this function returns meaningful values.

if ret:
    # Draw the detected corners on the image
    drawn_img = cv2.drawChessboardCorners(img, (7, 7), corners, ret)
    cv2.imshow("main", drawn_img)
    cv2.waitKey(0)

The output is as follows:

Detected corners drawn on a chess board.

4. Now that the algorithm knows the corners, we can calibrate using the 3D object points from step 2 and the 2D image points from step 3.

# img_size is (width, height) of the images, e.g. gray_scale.shape[::-1]
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)

This returns several outputs, but only mtx and dist are important to us: mtx holds the intrinsic parameters and dist holds the distortion parameters.

5. Since computing these parameters is expensive, we store them right here using the pickle library.

dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump(dist_pickle, open("dist_pickle.p", "wb"))
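Later scripts can then restore the parameters instead of recalibrating. A minimal sketch, with the save/load helpers and the temporary file path being my own additions for illustration:

```python
import os
import pickle
import tempfile

import numpy as np

def save_calibration(path, mtx, dist):
    """Persist the camera matrix and distortion coefficients to disk."""
    with open(path, "wb") as f:
        pickle.dump({"mtx": mtx, "dist": dist}, f)

def load_calibration(path):
    """Restore the camera matrix and distortion coefficients."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data["mtx"], data["dist"]

# Round-trip demonstration with placeholder values
path = os.path.join(tempfile.mkdtemp(), "dist_pickle.p")
save_calibration(path, np.eye(3), np.zeros(5))
mtx, dist = load_calibration(path)
```

The loaded mtx and dist can then be passed straight to cv2.undistort() in step 6.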

6. Almost there! Now, to undistort the images, use the undistort() function from OpenCV.

undst_image = cv2.undistort(img, mtx, dist, None, mtx)
Comparison of an image before and after undistortion.

7. Job done. Next time you want to use a cool computer vision algorithm, make sure to first run cv2.undistort() with these camera calibration parameters.


The code used in this blog is available in one of my git repositories. Link below.

Thank you for reading the article. If you like this series of blogs, please clap. Your appreciation is a huge encouragement.
