The camera resolution was set to 640x480; everything was done in Python.
Later I improved the project and migrated it to an [EU5 car](https://en.wikipedia.org/wiki/Beijing_U5), still processing on a Xavier AGX, and got a better result (see `img/car.mp4`).
This EU5 car version used four CSI cameras of resolution 960x640. The full birdview image has resolution 1200x1600; the fps is about 17/7 without/with post-processing, respectively.
> **Remark**: The black area in front of the car is the blind area after projection; it appears because the front camera wasn't installed correctly.
The project is not very complex, but it does involve some careful computations. Now we explain the whole process step by step.
# Hardware and software
The hardware I used in the small car project is:
1. Four USB fisheye cameras, supporting three different resolution modes: 640x480|800x600|1920x1080. I used 640x480 because it suffices for a car of this size.
2. AGX Xavier.
Indeed, you can do all the development on your laptop; an AGX is not a strict prerequisite for reproducing this project.
The hardware I used in the EU5 car project is:
1. Four CSI cameras of resolution 960x640. I used Sekonix's [SF3326-100-RCCB camera](http://sekolab.com/products/camera/).
2. The same AGX Xavier as in the small car.
The software:
The four cameras will be named `front`, `back`, `left`, and `right`, with device numbers 0, 1, 2, and 3, respectively. Please modify these according to your actual device numbers.
The camera intrinsic matrix is denoted `camera_matrix`; this is a 3x3 matrix.

The distortion coefficients are stored in `dist_coeffs`; this is a 1x4 vector.

The projection matrix is denoted `project_matrix`; this is a 3x3 projective matrix.
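As a concrete illustration, these parameters can be stored as NumPy arrays with the shapes described above. The numeric values below are made-up placeholders, not real calibration results:

```python
import numpy as np

# Hypothetical intrinsic matrix: fx, fy are focal lengths in pixels,
# (cx, cy) is the principal point. Placeholder values only.
camera_matrix = np.array([
    [320.0,   0.0, 320.0],
    [  0.0, 320.0, 240.0],
    [  0.0,   0.0,   1.0],
])

# Fisheye distortion coefficients (k1, k2, k3, k4) as a 1x4 vector.
dist_coeffs = np.array([[-0.05, 0.01, -0.002, 0.0005]])

# Projective matrix mapping the undistorted image to the birdview;
# its last entry is conventionally fixed to 1.
project_matrix = np.array([
    [1.2, 0.1,   -30.0],
    [0.0, 1.5,   -50.0],
    [0.0, 0.001,   1.0],
])

assert camera_matrix.shape == (3, 3)
assert dist_coeffs.shape == (1, 4)
assert project_matrix.shape == (3, 3)
```

In practice these arrays come from the calibration step described below, one set per camera.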
# Preparation: camera calibration
You can see there is a black-and-white calibration pattern on the ground.
# Setting projection parameters
Now we compute the projection matrix for each camera. This matrix transforms the undistorted image into a bird's-eye view of the ground. All four projection matrices must fit together to make sure the four projected images can be stitched together.
This is done by putting calibration patterns on the ground, taking the camera images, manually choosing the feature points, and then computing the matrix.
Of course, each board must be seen by the two adjacent cameras.
Now we need to set a few parameters (in `cm` units):
`innerShiftWidth`, `innerShiftHeight`: the distance between the inner edges of the left/right calibration boards and the car, and the distance between the inner edges of the front/back calibration boards and the car.

`shiftWidth`, `shiftHeight`: how far beyond the boards you want the view to extend. The bigger these values, the larger the area the birdview image will cover.

`totalWidth`, `totalHeight`: size of the area that the birdview image covers. In this project, the calibration pattern is of width `600cm` and height `1000cm`, hence the birdview image will cover an area of size `(600 + 2 * shiftWidth, 1000 + 2 * shiftHeight)`. For simplicity, we let each pixel correspond to 1cm, so the final birdview image also has resolution
```
totalWidth = 600 + 2 * shiftWidth
totalHeight = 1000 + 2 * shiftHeight
```

Firstly, you need to run the script `run_get_projection_maps.py`.
The scale and shift parameters are needed because the default OpenCV calibration method for fisheye cameras involves cropping the corrected image to a region that OpenCV "thinks" is appropriate. This inevitably results in the loss of some pixels, especially the feature points that we may want to select.
Fortunately, the function [`cv2.fisheye.initUndistortRectifyMap`](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#ga0d37b45f780b32f63ed19c21aa9fd333) allows us to provide a new intrinsic matrix, which can be used to perform a scaling and translation of the un-cropped corrected image. By adjusting the horizontal and vertical scaling ratios and the position of the image center, we can ensure that the feature points on the ground plane appear in comfortable places in the image, making it easier to perform calibration.
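A minimal pure-NumPy sketch of this idea follows. The scale and shift values are arbitrary examples; the resulting `new_K` would be passed to `cv2.fisheye.initUndistortRectifyMap` as the new camera matrix `P`:

```python
import numpy as np

def scaled_intrinsics(K, scale_x, scale_y, shift_x, shift_y):
    """Return a new intrinsic matrix that scales the undistorted image
    and shifts its center so chosen feature points stay visible.
    (Illustrative sketch; the values used below are arbitrary.)"""
    new_K = K.copy()
    new_K[0, 0] *= scale_x   # scale focal length along x
    new_K[1, 1] *= scale_y   # scale focal length along y
    new_K[0, 2] += shift_x   # move the image center horizontally
    new_K[1, 2] += shift_y   # move the image center vertically
    return new_K

# Placeholder intrinsic matrix, not a real calibration result.
K = np.array([[320.0,   0.0, 320.0],
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Shrink the view (scale < 1) and shift the center so that more of the
# un-cropped corrected image, including the feature points, fits in frame.
new_K = scaled_intrinsics(K, 0.7, 0.8, 150.0, 100.0)
# new_K would then be given to cv2.fisheye.initUndistortRectifyMap as P.
```

Scaling down the focal lengths widens the effective field of view of the corrected image, which is exactly what recovers the pixels that the default cropping would discard.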
Then, click on the four predetermined feature points in order (the order cannot be changed).
The script for setting up the points is [here](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/surround_view/param_settings.py#L40).
These four points can be freely set, but you need to manually modify their pixel coordinates in the bird's-eye view in the program. When you click on these four points in the corrected image, OpenCV will calculate a perspective transformation matrix based on the correspondence between their pixel coordinates in the corrected image and their corresponding coordinates in the bird's-eye view. The principle used here is that a perspective transformation can be uniquely determined by four corresponding points (four points give eight equations, from which the eight unknowns in the perspective matrix can be solved; note that the last component of the perspective matrix is always fixed to 1).
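To make the "eight equations, eight unknowns" point concrete, here is a small pure-NumPy sketch that solves for the perspective matrix from four point correspondences. It mirrors in spirit what `cv2.getPerspectiveTransform` does internally; the sample coordinates are made up:

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Solve the 8x8 linear system for the perspective matrix H
    (with H[2,2] fixed to 1) that maps 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), cleared of
        # the denominator; likewise for v. Two equations per point.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, p):
    """Apply H to a 2D point using homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Made-up feature points in the corrected image ...
src = [(100, 200), (500, 210), (520, 400), (80, 390)]
# ... and their chosen positions in the bird's-eye view.
dst = [(150, 100), (450, 100), (450, 500), (150, 500)]

H = homography_from_4_points(src, dst)
```

By construction, `apply_homography(H, src[i])` reproduces `dst[i]` for each of the four clicked points; any other ground-plane point is then mapped consistently by the same matrix.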
If you accidentally click the wrong point, you can press the `d` key to delete the last selected point. After selecting the four points, press Enter, and the program will display the resulting bird's-eye view image: