Lidar-Camera SLAM Optimization in Long Corridors
Paper title: Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Localization is a fundamental problem in robot navigation, allowing robots to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that rely on a laser scanner may fail to estimate the robot pose robustly. To address this problem, we propose a new localization method based on a hybrid approach that combines a 2D laser scanner and a monocular camera within a graph-structure-based SLAM framework. The hybrid approach recovers the 3D coordinates of image feature points under the assumption that walls are perpendicular to the ground and flat. This assumption can be relaxed, however, because the subsequent feature-matching step rejects outliers on sloped or uneven walls. The final robot pose is estimated by optimizing the constraint graph generated by the hybrid approach. To verify the effectiveness of the method, real-world experiments were carried out in an indoor long-corridor environment, and the results were compared with the conventional GMapping approach. The results show that the proposed method can localize the robot in real time in environments with laser scan ambiguity and that it outperforms the conventional method.

This paper presents a graph-structure-based SLAM method that uses a 2D laser scanner and a monocular camera to address the problems above. Image feature data are obtained from the camera, and depth information is obtained from the laser scanner. The fused sensor data are organized into a graph structure, compensating for the shortcomings of each individual sensor. This hybrid combination of laser scanner and camera makes it possible to estimate the robot pose in environments with laser scan ambiguity, where estimation from the laser scanner alone is difficult. A typical example is a very long corridor or a very large open space in which the laser scanner can detect only one side of the surroundings, because the distance from the robot to the opposite wall exceeds the scanner's maximum detection range. By compensating for each sensor's weaknesses in this way, the hybrid approach allows the robot pose to be estimated accurately in environments with laser scan ambiguity.

The rest of this article is organized as follows. Section 2 describes the overall method, in which an indoor robot minimizes its pose error using graph-structure-based SLAM with multiple sensors. Section 3 describes the experimental environment and the complete system used to verify the advantages of the proposed method, together with the experimental results obtained with our approach. Section 4 presents the conclusion and future work.

This section introduces our method in detail. First, GMapping [4], a grid-based SLAM approach using Rao-Blackwellized particle filters, and the graph-based SLAM formulation are briefly introduced. We then explain in detail how feature data extracted from the monocular camera are fused with depth data from the laser scanner to localize the robot. Finally, we describe how the pose graph is generated and optimized.

GMapping is a grid-based SLAM approach that uses a Rao-Blackwellized particle filter. By applying adaptive resampling, GMapping mitigates the particle depletion problem that has long affected particle-filter-based SLAM. By adopting what is called an improved proposal distribution, GMapping reduces the uncertainty of the robot pose: assuming that laser scans are more accurate than odometry, it generates the proposal distribution from the most recent sensor measurements. This is a probabilistic method for estimating the robot pose, and the pose on the 2D plane can be estimated from the depth data of the laser scanner. The GMapping algorithm works well as long as consecutive laser scans can be matched unambiguously. In an environment with laser scan ambiguity, however, GMapping may produce pose errors. This situation therefore needs to be handled, and the next section describes graph-based SLAM with multi-sensor fusion for that purpose.
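To make the adaptive resampling idea concrete, the following is a minimal sketch of the effective-sample-size test that Rao-Blackwellized particle filters such as GMapping use to decide when to resample. The function names and threshold are illustrative, not GMapping's actual API.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized importance weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def maybe_resample(particles, weights, threshold_ratio=0.5):
    """Resample only when N_eff drops below a fraction of N.

    Resampling on every step depletes particle diversity; the adaptive
    scheme resamples only when the weights have degenerated.
    """
    n = len(particles)
    if effective_sample_size(weights) < threshold_ratio * n:
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        idx = np.random.choice(n, size=n, p=w)   # draw with prob. proportional to weight
        particles = [particles[i] for i in idx]
        weights = [1.0 / n] * n                  # reset to uniform after resampling
    return particles, weights
```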

This section describes the method for fusing the camera's feature data with the depth information from the laser scanner. The hybrid approach is used to predict the 3D robot pose, and this information serves as a constraint in the graph-based SLAM. The overall algorithm and concept are shown in Figures 1 and 2, respectively. Before the hybrid method combining the laser scanner and the monocular camera can be applied, intrinsic calibration must be carried out to determine the camera's internal parameters. The relative pose between the camera and the laser scanner must also be known in order to fuse their data, so this relative pose is obtained through extrinsic calibration.
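As a hedged illustration of how the two calibrations are used together, the sketch below projects one laser scan point into the camera image using an intrinsic matrix K and laser-to-camera extrinsics R, t. All numeric values are placeholders, not calibration results from the paper.

```python
import numpy as np

K = np.array([[525.0,   0.0, 320.0],   # fx,  0, cx  (intrinsic calibration)
              [  0.0, 525.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

# Extrinsic calibration (placeholders): axis permutation mapping the laser
# frame (x forward, z up) to the camera frame (z forward, y down), plus a
# small translation in meters.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.0, -0.1, 0.05])

def laser_point_to_pixel(r, theta):
    """Convert a laser range/bearing reading to a pixel coordinate."""
    # Laser scan point in the laser frame (scan plane, z = 0).
    p_laser = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    # Transform into the camera frame with the extrinsic calibration.
    p_cam = R @ p_laser + t
    # Perspective projection with the intrinsic matrix.
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]            # (u, v) in pixels

print(laser_point_to_pixel(2.0, 0.1))  # a point ~2 m ahead, slightly left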

After the sensors are mounted on the robot, each sensor is used to estimate the robot pose, and the hybrid algorithm then fuses the individual results, as shown in Figure 4. The covariance and measurement of each sensor are used to generate the constraints of the graph structure; the generated constraints are assembled into the graph, and the final corrected robot pose is obtained through graph optimization. Odometry readings are used to generate pose constraints. A 2D grid map is built from the depth data of the 2D laser scanner, and 2D robot pose constraints are then generated from the grid map using GMapping and ICP scan matching [4]. After feature points are extracted from the camera images with the SURF (Speeded-Up Robust Features) [29] algorithm, the 3D coordinates of each feature point are obtained by adding depth information from the 2D laser scanner, and 3D robot pose constraints are generated from these 3D feature coordinates. The remainder of this section describes in detail the hybrid method for obtaining the robot pose with a monocular camera and a laser scanner.
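The per-frame feature step described above can be sketched as follows: extract SURF keypoints, then back-project each keypoint to 3D by borrowing depth from the laser scan under the flat, vertical-wall assumption. This is a minimal sketch, not the paper's implementation; SURF requires the opencv-contrib-python package, and depth_for_pixel is a hypothetical helper standing in for the laser depth lookup.

```python
import numpy as np
import cv2  # SURF lives in cv2.xfeatures2d (opencv-contrib-python)

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

def keypoints_to_3d(image, depth_for_pixel):
    """Extract SURF features and back-project them to 3D camera coordinates.

    depth_for_pixel(u, v) is a hypothetical helper that returns the laser
    depth associated with an image column (valid under the assumption that
    walls are vertical and flat), or None if there is no laser return.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(image, None)
    points_3d, kept_descriptors = [], []
    for kp, desc in zip(keypoints, descriptors):
        u, v = kp.pt
        z = depth_for_pixel(u, v)
        if z is None:
            continue                       # no depth available: drop the feature
        ray = K_inv @ np.array([u, v, 1.0])  # normalized viewing ray (z = 1)
        points_3d.append(z * ray)            # 3D point in the camera frame
        kept_descriptors.append(desc)
    return points_3d, kept_descriptors
```

Matching these 3D feature points between frames is what yields the 3D pose constraints added to the graph; outliers from sloped or uneven walls are rejected during that matching step, as noted in the abstract.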