Reference: /depth-camera-d435i/
Intel's official documentation, in particular the product datasheet, gives a very detailed introduction and covers almost all the information a user needs (and doesn't need) to know.
The key information about the D435i is extracted here for future reference.
The Intel RealSense D4xx series, including the D435i, all measure depth through classic binocular (stereo) vision. Although there is an infrared projector, the camera does not measure distance from the infrared reflections themselves; the projector's only function is to cast a fixed, invisible infrared texture pattern that improves depth accuracy in environments with weak texture (such as white walls) and thereby assists the stereo matching. The left and right cameras send their image data to a built-in depth processor, which computes the depth of each pixel based on the principle of binocular ranging.
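As a quick reference (the symbols here are an addition, not from the datasheet): for a stereo pair with focal length $f$ (in pixels), baseline $b$, and measured pixel disparity $d$, triangulation recovers depth as

$$z = \frac{f \, b}{d}$$

so the projected IR texture simply makes the disparity $d$ easier to estimate reliably on otherwise uniform surfaces.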
The following figure shows the infrared texture pattern projected onto white paper:
The main specification tables are:

- Stereo (binocular) ranging camera parameters
- Infrared projector parameters
- RGB camera parameters
- Depth image resolutions and supported frame rates
- RGB image resolutions and supported frame rates
- IMU parameters
Intel RealSense SDK 2.0 is a cross-platform development kit that includes basic camera tools such as realsense-viewer and also provides rich interfaces for further development, including ROS, Python, MATLAB, Node.js, LabVIEW, OpenCV, PCL, .NET, etc.
On a Linux system there are two ways to install the development library: installing the precompiled Debian packages, or building from source.
If the Linux kernel version is 4.4, 4.8, 4.10, 4.13, 4.15, 4.18*, 5.0*, or 5.3*, and no custom kernel modules are involved, it is better to install the precompiled Debian packages.
Check the Ubuntu kernel version with the following command:
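```
# print the release string of the running kernel
uname -r
```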
The result is 5.0.0-23-generic, which meets the version requirement above, so we chose to install the precompiled Debian packages.
For the installation steps under Ubuntu, refer to /IntelRealSense/librealsense/blob/master/doc/distribution_linux.md.
The specific steps are summarized as follows (for Ubuntu 18.04):
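A sketch of those steps, following the distribution_linux.md document linked above; the keyserver, key, and repository URL change over time, so verify them against the current version of that document:

```
# register the server's public key (key value taken from the official guide at the time)
sudo apt-key adv --keyserver keys.gnupg.net --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
# add the repository ("bionic" corresponds to Ubuntu 18.04)
sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo bionic main" -u
# install the DKMS kernel modules, the utilities (including realsense-viewer), and the dev headers
sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev
```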
Then you can run realsense-viewer to view the camera's depth and RGB streams, as well as the IMU measurements, as shown in the following figure:
In addition, you need to verify that the patched uvcvideo kernel module is actually in use:
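```
# the patched module reports a version string containing "realsense"
modinfo uvcvideo | grep "version:"
```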
Make sure the output contains the word realsense, e.g. version: 1.1.2.realsense-1.3.14.
Then check the DKMS status:
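```
# list DKMS modules and the kernels they are installed for
dkms status
```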
The returned result should include an entry like librealsense2-dkms, 1.3.14, 5.0.0-23-generic, x86_64: installed.
If all the above checks pass, RealSense SDK 2.0 has been installed successfully!
If any of the above results is wrong, it may affect the subsequent steps. In our experience, librealsense2-dkms installs against the first kernel listed under /lib/modules; if the system has multiple kernels and the currently running kernel is not the first one there, an error may occur.
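A quick way to check whether you are in this situation:

```
# the running kernel...
uname -r
# ...should match the first entry listed here
ls /lib/modules
```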
Since firmware version 5.12.02.100, the Intel RealSense D4xx series cameras (including the D435i) have provided a self-calibration function, which greatly automates camera calibration and eliminates the need to hold a calibration board.
The detailed operation can be viewed here.
Brief process (roughly, per the guide linked above): start the depth stream in realsense-viewer, open More > On-Chip Calibration, run the calibration, and write the new table to the camera if the reported health figure is acceptable.
In this post, our ultimate goal is to publish the camera's depth and RGB data as ROS topics and then build a point cloud map with ORB SLAM 2.
Here we need the RealSense ROS package ros-$ROS_VER-realsense2-camera. Note that this ROS package does not depend on RealSense SDK 2.0; the two are completely independent. So if you only want to use the RealSense camera in ROS, there is no need to install RealSense SDK 2.0 first.
Refer to /IntelRealSense/realsense-ros for installation steps.
The specific commands are as follows (prerequisite: ROS Melodic is installed):
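A sketch of the Debian-package route from the realsense-ros README (the package name assumes ROS Melodic):

```
# install the prebuilt realsense2_camera package for ROS Melodic
sudo apt-get install ros-melodic-realsense2-camera
```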
The installation includes two parts: the realsense2_camera package itself and the librealsense2 library (with matching udev rules) that it depends on.
Before starting the camera, we need to configure the rs_camera.launch file in the realsense2_camera package.
For an introduction to the various parameters in the ROS launch file, please refer to here.
Ensure that the following two parameters in the rs_camera.launch file are set to true: enable_sync and align_depth.
The former time-synchronizes the data from the different sensors (depth, RGB, IMU) so that they carry the same timestamp;
the latter adds several rostopics, of which we care most about /camera/aligned_depth_to_color/image_raw, in which the depth image is aligned to the RGB image; a comparison is shown below.
Then, you can use the following command to start the camera:
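```
roslaunch realsense2_camera rs_camera.launch
```

Since enable_sync and align_depth are launch arguments, they can equally be passed on the command line instead of editing the file: roslaunch realsense2_camera rs_camera.launch enable_sync:=true align_depth:=true.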
Some topics are as follows:
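For example (abbreviated; the exact set depends on the launch parameters):

```
rostopic list
# /camera/color/image_raw                   <- RGB image
# /camera/aligned_depth_to_color/image_raw  <- depth aligned to RGB
# /camera/depth/image_rect_raw
# /camera/gyro/sample
# /camera/accel/sample
# ...
```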
The key point is that /camera/color/image_raw and /camera/aligned_depth_to_color/image_raw correspond to the RGB image and the depth image, respectively. Based on these data, we want to achieve the effect of ORB SLAM 2 + point cloud mapping.
Compared with running ORB SLAM 2 from a rosbag, the following modifications are needed:
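As a minimal sketch of the main change: the stock ORB SLAM 2 RGB-D node (ros_rgbd.cc) subscribes to /camera/rgb/image_raw and /camera/depth_registered/image_raw, so those subscriptions have to be pointed at the topics above, either by editing the source or by ROS topic remapping at launch. D435i.yaml below is a hypothetical settings file filled in with the D435i's intrinsics:

```
# run the RGB-D node, remapping its subscriptions onto the camera's live topics
# (run from the ORB_SLAM2 directory; vocabulary/settings paths are placeholders)
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.txt Examples/RGB-D/D435i.yaml \
    /camera/rgb/image_raw:=/camera/color/image_raw \
    /camera/depth_registered/image_raw:=/camera/aligned_depth_to_color/image_raw
```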
After the above modifications, you can compile and run ORB SLAM 2 following the steps in the previous article. At this point the depth and RGB data no longer come from a rosbag but from the camera.
The commands are summarized as follows:
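Putting it together (same placeholders as in the sketch above):

```
# terminal 1: start the camera with synchronization and depth alignment enabled
roslaunch realsense2_camera rs_camera.launch enable_sync:=true align_depth:=true
# terminal 2: run the ORB SLAM 2 RGB-D node against the live topics
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.txt Examples/RGB-D/D435i.yaml \
    /camera/rgb/image_raw:=/camera/color/image_raw \
    /camera/depth_registered/image_raw:=/camera/aligned_depth_to_color/image_raw
```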
The point cloud map that is finally saved looks as follows:
This post records the process of running ORB SLAM 2 with the Intel RealSense D435i depth camera. Since the previous articles (1, 2) already documented ORB SLAM 2 with rosbag data in great detail, most of this post covers the settings related to the depth camera, both for my own future reference and, I hope, as a help to readers working in similar directions.