Preface
In this article, I will explain how to run ORB SLAM3 with an OAK-D stereo camera and ROS1 Noetic on LXD. I assume the reader is using Ubuntu as the host OS and has basic knowledge of ROS1 and LXD.
Hardware Requirement
As far as I have tested, you need a CPU stronger than a Celeron N5105 for 400p 30 fps images. 4 GB of RAM can work, but RAM usage will grow with running time.
General Settings
In this section, I will explain the settings that are not specific to ORB-SLAM3.
OAK-D Settings
To use the OAK-D on Linux, you have to set a udev rule like the following. See here for the official documentation.
This is needed only once.
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
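As an optional check of my own, you can confirm the rule took effect by re-plugging the OAK-D and looking for the Movidius vendor ID on the USB bus (this assumes lsusb from usbutils is installed on the host).
# The OAK-D should appear with the Movidius vendor ID 03e7
lsusb | grep -i 03e7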
LXD Settings
First of all, we need to launch the LXC container to use ("noetic" in this article).
export container_name=noetic
lxc launch ubuntu:20.04 $container_name
OAK-D Pass-Through
To use the OAK-D on LXD, you have to pass it through to the container. As mentioned in the official document, we have to pass through two devices because the OAK-D changes its USB product ID after it boots its firmware.
lxc config device add $container_name oak-d usb vendorid=03e7 productid=2485
lxc config device add $container_name oak-d-loop usb vendorid=03e7 productid=f63b
lxc config device set $container_name oak-d-loop mode 0666
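As another optional sanity check (not part of the official steps), verify that the container can see the device; usbutils may first need to be installed inside the container.
lxc exec $container_name -- bash -c 'apt update && apt install -y usbutils'
lxc exec $container_name -- lsusb | grep -i 03e7   # expect ID 03e7:2485 (unbooted) or 03e7:f63b (after firmware boot)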
OpenGL Settings
Because ROS1's GUI tools such as RViz use OpenGL 1.4, we have to let the container call the host's OpenGL API. The following commands come from this blog post.
The following command is needed only once.
echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
The following commands are needed for each container.
lxc config set $container_name raw.idmap "both $UID 1000"
lxc restart $container_name
lxc config device add $container_name X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0
lxc config device add $container_name Xauthority disk path=/home/ubuntu/.Xauthority source=${XAUTHORITY}
lxc config device add $container_name mygpu gpu
lxc config device set $container_name mygpu uid 1000
lxc config device set $container_name mygpu gid 1000
lxc exec $container_name -- bash -c 'echo "export DISPLAY=:0" >> /home/ubuntu/.bashrc'
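To check that GPU-accelerated OpenGL actually reaches the container, a quick sanity test (my addition; it assumes your host display is :0 as configured above) is to run glxinfo inside the container.
# Log in to the container...
lxc exec $container_name -- sudo --login --user ubuntu
# ...then, inside the container:
sudo apt install -y mesa-utils   # provides glxinfo
glxinfo -B                       # the "OpenGL renderer" line should show your GPU, not llvmpipe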
For NVIDIA GPU users
Because NVIDIA provides additional packages to control capability management for containers, we need additional settings on host machines equipped with NVIDIA GPUs.
lxc config set $container_name nvidia.driver.capabilities all
lxc config set $container_name nvidia.runtime "true"
In Container Installation
We need to log in to the container to install the programs described in this section.
lxc exec $container_name -- sudo --login --user ubuntu
Installing ROS1 noetic
Follow the official instructions to install ros-noetic-desktop.
The following commands are provided just for completeness. If you find any contradiction with the official document, follow the official one.
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
sudo apt update
sudo apt install -y ros-noetic-desktop
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt install -y python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential
sudo rosdep init
rosdep update
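As a quick sanity check (my addition, not part of the official guide), confirm that the ROS environment is set up correctly inside the container.
rosversion -d            # expected output: noetic
echo $ROS_PACKAGE_PATH   # should contain /opt/ros/noetic/share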
Building ORB SLAM3
In this article, we assume ORB SLAM3 v0.4, the latest version at the time of writing.
Installing OpenCV4.5.5
Because ORB SLAM3 requires OpenCV 4.4 but Ubuntu 20.04 provides OpenCV 4.2 as an official package, we have to install OpenCV manually.
CAUTION: The following commands will overwrite your current OpenCV installation. Since I assume you are using an LXC container, this is not harmful, but if you are following these instructions on your local machine, do so at your own risk.
The following instructions install OpenCV 4.5.5, the latest version at the time of writing. You can install another version if you want, though I have not tested that.
First, download the unofficial bash script. I use it because it is easy to use.
wget --no-check-certificate https://raw.githubusercontent.com/milq/milq/master/scripts/bash/install-opencv.sh
Change OPENCV_VERSION in the script to OPENCV_VERSION=4.5.5 by opening it with a text editor such as vi.
vi install-opencv.sh
Just run it.
bash install-opencv.sh
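To confirm which OpenCV version ended up installed, either of the following should report 4.5.5. Whether each check works depends on how the script configured the build (pkg-config file, Python bindings), so treat this as an optional check.
pkg-config --modversion opencv4
python3 -c "import cv2; print(cv2.__version__)"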
Installing Pangolin v0.6
ORB SLAM3 seems to assume that Pangolin v0.6, the latest stable version of Pangolin at the time of writing, is already installed. Therefore, we install it with the following commands.
sudo apt install -y libgl1-mesa-dev libglew-dev ffmpeg libavcodec-dev libavutil-dev libavformat-dev libswscale-dev libavdevice-dev libdc1394-22-dev libraw1394-dev libjpeg-dev libtiff5-dev libopenexr-dev
git clone https://github.com/stevenlovegrove/Pangolin -b v0.6
cd Pangolin
mkdir build
cd build
cmake ..
make
sudo make install
cd ../../
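After make install, the Pangolin libraries are placed under /usr/local/lib. Refreshing the linker cache (my addition; usually harmless) avoids "cannot open shared object file" errors when ORB SLAM3 is run later.
sudo ldconfig
ldconfig -p | grep -i pangolin   # should list libpangolin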
Building ORB SLAM3 library
At the time of writing, the official repository does not support ROS, so I made a fork with ROS1 support.
git clone https://github.com/nindanaoto/ORB_SLAM3.git
cd ORB_SLAM3
bash build.sh
Building the ROS Wrapper of ORB SLAM3
After that, build it with the following commands. Because Ubuntu 20.04 does not provide a python command pointing to Python 3 by default, we fix this by installing the python-is-python3 package.
sudo apt install -y python-is-python3
echo "export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:~/ORB_SLAM3/Examples/ROS" >> ~/.bashrc
source ~/.bashrc
bash build_ros.sh
cd ..
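If you want to verify that the ROS wrapper was built, the following optional checks should work; the paths follow the upstream ORB_SLAM3 layout, so adjust them if the fork differs.
source ~/.bashrc
rospack find ORB_SLAM3                         # should resolve to ~/ORB_SLAM3/Examples/ROS/ORB_SLAM3
ls ~/ORB_SLAM3/Examples/ROS/ORB_SLAM3/Stereo   # the stereo node binary produced by build_ros.sh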
Installing depthai-ros
To publish the OAK-D's data on the ROS1 network, I used depthai-ros. Follow the official installation instructions.
The install commands at the time of writing are as follows.
sudo wget -qO- https://raw.githubusercontent.com/luxonis/depthai-ros/main/install_dependencies.sh | sudo bash
sudo apt install -y python3-vcstool
mkdir -p catkin_ws/src
cd catkin_ws/src
git clone https://github.com/luxonis/depthai-ros.git
cd ..
vcs import src < underlay.repos
rosdep install --from-paths src --ignore-src -r -y
catkin_make
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
echo "export ROS_PACKAGE_PATH=\${ROS_PACKAGE_PATH}:~/ORB_SLAM3/Examples/ROS" >> ~/.bashrc
Installing stereo_image_proc
To undistort the camera images, I used stereo_image_proc.
sudo apt install ros-noetic-stereo-image-proc
Running ORB SLAM3 in Stereo Mode
In this section, I explain how to run the installed ORB SLAM3 with the OAK-D.
When You Can't Find The Device
While following this section, you may find that the device cannot be found even though the OAK-D is connected to the LXC container. In this case, try restarting the container. This error may be caused by LXC's USB pass-through behavior; I have not resolved this issue yet.
Camera Parameter
ORB SLAM3 needs a YAML settings file that describes the camera parameters. The OAK-D's parameters can be extracted by running an EEPROM dump in the container.
python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ depthai
git clone https://github.com/luxonis/depthai-python
python3 depthai-python/examples/calibration/calibration_reader.py
My output is presented below as an example.
RGB Camera Default intrinsics...
[[1570.975830078125, 0.0, 970.232666015625], [0.0, 1568.9881591796875, 540.0018310546875], [0.0, 0.0, 1.0]]
1920
1080
/home/nimda/sources/depthai-python/examples/calibration/calibration_reader.py:23: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
M_rgb, width, height = np.array(calibData.getDefaultIntrinsics(dai.CameraBoardSocket.RGB))
LEFT Camera resized intrinsics...
[[836.39874268 0. 636.00115967]
[ 0. 836.66235352 357.10168457]
[ 0. 0. 1. ]]
LEFT Distortion Coefficients...
k1: 12.44894027709961
k2: 20.618579864501953
p1: 0.0032805176451802254
p2: -0.0014607576886191964
k3: 261.510986328125
k4: 12.51024341583252
k5: 19.012231826782227
k6: 260.66668701171875
s1: 0.0
s2: 0.0
s3: 0.0
s4: 0.0
τx: 0.0
τy: 0.0
RIGHT Camera resized intrinsics...
[[843.39074707 0. 635.90380859]
[ 0. 843.87249756 359.52798462]
[ 0. 0. 1. ]]
RIGHT Distortion Coefficients...
k1: 8.136334419250488
k2: 15.044692039489746
p1: 0.0011016841745004058
p2: -0.001027329359203577
k3: 176.60096740722656
k4: 8.13979721069336
k5: 13.856715202331543
k6: 176.6233367919922
s1: 0.0
s2: 0.0
s3: 0.0
s4: 0.0
τx: 0.0
τy: 0.0
RGB FOV 68.7938003540039, Mono FOV 71.86000061035156
LEFT Camera stereo rectification matrix...
[[ 1.01874362e+00 -2.81472561e-02 -1.40969483e+01]
[ 3.21303086e-02 1.00703708e+00 -1.84303370e+01]
[ 1.70172454e-05 -3.45508922e-06 9.90305330e-01]]
RIGHT Camera stereo rectification matrix...
[[ 1.01029788e+00 -2.79067627e-02 -8.64526404e+00]
[ 3.18639371e-02 9.98432838e-01 -1.76077311e+01]
[ 1.68761665e-05 -3.42556854e-06 9.90394469e-01]]
Transformation matrix of where left Camera is W.R.T right Camera's optical center
[[ 9.99938488e-01 7.27824634e-03 -8.37180577e-03 -7.47539711e+00]
[-7.23146135e-03 9.99958158e-01 5.60518634e-03 2.47205094e-01]
[ 8.41225125e-03 -5.54430112e-03 9.99949217e-01 4.30144593e-02]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
Transformation matrix of where left Camera is W.R.T RGB Camera's optical center
[[ 0.9997673 -0.01337837 -0.0169198 -3.73851228]
[ 0.01365407 0.99977416 0.01628564 0.16693963]
[ 0.0166981 -0.01651287 0.99972415 -0.08542535]
[ 0. 0. 0. 1. ]]
The important part is here.
LEFT Camera resized intrinsics...
[[836.39874268 0. 636.00115967]
[ 0. 836.66235352 357.10168457]
[ 0. 0. 1. ]]
RIGHT Camera resized intrinsics...
[[843.39074707 0. 635.90380859]
[ 0. 843.87249756 359.52798462]
[ 0. 0. 1. ]]
The meaning of these matrices is as follows.
LEFT Camera resized intrinsics...
[[ Camera.fx 0. Camera.cx]
[ 0. Camera.fy Camera.cy]
[ 0. 0. 1. ]]
RIGHT Camera resized intrinsics...
[[ Camera2.fx 0. Camera2.cx]
[ 0. Camera2.fy Camera2.cy]
[ 0. 0. 1. ]]
In addition to that, because the baseline of the OAK-D is 7.5 cm (0.075 m), Camera.bf is 0.075 * Camera.fx.
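For reference, here is the arithmetic (my addition), using the left camera's fx from the dump above; the result matches the Camera.bf value in the YAML below.
python3 -c "print(0.075 * 836.39874268)"   # prints approximately 62.7299057, the Camera.bf value used below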
The distortion coefficients and rectification matrices are not needed because the distortion is already corrected by stereo_image_proc.
Therefore, the example YAML file will be like the following.
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"
# Camera calibration and distortion parameters (OpenCV)
# Copied from the EEPROM data dump (calibration_reader.py) above.
Camera.fx: 836.39874268
Camera.fy: 836.66235352
Camera.cx: 636.00115967
Camera.cy: 357.10168457
Camera.k1: 0.0
Camera.k2: 0.0
Camera.k3: 0.0
Camera.k4: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera2.fx: 843.39074707
Camera2.fy: 843.87249756
Camera2.cx: 635.90380859
Camera2.cy: 359.52798462
Camera2.k1: 0.0
Camera2.k2: 0.0
Camera2.k3: 0.0
Camera2.k4: 0.0
Camera2.p1: 0.0
Camera2.p2: 0.0
Camera.bFishEye: 0
Camera.width: 1280
Camera.height: 720
# Camera frames per second
Camera.fps: 30.0
# stereo baseline times fx
Camera.bf: 62.729905701
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold. Baseline times.
ThDepth: 37.5
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
Running the node
In this subsection, we try to run ORB SLAM3. This requires three terminals logged into the container's shell. Of course, one terminal with multiple windows via tmux also works ;)
1st terminal
Launch the stereo node for OAK-D.
roslaunch depthai_examples stereo_inertial_node.launch
2nd terminal
Run stereo_image_proc to undistort the images.
ROS_NAMESPACE=/stereo_inertial_publisher/ rosrun stereo_image_proc stereo_image_proc
3rd terminal
I assume the config YAML file is named ~/oak-d-params.yaml.
rosrun ORB_SLAM3 Stereo ORB_SLAM3/Vocabulary/ORBvoc.txt oak-d-params.yaml false /camera/left/image_raw:=/stereo_inertial_publisher/left/image_rect /camera/right/image_raw:=/stereo_inertial_publisher/right/image_rect
I hope you will be able to see ORB SLAM3's output now ;)
Running ORB SLAM3 in Stereo_Inertial mode
Because the OAK-D is equipped with an IMU (BNO085), we can use it to get more accurate results.
Calibrating IMU
Because there is no factory calibration for the IMU on the OAK-D, we have to calibrate it by hand.
Noise and bias calibration
First of all, we have to calibrate the noise and bias of the IMU so that ORB SLAM3 can filter them.
I used imu_utils for this purpose.
The usage is relatively simple.
First, we have to build the package.
cd ~/catkin_ws/src
git clone https://github.com/gaowenliang/imu_utils.git
cd ..
catkin_make
Secondly, we need a launch file like the one below, saved as imu.launch:
<launch>
  <node pkg="imu_utils" type="imu_an" name="imu_an" output="screen">
    <param name="imu_topic" type="string" value="/oakd/imu"/>
    <param name="imu_name" type="string" value="BNO085"/>
    <param name="data_save_path" type="string" value="$(find imu_utils)/data/"/>
    <param name="max_time_min" type="int" value="120"/>
    <param name="max_cluster" type="int" value="100"/>
  </node>
</launch>
Lastly, run it. This calibration collects noise from a stationary IMU, so you have to keep the camera still in a place free of vibration.
roslaunch imu.launch
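While the calibration is running (or before starting it), it is worth checking in another terminal that the IMU topic is actually publishing. The topic name below is the one configured in imu.launch above; adjust it to whatever your depthai launch really publishes.
rostopic list | grep -i imu
rostopic hz /oakd/imu   # should report a rate close to the IMU's output frequency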
After the calibration finishes, you can find the output at catkin_ws/src/imu_utils/data/BNO085_imu_param.yaml. For example, my output is shown below.
%YAML:1.0
---
type: IMU
name: BNO085
Gyr:
   unit: " rad/s"
   avg-axis:
      gyr_n: 6.1812048014273056e-03
      gyr_w: 5.3543409844795362e-05
   x-axis:
      gyr_n: 4.5418794718006189e-03
      gyr_w: 3.8935636505378507e-05
   y-axis:
      gyr_n: 9.4460641028299049e-03
      gyr_w: 5.1407042612016025e-05
   z-axis:
      gyr_n: 4.5556708296513940e-03
      gyr_w: 7.0287550416991553e-05
Acc:
   unit: " m/s^2"
   avg-axis:
      acc_n: 4.4782638307960559e-02
      acc_w: 1.5676203442719888e-03
   x-axis:
      acc_n: 7.7544942126337599e-02
      acc_w: 3.7845627827887817e-03
   y-axis:
      acc_n: 2.6254504179465756e-02
      acc_w: 3.5277636750032727e-04
   z-axis:
      acc_n: 3.0548468618078330e-02
      acc_w: 5.6552188252685773e-04
Based on this output, we have to write a YAML configuration file for kalibr's calibration like the one below. As you can see, we only need the avg-axis values. I assume that this configuration file is saved as imu.yaml.
#Accelerometers
accelerometer_noise_density: 2.8138017235431984e-02 #Noise density (continuous-time)
accelerometer_random_walk: 7.1952095806215616e-04 #Bias random walk
#Gyroscopes
gyroscope_noise_density: 6.4190421491705540e-04 #Noise density (continuous-time)
gyroscope_random_walk: 2.3553045282732546e-06 #Bias random walk
rostopic: /oakd/imu #the IMU ROS topic
update_rate: 400.0 #Hz (for discretization of the values above)
Joint Calibration
Because the factory calibration of the OAK-D does not include the IMU, we have to perform a camera-IMU joint calibration to get the pose of the IMU in the camera's coordinate frame.
We can use Kalibr for this purpose.
Though I describe the calibration procedure below, if you are using an OAK-D you may be able to simply copy my configuration, because the position of the IMU probably does not vary much between units.
Preparing calibration target
Kalibr needs a calibration target. The generation of the target is described in the official wiki.
I used my display to show the calibration target, although the official procedure seems to recommend printing it on a large piece of paper.
As far as I have tested, a 13.3-inch display may work, but the viewing angle of the display can affect the calibration quality because it limits the possible movement of the camera during the calibration.
For example, I used the following command for my 27.9-inch 4K TN display, an HP V28.
Please tune the parameters so that each tag is clearly visible while fitting as many tags as possible. (This is my criterion for choosing the parameters, but I am not sure whether it is the best one.)
rosrun kalibr kalibr_create_target_pdf --type apriltag --nx 10 --ny 5 --tspace 0.3
To use the generated target, we have to write a configuration YAML file that matches the parameters used for generation, like the following. tagSize is the actual size of a tag on the display, measured with a ruler or something similar.
target_type: 'aprilgrid' #gridtype
tagCols: 10 #number of apriltags
tagRows: 5 #number of apriltags
tagSize: 0.0456 #size of apriltag, edge to edge [m]
tagSpacing: 0.3 #ratio of space between tags to tagSize
In this post, I assume that the configuration file is saved as april.yaml.
Write Factory calibrated YAML
Camera-IMU joint calibration generally needs a camera-only calibration beforehand. However, because the OAK-D has factory calibration data, simply reformatting it into kalibr's format is fine, as shown below. I assume that this file is saved as factory-camchain-oak-d-stereo.yaml.
Because the distortion model used in the factory calibration is not supported by kalibr, I assume stereo_image_proc is used to undistort the images, so distortion_coeffs are set to zero.
cam0:
  cam_overlaps:
  - 1
  camera_model: pinhole
  distortion_coeffs:
  - 0
  - 0
  - 0
  - 0
  distortion_model: radtan
  intrinsics:
  - 836.39874268
  - 836.66235352
  - 636.00115967
  - 357.10168457
  resolution:
  - 1280
  - 720
  rostopic: /oakd/stereo_ir/left/image_rect
cam1:
  T_cn_cnm1:
  - - 9.99938488e-01
    - 7.27824634e-03
    - -8.37180577e-03
    - -7.47539711e-02
  - - -7.23146135e-03
    - 9.99958158e-01
    - 5.60518634e-03
    - 2.47205094e-03
  - - 8.41225125e-03
    - -5.54430112e-03
    - 9.99949217e-01
    - 4.30144593e-04
  - - 0.0
    - 0.0
    - 0.0
    - 1.0
  cam_overlaps:
  - 0
  camera_model: pinhole
  distortion_coeffs:
  - 0
  - 0
  - 0
  - 0
  distortion_model: radtan
  intrinsics:
  - 843.39074707
  - 843.87249756
  - 635.90380859
  - 359.52798462
  resolution:
  - 1280
  - 720
  rostopic: /oakd/stereo_ir/right/image_rect
Recording Camera-IMU joint data
1st terminal
Launch the stereo node for OAK-D.
roslaunch depthai_examples stereo_inertial_node.launch
2nd terminal
Run stereo_image_proc to undistort the images.
ROS_NAMESPACE=/stereo_inertial_publisher/ rosrun stereo_image_proc stereo_image_proc
3rd terminal
Record the data as oak-d-stereo.bag. See the official video for recommended recording movements.
rosbag record -O ./oak-d-stereo.bag /oakd/stereo_ir/left/image_rect /oakd/stereo_ir/right/image_rect /oakd/imu
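Before running kalibr, a quick sanity check of the recording (my addition) can save time.
rosbag info oak-d-stereo.bag   # should list the two image topics and the IMU topic with sensible message counts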
Run calibration
rosrun kalibr kalibr_calibrate_imu_camera --target april.yaml --bag oak-d-stereo.bag --cam factory-camchain-oak-d-stereo.yaml --imu imu.yaml
After the calibration, you will get the output as camchain-imucam-oak-d-stereo.yaml. If your calibration is good enough, the reprojection error displayed at the end of the calibration will be within about 20, but this is just a rough indication for rejecting a bad calibration.
The output will be like the following:
cam0:
  T_cam_imu:
  - - -0.004297644870309092
    - -0.9999379633526877
    - 0.010276171205606692
    - 0.057355539285231474
  - - 0.9999890070827118
    - -0.004278141289016657
    - 0.0019191719158318053
    - 0.014038544265156515
  - - -0.001875089944512822
    - 0.010284306159845909
    - 0.9999453570493289
    - -0.019419965178863116
  - - 0.0
    - 0.0
    - 0.0
    - 1.0
  cam_overlaps:
  - 1
  camera_model: pinhole
  distortion_coeffs:
  - 0
  - 0
  - 0
  - 0
  distortion_model: radtan
  intrinsics:
  - 836.39874268
  - 836.66235352
  - 636.00115967
  - 357.10168457
  resolution:
  - 1280
  - 720
  rostopic: /oakd/stereo_ir/left/image_rect
  timeshift_cam_imu: -0.005743440329743733
cam1:
  T_cam_imu:
  - - 0.0029964838942637684
    - -0.9999936708554896
    - 0.0019181588134633415
    - -0.017137204848541913
  - - 0.9999677186176349
    - 0.0030106958496116665
    - 0.007449659934745395
    - 0.01598639071112978
  - - -0.007455387777549706
    - 0.0018957741066330407
    - 0.9999704111790622
    - -0.018584179655013494
  - - 0.0
    - 0.0
    - 0.0
    - 1.0
  T_cn_cnm1:
  - - 0.9999384691844168
    - 0.007278246450583977
    - -0.0083718057552318
    - -0.0747539711
  - - -0.007231461239416023
    - 0.9999581441300118
    - 0.005605186373246936
    - 0.00247205094
  - - 0.008412251264768203
    - -0.005544301086753065
    - 0.9999492471680708
    - 0.000430144593
  - - 0.0
    - 0.0
    - 0.0
    - 1.0
  cam_overlaps:
  - 0
  camera_model: pinhole
  distortion_coeffs:
  - 0
  - 0
  - 0
  - 0
  distortion_model: radtan
  intrinsics:
  - 843.39074707
  - 843.87249756
  - 635.90380859
  - 359.52798462
  resolution:
  - 1280
  - 720
  rostopic: /oakd/stereo_ir/right/image_rect
  timeshift_cam_imu: -0.007189075332096728
What we need is 'T_cam_imu' (here, cam0's, i.e. the left camera's):
T_cam_imu:
- - -0.004297644870309092
  - -0.9999379633526877
  - 0.010276171205606692
  - 0.057355539285231474
- - 0.9999890070827118
  - -0.004278141289016657
  - 0.0019191719158318053
  - 0.014038544265156515
- - -0.001875089944512822
  - 0.010284306159845909
  - 0.9999453570493289
  - -0.019419965178863116
- - 0.0
  - 0.0
  - 0.0
  - 1.0
Write the configuration file for Stereo_Inertial mode
I assume the configuration file is saved as 'oak-d-params.yaml'. The following is my configuration file. As you can see, the main difference from the Stereo-mode file is the addition of the Tbc and IMU sections (plus the Stereo.T_c1_c2 matrix).
The translation part of 'Tbc' comes from the kalibr output. Because the rotation part did not seem to match ORB_SLAM3's axes, I used a fixed value for the rotation.
The values in the 'IMU' section are the ones from 'imu.yaml'.
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"
# Camera calibration and distortion parameters (OpenCV)
# Copied from the EEPROM data dump (calibration_reader.py) above.
Camera.fx: 836.39874268
Camera.fy: 836.66235352
Camera.cx: 636.00115967
Camera.cy: 357.10168457
Camera.k1: 0.0
Camera.k2: 0.0
Camera.k3: 0.0
Camera.k4: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera2.fx: 843.39074707
Camera2.fy: 843.87249756
Camera2.cx: 635.90380859
Camera2.cy: 359.52798462
Camera2.k1: 0.0
Camera2.k2: 0.0
Camera2.k3: 0.0
Camera2.k4: 0.0
Camera2.p1: 0.0
Camera2.p2: 0.0
Camera.bFishEye: 0
Camera.width: 1280
Camera.height: 720
# Camera frames per second
Camera.fps: 30.0
# stereo baseline times fx
Camera.bf: 62.729905701
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold. Baseline times.
ThDepth: 37.5
Stereo.T_c1_c2: !!opencv-matrix
rows: 4
cols: 4
dt: f
data: [9.99938488e-01, 7.27824634e-03, -8.37180577e-03, -7.47539711e+00,
-7.23146135e-03, 9.99958158e-01, 5.60518634e-03, 2.47205094e-01,
8.41225125e-03, -5.54430112e-03, 9.99949217e-01, 4.30144593e-02,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]
# Transformation from camera 0 to body-frame (imu)
Tbc: !!opencv-matrix
rows: 4
cols: 4
dt: f
data: [0, 1, 0, 0.01744779136481356,
-1, 0, 0, 0.012407593809577674,
0, 0, 1, -0.026713744779204896,
0., 0., 0., 1. ]
# IMU noise
IMU.NoiseGyro: 6.4190421491705540e-04
IMU.NoiseAcc: 2.8138017235431984e-02
IMU.GyroWalk: 2.3553045282732546e-06
IMU.AccWalk: 7.1952095806215616e-04
IMU.Frequency: 400
#--------------------------------------------------------------------------------------------
# Stereo Rectification. Only if you need to pre-rectify the images.
#
# NOT NEEDED ON OAK-D. We extract rectified images directly from the OAK-D
# itself, so no need for these.
#
# Camera.fx, .fy, etc must be the same as in LEFT.P
#--------------------------------------------------------------------------------------------
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1.0
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2.0
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3.0
Viewer.ViewpointX: 0.0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500.0
Viewer.imageViewScale: 1.0
Run Stereo_Inertial mode
Finally, we can run it!
1st terminal
Launch the stereo node for OAK-D.
roslaunch depthai_examples stereo_inertial_node.launch
2nd terminal
Run stereo_image_proc to undistort the images.
ROS_NAMESPACE=/stereo_inertial_publisher/ rosrun stereo_image_proc stereo_image_proc
3rd terminal
rosrun ORB_SLAM3 Stereo_Inertial ORB_SLAM3/Vocabulary/ORBvoc.txt oak-d-params.yaml false /camera/left/image_raw:=/stereo_inertial_publisher/left/image_rect /camera/right/image_raw:=/stereo_inertial_publisher/right/image_rect /imu:=/stereo_inertial_publisher/imu