
Running a RealSense camera (D415) with Ubuntu 20.04 on a Raspberry Pi 4 Model B


As of this writing, librealsense officially supports only up to Ubuntu 18.04.
However, the comment below suggests it should work if you pass -DFORCE_RSUSB_BACKEND=true, so let's try it.
https://github.com/IntelRealSense/librealsense/issues/6368#issuecomment-636316279

Preparation

Following the guide below, install the packages that look necessary for the 18.04 instructions.
refer: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md

sudo apt-get install git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev
sudo apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

First, build only the librealsense core

Build from source. For Ubuntu 20.04, don't forget to add -DFORCE_RSUSB_BACKEND=true.

cd ~/sources
git clone https://github.com/IntelRealSense/librealsense
cd librealsense
git checkout -b v2.36.0 refs/tags/v2.36.0
mkdir build
cd build
cmake ../ \
-DFORCE_RSUSB_BACKEND=true \
-DCMAKE_BUILD_TYPE=release \
-DBUILD_EXAMPLES=true \
-DBUILD_GRAPHICAL_EXAMPLES=true

sudo make uninstall && make clean && make -j 4 && sudo make install

# realsense-viewer reports an error, so do what it says.
sudo cp ~/.99-realsense-libusb.rules /etc/udev/rules.d/99-realsense-libusb.rules && sudo udevadm control --reload-rules && udevadm trigger

Now realsense-viewer and friends work.
With both RGB and IR streams enabled, it seems to run at about 4 fps.

Enabling the Python bindings (pyrealsense2)

To use it from Python, enable -DBUILD_PYTHON_BINDINGS and set -DPYTHON_EXECUTABLE as shown below.

# Without python3-dev installed you get an error like:
#   Could NOT find Python (missing: Python_EXECUTABLE Python_LIBRARIES
#   Python_INCLUDE_DIRS Interpreter Development)
sudo apt install python3-dev

cmake ../ \
-DFORCE_RSUSB_BACKEND=true \
-DCMAKE_BUILD_TYPE=release \
-DBUILD_EXAMPLES=true \
-DBUILD_GRAPHICAL_EXAMPLES=true \
-DBUILD_PYTHON_BINDINGS=true \
-DPYTHON_EXECUTABLE=$(which python3)

# sudo make uninstall && make clean && make -j 4 && sudo make install
sudo make uninstall && make -j 4 && sudo make install
sudo cp ~/.99-realsense-libusb.rules /etc/udev/rules.d/99-realsense-libusb.rules && sudo udevadm control --reload-rules && udevadm trigger

# Because tflite was installed beforehand, Python 3.7.8 under pyenv is in use.
# As a result, pyrealsense2 ends up installed under /usr/lib/python3/dist-packages/.
# To import pyrealsense2 from Python, the following is needed; adding it to .bashrc is a good idea.
export PYTHONPATH=$PYTHONPATH:/usr/local/lib:/usr/lib/python3/dist-packages/pyrealsense2
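What that export does can be checked from Python itself: each PYTHONPATH entry is added to sys.path, the list of directories searched on import. A quick sketch, using the directory names from this setup (adjust them to your install):

```python
import sys

# Each PYTHONPATH entry ends up in sys.path, which "import" searches in order.
# These are the paths from the export above; adjust to your install.
for p in ["/usr/local/lib", "/usr/lib/python3/dist-packages/pyrealsense2"]:
    if p not in sys.path:
        sys.path.append(p)

print(sys.path[-2:])
```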

Run one of the samples.

cd ~/sources/librealsense/wrappers/python/examples
python3 align-depth2color.py

An fps measurement app

Since the actual speed is unclear, I modified align-depth2color.py slightly to display fps.
Measuring as shown below, there is quite a bit of variation, but it runs at roughly 10 fps, about the speed of a USB 2.0 webcam.
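The averaging used for this measurement can be sketched on its own. This standalone version is my own paraphrase, using time.monotonic instead of cv2.TickMeter so it runs without OpenCV; the 0.01 s sleep stands in for pipeline.wait_for_frames().

```python
import time

class FpsMeter:
    """Average fps over a fixed number of frames."""
    def __init__(self, max_count=10):
        self.max_count = max_count
        self.count = 0
        self.start = time.monotonic()
        self.fps = 0.0

    def tick(self):
        # Called once per frame; refresh the estimate every max_count frames.
        self.count += 1
        if self.count == self.max_count:
            now = time.monotonic()
            self.fps = self.max_count / (now - self.start)
            self.start = now
            self.count = 0
        return self.fps

meter = FpsMeter(max_count=10)
for _ in range(20):
    time.sleep(0.01)          # stand-in for pipeline.wait_for_frames()
    fps = meter.tick()
print('measured about {:.1f} fps'.format(fps))
```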

# First import the library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2

# Create a pipeline
pipeline = rs.pipeline()

#Create a config and configure the pipeline to stream
#  different resolutions of color and depth streams
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
profile = pipeline.start(config)

# Getting the depth sensor's depth scale (see rs-align example for explanation)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

# We will be removing the background of objects more than
#  clipping_distance_in_meters meters away
clipping_distance_in_meters = 1 #1 meter
clipping_distance = clipping_distance_in_meters / depth_scale

# Create an align object
# rs.align allows us to perform alignment of depth frames to others frames
# The "align_to" is the stream type to which we plan to align depth frames.
align_to = rs.stream.color
align = rs.align(align_to)

tm = cv2.TickMeter()
tm.start()

count = 0
max_count = 10
fps = 0

# Streaming loop
while True:
    # Get frameset of color and depth
    frames = pipeline.wait_for_frames()
    # frames.get_depth_frame() is a 640x360 depth image

    # Align the depth frame to color frame
    aligned_frames = align.process(frames)

    # Get aligned frames
    aligned_depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame is a 640x480 depth image
    color_frame = aligned_frames.get_color_frame()

    # Validate that both frames are valid
    if not aligned_depth_frame or not color_frame:
        continue

    # Refresh the fps estimate every max_count frames
    count += 1
    if count == max_count:
        tm.stop()
        fps = max_count / tm.getTimeSec()
        tm.reset()
        tm.start()
        count = 0

    depth_image = np.asanyarray(aligned_depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    # Remove background - Set pixels further than clipping_distance to grey
    grey_color = 153
    depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
    bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

    # Render images
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
    images = np.hstack((bg_removed, depth_colormap))

    cv2.putText(images, 'FPS: {:.2f}'.format(fps),
        (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 200), thickness=2)

    cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
    cv2.imshow('Align Example', images)
    key = cv2.waitKey(1)
    # Press esc or 'q' to close the image window
    if key & 0xFF == ord('q') or key == 27:
        cv2.destroyAllWindows()
        break

pipeline.stop()
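The background-removal math in the loop above can be checked without a camera. Here is the same thresholding step on tiny synthetic arrays; the depth scale of 0.001 m per unit is an assumption (a typical D4xx value — the script prints the real one at startup).

```python
import numpy as np

depth_scale = 0.001            # assumed: meters per depth unit (typical for D4xx)
clipping_distance_in_meters = 1
clipping_distance = clipping_distance_in_meters / depth_scale  # 1000 units

# Synthetic 2x2 depth image (in depth units) and a flat color image
depth_image = np.array([[500, 1500], [0, 999]], dtype=np.uint16)
color_image = np.full((2, 2, 3), 200, dtype=np.uint8)

grey_color = 153
depth_image_3d = np.dstack((depth_image,) * 3)  # 1-channel depth -> 3 channels
bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0),
                      grey_color, color_image)

# Pixels farther than 1 m (1500) or invalid (0) turn grey; the rest keep color.
print(bg_removed[:, :, 0])
```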