1
0

Delete article

Deleted articles cannot be recovered.

Draft of this article would be also deleted.

Are you sure you want to delete this article?

ROS2でOAK-D lite depth AI Cameraを動作させる 2

Posted at

はじめに

OAK D lite depth AI cameraをROS2で動作させる方法を解説します.このカメラは,内部にチップを含んでおり,カメラ本体側でニューラルネットワークを動作させて推論をさせることができます.1物体認識やトラッキングなどをカメラ側で完結させることができます.ROSとはロボットオペレーティングシステムのことで,ロボットや自動走行車の開発に用いられています.2
  前回はDepthAI cameraのエコシステムとパイプラインの構成を理解し,ROS2ノードとしてカメラ画像をPublish, Subscribeしました.今回はDepthAI cameraの最大の特徴である内部VPUでのNeural Networkの動作を行っていきます.具体的に,カメラ側で物体認識を行い,物体の推論ラベル,座標,距離(depth)を計算します.

動作環境
Raspberry Pi 4 (8G) 
Ubuntu Mate 22.04
ROS hunble

目次

1. PipelineとNeural Network
2. depthai-のニューラルネットワークの推論結果物体認識をトピック通信する

1.PipelineとNeural Network

 DeptahAI Cameraを使用するときは,初期化によってPipelineをデプロイすることで,デバイス内での処理を規定します.Pipelineの設定はAPIが規定する範囲で自由に構成することができます.以下の図はカメラデバイスでカラーカメラの画像を出力するノードと,それを受け取ってなんらかの推論を行うノードの概念図です.このようにしてPipelineを構成します.DepthAI APIを用いてPythonで記述することができます.
以下のセクションで、Neural Networkの推論フェーズを含むPipelineを構築して動作させてみます.

DepthAI pipeline 概要
┌─────────────┐                    ┌───────────────┐
│             │                    │               │
│             │ preview      input │               │ inference
│ ColorCamera ├───────────────────►│ NeuralNetwork │──────────────►
│        Node │     [ImgFrame]     │          Node │ 
│             │                    │               │
└─────────────┘                    └───────────────┘

2. depthai のニューラルネットワークの推論結果(物体認識)をトピック通信する

カメラ側の推論の結果をMessageとしてpublishするROSノードを実装します。

2.1 推論フェーズを含むpipelineの構築

pipelineの構成は下の図の通りで,
①カメラからのRGB画像を出力するノード
②Neural Networkを介して逐次画像から推論結果を出力するノード
③出力結果をホストPCに転送するノード
になっています.処理の流れもこれに沿って行われます.

以下の図とサンプルコードはgithub3からも確認できます.

サンプルコード1

#!/usr/bin/env python3
"""
The code is edited from docs (https://docs.luxonis.com/projects/api/en/latest/samples/Yolo/tiny_yolo/)
We add parsing from JSON files that contain configuration
"""

from pathlib import Path
import sys
import cv2
import depthai as dai
import numpy as np
import time
import argparse
import json
import blobconverter

# parse arguments
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model", help="Provide model name or model path for inference",
                    default='yolov4_tiny_coco_416x416', type=str)
parser.add_argument("-c", "--config", help="Provide config path for inference",
                    default='json/yolov4-tiny.json', type=str)
args = parser.parse_args()

# parse config
configPath = Path(args.config)
if not configPath.exists():
    raise ValueError("Path {} does not exist!".format(configPath))

with configPath.open() as f:
    config = json.load(f)
nnConfig = config.get("nn_config", {})

# parse input shape
if "input_size" in nnConfig:
    W, H = tuple(map(int, nnConfig.get("input_size").split('x')))

# extract metadata
metadata = nnConfig.get("NN_specific_metadata", {})
classes = metadata.get("classes", {})
coordinates = metadata.get("coordinates", {})
anchors = metadata.get("anchors", {})
anchorMasks = metadata.get("anchor_masks", {})
iouThreshold = metadata.get("iou_threshold", {})
confidenceThreshold = metadata.get("confidence_threshold", {})

print(metadata)

# parse labels
nnMappings = config.get("mappings", {})
labels = nnMappings.get("labels", {})

# get model path
nnPath = args.model
if not Path(nnPath).exists():
    print("No blob found at {}. Looking into DepthAI model zoo.".format(nnPath))
    nnPath = str(blobconverter.from_zoo(args.model, shaves=6, zoo_type="depthai", use_cache=True))
# sync outputs
syncNN = True

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
nnOut = pipeline.create(dai.node.XLinkOut)

nnOut.setStreamName("nn")

# Properties
camRgb.setPreviewSize(W, H)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
camRgb.setFps(40)

# Network specific settings
detectionNetwork.setConfidenceThreshold(confidenceThreshold)
detectionNetwork.setNumClasses(classes)
detectionNetwork.setCoordinateSize(coordinates)
detectionNetwork.setAnchors(anchors)
detectionNetwork.setAnchorMasks(anchorMasks)
detectionNetwork.setIouThreshold(iouThreshold)
detectionNetwork.setBlobPath(nnPath)
detectionNetwork.setNumInferenceThreads(2)
detectionNetwork.input.setBlocking(False)

# Linking
camRgb.preview.link(detectionNetwork.input)
detectionNetwork.out.link(nnOut.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

    # Output queue to get nn data from the outputs defined above
    qDet = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    startTime = time.monotonic()
    counter = 0

    while True:
        inDet = qDet.tryGet()

        if inDet is not None:
            detections = inDet.detections
            counter += 1
            for detection in detections:
                # Print detection results
                label = labels[detection.label] if detection.label < len(labels) else str(detection.label)
                confidence = detection.confidence * 100
                print(f"Label: {label}, Confidence: {confidence:.2f}%")
                print(f"Bounding box: ({detection.xmin:.2f}, {detection.ymin:.2f}), ({detection.xmax:.2f}, {detection.ymax:.2f})")
                print("-" * 30)

        if cv2.waitKey(1) == ord('q'):
            break


Pipeline
┌─────────────┐                    ┌────────────────────────┐
│             │                    │                        │
│             │ preview      input │                        │
│ ColorCamera ├───────────────────►│ YoloDetectionNetwork   │
│     Node    │     [ImgFrame]     │         Node           │
│             │                    │                        │
└─────────────┘                    └──────────────┬─────────┘
                                                  │
                                                 out
                                           [Detections]
                                                  │
                                                  ▼
                                        ┌─────────────┐
                                        │             │
                                        │  XLinkOut   │
                                        │    "nn"     │
                                        │             │
                                        └─────────────┘

2.2 ROS ノードとしての実装

ROS2ノードとして再実装しました.ノードの初期化部分にpipelineの初期化が含まれていること,timer_callback関数でカメラの推論結果を取得していること,カメラ側の推論をROSのメッセージ型に変換しているdetection_to_tuple関数がポイントです.

サンプルコード2
#!/usr/bin/env python3

import sys
import cv2
import depthai as dai
import numpy as np
import time
import argparse
import json
import blobconverter
from vision_msgs.msg import Detection2D, Detection2DArray, ObjectHypothesisWithPose, BoundingBox2D
from pathlib import Path
from std_msgs.msg import Header

import rclpy
from rclpy.node import Node

# tmp_path 
tmp_path = str(Path(__file__).parent.parent.parent.parent.parent.parent.parent)
package_name = 'arcanain_depthai_ros2'
class YoloDetectionNode(Node):
    def __init__(self):
        super().__init__('yolo_detection_node')
        
        self.detection_publisher = self.create_publisher(Detection2DArray, 'detections', 10)


        # parse command line 
        parser = argparse.ArgumentParser()
        # command line for <model>.blob 
        parser.add_argument("-m", "--model", help="Provide model name or model path for inference",
                            default=f"{tmp_path}/src/{package_name}/models/tiny-yolo-v4_openvino_2021.2_6shave.blob", type=str)
        #yolov4_tiny_coco_416x416'
        # command line for <config>.json
        parser.add_argument("-c", "--config", help="Provide config path for inference",
                            default=f"{tmp_path}/src/{package_name}/json/yolov4-tiny.json", type=str)
        args, unknown = parser.parse_known_args()


        # parse <config>.json
        configPath = Path(args.config)
        print("Config Path:", configPath)
        print(str(Path(__file__).parent.parent.parent.parent.parent.parent))
        print(str(Path(__file__).parent))
        if not configPath.exists():
            raise ValueError("Path {} does not exist!".format(configPath))

        with configPath.open() as f:
            config = json.load(f)
        nnConfig = config.get("nn_config", {})

        # 入力サイズの取得
        if "input_size" in nnConfig:
            W, H = tuple(map(int, nnConfig.get("input_size").split('x')))

        # メタデータの取得
        metadata = nnConfig.get("NN_specific_metadata", {})
        classes = metadata.get("classes", {})
        coordinates = metadata.get("coordinates", {})
        anchors = metadata.get("anchors", {})
        anchorMasks = metadata.get("anchor_masks", {})
        iouThreshold = metadata.get("iou_threshold", {})
        confidenceThreshold = metadata.get("confidence_threshold", {})

        self.get_logger().info(str(metadata))

        # ラベルの取得
        nnMappings = config.get("mappings", {})
        self.labels = nnMappings.get("labels", {})

        # モデルのパスを取得
        nnPath = args.model
        if not Path(nnPath).exists():
            self.get_logger().info("No blob found at {}. Looking into DepthAI model zoo.".format(nnPath))
            nnPath = str(blobconverter.from_zoo(args.model, shaves=6, zoo_type="depthai", use_cache=True))

        # パイプラインの作成
        pipeline = dai.Pipeline()

        # ノードの定義
        camRgb = pipeline.create(dai.node.ColorCamera)
        detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
        nnOut = pipeline.create(dai.node.XLinkOut)

        nnOut.setStreamName("nn")

        # プロパティの設定
        camRgb.setPreviewSize(W, H)
        camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
        camRgb.setInterleaved(False)
        camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
        camRgb.setFps(40)

        # ネットワークの設定
        detectionNetwork.setConfidenceThreshold(confidenceThreshold)
        detectionNetwork.setNumClasses(classes)
        detectionNetwork.setCoordinateSize(coordinates)
        detectionNetwork.setAnchors(anchors)
        detectionNetwork.setAnchorMasks(anchorMasks)
        detectionNetwork.setIouThreshold(iouThreshold)
        detectionNetwork.setBlobPath(nnPath)
        detectionNetwork.setNumInferenceThreads(2)
        detectionNetwork.input.setBlocking(False)

        # ノード間の接続
        camRgb.preview.link(detectionNetwork.input)
        detectionNetwork.out.link(nnOut.input)

        # デバイスへの接続とパイプラインの開始
        self.device = dai.Device(pipeline)
        self.qDet = self.device.getOutputQueue(name="nn", maxSize=4, blocking=False)

        self.timer = self.create_timer(0.1, self.timer_callback)

        self.startTime = time.monotonic()
        self.counter = 0

    def detection_to_tuple(self, detection):
        """
        dai.Detectionオブジェクトをタプルに変換する関数。

        Returns:
        - tuple: (label: str, confidence: float, bbox: tuple(float, float, float, float))
        """
        # ラベル名の取得
        label_id = detection.label
        label_name = self.labels[label_id] if label_id < len(self.labels) else str(label_id)

        # 信頼度の取得
        confidence = detection.confidence  # 0.0 ~ 1.0

        # バウンディングボックスの取得(正規化された座標:0.0 ~ 1.0)
        xmin = detection.xmin
        ymin = detection.ymin
        xmax = detection.xmax
        ymax = detection.ymax

        # タプルにまとめる
        detection_tuple = (
            label_name,            # ラベル名(str)
            confidence,            # 信頼度(float)
            (xmin, ymin, xmax, ymax)  # バウンディングボックス座標(tuple of float)
        )

        return detection_tuple

    def timer_callback(self):
        inDet = self.qDet.tryGet()

        '''
        if inDet is not None:
            detections = inDet.detections
            self.counter += 1
            detection_tuples = []
            for detection in detections:
                # 検出結果をタプルに変換
                detection_tuple = self.detection_to_tuple(detection)
                detection_tuples.append(detection_tuple)

            # ターミナルに出力
            for dt in detection_tuples:
                label_name, confidence, bbox = dt
                self.get_logger().info(f"Label: {label_name}, Confidence: {confidence:.2f}")
                self.get_logger().info(f"BBox: xmin={bbox[0]:.2f}, ymin={bbox[1]:.2f}, xmax={bbox[2]:.2f}, ymax={bbox[3]:.2f}")
                self.get_logger().info("-" * 30)
        '''
        
        if inDet is not None:
            detections = inDet.detections
            detection_msg = Detection2DArray()

            # ヘッダー情報の作成
            header = Header()
            header.stamp = self.get_clock().now().to_msg()
            header.frame_id = "camera_frame"
            detection_msg.header = header

            for detection in detections:
                detection2d = Detection2D()

                label_id = detection.label
                label_name = self.labels[label_id]

                # ラベル名と信頼度を設定
                hypothesis = ObjectHypothesisWithPose()
                hypothesis.hypothesis.class_id =  label_name#detection.label  # class_idはhypothesisの中にある
                hypothesis.hypothesis.score = detection.confidence
                detection2d.results.append(hypothesis)

                # バウンディングボックスを設定
                bbox = BoundingBox2D()
                bbox.center.position.x = (detection.xmin + detection.xmax) / 2
                bbox.center.position.y = (detection.ymin + detection.ymax) / 2
                bbox.size_x = detection.xmax - detection.xmin
                bbox.size_y = detection.ymax - detection.ymin
                detection2d.bbox = bbox

                detection_msg.detections.append(detection2d)

            # 検出結果をトピックにパブリッシュ
            self.detection_publisher.publish(detection_msg)

            # ログの出力
            for detection in detections:
                label_id = label_name#detection.label
                confidence = detection.confidence
                xmin, ymin, xmax, ymax = detection.xmin, detection.ymin, detection.xmax, detection.ymax
                self.get_logger().info(f"Label: {label_id}, Confidence: {confidence:.2f}")
                self.get_logger().info(f"BBox: xmin={xmin:.2f}, ymin={ymin:.2f}, xmax={xmax:.2f}, ymax={ymax:.2f}")
                self.get_logger().info("-" * 30)

    def destroy_node(self):
        super().destroy_node()
        self.device.close()

def main(args=None):
    rclpy.init(args=args)

    node = YoloDetectionNode()

    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()


ディレクトリの配置

yolo_detection_pkg/
    ├── package.xml
    ├── setup.py
    ├── resource/
    │   └── yolo_detection_pkg
    ├── models/
    │   └── yolov4_tiny_coco_416x416.blob
    ├── yolo_detection_pkg/
    │   ├── __init__.py
    │   └── yolo_detection_node.py
    └── json/
          └── yolov4-tiny.json

2.3 実行

$ ros2 run yolo_detection_pkg yolo_detection_node --ros-args -m yolov4_tiny_coco_416x416 -c json/yolov4-tiny.json

2.4 結果

terminal: ros2 run arcanain_depthai_ros2 inference_topic_testの実行結果
nkn4ryu@nkn4ryu:~/ros2_ws$ ros2 run  arcanain_depthai_ros2 inference_topic_test
Config Path: /home/nkn4ryu/ros2_ws/src/arcanain_depthai_ros2/json/yolov4-tiny.json
/home/nkn4ryu/ros2_ws/install
/home/nkn4ryu/ros2_ws/install/arcanain_depthai_ros2/lib/python3.10/site-packages/python_programs
[INFO] [1729671485.701868173] [yolo_detection_node]: {'classes': 80, 'coordinates': 4, 'anchors': [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319], 'anchor_masks': {'side26': [1, 2, 3], 'side13': [3, 4, 5]}, 'iou_threshold': 0.5, 'confidence_threshold': 0.5}
[18443010C17C121300] [1.5] [1.009] [system] [warning] ColorCamera IMX214: capping FPS for selected resolution to 35
[INFO] [1729671491.225484865] [yolo_detection_node]: Label: 0, Confidence: 0.57
[INFO] [1729671491.226325517] [yolo_detection_node]: BBox: xmin=0.10, ymin=0.13, xmax=1.06, ymax=1.03
[INFO] [1729671491.226931114] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.325483635] [yolo_detection_node]: Label: 0, Confidence: 0.66
[INFO] [1729671491.326280294] [yolo_detection_node]: BBox: xmin=0.06, ymin=-0.07, xmax=0.95, ymax=0.91
[INFO] [1729671491.326878693] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.425296037] [yolo_detection_node]: Label: 0, Confidence: 0.73
[INFO] [1729671491.426067839] [yolo_detection_node]: BBox: xmin=0.01, ymin=-0.09, xmax=0.99, ymax=0.94
[INFO] [1729671491.426806875] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.525564475] [yolo_detection_node]: Label: 0, Confidence: 0.59
[INFO] [1729671491.526265684] [yolo_detection_node]: BBox: xmin=0.00, ymin=0.36, xmax=0.38, ymax=0.99
[INFO] [1729671491.526807923] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.527319488] [yolo_detection_node]: Label: 0, Confidence: 0.59
[INFO] [1729671491.527794832] [yolo_detection_node]: BBox: xmin=0.06, ymin=-0.08, xmax=0.95, ymax=0.93
[INFO] [1729671491.528245817] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.625721786] [yolo_detection_node]: Label: 0, Confidence: 0.72
[INFO] [1729671491.626502633] [yolo_detection_node]: BBox: xmin=0.06, ymin=-0.08, xmax=0.95, ymax=0.93
[INFO] [1729671491.627104344] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.627660298] [yolo_detection_node]: Label: 0, Confidence: 0.56
[INFO] [1729671491.628166162] [yolo_detection_node]: BBox: xmin=0.01, ymin=0.42, xmax=0.36, ymax=0.99
[INFO] [1729671491.628645957] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.725752715] [yolo_detection_node]: Label: 0, Confidence: 0.62
[INFO] [1729671491.726539648] [yolo_detection_node]: BBox: xmin=0.05, ymin=-0.06, xmax=0.96, ymax=0.91
[INFO] [1729671491.727143204] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.727694709] [yolo_detection_node]: Label: 0, Confidence: 0.61
[INFO] [1729671491.728188953] [yolo_detection_node]: BBox: xmin=0.01, ymin=0.43, xmax=0.37, ymax=1.00
[INFO] [1729671491.728664374] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.825611788] [yolo_detection_node]: Label: 0, Confidence: 0.61
[INFO] [1729671491.826399155] [yolo_detection_node]: BBox: xmin=0.05, ymin=-0.08, xmax=0.96, ymax=0.92
[INFO] [1729671491.827003880] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.827574637] [yolo_detection_node]: Label: 0, Confidence: 0.51
[INFO] [1729671491.828085363] [yolo_detection_node]: BBox: xmin=0.00, ymin=0.43, xmax=0.36, ymax=1.00
[INFO] [1729671491.828571742] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.925830530] [yolo_detection_node]: Label: 0, Confidence: 0.61
[INFO] [1729671491.926627121] [yolo_detection_node]: BBox: xmin=0.12, ymin=-0.01, xmax=1.03, ymax=1.00
[INFO] [1729671491.927238241] [yolo_detection_node]: ------------------------------
[INFO] [1729671491.927789793] [yolo_detection_node]: Label: 0, Confidence: 0.58
[INFO] [1729671491.928296335] [yolo_detection_node]: BBox: xmin=0.00, ymin=0.44, xmax=0.36, ymax=1.00
[INFO] [1729671491.928778151] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.025456460] [yolo_detection_node]: Label: 0, Confidence: 0.66
[INFO] [1729671492.026124131] [yolo_detection_node]: BBox: xmin=0.14, ymin=-0.02, xmax=1.02, ymax=1.02
[INFO] [1729671492.026624555] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.027104553] [yolo_detection_node]: Label: 0, Confidence: 0.59
[INFO] [1729671492.027561126] [yolo_detection_node]: BBox: xmin=0.00, ymin=0.44, xmax=0.36, ymax=1.00
[INFO] [1729671492.028081916] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.125053447] [yolo_detection_node]: Label: 0, Confidence: 0.72
[INFO] [1729671492.125676837] [yolo_detection_node]: BBox: xmin=0.13, ymin=-0.01, xmax=1.02, ymax=1.01
[INFO] [1729671492.126161723] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.225453146] [yolo_detection_node]: Label: 0, Confidence: 0.69
[INFO] [1729671492.226268236] [yolo_detection_node]: BBox: xmin=0.13, ymin=-0.02, xmax=1.02, ymax=1.02
[INFO] [1729671492.226882373] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.325655466] [yolo_detection_node]: Label: 0, Confidence: 0.69
[INFO] [1729671492.326426705] [yolo_detection_node]: BBox: xmin=0.05, ymin=0.01, xmax=0.96, ymax=0.99
[INFO] [1729671492.327018411] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.327582401] [yolo_detection_node]: Label: 0, Confidence: 0.59
[INFO] [1729671492.328108382] [yolo_detection_node]: BBox: xmin=0.01, ymin=0.46, xmax=0.37, ymax=1.00
[INFO] [1729671492.328595430] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.424874716] [yolo_detection_node]: Label: 0, Confidence: 0.74
[INFO] [1729671492.425471694] [yolo_detection_node]: BBox: xmin=0.04, ymin=0.00, xmax=0.97, ymax=1.00
[INFO] [1729671492.425941820] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.426387024] [yolo_detection_node]: Label: 0, Confidence: 0.55
[INFO] [1729671492.426802679] [yolo_detection_node]: BBox: xmin=0.01, ymin=0.47, xmax=0.37, ymax=1.00
[INFO] [1729671492.427201231] [yolo_detection_node]: ------------------------------
[INFO] [1729671492.525662253] [yolo_detection_node]: Label: 0, 

terminal: ros2 topic list, topic echoの結果
nkn4ryu@nkn4ryu:~/ros2_ws$ source /opt/ros/humble/setup.bash
nkn4ryu@nkn4ryu:~/ros2_ws$ ros2 topic list
/detections
/parameter_events
/rosout    
nkn4ryu@nkn4ryu:~/ros2_ws$ ros2 topic echo /detections 
header:
  stamp:
    sec: 1729672061
    nanosec: 365626219
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672061
    nanosec: 465541491
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672061
    nanosec: 565532786
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672061
    nanosec: 665528039
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672061
    nanosec: 765635059
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672061
    nanosec: 865642559
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672061
    nanosec: 965550610
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: person
      score: 0.6248483657836914
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.4981220215559006
        y: 0.5778620541095734
      theta: 0.0
    size_x: 0.9618271887302399
    size_y: 1.0138983130455017
  id: ''
---
header:
  stamp:
    sec: 1729672062
    nanosec: 65568867
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: person
      score: 0.5654535293579102
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.49823468923568726
        y: 0.41868239641189575
      theta: 0.0
    size_x: 0.9077539443969727
    size_y: 0.9193414449691772
  id: ''
---
header:
  stamp:
    sec: 1729672062
    nanosec: 165519377
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: person
      score: 0.6349024772644043
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.42529296875
        y: 0.4208232909440994
      theta: 0.0
    size_x: 0.9667710065841675
    size_y: 0.8943253457546234
  id: ''
---
header:
  stamp:
    sec: 1729672062
    nanosec: 265533219
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: person
      score: 0.6373729705810547
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.5771108865737915
        y: 0.4218750149011612
      theta: 0.0
    size_x: 0.9177818298339844
    size_y: 0.9712552726268768
  id: ''
---
header:
  stamp:
    sec: 1729672062
    nanosec: 365653572
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: person
      score: 0.5947136878967285
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.5771484524011612
        y: 0.4242412745952606
      theta: 0.0
    size_x: 0.9133114516735077
    size_y: 0.9686502814292908
  id: ''
---
header:
  stamp:
    sec: 1729672062
    nanosec: 465488930
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: person
      score: 0.5845389366149902
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.5013897269964218
        y: 0.4960937350988388
      theta: 0.0
    size_x: 0.9042148888111115
    size_y: 0.9283633530139923
  id: ''
---
header:
  stamp:
    sec: 1729672062
    nanosec: 565856312
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672062
    nanosec: 665454876
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672062
    nanosec: 765614410
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672062
    nanosec: 865425842
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672062
    nanosec: 965530019
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 65479360
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 165610904
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 265269671
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 365446755
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 465515809
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 565574817
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 665279008
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 765407948
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 865573463
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672063
    nanosec: 965654795
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 65365622
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 165385562
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 265680245
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 365687018
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 465376364
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 565577692
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 665612027
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 765502608
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 865584022
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672064
    nanosec: 965671715
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 65489159
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 165604561
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 265426491
  frame_id: camera_frame
detections:
- header:
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''
  results:
  - hypothesis:
      class_id: bowl
      score: 0.5056905746459961
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 0.32741133868694305
        y: 0.29490311443805695
      theta: 0.0
    size_x: 0.15042468905448914
    size_y: 0.15403923392295837
  id: ''
---
header:
  stamp:
    sec: 1729672065
    nanosec: 365272852
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 465563582
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 565471202
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 665511799
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 765606714
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 865320177
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672065
    nanosec: 965539579
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 65507942
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 165541381
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 265602466
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 365622942
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 465211601
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 564994525
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 665388865
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 765354706
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 865372927
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672066
    nanosec: 965176417
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 65652119
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 165416986
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 265485661
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 365470774
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 465605534
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 565457123
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 665471327
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 765634994
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 865235718
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672067
    nanosec: 965345650
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 65230632
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 165595156
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 265457053
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 365613685
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 465405701
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 565420243
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 665398521
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 765496466
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 865402626
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672068
    nanosec: 965418813
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 65584981
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 165474489
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 265506524
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 365654381
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 465465840
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 565581031
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 665533857
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 765561510
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 865571819
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672069
    nanosec: 965614743
  frame_id: camera_frame
detections: []
---
header:
  stamp:
    sec: 1729672070


上記結果から正しくトピックで来ていることがわかります。

謝辞

この取り組みは, GxP(グロースエクスパートナーズ)株式会社様のサポートを受けて実施しています. 貴重なアドバイスや, ロボットに必要な機材の支援をいただきました. 心より感謝申し上げます.

  1. https://www.switch-science.com/products/7651?srsltid=AfmBOor688KlMrQs7V7G3NElCKl2oA0EjPCGBbAbwHP48pJ5zsXRzVqx

  2. https://www.itaccess.co.jp/service/adv/column/ros%E3%81%A8%E3%81%AF/

  3. https://github.com/Arcanain/arcanain_depthai_ros2/blob/feature/ARCDEV-118/integration_ros2_oakdlite/ROS2_OAKDlite_test/inference2tuple_test.py

1
0
0

Register as a new user and use Qiita more conveniently

  1. You get articles that match your needs
  2. You can efficiently read back useful information
  3. You can use dark theme
What you can do with signing up
1
0

Delete article

Deleted articles cannot be recovered.

Draft of this article would be also deleted.

Are you sure you want to delete this article?