Add Docker support for MASt3R-SLAM

Posted at 2025-07-23

Adding Docker Support to MASt3R-SLAM: Problems and Solutions

Hi, I'm Masahiro Ogawa from Sensyn Robotics.
I want to share how I created a Docker environment for a state-of-the-art SLAM system, MASt3R-SLAM.

This article documents the journey of adding Docker support to MASt3R-SLAM, a real-time dense SLAM system with 3D reconstruction priors. The goal was to create an easy-to-use Docker setup that handles all dependencies automatically.

Pull Request: https://github.com/rmurai0610/MASt3R-SLAM/pull/105

Overview

MASt3R-SLAM requires a complex environment with CUDA, PyTorch, OpenCV, and various compiled extensions. Setting this up locally can be challenging, especially across different systems. Docker provides a consistent, reproducible environment that works out of the box.

Problems Encountered and Solutions

Problem 1: OpenCV Import Error

Error: ModuleNotFoundError: No module named 'cv2'

Cause: The conda environment wasn't being activated properly when running commands in the Docker container. OpenCV was installed in the conda environment but the commands were using the system Python.

Solution: Ensure conda environment is activated before running commands:

# Wrong way - uses system Python
docker compose exec mast3r-slam python main.py

# Correct way - activates conda environment first
docker compose exec mast3r-slam bash -c "source /opt/conda/etc/profile.d/conda.sh && conda activate mast3r-slam && python main.py"
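To avoid retyping that long activation string every time, one option (not part of the PR; the script name and path here are my own) is a small wrapper that forwards an arbitrary command into the activated environment:

#!/bin/bash
# scripts/exec_in_env.sh (hypothetical helper, not part of the PR):
# forwards any command into the activated mast3r-slam conda environment, e.g.
#   ./scripts/exec_in_env.sh python main.py --help
docker compose exec mast3r-slam bash -c \
  "source /opt/conda/etc/profile.d/conda.sh && conda activate mast3r-slam && $*"

Arguments containing spaces would need extra quoting, but for simple invocations this keeps the commands short.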

Problem 2: OpenCV Runtime Dependencies

Error: ImportError: libGL.so.1: cannot open shared object file

Cause: OpenCV requires system libraries for image processing that weren't included in the base CUDA image.

Solution: Install required system libraries:

RUN apt-get update && apt-get install -y \
    libgl1-mesa-glx \
    libglib2.0-0 \
    libgomp1
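A quick optional sanity check after rebuilding the image is to import cv2 inside the activated environment:

# cv2 should now import without the libGL error
docker compose exec mast3r-slam bash -c \
  "source /opt/conda/etc/profile.d/conda.sh && conda activate mast3r-slam && \
   python -c 'import cv2; print(cv2.__version__)'"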

Problem 3: Bus Error During Execution

Error: Bus error (core dumped)

Cause:

  • Insufficient shared memory for multiprocessing
  • PyTorch's CUDA memory allocation issues

Solution: Two-part fix (both settings can also be set once in docker-compose.yaml; see the snippet below):

  1. Increase shared memory in docker-compose.yaml:
     shm_size: '2gb'
  2. Set the PyTorch memory allocator configuration:
     export PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True"
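Here is a sketch of how both settings could live in the compose file. The shm_size line is part of the final docker-compose.yaml shown later; the environment entry is an optional alternative (not in that file) to exporting the variable inside the container:

services:
  mast3r-slam:
    shm_size: '2gb'
    environment:
      - PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True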

Problem 4: Memory Management for Large Videos

Issue: Out-of-memory errors when processing high-resolution or long videos

Cause: Default settings process every frame at full resolution.

Solution: Created low-memory configuration options:

# config/low_memory.yaml
img_size: 384
subsample: 3
workers: 1

And a dedicated script:

# scripts/run_docker_low_memory.sh
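The full script is in the PR; roughly, it mirrors run_docker.sh shown at the end of this article, but points --config at the low-memory settings and pre-sets the PyTorch allocator variable. A minimal sketch:

#!/bin/bash
# Sketch of scripts/run_docker_low_memory.sh, modeled on run_docker.sh below
DATASET=${1:-"data/yourvideo.mp4"}
cd "$(dirname "$0")/.."

docker compose exec -T mast3r-slam bash -c \
  "source /opt/conda/etc/profile.d/conda.sh && \
   conda activate mast3r-slam && \
   export PYTORCH_CUDA_ALLOC_CONF='expandable_segments:True' && \
   python main.py --dataset $DATASET --config config/low_memory.yaml \
   --no-viz --save-as output"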

Problem 5: Result Saving and Visualization

Issue: Results were saved inside container but not accessible on host.

Cause:

  • Default save path was internal to container
  • No volume mapping for output directory

Solution:

  1. Added volume mapping in docker-compose.yaml:
     volumes:
       - ./output:/app/output
  2. Modified scripts to save to the mapped directory:
     python main.py --dataset data/video.mp4 --save-as output
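With the mapping in place, a finished run leaves its results directly on the host. The filenames here are just an example following the naming used by the run script shown later:

# On the host, after processing data/myvideo.mp4:
ls ./output
#   myvideo.txt    (camera trajectory)
#   myvideo.ply    (3D point cloud)
#   keyframes/     (keyframe images)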

Problem 6: User Experience and Flexibility

Issue: Scripts had hardcoded video paths, making it difficult for users to process their own videos.

Solution: Made scripts accept command-line arguments:

# Default dataset if no argument provided
DATASET=${1:-"data/yourvideo.mp4"}

# Usage examples:
bash ./scripts/run_docker.sh
bash ./scripts/run_docker.sh data/myvideo.mov
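If desired, the argument handling could be hardened a little (this check is my own addition, not part of the PR) so a typo in the path fails fast instead of surfacing as an error deep inside the container:

# Optional: fail early if the requested video does not exist on the host
DATASET=${1:-"data/yourvideo.mp4"}
if [ ! -f "$DATASET" ]; then
  echo "Error: dataset '$DATASET' not found" >&2
  exit 1
fi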

Final Docker Setup

Dockerfile

FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

# Install system dependencies
RUN apt-get update && apt-get install -y \
    wget curl git build-essential \
    libgl1-mesa-glx libglib2.0-0 libgomp1 \
    && rm -rf /var/lib/apt/lists/*

# Install Miniconda
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && bash Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \
    && rm Miniconda3-latest-Linux-x86_64.sh

# Setup environment
ENV PATH="/opt/conda/bin:$PATH"
WORKDIR /app

# Copy and install dependencies
COPY . .
RUN conda create -y -n mast3r-slam python=3.11
RUN /opt/conda/envs/mast3r-slam/bin/pip install -e thirdparty/mast3r
RUN /opt/conda/envs/mast3r-slam/bin/pip install -e thirdparty/in3d
RUN /opt/conda/envs/mast3r-slam/bin/pip install --no-build-isolation -e .

docker-compose.yaml

services:
  mast3r-slam:
    build: .
    container_name: mast3r-slam-container
    stdin_open: true
    command: tail -f /dev/null
    tty: true
    shm_size: '2gb'  # Critical for multiprocessing
    volumes:
      - ./data:/app/data
      - ./output:/app/output
      - ./scripts:/app/scripts
      - ./checkpoints:/app/checkpoints
      - ./config:/app/config
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

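With these two files in place, the usual workflow is to build the image once and keep the container running in the background (the tail -f /dev/null command exists exactly for this), then drive everything through the scripts. Replace data/myvideo.mp4 with your own video:

# Build the image and start the long-running container
docker compose build
docker compose up -d

# Run SLAM through the helper script described below
bash ./scripts/run_docker.sh data/myvideo.mp4
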
Convenient Run Script

#!/bin/bash
# scripts/run_docker.sh

DATASET=${1:-"data/yourvideo.mp4"}
cd "$(dirname "$0")/.."

./scripts/download_checkpoints.sh

FILENAME=$(basename "$DATASET")
FILENAME_NO_EXT="${FILENAME%.*}"

echo "Starting MASt3R-SLAM processing..."
echo "Dataset: $DATASET"
echo "Results will be saved to ./output/ directory:"
echo "  - ${FILENAME_NO_EXT}.txt (camera trajectory)"
echo "  - ${FILENAME_NO_EXT}.ply (3D point cloud)"
echo "  - keyframes/ (keyframe images)"

docker compose exec -T mast3r-slam bash -c \
  "source /opt/conda/etc/profile.d/conda.sh && \
   conda activate mast3r-slam && \
   python main.py --dataset $DATASET --config config/base.yaml \
   --no-viz --save-as output"
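Before kicking off a long run, it can be worth confirming that the GPU reservation in docker-compose.yaml actually reaches the container (this assumes the NVIDIA Container Toolkit is set up on the host):

# The GPU should be visible inside the container
docker compose exec mast3r-slam nvidia-smi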

Key Takeaways

  1. System Dependencies Matter: Don't forget runtime libraries (libGL, libglib) when containerizing computer vision applications.

  2. Memory Configuration is Critical: Both shared memory (shm_size) and PyTorch memory settings need proper configuration.

  3. Volume Mapping for Results: Always map output directories to make results accessible on the host.

  4. Flexible Scripts: Accept command-line arguments to avoid hardcoding paths.

  5. Provide Options: Offer both standard and low-memory configurations for different use cases.

Conclusion

Docker significantly improves the accessibility of MASt3R-SLAM by eliminating complex setup procedures. The containerized solution handles all dependencies automatically while providing flexibility for different hardware configurations and use cases.

For detailed usage instructions, please refer to the MASt3R-SLAM repository README. This Docker implementation makes state-of-the-art SLAM technology accessible to researchers and developers with just a few commands.
