
LeRobot SO-101 From Teleoperation to Imitation Learning


Once you can teleoperate the SO-101, it's only natural to want to try imitation learning next.
I ran into quite a few minor errors before actually getting to imitation learning, and at times I was tempted to give up, but I pushed through. I'm recording this here in case it helps anyone stuck on the same problems.

Camera Preparation

You may get the following error when handling images.

ModuleNotFoundError: No module named 'cv2'

In that case, execute the following. From here on, all work is done in the ~/lerobot folder.

cd ~/lerobot
uv sync
uv pip install "lerobot[feetech]"
uv pip install opencv-python

If you see output like the following, you're ready.

Resolved 2 packages in 179ms
Installed 1 package in 5ms
 + opencv-python==4.12.0.88
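
To quickly confirm that the module is now importable, a one-liner works; it just prints the installed OpenCV version.

uv run python -c "import cv2; print(cv2.__version__)"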

Checking Camera Configuration Values

After attaching the camera, run ~/lerobot/src/lerobot/find_cameras.py with uv run. Don't forget the opencv argument after the program name.

uv run python -m lerobot.find_cameras opencv

The following will be displayed:

--- Detected Cameras ---
Camera #0:
  Name: OpenCV Camera @ /dev/video0
  Type: OpenCV
  Id: /dev/video0
  Backend api: V4L2
  Default stream profile:
    Format: 0.0
    Width: 640
    Height: 480
    Fps: 30.0
--------------------
Camera #1:
  Name: OpenCV Camera @ /dev/video2
  Type: OpenCV
  Id: /dev/video2
  Backend api: V4L2
  Default stream profile:
    Format: 0.0
    Width: 640
    Height: 480
    Fps: 30.0
--------------------
/home/abc/lerobot/src/lerobot/find_cameras.py:142: DeprecationWarning: 'mode' parameter is deprecated and will be removed in Pillow 13 (2026-10-15)
  img = Image.fromarray(img_array, mode="RGB")

Finalizing image saving...
Image capture finished. Images saved to outputs/captured_images

The PC's built-in camera appeared as Camera #0 (/dev/video0),
and the USB camera appeared as Camera #1 (/dev/video2).
Check and note the Width, Height, and Fps here.

At the end of the output, there is the following line.

Image capture finished. Images saved to outputs/captured_images

Looking in the ~/lerobot/outputs/captured_images folder, the following two files had been saved.
opencv__dev_video0.png
opencv__dev_video2.png
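
If you want to double-check the Width, Height, and Fps values outside of LeRobot, a plain OpenCV query works as well. This is just a minimal sketch, assuming the USB camera is at /dev/video2 as in my output; adjust the path to match your find_cameras result.

import cv2

# Open the camera device reported by find_cameras (adjust the path to your setup)
cap = cv2.VideoCapture("/dev/video2")
if not cap.isOpened():
    raise RuntimeError("Could not open /dev/video2")

# Query the default stream profile from the driver
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"width={width}, height={height}, fps={fps}")

cap.release()

The printed values should match the Width, Height, and Fps noted above.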

Next, run the official example script. It can be found here:
https://huggingface.co/docs/lerobot/cameras?shell_restart=Open+CV+Camera

Note that you need to edit the following file to match your camera's specifications.
~/lerobot/src/lerobot/OpenCVCamera.py
The only values I changed were fps=30, width=640, and height=480.
The modified program is shown below.

from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
from lerobot.cameras.configs import ColorMode, Cv2Rotation

# Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation.
config = OpenCVCameraConfig(
    index_or_path=0,
    fps=30,
    width=640,
    height=480,
    color_mode=ColorMode.RGB,
    rotation=Cv2Rotation.NO_ROTATION
)

# Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default).
camera = OpenCVCamera(config)
camera.connect()

# Read frames asynchronously in a loop via `async_read(timeout_ms)`
try:
    for i in range(10):
        frame = camera.async_read(timeout_ms=200)
        print(f"Async frame {i} shape:", frame.shape)
finally:
    camera.disconnect()

Execute OpenCVCamera.py with uv run. Don't forget opencv at the end.

uv run python -m lerobot.OpenCVCamera opencv

When executed, the following is displayed:

Async frame 0 shape: (480, 640, 3)
Async frame 1 shape: (480, 640, 3)
Async frame 2 shape: (480, 640, 3)
Async frame 3 shape: (480, 640, 3)
Async frame 4 shape: (480, 640, 3)
Async frame 5 shape: (480, 640, 3)
Async frame 6 shape: (480, 640, 3)
Async frame 7 shape: (480, 640, 3)
Async frame 8 shape: (480, 640, 3)
Async frame 9 shape: (480, 640, 3)
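
The example above uses index_or_path=0, i.e. the PC's built-in camera. To try the USB camera instead, my understanding is that index_or_path also accepts a device path; the following is only a sketch of the changed part under that assumption.

# Sketch: point the same config at the USB camera found by find_cameras.
# Assumption: index_or_path accepts a device path as well as an integer index.
config = OpenCVCameraConfig(
    index_or_path="/dev/video2",
    fps=30,
    width=640,
    height=480,
    color_mode=ColorMode.RGB,
    rotation=Cv2Rotation.NO_ROTATION
)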

Execute the following in the ~/lerobot folder.

uv sync
uv pip install "lerobot[feetech]"
uv pip install -U "huggingface_hub[cli]"
git config --global credential.helper store

For the token, replace ${HUGGINGFACE_TOKEN} with your actual token (delete the ${ } wrapper) and run the command.

uv run hf auth login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
# On some PCs, you may get an error unless you use the old method below
# uv run huggingface-cli login --token hf_??????   --add-to-git-credential

Then the following is displayed. The $$...$$ parts show the account name linked to the token.

token is valid (permission: write).
The token `$$$$$$$$$$` has been saved to /home/abc/.cache/huggingface/stored_tokens
Your token has been saved in your configured git credential helpers (store).
Your token has been saved to /home/abc/.cache/huggingface/token
Login successful.
The current active token is: `$$$$$$$$$$`

How to Get Username

HF_USER=$(uv run hf auth whoami | head -n 1)
# On some PCs, you may get an error unless you use the old method below
# HF_USER=$(uv run huggingface-cli whoami | head -n 1)
echo $HF_USER
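
If you prefer to check the login from Python, huggingface_hub exposes a whoami() helper that returns information about the account behind the stored token. A minimal sketch:

from huggingface_hub import whoami

# Prints the account name associated with the token saved by the login step above
info = whoami()
print(info["name"])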

Finally, enter and execute the record command below. Episode time is 30 s, reset time is 10 s, 30 episodes, and the dataset is kept private. A few notes about the command first:

# In the following line, $HF_USER is a shell variable; in my environment it caused an error,
# but it worked fine when I substituted my actual username.
#    --dataset.repo_id=$HF_USER/record-ore-proj-001 \
# You will also get an error if a dataset folder with the same name already exists.
#
# The camera is configured with the following line. The number in index_or_path selects
# the camera: specify the camera number that was displayed when you ran
#    uv run python -m lerobot.find_cameras opencv
# (in my case, Camera #0 was /dev/video0 and Camera #1 was /dev/video2).
--robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}"

Recording for imitation learning starts with the following command.

$ uv run python -m lerobot.record \
    --robot.type=so101_follower \
    --robot.port=/dev/usbserial_lerobot_follower \
    --robot.id=lerobot_follower \
    --teleop.type=so101_leader \
    --teleop.port=/dev/usbserial_lerobot_leader \
    --teleop.id=lerobot_leader \
    --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
    --display_data=true \
    --dataset.repo_id=$HF_USER/record-ore-proj-001 \
    --dataset.episode_time_s=30 \
    --dataset.reset_time_s=10 \
    --dataset.num_episodes=30 \
    --dataset.private=true \
    --dataset.single_task="ore_project"
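
For reference, --robot.cameras is a dictionary keyed by camera name, so a second view can in principle be added as another entry. The line below is only a sketch, assuming the USB camera found earlier can be addressed by its /dev/video2 path and simply naming the views front and top:

--robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, top: {type: opencv, index_or_path: /dev/video2, width: 640, height: 480, fps: 30}}"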

The following site reported that they were able to do imitation learning with two cameras (PC built-in and iPhone) using control_robot.py instead of record.py. I'm noting this as a reference.
Practical Imitation Learning with Real SO-101 (Action Chunking with Transformers Applied)
https://tech-blog.abeja.asia/entry/so101-imitation-learning-202505

python lerobot/scripts/control_robot.py \
    --robot.type=so101 \
    --control.type=record \
    --control.fps=30 \
    --control.single_task=<task name> \
    --control.repo_id=<hugging face repo id> \
    --control.warmup_time_s=5 \
    --control.episode_time_s=30 \
    --control.reset_time_s=30 \
    --control.num_episodes=50 \
    --control.display_data=true \
    --control.push_to_hub=true

There is an example of running the training step on a GPU with Google Colaboratory below.

Introduction to SO-101 (8) - Learning on Google Colab by npaka
https://note.com/npaka/n/n35335306fba3

I want to transfer learn SmolVLA and deploy it to LeRobot SO-101! by @B-SKY-Lab
https://qiita.com/B-SKY-Lab/items/1462543ea179b0c321d0
