
Jetbot & PyTorch CNN (7) Line tracing + collision avoidance demo program

Posted at 2020-10-26

Now that the PyTorch CNN handles both collision avoidance and line tracing, this article combines the two.
The base is the line-tracing demo program from part (6).
The changes: the collision-avoidance model is also loaded, and its blocked/free output is reflected in the Robot control.

initialize the PyTorch model (unchanged)

live_demo.ipynb
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

from PIL import Image
import numpy as np

Define network (unchanged)

live_demo.ipynb
class miniNet(nn.Module):

    def __init__(self, num_classes=2):
        super(miniNet, self).__init__()

        self.conv1 = nn.Conv2d(3, 6, 3)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 3)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(16, 32, 3)
        self.pool3 = nn.MaxPool2d(2, 2)

        
        #self.fc1 = nn.Linear(32 * 26 * 26, 1024)  # commented out
        self.fc1 = nn.Linear(4160, 1024)           # fixed
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = self.pool3(x)

        #x = x.view(-1, 32 * 26 * 26)  # commented out
        x = x.view(-1, 4160)           # fixed
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)

        return x
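The fixed value `nn.Linear(4160, 1024)` follows from the cropped input: `preprocess()` below feeds the network a 56x224 slice, and three rounds of valid 3x3 convolution plus 2x2 max pooling leave a 5x26 feature map with 32 channels. A quick pure-Python sanity check of that shape arithmetic (no PyTorch needed):

```python
def after_conv_pool(h, w, k=3, pool=2):
    # valid k x k convolution shrinks each side by (k - 1);
    # 2x2 max pooling then floor-divides each side by 2
    h, w = h - (k - 1), w - (k - 1)
    return h // pool, w // pool

h, w = 56, 224           # preprocess() crops image[168:224, 0:224]
for _ in range(3):       # conv1/pool1, conv2/pool2, conv3/pool3
    h, w = after_conv_pool(h, w)
flat = 32 * h * w        # 32 output channels from conv3
print(h, w, flat)        # 5 26 4160
```

The original `32 * 26 * 26` was the flattened size for the full 224x224 input; the crop changes only the height dimension.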

Load model (changed: two models are loaded)

live_demo.ipynb
Steer_net = miniNet()
Steer_net.load_state_dict(torch.load('best_steering_model_xy.pth'))

Collision_net = miniNet()
Collision_net.load_state_dict(torch.load('best_model_CNN.pth'))

transfer to the GPU device (changed: there are now two models)

live_demo.ipynb
device = torch.device('cuda')
Steer_net.to(device)
Steer_net = Steer_net.eval().half()

Collision_net.to(device)
Collision_net = Collision_net.eval().half()

some preprocessing (unchanged)

live_demo.ipynb
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np

mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()

def preprocess(image):
    image2 = np.copy(image)         # trimmed
    image2 = image2[168:224, 0:224] # trimmed
    #image2 = cv2.rectangle(image2,  (0, 0), (224, 168), (0, 0, 0), thickness=-1)
    
    image = PIL.Image.fromarray(image2)
    image = transforms.functional.to_tensor(image).to(device).half()
    image.sub_(mean[:, None, None]).div_(std[:, None, None])
    return image[None, ...]
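For reference, `to_tensor` scales each uint8 pixel to [0, 1] before the in-place `sub_`/`div_` applies the per-channel ImageNet statistics. A tiny pure-Python check of the first channel (0.485 / 0.229), using the `mean`/`std` values above:

```python
MEAN_0, STD_0 = 0.485, 0.229   # first-channel values of mean/std above

def normalize_ch0(pixel):
    # to_tensor: uint8 -> [0, 1]; then standardize with channel stats
    return (pixel / 255.0 - MEAN_0) / STD_0

print(round(normalize_ch0(128), 3))   # a mid-gray pixel lands near 0.074
```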

start and display our camera (unchanged)

live_demo.ipynb
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg

camera = Camera()

image_widget = ipywidgets.Image()

def display_xy2(camera_image):      # trimmed
    image1 = np.copy(camera_image)  # trimmed
    image1 = image1[168:224, 0:224] #trimmed
    #image1 = cv2.rectangle(image1,  (0, 0), (224, 112), (0, 0, 0), thickness=-1)
    jpeg_image1 = bgr8_to_jpeg(image1)
    return jpeg_image1


#camera_link = traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
camera_link = traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=display_xy2) # trimmed

display(image_widget)

create our robot instance (unchanged)

live_demo.ipynb
from jetbot import Robot
robot = Robot()

define sliders to control JetBot (unchanged)

live_demo.ipynb
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01,  value=0.14, description='speed gain')  # initial value changed
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.04, description='steering gain')  # initial value changed
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')

display(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)

display some sliders (unchanged)

live_demo.ipynb
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')

display(ipywidgets.HBox([y_slider, speed_slider]))
display(x_slider, steering_slider)

create a function (runs whenever the camera's value changes) (changed)

live_demo.ipynb
import time
angle = 0.0
angle_last = 0.0

def execute(change):
    global angle, angle_last
    image = change['new']

    #image2 = np.copy(image)          # trimmed
    #image2 = cv2.rectangle(image2,  (0, 0), (224, 168), (0, 0, 0), thickness=-1)  # trimmed
    #image2 = image2[168:224, 0:224] #trimmed

    #xy = model(preprocess(image)).detach().float().cpu().numpy().flatten() # trimmed
    
    image1 = preprocess(image)
    #xy = Steer_net(preprocess(image)).detach().float().cpu().numpy().flatten()
    xy = Steer_net(image1).detach().float().cpu().numpy().flatten()
    x = xy[0]
    y = (0.5 - xy[1]) / 2.0
    
    x_slider.value = x
    y_slider.value = y
    
    speed_slider.value = speed_gain_slider.value
    
    angle = np.arctan2(x, y)
    pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
    angle_last = angle
    
    steering_slider.value = pid + steering_bias_slider.value
    
    

    #global blocked_slider, robot
    #x = change['new'] 
    #x1 = preprocess(image)
    y1 = Collision_net(image1)
    
    # we apply the `softmax` function to normalize the output vector so it sums to 1 (which makes it a probability distribution)
    y1 = F.softmax(y1, dim=1)
    
    prob_blocked = float(y1.flatten()[0])
    
    if prob_blocked < 0.5:    # free: keep line-trace control
        robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.025)
        robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.025)
        
    else:
        robot.forward(0.0)    # blocked: stop
    
    time.sleep(0.001)
    
execute({'new': camera.value})
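The decision at the end of `execute()` can be checked in isolation. The sketch below is a hypothetical helper, not part of the notebook: it repeats the same math in pure Python with the slider defaults as arguments (softmax over the two collision logits, the 0.5 threshold on `prob_blocked`, and the clamped motor mixing for the free case).

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of raw scores
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decide_motors(blocked_logits, x, y, speed=0.14, kp=0.04, kd=0.0,
                  bias=0.0, angle_last=0.0):
    # collision branch: index 0 of the 2-class output is "blocked"
    prob_blocked = softmax(blocked_logits)[0]
    if prob_blocked >= 0.5:
        return 0.0, 0.0          # stop, same as robot.forward(0.0)
    # line-trace branch: same PID math as execute()
    angle = math.atan2(x, y)
    steering = angle * kp + (angle - angle_last) * kd + bias
    left = max(min(speed + steering, 1.0), 0.025)
    right = max(min(speed - steering, 1.0), 0.025)
    return left, right

print(decide_motors([3.0, -1.0], 0.0, 0.25))   # blocked -> (0.0, 0.0)
print(decide_motors([-2.0, 2.0], 0.0, 0.25))   # free, straight -> (0.14, 0.14)
```

With a centered target (`x = 0`) the steering term vanishes and both motors get the raw speed gain; the 0.025 floor keeps the motors from stalling on sharp turns.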

camera for processing (unchanged)

live_demo.ipynb
camera.observe(execute, names='value')

Stop streaming video to the browser to reduce the processing load. (added)

live_demo.ipynb
camera_link.unlink()  # don't stream to browser (will still run camera)

stop the robot (unchanged)

live_demo.ipynb
import time

camera.unobserve(execute, names='value')

time.sleep(0.1)  # add a small sleep to make sure frames have finished processing

robot.stop()

close the camera connection (unchanged)

live_demo.ipynb
camera.stop()
robot.stop()

Thank you for reading this far.
