
Cases where the GPU is slower than the CPU in PyTorch


Introduction

While using PyTorch I hit a case where the CPU was faster than the GPU, so here's a memo.

This case

I defined an MLP in PyTorch and trained it on scikit-learn's Boston housing dataset.
The CPU turned out to be faster than the GPU.

Environment

  • Python: 3.9.7
  • scikit-learn: 0.24.2
  • PyTorch: 1.11.0
  • CPU: Intel(R) Core(TM) i7-9700K @ 3.60 GHz
  • GPU: RTX 2070 SUPER

Source code and results

Source code
import sklearn.datasets as skdata
from sklearn.model_selection import train_test_split
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
import time

# NOTE: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2;
# this was run on 0.24.2
boston_data = skdata.load_boston()
df_boston = pd.DataFrame(boston_data.data, columns=boston_data.feature_names)

def ndarray_to_tensor(x):
    x = torch.tensor(x).float()
    return x

class MLP(nn.Module):
    def __init__(self, in_features=1):
        super().__init__()
        self.regression = nn.Sequential(
            nn.Linear(in_features, 16),
            nn.ReLU(inplace=True),
            nn.Linear(16, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 16),
            nn.ReLU(inplace=True),
            nn.Linear(16, 1)
        )
        
    def forward(self, x):
        output = self.regression(x)
        return output

epochs = 10000

# CPU
print('Training on CPU')
model_boston = MLP(in_features=13)
optimizer = optim.Adam(model_boston.parameters(), lr=0.01)
criterion = nn.MSELoss()
X_train, X_test, y_train, y_test = train_test_split(df_boston, boston_data.target)
X_train = ndarray_to_tensor(X_train.to_numpy())
X_test = ndarray_to_tensor(X_test.to_numpy())
y_train = ndarray_to_tensor(y_train)
y_test = ndarray_to_tensor(y_test)

losses = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    y_pred = model_boston(X_train)
    loss = criterion(y_pred.squeeze(1), y_train)  # squeeze: (N, 1) vs (N,) would silently broadcast to (N, N)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
    if epoch % 1000 == 0:
        print(f'epoch:{epoch}, loss:{losses[-1]}')
print(f'elapsed: {time.time() - start_time}\n')

# GPU
print('Training on GPU')
model_boston = MLP(in_features=13).cuda()
optimizer = optim.Adam(model_boston.parameters(), lr=0.01)
criterion = nn.MSELoss()
X_train, X_test, y_train, y_test = train_test_split(df_boston, boston_data.target)
X_train = ndarray_to_tensor(X_train.to_numpy()).cuda()
X_test = ndarray_to_tensor(X_test.to_numpy()).cuda()
y_train = ndarray_to_tensor(y_train).cuda()
y_test = ndarray_to_tensor(y_test).cuda()

losses = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    y_pred = model_boston(X_train)
    loss = criterion(y_pred.squeeze(1), y_train)  # squeeze: (N, 1) vs (N,) would silently broadcast to (N, N)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
    if epoch % 1000 == 0:
        print(f'epoch:{epoch}, loss:{losses[-1]}')
print(f'elapsed: {time.time() - start_time}\n')

To sum it up in one line: it just trains an MLP on the Boston data twice, once on the CPU and once on the GPU.

Results (this is the important part)

Training on CPU
epoch:1000, loss:86.82138061523438
epoch:2000, loss:86.76355743408203
epoch:3000, loss:86.78435516357422
epoch:4000, loss:86.67534637451172
epoch:5000, loss:86.61820220947266
epoch:6000, loss:86.67384338378906
epoch:7000, loss:86.63085174560547
epoch:8000, loss:86.59297180175781
epoch:9000, loss:86.57149505615234
epoch:10000, loss:86.55558013916016
elapsed: 11.979999780654907

Training on GPU
epoch:1000, loss:83.15144348144531
epoch:2000, loss:83.0901107788086
epoch:3000, loss:83.07488250732422
epoch:4000, loss:83.05408477783203
epoch:5000, loss:83.0073013305664
epoch:6000, loss:83.05764770507812
epoch:7000, loss:82.965576171875
epoch:8000, loss:82.95514678955078
epoch:9000, loss:83.03688049316406
epoch:10000, loss:83.29747772216797
elapsed: 29.112000703811646

Looking at the elapsed times, the GPU is slower.
My hypotheses at this point were:

  • MLPs favor hardware with high single-thread performance
  • with little data, high single-thread performance simply wins
  • regression problems favor single-thread… (and so on)
  • whether or not minibatch training is used

so I tested each of them.
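One caveat before the experiments: timing GPU code with time.time() alone can be misleading, because CUDA kernels run asynchronously. In the loops above, loss.item() forces a synchronization every iteration, so the measurements are fair, but a reusable helper makes this explicit. A minimal sketch (the benchmark_training helper and the tiny regression setup are my own illustration, not part of the original experiments):

```python
import time
import torch
import torch.nn as nn
import torch.optim as optim

def benchmark_training(model, x, y, criterion, steps=100):
    """Time `steps` full training iterations on whatever device `x` lives on."""
    optimizer = optim.Adam(model.parameters(), lr=0.01)
    criterion(model(x), y).backward()  # warm-up step (one-time setup costs excluded)
    if x.is_cuda:
        torch.cuda.synchronize()  # drain queued kernels before starting the clock
    start = time.time()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    if x.is_cuda:
        torch.cuda.synchronize()  # make sure all GPU work finished before stopping
    return time.time() - start

# Example on CPU with a Boston-sized regression problem (380 rows, 13 features)
model = nn.Linear(13, 1)
x = torch.randn(380, 13)
y = torch.randn(380, 1)
elapsed = benchmark_training(model, x, y, nn.MSELoss())
print(f"{elapsed:.4f} s for 100 steps")
```

The same helper works unchanged for the GPU runs: move the model and tensors to CUDA first, and the synchronize calls kick in automatically.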

Comparing regression and classification problems

First, let's start with what only requires swapping the dataset and adjusting the model's layers.
The dataset is the iris dataset.

Source code
import sklearn.datasets as skdata
from sklearn.model_selection import train_test_split
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
import time

iris_data = skdata.load_iris()
df1 = pd.DataFrame(iris_data.data, columns=iris_data.feature_names)

class MLP(nn.Module):
    def __init__(self, in_feature, out_feature):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(in_feature, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 16),
            nn.ReLU(inplace=True),
            nn.Linear(16, out_feature)  # use the constructor argument instead of hard-coding 3
        )
    
    def forward(self, x):
        output = self.classifier(x)
        return output

epochs = 1000

#cpu
model_cpu = MLP(4, 3)
optimizer = optim.Adam(model_cpu.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
feature_tensor = torch.tensor(df1.to_numpy()).float()
target_tensor = torch.tensor(iris_data.target).long()
X_train, X_test, y_train, y_test = train_test_split(feature_tensor, target_tensor)
losses = []
accs = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    y_pred = model_cpu(X_train)
    loss = criterion(y_pred, y_train)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
    pred_index = torch.argmax(y_pred, dim=1)
    accs.append(torch.mean(pred_index.eq(y_train).float()).item())  # .item(): store a float, not a tensor
    if epoch % 100 == 0:
        print(f'epoch: {epoch}, loss: {losses[-1]}, acc: {accs[-1]}')
print(f'elapsed: {time.time() - start_time}')

#gpu
model_gpu = MLP(4, 3)
model_gpu.cuda()
optimizer = optim.Adam(model_gpu.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
feature_tensor = torch.tensor(df1.to_numpy()).float()
target_tensor = torch.tensor(iris_data.target).long()
X_train, X_test, y_train, y_test = train_test_split(feature_tensor, target_tensor)
X_train = X_train.cuda()
X_test = X_test.cuda()
y_train = y_train.cuda()
y_test = y_test.cuda()
losses = []
accs = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    y_pred = model_gpu(X_train)
    loss = criterion(y_pred, y_train)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
    pred_index = torch.argmax(y_pred, dim=1)
    accs.append(torch.mean(pred_index.eq(y_train).float()).item())  # .item(): store a float, not a GPU tensor
    if epoch % 100 == 0:
        print(f'epoch: {epoch}, loss: {losses[-1]}, acc: {accs[-1]}')
print(f'elapsed: {time.time() - start_time}')

Results

epoch: 100, loss: 0.0414547361433506, acc: 0.9910714030265808
epoch: 200, loss: 0.03891688957810402, acc: 0.9821428656578064
epoch: 300, loss: 0.02476164884865284, acc: 0.9910714030265808
epoch: 400, loss: 0.05866124480962753, acc: 0.9732142686843872
epoch: 500, loss: 0.02421938069164753, acc: 0.9910714030265808
epoch: 600, loss: 0.02089422568678856, acc: 0.9910714030265808
epoch: 700, loss: 0.01696336641907692, acc: 0.9910714030265808
epoch: 800, loss: 0.012681546621024609, acc: 0.9910714030265808
epoch: 900, loss: 0.008845167234539986, acc: 1.0
epoch: 1000, loss: 0.005877476651221514, acc: 1.0
elapsed: 0.8889713287353516
epoch: 100, loss: 0.0624978169798851, acc: 0.9821429252624512
epoch: 200, loss: 0.054292310029268265, acc: 0.9821429252624512
epoch: 300, loss: 0.050297074019908905, acc: 0.9821429252624512
epoch: 400, loss: 0.04855770990252495, acc: 0.9821429252624512
epoch: 500, loss: 0.06240412965416908, acc: 0.9642857313156128
epoch: 600, loss: 0.04740297421813011, acc: 0.9821429252624512
epoch: 700, loss: 0.04734937474131584, acc: 0.9821429252624512
epoch: 800, loss: 0.06575076282024384, acc: 0.973214328289032
epoch: 900, loss: 0.047715965658426285, acc: 0.9821429252624512
epoch: 1000, loss: 0.047352135181427, acc: 0.9821429252624512
elapsed: 2.514024496078491

The GPU is slower here as well, so this hypothesis is out.

Verifying the small-data, minibatch, and MLP hypotheses

These ended up being testable together, so I've grouped them.

First, train on MNIST, comparing two variants:

  • training with minibatches
  • training without minibatches

With minibatches
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import time

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.ToTensor()                              
])
train_dataset = datasets.MNIST(root="./data", train=True, download=True, transform=transform)

num_batches = 100
train_dataloader = DataLoader(train_dataset, batch_size=num_batches, shuffle=True)

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(28 * 28, 400),
            nn.ReLU(inplace=True),
            nn.Linear(400, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 100),
            nn.ReLU(inplace=True),
            nn.Linear(100, 10)
        )
    def forward(self, x):
        output = self.classifier(x)
        return output

model = MLP()
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

num_epochs = 15
losses = []
accs = []
start_time = time.time()
for epoch in range(num_epochs):
    running_loss = 0.0
    running_acc = 0.0
    for imgs, labels in train_dataloader:
        imgs = imgs.view(imgs.size(0), -1)  # flatten; robust even if the last batch is smaller
        imgs = imgs.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        output = model(imgs)
        loss = criterion(output, labels)
        running_loss += loss.item()
        pred = torch.argmax(output, dim=1)
        running_acc += torch.mean(pred.eq(labels).float()).item()  # .item(): accumulate a float, not a GPU tensor
        loss.backward()
        optimizer.step()
    running_loss /= len(train_dataloader)
    running_acc /= len(train_dataloader)
    losses.append(running_loss)
    accs.append(running_acc)
    print("epoch: {}, loss: {}, acc: {}".format(epoch, running_loss, running_acc))
    
print(f'elapsed: {time.time()-start_time}')

cpu_model = MLP()
criterion = nn.CrossEntropyLoss()
# the original reused the optimizer bound to the GPU model's parameters,
# so cpu_model's weights were never updated; give it its own optimizer
optimizer = optim.Adam(cpu_model.parameters(), lr=0.001)

num_epochs = 15
losses = []
accs = []
start_time = time.time()
for epoch in range(num_epochs):
    running_loss = 0.0
    running_acc = 0.0
    for imgs, labels in train_dataloader:
        imgs = imgs.view(imgs.size(0), -1)  # flatten; robust even if the last batch is smaller
        optimizer.zero_grad()
        output = cpu_model(imgs)
        loss = criterion(output, labels)
        running_loss += loss.item()
        pred = torch.argmax(output, dim=1)
        running_acc += torch.mean(pred.eq(labels).float()).item()  # .item(): accumulate a float, not a tensor
        loss.backward()
        optimizer.step()
    running_loss /= len(train_dataloader)
    running_acc /= len(train_dataloader)
    losses.append(running_loss)
    accs.append(running_acc)
    print("epoch: {}, loss: {}, acc: {}".format(epoch, running_loss, running_acc))
    
print(f'elapsed: {time.time()-start_time}')

Results

epoch: 0, loss: 0.08799199781768645, acc: 0.9726330637931824
epoch: 1, loss: 0.05908205614890903, acc: 0.9817822575569153
epoch: 2, loss: 0.04449094361548001, acc: 0.9859480857849121
epoch: 3, loss: 0.034318095439812166, acc: 0.9890987277030945
epoch: 4, loss: 0.027084401380173706, acc: 0.9913316965103149
epoch: 5, loss: 0.02304065584433071, acc: 0.9922987222671509
epoch: 6, loss: 0.021813860360659115, acc: 0.9926981925964355
epoch: 7, loss: 0.01776100124157286, acc: 0.9941319227218628
epoch: 8, loss: 0.01533906555352587, acc: 0.9952317476272583
epoch: 9, loss: 0.015234155511570861, acc: 0.9951984882354736
epoch: 10, loss: 0.01419761469989074, acc: 0.9956986904144287
epoch: 11, loss: 0.01116215966969321, acc: 0.9964156150817871
epoch: 12, loss: 0.012005361656235133, acc: 0.9961321353912354
epoch: 13, loss: 0.010572400064529575, acc: 0.9965987205505371
epoch: 14, loss: 0.009939108608882634, acc: 0.9966990351676941
elapsed: 116.52845478057861

epoch: 0, loss: 2.3035859751701353, acc: 0.05318335071206093
epoch: 1, loss: 2.3035859807332355, acc: 0.05318336561322212
epoch: 2, loss: 2.3035859926541646, acc: 0.05318339914083481
epoch: 3, loss: 2.3035859847068787, acc: 0.05318337306380272
epoch: 4, loss: 2.303585989077886, acc: 0.053183332085609436
epoch: 5, loss: 2.3035859791437785, acc: 0.05318339169025421
epoch: 6, loss: 2.3035859751701353, acc: 0.05318336561322212
epoch: 7, loss: 2.3035859886805214, acc: 0.05318340286612511
epoch: 8, loss: 2.303585997422536, acc: 0.05318337306380272
epoch: 9, loss: 2.3035859926541646, acc: 0.05318339169025421
epoch: 10, loss: 2.3035859847068787, acc: 0.05318337678909302
epoch: 11, loss: 2.3035859870910644, acc: 0.05318337678909302
epoch: 12, loss: 2.3035859799385072, acc: 0.05318337678909302
epoch: 13, loss: 2.3035859843095143, acc: 0.053183335810899734
epoch: 14, loss: 2.3035859807332355, acc: 0.05318337306380272
elapsed: 125.60800242424011

The GPU is faster. (The flat loss in the CPU run is because the original script reused the optimizer created for the GPU model, so the CPU model's weights never updated; this doesn't affect the timing comparison.)
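One reason minibatch training still pays GPU overhead is the per-batch host-to-device copy. PyTorch's standard knobs for reducing that cost are pin_memory on the DataLoader and non_blocking=True on the copy. A minimal sketch with a synthetic stand-in for MNIST (the random dataset and sizes here are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for MNIST: 60000 flattened 28x28 "images"
dataset = TensorDataset(torch.randn(60000, 784), torch.randint(0, 10, (60000,)))

# pin_memory=True keeps batches in page-locked host memory, which makes the
# host-to-GPU copy faster; num_workers>0 would additionally overlap data
# loading with compute (left at the default here to keep the sketch minimal)
loader = DataLoader(dataset, batch_size=100, shuffle=True,
                    pin_memory=torch.cuda.is_available())

device = "cuda" if torch.cuda.is_available() else "cpu"
imgs, labels = next(iter(loader))
# non_blocking=True lets the copy overlap with GPU computation when memory is pinned
imgs = imgs.to(device, non_blocking=True)
labels = labels.to(device, non_blocking=True)
print(imgs.shape)  # torch.Size([100, 784])
```

On a CPU-only machine these options are no-ops, so the snippet runs either way.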

Without minibatches
import sklearn.datasets as skdata
from sklearn.model_selection import train_test_split
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
import time

X, y = skdata.fetch_openml('mnist_784', version=1, return_X_y=True)
X = X.to_numpy()
y = y.astype(int).to_numpy()  # fetch_openml returns string labels

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(28*28, 400),
            nn.ReLU(inplace=True),
            nn.Linear(400, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 100),
            nn.ReLU(inplace=True),
            nn.Linear(100, 10)
        )
    
    def forward(self, x):
        output = self.classifier(x)
        return output

epochs = 100

#cpu
X_tensor = torch.tensor(X).float()
y_tensor = torch.tensor(y).long()
X_train, X_test, y_train, y_test = train_test_split(X_tensor, y_tensor)
model_cpu = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_cpu.parameters(), lr=0.01)
losses = []
accs = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    output = model_cpu(X_train)
    loss = criterion(output, y_train)
    losses.append(loss.item())
    pred = torch.argmax(output, dim=1)
    accs.append(torch.mean(pred.eq(y_train).float()).item())  # .item(): store a float, not a tensor
    loss.backward()
    optimizer.step()
    if epoch % 10 == 0:
        print(f'epoch: {epoch}, loss: {losses[-1]}, acc: {accs[-1]}')
print(f'elapsed: {time.time() - start_time}')

#gpu
X_tensor = torch.tensor(X).float().cuda()
y_tensor = torch.tensor(y).long().cuda()
X_train, X_test, y_train, y_test = train_test_split(X_tensor, y_tensor)
model_gpu = MLP().cuda()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_gpu.parameters(), lr=0.01)
losses = []
accs = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    output = model_gpu(X_train)
    loss = criterion(output, y_train)
    losses.append(loss.item())
    pred = torch.argmax(output, dim=1)
    accs.append(torch.mean(pred.eq(y_train).float()).item())  # .item(): store a float, not a GPU tensor
    loss.backward()
    optimizer.step()
    if epoch % 10 == 0:
        print(f'epoch: {epoch}, loss: {losses[-1]}, acc: {accs[-1]}')
print(f'elapsed: {time.time() - start_time}')

epochs = 1000

#cpu
X_tensor = torch.tensor(X[:100]).float()  # slice first, then convert
y_tensor = torch.tensor(y[:100]).long()
X_train, X_test, y_train, y_test = train_test_split(X_tensor, y_tensor)
model_cpu = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_cpu.parameters(), lr=0.01)
losses = []
accs = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    output = model_cpu(X_train)
    loss = criterion(output, y_train)
    losses.append(loss.item())
    pred = torch.argmax(output, dim=1)
    accs.append(torch.mean(pred.eq(y_train).float()).item())  # .item(): store a float, not a tensor
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print(f'epoch: {epoch}, loss: {losses[-1]}, acc: {accs[-1]}')
print(f'elapsed: {time.time() - start_time}')

#gpu
X_tensor = torch.tensor(X[:100]).float().cuda()  # slice before the device copy, not after
y_tensor = torch.tensor(y[:100]).long().cuda()
X_train, X_test, y_train, y_test = train_test_split(X_tensor, y_tensor)
model_gpu = MLP().cuda()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_gpu.parameters(), lr=0.01)
losses = []
accs = []
start_time = time.time()
for epoch in range(1, epochs+1):
    optimizer.zero_grad()
    output = model_gpu(X_train)
    loss = criterion(output, y_train)
    losses.append(loss.item())
    pred = torch.argmax(output, dim=1)
    accs.append(torch.mean(pred.eq(y_train).float()).item())  # .item(): store a float, not a GPU tensor
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print(f'epoch: {epoch}, loss: {losses[-1]}, acc: {accs[-1]}')
print(f'elapsed: {time.time() - start_time}')

Results

epoch: 10, loss: 2.2980220317840576, acc: 0.10609523952007294
epoch: 20, loss: 2.2948496341705322, acc: 0.10419047623872757
epoch: 30, loss: 2.27293062210083, acc: 0.12912380695343018
epoch: 40, loss: 2.2414379119873047, acc: 0.14672380685806274
epoch: 50, loss: 2.1989901065826416, acc: 0.16807618737220764
epoch: 60, loss: 2.14331316947937, acc: 0.18889524042606354
epoch: 70, loss: 2.0700113773345947, acc: 0.21306666731834412
epoch: 80, loss: 2.0059452056884766, acc: 0.22329524159431458
epoch: 90, loss: 1.9317630529403687, acc: 0.23043809831142426
epoch: 100, loss: 1.8302979469299316, acc: 0.2701904773712158
elapsed: 34.18277430534363
epoch: 10, loss: 2.111504316329956, acc: 0.21573331952095032
epoch: 20, loss: 1.7925028800964355, acc: 0.36239999532699585
epoch: 30, loss: 1.645011305809021, acc: 0.39912378787994385
epoch: 40, loss: 1.502682089805603, acc: 0.4375428557395935
epoch: 50, loss: 1.3159321546554565, acc: 0.4746476113796234
epoch: 60, loss: 1.1520136594772339, acc: 0.5290666222572327
epoch: 70, loss: 0.9430268406867981, acc: 0.5978666543960571
epoch: 80, loss: 0.8281697034835815, acc: 0.6283047199249268
epoch: 90, loss: 0.7572763562202454, acc: 0.6612190008163452
epoch: 100, loss: 0.6676157712936401, acc: 0.7142666578292847
elapsed: 7.353309154510498

The GPU is overwhelmingly faster here.

Conclusion

  • regression vs. classification
    • no particular difference
  • whether minibatch training is used
    • looked relevant, but the GPU was faster either way, so I'll treat it as unrelated
  • MLPs favor hardware with high single-thread performance
    • there are cases where an MLP is faster on the GPU, so unrelated
  • with little data, high single-thread performance simply wins
    • the GPU pulled ahead on MNIST, which has far more features and samples, so this is related

So the deciding factor does seem to be the amount of data.
There is room for further investigation, but that's it for this round.
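To push the data-volume hypothesis a bit further, one could sweep the input size and watch where the GPU overtakes the CPU. A sketch (the time_forward_backward helper is my own; on a CPU-only machine it only prints CPU times, and the crossover point will vary by hardware):

```python
import time
import torch
import torch.nn as nn

def time_forward_backward(device, n_samples, n_features=13, steps=50):
    """Average seconds per forward+backward step for a small MLP on `device`."""
    model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                          nn.Linear(32, 1)).to(device)
    x = torch.randn(n_samples, n_features, device=device)
    y = torch.randn(n_samples, 1, device=device)
    loss_fn = nn.MSELoss()
    loss_fn(model(x), y).backward()  # warm-up step (excludes one-time setup cost)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(steps):
        model.zero_grad()
        loss_fn(model(x), y).backward()
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued kernels before stopping the clock
    return (time.time() - start) / steps

for n in (100, 1000, 10000, 100000):
    line = f"n={n:6d}  cpu={time_forward_backward('cpu', n) * 1e3:7.3f} ms"
    if torch.cuda.is_available():
        line += f"  gpu={time_forward_backward('cuda', n) * 1e3:7.3f} ms"
    print(line)
```

The expectation from the experiments above is that per-step GPU time stays nearly flat as n grows (launch overhead dominates), while CPU time grows roughly linearly, so the lines cross somewhere.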
