
Running Out of Memory While Computing the Loss on Huge Tensors in PyTorch


What happened

In some machine learning code that processes video, I wanted to accumulate the loss inside a huge loop:

losssum = 0
for n in range(len(datalist)):
    # (omitted)
    loss = loss_fn(GroundTruth, output)
    loss.backward()
    losssum += loss
print(losssum / len(datalist))

As this ran, memory usage climbed steadily until it ended in an out-of-memory error...

Cause

The cause seemed to be that I was adding the loss tensor itself to the running total: each loss carries a grad_fn and references to its entire computation graph, so losssum ended up keeping every iteration's graph (and its gradient data) alive in memory.
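A minimal sketch of the effect (the tensors here are made up for illustration): summing losses without detach() leaves the total holding a grad_fn, which means none of the graphs behind it can be freed.

import torch

x = torch.randn(3, requires_grad=True)
losssum = 0
for _ in range(2):
    loss = (x ** 2).sum()       # loss has a grad_fn pointing at its graph
    losssum = losssum + loss    # the sum now references both iterations' graphs
print(losssum.grad_fn)          # <AddBackward0 ...>: the graphs are still alive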

The fix:

losssum = 0
for n in range(len(datalist)):
    # (omitted)
    loss = loss_fn(GroundTruth, output)
    loss.backward()
    losssum += loss.detach()    # detach() drops the graph; only the value is kept
print(losssum / len(datalist))

With that, the memory problem was gone.
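As a side note (my addition, not part of the original fix): since backward() has already been called by this point, loss.item() works too when all you need is a plain Python number for the running total, and it likewise avoids retaining the graph:

    losssum += loss.item()      # Python float; no tensor, no graph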

Lesson learned

When copying a tensor to somewhere that has nothing to do with training, use

    tensor.detach()
    # but detach() apparently shares the same underlying storage as the source,
    tensor.detach().clone()
    # so I'd recommend clone()-ing the detached tensor to get an independent copy!

(Note that clone() on its own is differentiable and stays in the graph, so detach() first, then clone().)
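A quick check of the storage-sharing claim (my own sketch, not from the article):

import torch

a = torch.ones(3)
b = a.detach()               # b shares storage with a
b[0] = 99
print(a)                     # tensor([99., 1., 1.]) -- a was modified too
c = a.detach().clone()       # independent copy, free of the graph
c[1] = -1
print(a)                     # unchanged: clone() allocated new storage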