
Running Autograd with a Multi-Dimensional Gradient


Background

With a basic network,

output = net(input)
loss = loss_fn(output, target)   # loss is a scalar Tensor
optimizer.zero_grad()
loss.backward()
optimizer.step()

backpropagation is done in just five lines like this. However, calling backward() this way only works when

the Tensor is a scalar.
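For example (a minimal sketch; the tensor and shapes here are placeholders of my own, not from the original setup), calling backward() on a non-scalar Tensor without passing a gradient raises an error:

import torch

x = torch.randn(4, 3, requires_grad=True)
y = x * 2            # y is a (4, 3) Tensor, not a scalar

y.sum().backward()   # fine: sum() reduces y to a scalar
# y.backward()       # RuntimeError: grad can be implicitly created only for scalar outputs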

Suppose that, for various reasons, you want to chain the gradient through a multi-dimensional Tensor and run Autograd from there. (laughs)

(That is, part of the gradient was computed with an external library, so the starting point of Autograd is no longer a scalar.)

This post explains how to hand that gradient to Autograd so that the chain rule can take it from there.

After a fair amount of searching, this is what I ended up with.

How to do it

output = net(input)

# Compute the gradient with some separate processing.
# Inside some_function, the computation depends on an external library,
# so autograd cannot be applied to it.
outputGrad = some_function(output, ground_Truth)

optimizer.zero_grad()
torch.autograd.backward([output], [outputGrad])   # start backprop from the supplied gradient
optimizer.step()

This is how you can pass the gradient to Autograd.
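For reference, here is a self-contained sketch of the whole flow. The network, shapes, and some_function below are placeholder assumptions of mine; some_function stands in for the external-library step and computes an MSE-style gradient with NumPy, outside of autograd.

import numpy as np
import torch
import torch.nn as nn

net = nn.Linear(10, 3)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

input = torch.randn(8, 10)
ground_Truth = torch.randn(8, 3)

# Hypothetical stand-in for the external-library step: the gradient of an
# MSE-style loss with respect to the output, computed outside autograd.
def some_function(output, target):
    grad = 2.0 * (output.detach().numpy() - target.numpy()) / output.numel()
    return torch.from_numpy(grad).to(output.dtype)

output = net(input)                               # (8, 3) Tensor tracked by autograd
outputGrad = some_function(output, ground_Truth)  # same shape as output

optimizer.zero_grad()
torch.autograd.backward([output], [outputGrad])   # seed autograd with the external gradient
optimizer.step()

Note that torch.autograd.backward([output], [outputGrad]) is equivalent to output.backward(gradient=outputGrad); either way, the chain rule starts from the gradient you supply instead of from a scalar loss.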

If you have run into this situation yourself, you will know what I mean... (laughs)
