
【Did the seed really get fixed?】Reading the official documentation on PyTorch reproducibility

Posted at 2021-08-07

Overview

When doing machine learning, reproducibility issues come up all the time, right?
Being able to reproduce results is of course essential for replicating papers and your own experiments, and it also helps when checking an implementation.
In this article, we aim for reproducibility on GPU, based on PyTorch's official documentation.

  • Environment
    • PyTorch: Version 1.9.0
    • GPU: NVIDIA RTX5000
    • CUDA: 11.4
    • cuDNN: 7.6.5

Reference: REPRODUCIBILITY

Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds.

However, there are some steps you can take to limit the number of sources of nondeterministic behavior for a specific platform, device, and PyTorch release. First, you can control sources of randomness that can cause multiple executions of your application to behave differently. Second, you can configure PyTorch to avoid using nondeterministic algorithms for some operations, so that multiple calls to those operations, given the same inputs, will produce the same result.

Note that while these steps make results reproducible, they also slow things down, so keep that in mind.
On the other hand, determinism can save time when debugging.

  • WARNING

Deterministic operations are often slower than nondeterministic operations, so single-run performance may decrease for your model. However, determinism may save time in development by facilitating experimentation, debugging, and regression testing.

So, what should you actually do?

In short, write the following. It may include more than you strictly need, but it should leave almost nothing missing.¹

  • Set the seeds
  • Configure CUDA (cuDNN)
  • Configure the DataLoader
import random
import numpy as np
import torch
from torch.utils.data import DataLoader

# specify the seed via argparse, etc.
seed = 0

random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g,
)
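
To reduce copy-paste mistakes, the settings above can also be bundled into a single helper. This is just a sketch based on the snippet above; the name set_seed is my own and does not come from the official documentation.

import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # apply every seed/cuDNN setting from the snippet above in one call
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds the RNG for CPU and all CUDA devices
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

set_seed(0)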

Controlling sources of randomness

PyTorch random number generator

First, PyTorch itself. Very simple.

You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA):

import torch
torch.manual_seed(0)

This applies to both CPU and GPU.

Python

Next, the seed for Python's built-in random module.

For custom operators, you might need to set python seed as well:

import random
random.seed(0)

Random number generators in other libraries

Then the NumPy seed, which you will need in most cases.

If you or any of the libraries you are using rely on NumPy, you can seed the global NumPy RNG with:

import numpy as np
np.random.seed(0)

However, some applications and libraries may use NumPy Random Generator objects, not the global RNG (https://numpy.org/doc/stable/reference/random/generator.html), and those will need to be seeded consistently as well.
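
As a minimal sketch, seeding such a Generator object looks like this (the global np.random.seed(0) above does not affect it):

import numpy as np

rng = np.random.default_rng(0)   # Generator object with its own seed
x = rng.standard_normal(3)       # reproducible draws from this generator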

If you are using any other libraries that use random number generators, refer to the documentation for those libraries to see how to set consistent seeds for them.

CUDA convolution benchmarking

For GPU, additional configuration is needed. To improve performance, the convolution operations benchmark several algorithms and pick one, and this selection breaks reproducibility.
The option torch.backends.cudnn.benchmark enables that speed-oriented benchmarking at the expense of reproducibility, so set torch.backends.cudnn.benchmark = False.

The cuDNN library, used by CUDA convolution operations, can be a source of nondeterminism across multiple executions of an application. When a cuDNN convolution is called with a new set of size parameters, an optional feature can run multiple convolution algorithms, benchmarking them to find the fastest one. Then, the fastest algorithm will be used consistently during the rest of the process for the corresponding set of size parameters. Due to benchmarking noise and different hardware, the benchmark may select different algorithms on subsequent runs, even on the same machine.

Disabling the benchmarking feature with torch.backends.cudnn.benchmark = False causes cuDNN to deterministically select an algorithm, possibly at the cost of reduced performance.

However, if you do not need reproducibility across multiple executions of your application, then performance might improve if the benchmarking feature is enabled with torch.backends.cudnn.benchmark = True.

Note that this setting is different from the torch.backends.cudnn.deterministic setting discussed below.

Incidentally, this is a separate matter from torch.backends.cudnn.deterministic, discussed below.

Avoiding nondeterministic algorithms

Some PyTorch operations are nondeterministic; here we configure them to be deterministic.

torch.use_deterministic_algorithms() lets you configure PyTorch to use deterministic algorithms instead of nondeterministic ones where available, and to throw an error if an operation is known to be nondeterministic (and without a deterministic alternative).

Please check the documentation for torch.use_deterministic_algorithms() for a full list of affected operations. If an operation does not act correctly according to the documentation, or if you need a deterministic implementation of an operation that does not have one, please submit an issue: https://github.com/pytorch/pytorch/issues?q=label:%22topic:%20determinism%22

If you are wondering which operations are normally nondeterministic and are therefore affected by this setting, check TORCH.USE_DETERMINISTIC_ALGORITHMS.
According to that page, the following operations apply:

  • torch.nn.Conv1d when called on CUDA tensor
  • torch.nn.Conv2d when called on CUDA tensor
  • torch.nn.Conv3d when called on CUDA tensor
  • torch.nn.ConvTranspose1d when called on CUDA tensor
  • torch.nn.ConvTranspose2d when called on CUDA tensor
  • torch.nn.ConvTranspose3d when called on CUDA tensor
  • torch.bmm() when called on sparse-dense CUDA tensors
  • torch.Tensor.__getitem__() when attempting to differentiate a CPU tensor and the index is a list of tensors
  • torch.Tensor.index_put() with accumulate=False
  • torch.Tensor.index_put() with accumulate=True when called on a CPU tensor
  • torch.Tensor.put_() with accumulate=True when called on a CPU tensor
  • torch.gather() when input dimension is one and called on a CUDA tensor that requires grad
  • torch.index_add() when called on CUDA tensor
  • torch.index_select() when attempting to differentiate a CUDA tensor
  • torch.repeat_interleave() when attempting to differentiate a CUDA tensor
  • torch.Tensor.index_copy() when called on a CPU or CUDA tensor

When an operation cannot be made deterministic, a RuntimeError is raised.
For example, torch.Tensor.index_add_(dim, index, tensor), a method that adds the values of tensor at the positions given by index along dimension dim, throws an error.²

For example, running the nondeterministic CUDA implementation of torch.Tensor.index_add_() will throw an error:

>>> import torch
>>> torch.use_deterministic_algorithms(True)
>>> torch.randn(2, 2).cuda().index_add_(0, torch.tensor([0, 1]), torch.randn(2, 2))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: index_add_cuda_ does not have a deterministic implementation, but you set
'torch.use_deterministic_algorithms(True)'. ...

In this example, a (2, 2) tensor of normally distributed random numbers is moved to CUDA, and another (2, 2) tensor of normal random numbers is added at rows 0 and 1. Honestly, I do not quite see where the nondeterminism in this operation comes from.³

Next, an example that works.

When torch.bmm() is called with sparse-dense CUDA tensors it typically uses a nondeterministic algorithm, but when the deterministic flag is turned on, its alternate deterministic implementation will be used:

>>> import torch
>>> torch.use_deterministic_algorithms(True)
>>> torch.bmm(torch.randn(2, 2, 2).to_sparse().cuda(), torch.randn(2, 2, 2).cuda())
tensor([[[ 1.1900, -2.3409],
         [ 0.4796,  0.8003]],
        [[ 0.1509,  1.8027],
         [ 0.0333, -1.1444]]], device='cuda:0')

Furthermore, if you are using CUDA tensors, and your CUDA version is 10.2 or greater, you should set the environment variable CUBLAS_WORKSPACE_CONFIG according to CUDA documentation: https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
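
A minimal sketch of setting that environment variable from within Python, before any cuBLAS calls are made; ":4096:8" is one of the two values listed in the cuBLAS documentation (the other is ":16:8"):

import os

# must be set before CUDA/cuBLAS is initialized
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"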

CUDA convolution determinism

As for how torch.use_deterministic_algorithms(True) and torch.backends.cudnn.deterministic = True differ, the former appears to be the broader setting, making more operations deterministic. The documentation does not spell out exactly how much broader, and a quick search turned up no articles addressing this point.
From what I can tell, the latter seems to be the one used more often in practice.

While disabling CUDA convolution benchmarking (discussed above) ensures that CUDA selects the same algorithm each time an application is run, that algorithm itself may be nondeterministic, unless either torch.use_deterministic_algorithms(True) or torch.backends.cudnn.deterministic = True is set. The latter setting controls only this behavior, unlike torch.use_deterministic_algorithms() which will make other PyTorch operations behave deterministically, too.
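
To make the difference concrete, here is a minimal side-by-side sketch of the two settings, with comments reflecting the scope described in the quote above:

import torch

# narrow: only forces cuDNN convolutions to use deterministic algorithms
torch.backends.cudnn.deterministic = True

# broad: makes other supported PyTorch operations deterministic too,
# and raises an error for operations without a deterministic implementation
torch.use_deterministic_algorithms(True)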

CUDA RNN and LSTM

Depending on the CUDA version, RNNs and LSTMs can also involve nondeterministic operations.

In some versions of CUDA, RNNs and LSTM networks may have non-deterministic behavior. See torch.nn.RNN() and torch.nn.LSTM() for details and workarounds.

DataLoader

The easily forgotten DataLoader settings. In some cases you will not get the reproducibility you want without them.
This matters when the DataLoader uses multiple worker processes. In my experience, it also applies when you use multiple DataLoaders (e.g., with multiple datasets).

My understanding is that seed_worker is defined to give all of those workers consistent seeds.
I could not work out why g is needed or what exactly it does.

DataLoader will reseed workers following Randomness in multi-process data loading algorithm. Use worker_init_fn() and generator to preserve reproducibility:

def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker,
    generator=g,
)
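
As an unofficial sanity check, you can build two loaders this way and confirm that they yield batches in the same order; the dummy dataset and the make_loader function below are my own, not from the docs.

import random
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(100))  # dummy dataset for illustration

def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

def make_loader():
    g = torch.Generator()
    g.manual_seed(0)
    return DataLoader(dataset, batch_size=10, shuffle=True,
                      num_workers=2, worker_init_fn=seed_worker, generator=g)

# the two loaders shuffle identically because they share the same generator seed
for (a,), (b,) in zip(make_loader(), make_loader()):
    assert torch.equal(a, b)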

Things to check when reproducibility breaks down

Based on my experience...

  • the indices produced by the DataLoader
  • variables sampled from distributions (e.g., when computing $z$ in a VAE or GAN)
  • the loss values

and so on. A rough sketch of such a check follows.
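
For example, a crude but self-contained way to check the latter two items is to run the same seeded computation twice and compare the sampled values and the loss; everything below is a stand-in for your own training step.

import random
import numpy as np
import torch

def run_once(seed=0):
    # seed everything, then compute a loss from a sampled variable
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    z = torch.randn(4, 8)       # e.g., a latent variable sampled in a VAE/GAN
    loss = (z ** 2).mean()      # e.g., a loss value
    return z, loss

z1, loss1 = run_once()
z2, loss2 = run_once()
print(torch.equal(z1, z2), torch.equal(loss1, loss2))  # both should print True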

  1. If you use other libraries, further settings may be required. For example, with PyBullet you additionally need env.observation_space(seed) and env.action_space(seed).

  2. In my environment, unless everything is moved to CUDA, as in torch.randn(2, 2).cuda().index_add_(0, torch.tensor([0, 1]).cuda(), torch.randn(2, 2).cuda()), a device-mismatch error occurs even without setting torch.use_deterministic_algorithms(True).

  3. Moreover, in my (GPU) environment, the error shown in the official documentation did not occur.
