
[Today's Abstract] Proving the Lottery Ticket Hypothesis: Pruning is All You Need [Paper, DeepL Translation]

Posted at 2020-03-20

Once a day (as a goal), I read the abstract of a paper with the help of DeepL translation.

This article is more or less a memo for myself.
It is delivered mostly by DeepL translation.
I would be glad if you could point out any mistakes.

Translation source
Proving the Lottery Ticket Hypothesis: Pruning is All You Need

Abstract

Translated text

The lottery ticket hypothesis (Frankle and Carbin, 2018) states that a randomly initialized network contains a small subnetwork that, when trained in isolation, can compete with the performance of the original network. We prove an even stronger hypothesis (as was also conjectured in Ramanujan et al., 2019), showing that for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training.

Original text

The lottery ticket hypothesis (Frankle and Carbin, 2018), states that a randomly-initialized network contains a small subnetwork such that, when trained in isolation, can compete with the performance of the original network. We prove an even stronger hypothesis (as was also conjectured in Ramanujan et al., 2019), showing that for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training.
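As a toy numerical illustration of the claim (my own sketch, not the construction used in the paper), the snippet below takes a wide random two-layer linear network and prunes it, with no training at all, so that the surviving weights approximate a single target weight. The target value, the greedy keep-or-prune rule, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a single weight we want to match by pruning alone.
a_target = 0.731

# Over-parameterized random two-layer linear net: f(x) = sum_i u_i * v_i * x.
# Each product u_i * v_i is a candidate contribution we may keep or prune.
n = 10_000
u = rng.uniform(-1, 1, n)
v = rng.uniform(-1, 1, n)
products = u * v

# Greedy pruning: keep a weight pair only if its contribution moves the
# partial sum closer to the target; everything else gets mask = 0.
mask = np.zeros(n, dtype=bool)
total = 0.0
for i in np.argsort(-np.abs(products)):  # try large contributions first
    if abs(total + products[i] - a_target) < abs(total - a_target):
        mask[i] = True
        total += products[i]

print(f"target = {a_target:.6f}, pruned-net weight = {total:.6f}, "
      f"kept {mask.sum()} of {n} random weights")
```

With thousands of random products to choose from, this greedy subset-sum style search typically lands very close to the target, which mirrors the intuition behind the theorem: with enough over-parameterization, a good subnetwork is likely to already exist inside the random network.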
