
Deep Double Descent: Where Bigger Models and More Data Hurt 【3 RELATED WORK】【DeepL Translation of the Paper】

Posted at 2020-06-13

This article is essentially a personal memo.
It is mostly a DeepL translation.
I would appreciate it if you could point out any mistakes.

Translation source:
[Deep Double Descent: Where Bigger Models and More Data Hurt](https://arxiv.org/abs/1912.02292)

Previous: 【2 OUR RESULTS】
Next: 【4 EXPERIMENTAL SETUP】

3 RELATED WORK

Translation

Model-wise double descent was first proposed as a general phenomenon by Belkin et al. (2018). Similar behavior had been observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). Since then, a large body of work has studied the double descent phenomenon. The list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2019); Bibas et al. (2019); Mitra (2019); Mei & Montanari (2019). Moreover, Geiger et al. (2019a) provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. Our work differs from the above papers in two crucial respects. First, we extend the notion of double descent beyond the number of parameters to incorporate the training procedure under the unified notion of “Effective Model Complexity”, leading to novel insights such as epoch-wise double descent and sample non-monotonicity. The idea that increasing training time corresponds to increasing complexity was also presented in Nakkiran et al. (2019). Second, we provide an extensive and rigorous demonstration of double descent for modern practice, spanning a variety of architectures, datasets, and optimization procedures. An extended discussion of related work is provided in Appendix C.

Original text

Model-wise double descent was first proposed as a general phenomenon by Belkin et al. (2018). Similar behavior had been observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). Subsequently, there has been a large body of work studying the double descent phenomenon. A growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2019); Bibas et al. (2019); Mitra (2019); Mei & Montanari (2019). Moreover, Geiger et al. (2019a) provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. Our work differs from the above papers in two crucial aspects: First, we extend the idea of double-descent beyond the number of parameters to incorporate the training procedure under a unified notion of “Effective Model Complexity”, leading to novel insights like epoch-wise double descent and sample non-monotonicity. The notion that increasing train time corresponds to increasing complexity was also presented in Nakkiran et al. (2019). Second, we provide an extensive and rigorous demonstration of double-descent for modern practices spanning a variety of architectures, datasets optimization procedures. An extended discussion of the related work is provided in Appendix C.
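
As a side note, the "tractable setting of linear least squares regression" mentioned above can be reproduced with a few lines of NumPy. The sketch below is not from the paper: the random ReLU features, the dimensions, the noise level, and the feature counts are my own illustrative choices. Fitting a minimum-norm least-squares solution on random features typically shows the model-wise double descent shape, with test error peaking near the interpolation threshold (number of features ≈ number of training samples) and falling again as the model grows.

```python
import numpy as np

# Illustrative sketch (not from the paper): model-wise double descent in
# least-squares regression with fixed random ReLU features.
rng = np.random.default_rng(0)

d = 20                      # input dimension (arbitrary choice)
n_train, n_test = 100, 1000

# Ground-truth linear target with additive label noise on the training set
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_true

def random_relu_features(X, W):
    """Project inputs through fixed random weights W and apply ReLU."""
    return np.maximum(X @ W, 0.0)

for n_features in [10, 50, 90, 100, 110, 200, 500, 2000]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)
    Phi_train = random_relu_features(X_train, W)
    Phi_test = random_relu_features(X_test, W)
    # Minimum-norm least-squares fit via the pseudoinverse; the model
    # interpolates the training data once n_features >= n_train.
    coef = np.linalg.pinv(Phi_train) @ y_train
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"n_features={n_features:5d}  test MSE={test_mse:8.3f}")
```

Test error is typically worst around n_features ≈ n_train and improves again in the heavily overparameterized regime, which is the model-wise version of the phenomenon the paper generalizes via Effective Model Complexity.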
