
[Today's Abstract] Deep Double Descent: Where Bigger Models and More Data Hurt [Paper DeepL Translation]

Posted at 2020-04-02

Once a day, I read the Abstract of a paper with the help of DeepL translation.

This article is essentially a memo for myself.
It is brought to you almost entirely by DeepL translation.
I would appreciate it if you pointed out any mistakes.

Translation source:
Deep Double Descent: Where Bigger Models and More Data Hurt

Index of "Today's Abstract" posts

Previous: none
Next: [1 INTRODUCTION]

Abstract

Translation

Across a variety of modern deep learning tasks, we show that a "double-descent" phenomenon occurs in which, as model size increases, performance first gets worse and then gets better. We further show that double descent arises not only as a function of model size but also as a function of the number of training epochs. We unify these phenomena by defining a new complexity measure, which we call the effective model complexity, and conjecturing a generalized double descent with respect to this measure. Furthermore, this notion of model complexity makes it possible to identify specific regimes in which increasing the number of training samples (even quadrupling it) actually hurts test performance.

Original

We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
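The abstract describes the shape of the phenomenon without an example. As a hedged illustration (this is *not* the paper's experiment, which sweeps the width of ResNets, CNNs, and transformers), the sketch below uses a standard toy setting known to exhibit model-size double descent: minimum-norm linear regression on isotropic Gaussian features. The excess test risk peaks where the number of features `d` used by the model equals the number of training samples, then falls again as `d` grows past it. All names and constants here are my own choices for illustration.

```python
import random

D = 40        # total number of candidate features (illustrative choice)
N_TRAIN = 10  # training samples: interpolation threshold sits at d = N_TRAIN
NOISE = 0.1   # label noise standard deviation
TRIALS = 100  # random draws averaged per model size
BETA = [D ** -0.5] * D  # true coefficients, spread evenly, ||BETA|| = 1

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

def min_norm_fit(X, y):
    """OLS when d <= n; minimum-norm interpolator w = X^T (X X^T)^{-1} y when d > n."""
    n, d = len(X), len(X[0])
    if d <= n:
        XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(d)]
               for i in range(d)]
        Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(d)]
        return solve(XtX, Xty)
    XXt = [[sum(X[i][k] * X[j][k] for k in range(d)) for j in range(n)]
           for i in range(n)]
    a = solve(XXt, y)
    return [sum(a[i] * X[i][j] for i in range(n)) for j in range(d)]

def excess_risk(d):
    """Average excess test risk of the model that uses only the first d features."""
    total = 0.0
    for _ in range(TRIALS):
        X_full = [[random.gauss(0.0, 1.0) for _ in range(D)] for _ in range(N_TRAIN)]
        y = [sum(b * x for b, x in zip(BETA, row)) + random.gauss(0.0, NOISE)
             for row in X_full]
        w = min_norm_fit([row[:d] for row in X_full], y)
        # Isotropic features give a closed-form excess risk:
        # in-model estimation error + mass of the omitted true coefficients.
        total += sum((w[j] - BETA[j]) ** 2 for j in range(d))
        total += sum(BETA[j] ** 2 for j in range(d, D))
    return total / TRIALS

random.seed(0)
risks = {d: excess_risk(d) for d in (2, N_TRAIN, 30)}
for d, r in risks.items():
    print(f"d = {d:2d}  excess risk = {r:.3f}")
```

The risk at `d = N_TRAIN` (the interpolation threshold, where the fitted model just barely interpolates the training data) dominates both the small under-parameterized model and the larger over-parameterized one, mirroring the peak the paper locates via effective model complexity rather than raw parameter count.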
