
[Today's Abstract] Squeeze-and-Excitation Networks [Paper, DeepL Translation]


Once a day, I read the Abstract of a paper with the help of DeepL translation.

This article is essentially a memo for myself.
Most of it is DeepL's translation as-is.
I would appreciate it if you could point out any mistakes.

Translation source:
Squeeze-and-Excitation Networks

Abstract

Translation (Japanese)

畳み込みニューラルネットワーク (CNNs) の中心的な構成要素は畳み込み演算子であり, 各層の局所的な受容野内の空間情報とチャネル情報の両方を融合させることで, ネットワークが情報量の多い特徴を構築することを可能にしている. 先行研究では, この関係の空間的要素を幅広く研究し, 特徴階層全体の空間エンコーディングの質を高めることで CNN の表現力を強化しようとしてきた. 本研究では, 代わりにチャネル間の関係に着目し, チャネル間の相互依存性を明示的にモデル化することで, チャネルごとの特徴応答を適応的に再調整する "Squeeze-and-Excitation" (SE) ブロックと呼ばれる新しいアーキテクチャユニットを提案する. これらのブロックを積み重ねて, 異なるデータセット間で非常に効果的に一般化する SENet アーキテクチャを形成できることを示す. さらに, SE ブロックがわずかな追加計算コストで既存の最先端 CNN の性能を大幅に向上させることを示す. Squeeze-and-Excitation Networks は, ILSVRC 2017 分類タスクへの我々の提出物の基盤を形成し, 1 位を獲得した. トップ 5 誤差を 2.251% に減少させ, 2016 年の優勝エントリーを約 25% の相対的な改善で上回った. モデルとコードはこちらの https URL で公開されている.

Original (English)

The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.
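The "squeeze" (global pooling), "excitation" (bottleneck gating), and recalibration steps described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration under assumed weight shapes and a reduction ratio of r = 4, not the authors' implementation (which lives at the linked URL):

```python
import numpy as np

def se_block(x, w1, w2):
    """Apply a Squeeze-and-Excitation block to a feature map x of shape (C, H, W)."""
    # Squeeze: global average pooling collapses each channel to one descriptor.
    z = x.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid yields per-channel gates.
    s = np.maximum(w1 @ z, 0.0)                  # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # shape (C,), gates in (0, 1)
    # Recalibrate: rescale each channel of the input by its gate.
    return x * s[:, None, None]

# Toy usage: 8 channels, reduction ratio r = 4, so the bottleneck has 2 units.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 6, 6))
w1 = rng.standard_normal((2, 8)) * 0.1           # hypothetical weights
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # same shape as the input: (8, 6, 6)
```

Because the gates come from a sigmoid, each channel is only scaled down (multiplied by a value in (0, 1)); in a real network the weights w1 and w2 are learned end-to-end together with the convolutions.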

