
CVPR2025 Quantization Papers (2)


Overview

This article introduces quantization papers from CVPR2025.

Part 1

Enhancing Diversity for Data-free Quantization

  • Overview: a data-free quantization method that can generate diverse calibration data
  • Key idea: the Multi-layer features Mixer blends class embeddings in a Mixup-like way (Eq. (6))
  • Normalization-flow based attention uses Eq. (8) to generate images that are randomized across classes
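The Mixup-like blend of two class embeddings can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the Beta-distributed mixing ratio, and the toy one-hot embeddings are all assumptions.

```python
import numpy as np

def mixup_class_embeddings(emb_a, emb_b, alpha=1.0, rng=None):
    """Blend two class embeddings with a Mixup-style convex combination.

    A mixing ratio lam ~ Beta(alpha, alpha) interpolates the two
    embeddings, so a generated calibration image can carry features
    of both classes, increasing sample diversity.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * emb_a + (1.0 - lam) * emb_b, lam

# Usage: mix the (toy) embeddings of class 3 and class 7
emb = np.eye(10)  # placeholder one-hot class embeddings
mixed, lam = mixup_class_embeddings(emb[3], emb[7])
```

Here the convex combination guarantees the mixed embedding stays on the line segment between the two class embeddings, which is the standard Mixup construction.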

FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation

  • Overview: Hessian-based PTQ for ViTs
  • Key ideas:
    1. The Hessian can be replaced by the FIM (Fisher Information Matrix) (Theorem 3.1)
    2. The FIM is proportional to the gradient of the KL divergence (Eq. (11))
    3. By 2, the FIM can be approximated by a rank-1 matrix, and further by a low-rank or a diagonal-plus-low-rank matrix
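As a rough illustration of step 3: the per-sample empirical Fisher is the rank-1 outer product of the gradient, and a symmetric matrix can be approximated as its diagonal plus a low-rank correction. The function names and the eigendecomposition-based construction below are assumptions for illustration, not FIMA-Q's exact procedure.

```python
import numpy as np

def empirical_fim_rank1(grad):
    """Per-sample empirical Fisher as the rank-1 outer product g g^T."""
    g = np.asarray(grad).reshape(-1)
    return np.outer(g, g)

def diag_plus_lowrank(F, k):
    """Approximate symmetric F by diag(F) plus a rank-k correction,
    keeping the k largest-magnitude eigen-components of the
    off-diagonal residual."""
    D = np.diag(np.diag(F))
    R = F - D
    w, V = np.linalg.eigh(R)                 # R is symmetric
    idx = np.argsort(np.abs(w))[::-1][:k]    # dominant components
    return D + (V[:, idx] * w[idx]) @ V[:, idx].T
```

The appeal of these structured approximations is storage: a d×d Hessian needs O(d²) memory, while diagonal-plus-rank-k needs only O(dk).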

APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers

  • Overview: an improvement to PTQ for ViTs
  • Key idea: because quantizing the activations after GELU degrades accuracy, the GELU in the MLP is replaced with ReLU and the MLP is fine-tuned.
    The fine-tuning uses an Average Perturbation Hessian loss (Fig. 2)
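The GELU-to-ReLU swap amounts to changing the activation inside the transformer MLP; the ReLU output is nonnegative and one-sided, which is friendlier to activation quantization than GELU's small negative tail. A minimal sketch follows (plain numpy, illustrative names; the Average Perturbation Hessian loss used for fine-tuning is not implemented here).

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def relu(x):
    return np.maximum(0.0, x)

def mlp(x, W1, b1, W2, b2, act=gelu):
    """Two-layer transformer-style MLP; `act` is the activation whose
    output is quantized, so swapping act=gelu -> act=relu is the change
    the paper proposes (followed by fine-tuning the MLP weights)."""
    return act(x @ W1 + b1) @ W2 + b2
```

After the swap, the two activations no longer produce identical outputs, which is exactly why the MLP weights are fine-tuned to recover the original behavior.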

MBQ: Modality-Balanced Quantization for Large Vision-Language Models

  • Overview: quantization for VLMs
  • Key idea: when learning the SmoothQuant parameters, account for the fact that the vision tokens' contribution to the loss differs from (is smaller than) the language tokens' contribution