[memo] ML: Linear Regression (Formulas)


Simple linear regression

- Linear model

```math
\hat{y}=w_1x + w_0
```


- Residual

```math
\hat{y}_i-y_i
```
- Cost function (squared error)

```math
J(w_0, w_1)=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y}_i-y_i)^2
```
- The partial derivatives compute the slope of the cost surface

 
- Gradient descent
    - An optimization algorithm
        - Derives the $w_0$, $w_1$ that make the cost lowest
        - Gradient descent is the most common choice (see the sketch after the update rules below)
            - Optimizes $w_0$, $w_1$
            - Minimizes $J$

```math
w_0 := w_0 - \alpha \frac{\partial}{\partial w_0}J(w_0, w_1)=w_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}(\hat{y}_i - y_i)\\
w_1 := w_1 - \alpha \frac{\partial}{\partial w_1}J(w_0, w_1)=w_1 - \alpha\frac{1}{m}\sum_{i=1}^{m}(\hat{y}_i - y_i)x_i
```

$:=$ means the left-hand value is updated; $\alpha$ is the learning rate.
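Below is a minimal NumPy sketch of these updates on synthetic data (the data, variable names, learning rate, and iteration count are my own illustration, not from the memo):

```python
import numpy as np

# Synthetic data around y = 3x + 4 (assumed example)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3 * x + 4 + rng.normal(0, 1, size=100)

m = len(x)
w0, w1 = 0.0, 0.0   # parameters, initialized to zero
alpha = 0.01        # learning rate
costs = []

for _ in range(5000):
    y_hat = w1 * x + w0                            # predictions \hat{y}_i
    residual = y_hat - y                           # residuals \hat{y}_i - y_i
    costs.append((residual ** 2).sum() / (2 * m))  # J(w0, w1)
    # Simultaneous update using the partial derivatives above
    w0 -= alpha * residual.mean()
    w1 -= alpha * (residual * x).mean()

print(w0, w1)               # should end up near (4, 3)
print(costs[0], costs[-1])  # the cost should decrease
```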

Multiple linear regression

- Linear model

```math
Y = XW
```


- Cost function

```math
J(W) = \frac{1}{2m}\sum_{i=1}^{m}(\hat{y}_i-y_i)^2 = \frac{1}{2m}(XW-y)^T(XW-y)
```
- Gradient descent (a vectorized sketch follows the update rule below)

```math
W := W - \alpha\frac{1}{m}X^T(XW-y)
```
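A vectorized sketch of the same rule, assuming the design matrix $X$ already has a leading column of ones for the intercept (the function name and toy data are my own):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, n_iters=5000):
    """Batch gradient descent for W in the model y_hat = X @ W."""
    m, n = X.shape
    W = np.zeros(n)
    for _ in range(n_iters):
        residual = X @ W - y               # XW - y
        W -= alpha * (X.T @ residual) / m  # W := W - alpha * (1/m) X^T (XW - y)
    return W

# Assumed toy data: intercept 1, weights 2 and -3
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X_raw[:, 0] - 3.0 * X_raw[:, 1]
X = np.column_stack([np.ones(len(X_raw)), X_raw])  # prepend the bias column
print(gradient_descent(X, y))  # should approach [1, 2, -3]
```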


- Normalization (feature scaling); a small sketch follows the two formulas below

1. z-score normalization

```math
x_1 = \frac{x_1-\bar{x}}{\sigma}
```
2. min-max normalization

```math
x_1 = \frac{x_1-x_{min}}{x_{max}-x_{min}}
```
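A small sketch of both scaling formulas with NumPy (function names are my own; each would be applied per feature column):

```python
import numpy as np

def z_score(x):
    """Z-score normalization: (x - mean) / standard deviation."""
    return (x - x.mean()) / x.std()

def min_max(x):
    """Min-max normalization: rescale into the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

x1 = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
print(z_score(x1))  # mean 0, standard deviation 1
print(min_max(x1))  # smallest value 0, largest value 1
```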



Question:
If the learning rate is too high, does gradient descent overshoot the minimum and drift further and further away???
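As far as I can tell, yes: when $\alpha$ is too large each step overshoots the minimum, the iterates oscillate with growing amplitude, and the cost increases instead of decreasing. A toy example on $J(w)=w^2$ (my own illustration) shows the effect:

```python
# For J(w) = w^2 the gradient is 2w, so the update is
#   w := w - alpha * 2w = (1 - 2*alpha) * w,
# which shrinks only when |1 - 2*alpha| < 1, i.e. alpha < 1.
def run(alpha, w=1.0, steps=5):
    ws = [w]
    for _ in range(steps):
        w = w - alpha * 2 * w
        ws.append(w)
    return ws

print(run(0.1))  # 1.0, 0.8, 0.64, ...           -> converges toward 0
print(run(1.2))  # 1.0, -1.4, 1.96, -2.744, ...  -> overshoots and diverges
```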

ref:
- [Qiita math notation cheat sheet (Japanese)](https://qiita.com/PlanetMeron/items/63ac58898541cbe81ada)
- [Introduction to linear regression (codexa, Japanese)](https://student.codexa.net/contents/view/72)
- [Introduction to residuals and least-squares regression (Khan Academy)](https://www.khanacademy.org/math/ap-statistics/bivariate-data-ap/least-squares-regression/v/regression-residual-intro)