
AI Elements ①: A Single Perceptron


This article describes elemental technologies of AI. A simple perceptron learns from training data and, after training, predicts outputs for new input data.
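The prediction and update rules themselves are short. Below is a minimal sketch (the full program appears later in this article): the output is the step function applied to the linear combination w·x + b, and learning nudges w and b by lr × (target - prediction). The function names here are illustrative only.

import numpy as np

def predict(w, b, x):
    # Step activation: 1 if the linear combination is >= 0, otherwise 0
    return 1 if np.dot(w, x) + b >= 0.0 else 0

def update(w, b, x, target, lr=0.1):
    # Perceptron learning rule: change w and b only when the prediction is wrong
    error = target - predict(w, b, x)
    return w + lr * error * np.asarray(x, dtype=float), b + lr * error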
 

Sample program

perceptron.py

① AND

Training data (4 samples, each with 2 inputs and 1 output)

[[0, 0], [0, 1], [1, 0], [1, 1]]

[0, 0, 0, 1]

Activation function: step function

After training
Weights [0.2012573, 0.19867895]
Bias -0.30000000000000004

Predictions (same 4 inputs as above)
[0, 0, 0, 1]
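As a quick check, plugging the learned weights and bias above into the step rule reproduces these predictions (a minimal sketch; the numbers are copied from the training output above):

import numpy as np

w = np.array([0.2012573, 0.19867895])   # learned AND weights (from above)
b = -0.30000000000000004                # learned AND bias
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

net = X @ w + b                          # linear combination for each input
print(net)                               # -0.3, -0.10132105, -0.0987427, 0.09993625
print((net >= 0).astype(int))            # [0 0 0 1]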

② OR

Training data

[[0, 0], [0, 1], [1, 0], [1, 1]]

[0, 1, 1, 1]

Activation function: step function

After training
Weights [0.1012573, 0.19867895]
Bias -0.1

Predictions (same 4 inputs as above)
[0, 1, 1, 1]
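The same check works for OR with its learned parameters:

import numpy as np

w = np.array([0.1012573, 0.19867895])    # learned OR weights (from above)
b = -0.1                                 # learned OR bias
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(((X @ w + b) >= 0).astype(int))    # [0 1 1 1]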

③ XOR: fails because the data is not linearly separable

Training data

[[0, 0], [0, 1], [1, 0], [1, 1]]

[0, 1, 1, 0]

Activation function: step function

After training
Weights [-0.0987427, -0.00132105]
Bias 0.0

Predictions (same 4 inputs as above)
[1, 0, 0, 0] (incorrect)
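The failure is structural, not a matter of tuning. With a step unit the four XOR samples require b < 0, w2 + b >= 0, w1 + b >= 0, and w1 + w2 + b < 0; adding the middle two gives w1 + w2 + b >= -b > 0, which contradicts the last constraint, so no weights and bias can work. A coarse brute-force search (the grid below is illustrative only) confirms that no single-unit parameter setting reproduces the XOR labels:

import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

grid = np.linspace(-1.0, 1.0, 21)        # candidate values for w1, w2, b
found = any(
    np.array_equal(((X @ np.array([w1, w2]) + b) >= 0).astype(int), y_xor)
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(found)  # False: no line separates {(0,1), (1,0)} from {(0,0), (1,1)}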

 

The fourth AND computation:
Linear combination
1 × 0.2012573 + 1 × 0.19867895 - 0.30000000000000004 = 0.09993625
Activation function (step function)
1 if the value is 0 or greater, 0 if it is less than 0, so the output is 1
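A note on the bias: -0.30000000000000004 is not a bug but ordinary binary floating-point rounding. Assuming the value arose from accumulating ±0.1 bias updates (the learning rate used here), Python shows the same artifact directly:

print(0.0 - 0.1 - 0.1 - 0.1)   # -0.30000000000000004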

 

010_perceptron.py
import numpy as np

class Perceptron:
    """
    単純パーセプトロン(2値分類, ラベルは {0,1})
    - 活性化: ステップ関数 (>=0 で 1, それ以外 0)
    - 最急降下に相当する重み更新: w += lr * (y - y_pred) * x, b += lr * (y - y_pred)
    """
    def __init__(self, lr: float = 0.1, epochs: int = 20, random_state: int | None = 0):
        self.lr = lr
        self.epochs = epochs
        self.random_state = random_state
        self.w: np.ndarray | None = None
        self.b: float | None = None
        self.errors_: list[int] = []

    def _net_input(self, X: np.ndarray) -> np.ndarray:
        return X @ self.w + self.b  # linear combination (net input)

    def predict(self, X: np.ndarray) -> np.ndarray:
        X = np.asarray(X)
        return (self._net_input(X) >= 0.0).astype(int)  # step function

    def fit(self, X: np.ndarray, y: np.ndarray) -> "Perceptron":
        """
        X: (n_samples, n_features)
        y: (n_samples,) with labels in {0, 1}
        """
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=int)

        n_samples, n_features = X.shape
        rng = np.random.default_rng(self.random_state)
        self.w = rng.normal(scale=0.01, size=n_features)
        self.b = 0.0
        self.errors_.clear()

        for _ in range(self.epochs):
            errors = 0  # number of updates (misclassifications) in this epoch
            for xi, target in zip(X, y):
                y_pred = 1 if (xi @ self.w + self.b) >= 0.0 else 0
                update = self.lr * (target - y_pred)
                if update != 0.0:
                    self.w += update * xi
                    self.b += update
                    errors += 1
            self.errors_.append(errors)
        return self


if __name__ == "__main__":
    # Training data (2-dimensional inputs with values 0/1)
    X = np.array([
        [0, 0],
        [0, 1],
        [1, 0],
        [1, 1],
    ])

    y_and = np.array([0, 0, 0, 1])  # AND
    y_or  = np.array([0, 1, 1, 1])  # OR
    y_xor = np.array([0, 1, 1, 0])  # XOR (not linearly separable, expected to fail)

    print("=== AND ===")
    pp_and = Perceptron(lr=0.1, epochs=10, random_state=0).fit(X, y_and)
    print("weights:", pp_and.w, "bias:", pp_and.b, "errors per epoch:", pp_and.errors_)
    print("pred:", pp_and.predict(X), "true:", y_and)

    print("\n=== OR ===")
    pp_or = Perceptron(lr=0.1, epochs=10, random_state=0).fit(X, y_or)
    print("weights:", pp_or.w, "bias:", pp_or.b, "errors per epoch:", pp_or.errors_)
    print("pred:", pp_or.predict(X), "true:", y_or)

    print("\n=== XOR (fail expected) ===")
    pp_xor = Perceptron(lr=0.1, epochs=20, random_state=0).fit(X, y_xor)
    print("weights:", pp_xor.w, "bias:", pp_xor.b, "errors per epoch:", pp_xor.errors_)
    print("pred:", pp_xor.predict(X), "true:", y_xor)

Results
=== AND ===
weights: [0.2012573  0.19867895] bias: -0.30000000000000004 errors per epoch: [2, 3, 2, 3, 2, 1, 0, 0, 0, 0]
pred: [0 0 0 1] true: [0 0 0 1]

=== OR ===
weights: [0.1012573  0.19867895] bias: -0.1 errors per epoch: [2, 2, 2, 1, 0, 0, 0, 0, 0, 0]
pred: [0 1 1 1] true: [0 1 1 1]

=== XOR (fail expected) ===
weights: [-0.0987427  -0.00132105] bias: 0.0 errors per epoch: [3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
pred: [1 0 0 0] true: [0 1 1 0]
