Playing Othello Against the Computer 45 ~ Optimization Functions ~

Posted at 2022-02-24

Previous article

This time's goal

I will implement optimization functions other than gradient descent.

Main content

Before getting into the main content, an addendum about the addition AI I built as a test last time.
Since then I tried various things to raise its accuracy, and ended up with a network whose error is 0.

test.java
import optimizer.*;
import network.*;
import layer.*;
import nodes.activationFunction.*;
import costFunction.*;
import matrix.*;

public class test {
    public static void main(String[] str){
        Network net = new Network(
            2,
            new Input(4, AF.RELU),
            new Output(1, AF.LINER)
        );
        GradientDescent GD = new GradientDescent(
            net,
            new MeanSquaredError()
        );

        Matrix X = new Matrix(new double[10][2]);
        Matrix T = new Matrix(new double[10][1]);
        for (int i = 0; i < X.row; i++){
            X.matrix[i][0] = i * 0.1;
            X.matrix[i][1] = i * 0.2;
            T.matrix[i][0] = X.matrix[i][0] + X.matrix[i][1];
        }
        MeanSquaredError f = new MeanSquaredError();

        Matrix Y = GD.forward(X);

        System.out.println(f.calcurate(Y, T));

        for (int i = 0; i < 30; i++){
            GD.back(X, Y, T);
            Y = GD.forward(X);
            System.out.println(f.calcurate(Y, T));
        }
        System.out.println(Matrix.hstack(Y, T));

        for (int i = 0; i < X.row; i++){
            X.matrix[i][0] = i * 0.15;
            X.matrix[i][1] = i * 0.12;
            T.matrix[i][0] = X.matrix[i][0] + X.matrix[i][1];
        }
        Y = GD.forward(X);
        System.out.println("score: ");
        System.out.println(f.calcurate(Y, T));
        System.out.println(Matrix.hstack(Y, T));
    }
}

Here are the results.

[[2.0757 ]]

[[0.0124 ]]

[[0.0010 ]]

[[0.0001 ]]

[[0.0000 ]]

~ omitted (stays at 0 the whole time) ~

[[0.0000 ]]

[[0.0037 0.0000 ]
 [0.3029 0.3000 ]
 [0.6021 0.6000 ]
 [0.9012 0.9000 ]
 [1.2004 1.2000 ]
 [1.4996 1.5000 ]
 [1.7988 1.8000 ]
 [2.0979 2.1000 ]
 [2.3971 2.4000 ]
 [2.6963 2.7000 ]]

score:
[[0.0010 ]]

[[0.0037 0.0000 ]
 [0.2673 0.2700 ]
 [0.5309 0.5400 ]
 [0.7945 0.8100 ]
 [1.0581 1.0800 ]
 [1.3217 1.3500 ]
 [1.5853 1.6200 ]
 [1.8488 1.8900 ]
 [2.1124 2.1600 ]
 [2.3760 2.4300 ]]

I also tried drastically increasing the number of layers and nodes, but the error would not go below around 0.7 or 0.5. In the end, two layers with five nodes in total and mean squared error gave quite accurate addition.
As the training history shows, the error reaches 0 at a fairly early stage. Accuracy on the validation data is also very high.
That's it for last time's bonus; it has nothing to do with the main topic.

Changes

I added the following methods to the Matrix class.

Matrix.java
    /**
     * Stack matrices vertically.
     * @param matrices Matrices to stack.
     *                 Only the first row of each is used,
     *                 so each should be a single-row matrix.
     * @return New stacked Matrix instance.
     */
    public static Matrix vstack(Matrix ... matrices){
        Matrix rtn = new Matrix(new double[matrices.length][matrices[0].col]);

        for (int i = 0; i < rtn.row; i++){
            for (int j = 0; j < rtn.col; j++){
                rtn.matrix[i][j] = matrices[i].matrix[0][j];
            }
        }

        return rtn;
    }

    /**
     * Split a matrix vertically.
     * @param in Matrix to be split.
     * @param num Number of splits.
     * @return Array of Matrix instances.
     */
    public static Matrix[] vsplit(Matrix in, int num){
        Matrix[] rtn = new Matrix[num];
        int size = in.row / num;
        // Check that the rows divide evenly
        if (size * num != in.row){
            System.out.println("vsplit error");
            System.exit(-1);
        }

        for (int i = 0; i < num; i++){
            rtn[i] = new Matrix(new double[size][in.col]);
            for (int j = 0; j < size; j++){
                for (int k = 0; k < in.col; k++){
                    rtn[i].matrix[j][k] = in.matrix[i*size+j][k];
                }
            }
        }

        return rtn;
    }

    /**
     * Sort a matrix vertically.
     * @param in Matrix to be sorted.
     * @param order Row order to sort by.
     * @return Sorted Matrix instance.
     */
    public static Matrix vsort(Matrix in, int[] order){
        Matrix rtn = new Matrix(new double[order.length][in.col]);

        // It is fine for order.length to be larger than in.row.
        // The same applies to the vsort method below.
        // Details later.
        for (int i = 0; i < order.length; i++){
            for (int j = 0; j < in.col; j++){
                rtn.matrix[i][j] = in.matrix[order[i]][j];
            }
        }

        return rtn;
    }

    /**
     * Sort a matrix vertically.
     * @param in Matrix to be sorted.
     * @param order Row order to sort by.
     * @return Sorted Matrix instance.
     */
    public static Matrix vsort(Matrix in, ArrayList<Integer> order){
        Matrix rtn = new Matrix(new double[order.size()][in.col]);

        for (int i = 0; i < order.size(); i++){
            for (int j = 0; j < in.col; j++){
                rtn.matrix[i][j] = in.matrix[order.get(i)][j];
            }
        }

        return rtn;
    }

    /**
     * Calculate the sum of all elements.
     * @return Sum of all elements.
     */
    public double sum(){
        double sum = 0.;

        for (int i = 0; i < this.row; i++){
            for (int j = 0; j < this.col; j++){
                sum += this.matrix[i][j];
            }
        }

        return sum;
    }

    /**
     * Calculate the sum of all elements.
     * @param in Matrix to sum.
     * @return Sum of all elements.
     */
    public static double sum(Matrix in){
        double sum = 0.;

        for (int i = 0; i < in.row; i++){
            for (int j = 0; j < in.col; j++){
                sum += in.matrix[i][j];
            }
        }

        return sum;
    }

Optimizer.java

The parent class for the optimizers.
First, I moved the calA, calW, and forward methods, which used to live in the GradientDescent class, over here.

Optimizer.java
    /**
     * Calculate the output of a layer.
     * @param nodes Nodes in the layer.
     * @return Output of the layer.
     */
    public Matrix calA(Node[] nodes){
        Matrix rtn = new Matrix(new double[nodes[0].a.row][nodes.length]);

        for (int i = 0; i < rtn.row; i++){
            for (int j = 0; j < rtn.col; j++){
                rtn.matrix[i][j] = nodes[j].a.matrix[i][0];
            }
        }

        return Matrix.appendCol(rtn, 1.0);
    }

    /**
     * Get a matrix of weights related to the output of a node.
     * @param nodes Nodes of next layer.
     * @param num Number of the node.
     * @return Matrix instance.
     */
    public Matrix calW(Node[] nodes, int num){
        Matrix rtn = new Matrix(new double[nodes.length][1]);

        for (int i = 0; i < nodes.length; i++){
            rtn.matrix[i][0] = nodes[i].w.matrix[num][0];
        }

        return rtn;
    }

    /**
     * Perform forward propagation.
     * @param in input matrix.
     * @return Matrix instance of output.
     */
    public Matrix forward(Matrix in){
        return this.net.forward(in);
    }

For mini-batch learning, I added the following method.
It splits the training data into batches and returns them.

Optimizer.java
    /**
     * Make data for mini-batch learning.
     * @param x Input data.
     * @param t Answer.
     * @param batchSize Batch size.
     * @param rand Random instance.
     * @return Split input data and answers.
     */
    public Matrix[][] makeMiniBatch(Matrix x, Matrix t, int batchSize, Random rand){
        // Number of batches to split into
        int rtnSize = (int)(x.row / batchSize) + 1;
        int num, i;
        ArrayList<Integer> order = new ArrayList<Integer>(rtnSize);
        ArrayList<Integer> check = new ArrayList<Integer>(rtnSize);

        for (i = 0; i < x.row; i++){
            check.add(i);
        }
        // Shuffled row indices
        for (i = 0; i < x.row; i++){
            num = rand.nextInt(x.row - order.size());
            order.add(check.get(num));
            check.remove(num);
        }
        // If the data does not divide evenly by the batch size, pad with duplicate rows
        for (; i < rtnSize*batchSize; i++){
            order.add(rand.nextInt(x.row));
        }

        // Reorder the rows according to the shuffled indices
        Matrix x_ = Matrix.vsort(x, order);
        Matrix t_ = Matrix.vsort(t, order);
        // Split into batches
        Matrix[][] rtn = {Matrix.vsplit(x_, rtnSize), Matrix.vsplit(t_, rtnSize)};
        return rtn;
    }

Here, when the total amount of data is not divisible by the batch size, duplicate rows are allowed as padding. For example, with 10 samples and a batch size of 3, rtnSize is 10 / 3 + 1 = 4, so 12 indices are drawn and two of the rows appear twice. I don't know how this is actually handled in practice, but I did it this way because it was easy to implement.

Used in practice, it looks like this.
The program and its output follow.

test.java
import optimizer.*;
import network.*;
import layer.*;
import nodes.activationFunction.*;
import costFunction.*;
import matrix.*;

import java.util.*;

public class test {
    public static void main(String[] str){
        Network net = new Network(
            2,
            new Input(4, AF.RELU),
            // new Dense(10, AF.RELU),
            // new Dense(5, AF.RELU),
            new Output(1, AF.LINER)
        );
        GradientDescent GD = new GradientDescent(
            net,
            new MeanSquaredError()
        );

        Matrix X = new Matrix(new double[10][2]);
        Matrix T = new Matrix(new double[10][1]);
        for (int i = 0; i < X.row; i++){
            X.matrix[i][0] = i * 0.1;
            X.matrix[i][1] = i * 0.2;
            T.matrix[i][0] = X.matrix[i][0] + X.matrix[i][1];
        }
        // Input data (raw)
        System.out.println(X);
        // Answer data (raw)
        System.out.println(T);
        System.out.println("----------------");

        // The method created above
        Matrix a[][] = GD.makeMiniBatch(X, T, 3, new Random());
        // Print the split input and answer data
        for (int i = 0; i < a[0].length; i++){
            System.out.println(a[0][i]);
            System.out.println(a[1][i]);
        }
    }
}
[[0.0000 0.0000 ]
 [0.1000 0.2000 ]
 [0.2000 0.4000 ]
 [0.3000 0.6000 ]
 [0.4000 0.8000 ]
 [0.5000 1.0000 ]
 [0.6000 1.2000 ]
 [0.7000 1.4000 ]
 [0.8000 1.6000 ]
 [0.9000 1.8000 ]]

[[0.0000 ]
 [0.3000 ]
 [0.6000 ]
 [0.9000 ]
 [1.2000 ]
 [1.5000 ]
 [1.8000 ]
 [2.1000 ]
 [2.4000 ]
 [2.7000 ]]

----------------
[[0.7000 1.4000 ]
 [0.0000 0.0000 ]
 [0.2000 0.4000 ]]

[[2.1000 ]
 [0.0000 ]
 [0.6000 ]]

[[0.4000 0.8000 ]
 [0.8000 1.6000 ]
 [0.9000 1.8000 ]]

[[1.2000 ]
 [2.4000 ]
 [2.7000 ]]

[[0.1000 0.2000 ]
 [0.6000 1.2000 ]
 [0.5000 1.0000 ]]

[[0.3000 ]
 [1.8000 ]
 [1.5000 ]]

[[0.3000 0.6000 ]
 [0.4000 0.8000 ]
 [0.7000 1.4000 ]]

[[0.9000 ]
 [1.2000 ]
 [2.1000 ]]

You can see that the input data has been split correctly, and that adding the two inputs in each row gives the corresponding answer.

GradientDescent.java

Gradient descent. This is the one created last time.

$$
w_{new} = w_{old} - \eta \frac{\partial E}{\partial w_{old}}
$$
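
Written out as code, the update rule is just the line below. This is only a scalar sketch of the formula with made-up names, not the actual back method, which applies the same idea to whole weight matrices.

GradientDescentStep.java
public class GradientDescentStep {
    /** One gradient-descent step: w_new = w_old - eta * dE/dw. */
    static double step(double w, double dEdw, double eta){
        return w - eta * dEdw;
    }

    public static void main(String[] str){
        // One step on E(w) = w^2 (gradient 2w), starting from w = 1.0 with eta = 0.1.
        double w = 1.0;
        w = step(w, 2 * w, 0.1);
        System.out.println(w);   // 0.8
    }
}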

Added a fit method

Imitating Keras, I added a fit method.

GradientDescent.java
    /**
     * Run learning.
     * @param x Input layer.
     * @param t Answer.
     * @param nEpoch Number of epoch.
     * @return Output of this network.
     */
    public Matrix fit(Matrix x, Matrix t, int nEpoch){
        Matrix y = this.forward(x);

        for (int i = 0; i < nEpoch; i++){
            System.out.printf("Epoch %d/%d\n", i+1, nEpoch);
            this.back(x, y, t);
            y = this.forward(x);
            System.out.printf("loss: %.4f\n", this.cFunc.calcurate(y, t).matrix[0][0]);
        }

        return y;
    }

    /**
     * Run learning.
     * @param x Input layer.
     * @param t Answer.
     * @param nEpoch Number of epoch.
     * @param valX Input layer for validation.
     * @param valT Answer for validation.
     * @return Output of this network.
     */
    public Matrix fit(Matrix x, Matrix t, int nEpoch, Matrix valX, Matrix valT){
        Matrix y = this.forward(x);
        Matrix valY;

        for (int i = 0; i < nEpoch; i++){
            System.out.printf("Epoch %d/%d\n", i+1, nEpoch);
            this.back(x, y, t);
            valY = this.forward(valX);
            y = this.forward(x);
            System.out.printf(
                "loss: %.4f - valLoss: %.4f\n",
                this.cFunc.calcurate(y, t).matrix[0][0],
                this.cFunc.calcurate(valY, valT).matrix[0][0]
            );
        }

        return y;
    }

Let's actually run it with the following program.
The network settings are exactly the same as in the opening bonus, but this time it tries multiplication.

test.java
import optimizer.*;
import network.*;
import layer.*;
import nodes.activationFunction.*;
import costFunction.*;
import matrix.*;

public class test {
    public static void main(String[] str){
        Network net = new Network(
            2,
            new Input(4, AF.RELU),
            new Output(1, AF.LINER)
        );
        GradientDescent Opt = new GradientDescent(
            net,
            new MeanSquaredError()
        );

        Matrix X = new Matrix(new double[10][2]);
        Matrix T = new Matrix(new double[10][1]);
        for (int i = 0; i < X.row; i++){
            X.matrix[i][0] = i * 0.1;
            X.matrix[i][1] = i * 0.2;
            T.matrix[i][0] = X.matrix[i][0] * X.matrix[i][1];
        }

        Matrix valX = new Matrix(new double[10][2]);
        Matrix valT = new Matrix(new double[10][1]);
        for (int i = 0; i < valX.row; i++){
            valX.matrix[i][0] = i * 0.15;
            valX.matrix[i][1] = i * 0.1;
            valT.matrix[i][0] = valX.matrix[i][0] * valX.matrix[i][1];
        }

        Opt.fit(X, T, 5, valX, valT);
    }
}

Results:

Epoch 1/5
loss: 0.0408 - valLoss: 0.0374
Epoch 2/5
loss: 0.0302 - valLoss: 0.0299
Epoch 3/5
loss: 0.0307 - valLoss: 0.0304
Epoch 4/5
loss: 0.0308 - valLoss: 0.0304
Epoch 5/5
loss: 0.0308 - valLoss: 0.0304

Compared with addition, the accuracy turned out to be very low.

SGD.java

Stochastic gradient descent. The formula itself is exactly the same as gradient descent; the difference is that the parameters are updated multiple times within a single epoch.

$$
w_{new} = w_{old} - \eta \frac{\partial E}{\partial w_{old}}
$$

There is nothing in particular worth highlighting, so here is the whole file at once.

SGD.java
package optimizer;

import java.util.Random;
import network.*;
import costFunction.*;
import matrix.*;
import layer.*;
import nodes.*;

/**
 * Class for Stochastic Gradient Descent.
 */
public class SGD extends Optimizer{
    Random rand;

    /**
     * Constructor for this class.
     */
    public SGD(){
        ;
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     */
    public SGD(Network net, CostFunction f){
        this.net = net;
        this.cFunc = f;
        rand = new Random(0);
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     * @param eta Learning rate.
     */
    public SGD(Network net, CostFunction f, double eta){
        this.net = net;
        this.cFunc = f;
        this.eta = eta;
        rand = new Random(0);
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     * @param seed Seed of random.
     */
    public SGD(Network net, CostFunction f, int seed){
        this.net = net;
        this.cFunc = f;
        rand = new Random(seed);
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     * @param eta Learning rate.
     * @param seed Seed of random.
     */
    public SGD(Network net, CostFunction f, double eta, int seed){
        this.net = net;
        this.cFunc = f;
        this.eta = eta;
        rand = new Random(seed);
    }

    /**
     * Run learning.
     * @param x Input layer.
     * @param t Answer.
     * @param nEpoch Number of epoch.
     * @param batchSize Size of batch.
     * @return Output of this network.
     */
    public Matrix fit(Matrix x, Matrix t, int nEpoch, int batchSize){
        Matrix[][] xt = this.makeMiniBatch(x, t, batchSize, rand);
        Matrix[] xs = xt[0];
        Matrix[] ts = xt[1];
        Matrix y = ts[0].clone();
        int backNum = (int)(x.row / batchSize) + 1;

        for (int i = 0; i < nEpoch; i++){
            System.out.printf("Epoch %d/%d\n", i+1, nEpoch);
            for (int j = 0; j < backNum; j++){
                y = this.forward(xs[j]);
                this.back(xs[j], y, ts[j]);
                // Report the loss on the current mini-batch (not the whole training set).
                System.out.printf("\rloss: %.4f", this.cFunc.calcurate(y, ts[j]).matrix[0][0]);
            }
            System.out.println();
        }

        return y;
    }

    /**
     * Run learning.
     * @param x Input layer.
     * @param t Answer.
     * @param nEpoch Number of epoch.
     * @param batchSize Size of batch.
     * @param valX Input layer for validation.
     * @param valT Answer for validation.
     * @return Output of this network.
     */
    public Matrix fit(Matrix x, Matrix t, int nEpoch, int batchSize,
                      Matrix valX, Matrix valT){
        Matrix[][] xt = this.makeMiniBatch(x, t, batchSize, rand);
        Matrix[] xs = xt[0];
        Matrix[] ts = xt[1];
        Matrix[][] valxt = this.makeMiniBatch(valX, valT, batchSize, rand);
        Matrix[] valxs = valxt[0];
        Matrix[] valts = valxt[1];
        Matrix y = ts[0].clone();
        Matrix valY;
        int backNum = (int)(x.row / batchSize) + 1;

        for (int i = 0; i < nEpoch; i++){
            System.out.printf("Epoch %d/%d\n", i+1, nEpoch);
            for (int j = 0; j < backNum; j++){
                valY = this.forward(valxs[j]);
                y = this.forward(xs[j]);
                this.back(xs[j], y, ts[j]);
                System.out.printf(
                    "\rloss: %.4f - valLoss: %.4f",
                    this.cFunc.calcurate(y, ts[j]).matrix[0][0],
                    this.cFunc.calcurate(valY, valts[j]).matrix[0][0]
                );
            }
            System.out.println();
        }

        return y;
    }

    /**
     * Perform back propagation.
     * @param x Input layer.
     * @param y Result of forward propagation.
     * @param t Answer.
     */
    public void back(Matrix x, Matrix y, Matrix t){
        // last layer
        Layer nowLayer = this.net.layers[this.net.layers_num-1];
        Layer preLayer = this.net.layers[this.net.layers_num-2];
        for (int i = 0; i < nowLayer.nodes.length; i++){
            Node nowNode = nowLayer.nodes[i];
            Matrix cal;

            cal = this.cFunc.differential(nowNode.a, t.getCol(i));
            cal = Matrix.dot(cal.T(), nowNode.aFunc.differential(nowNode.x));
            nowNode.delta = cal.matrix[0][0];
            cal = Matrix.mult(this.calA(preLayer.nodes), nowNode.delta);
            cal.mult(-this.eta);
            nowNode.w.add(cal.meanCol().T());
        }

        // middle layer and input layer
        for (int i = this.net.layers_num-2; i >= 0; i--){
            Node[] nextNodes = this.net.layers[i+1].nodes;
            Node[] nowNodes = this.net.layers[i].nodes;
            Node[] preNodes;
            Matrix deltas = new Matrix(new double[1][nextNodes.length]);
            Matrix preA;

            if (i != 0){
                // middle layer
                preNodes = this.net.layers[i-1].nodes;
                preA = this.calA(preNodes);
            }else{
                // input layer
                preA = Matrix.appendCol(x, 1.0);
            }

            for (int j = 0; j < nextNodes.length; j++){
                deltas.matrix[0][j] = nextNodes[j].delta;
            }

            for (int j = 0; j < nowNodes.length; j++){
                Node nowNode = nowNodes[j];
                Matrix cal;

                nowNode.delta = Matrix.dot(deltas, this.calW(nextNodes, j)).matrix[0][0]
                                * nowNode.aFunc.differential(nowNode.x.meanCol()).matrix[0][0];
                cal = Matrix.mult(preA.meanCol(), -this.eta*nowNode.delta);
                nowNode.w.add(cal.T());
            }
        }
    }
}

I ran the network tried with GradientDescent, with the settings unchanged and a batch size of 2.
Here are the results.

Epoch 1/5
loss: 0.0424 - valLoss: 0.0216
Epoch 2/5
loss: 0.0000 - valLoss: 0.0005
Epoch 3/5
loss: 0.0038 - valLoss: 0.0045
Epoch 4/5
loss: 0.0054 - valLoss: 0.0065
Epoch 5/5
loss: 0.0054 - valLoss: 0.0073

The error momentarily dropping to 0 is intriguing. Compared with GD, the result is far more accurate.
I didn't include it because it would be long, but even running GD for 30 epochs hardly (in fact not at all) changed the error, so the difference doesn't seem to be simply a matter of the number of updates.

MomentumSGD.java

SGD with a momentum term added. Here $\alpha$ is the momentum parameter: the factor applied to the previous update amount.

$$
w_{new} = w_{old} - \eta \frac{\partial E}{\partial w_{old}}+\alpha\Delta w_{old}
$$
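
As a scalar sketch with made-up names (the real implementation below keeps one dw matrix per node in the dw field), the update amounts to:

MomentumStep.java
public class MomentumStep {
    double alpha = 0.9;   // momentum coefficient
    double eta = 0.01;    // learning rate (illustrative value)
    double dw = 0.0;      // previous update amount

    /** One MomentumSGD step: dw = alpha*dw - eta*dE/dw, then w += dw. */
    double step(double w, double dEdw){
        dw = alpha * dw - eta * dEdw;
        return w + dw;
    }

    public static void main(String[] str){
        MomentumStep m = new MomentumStep();
        double w = 1.0;
        // Two steps on E(w) = w^2 (gradient 2w); the second update is larger
        // because part of the first one is carried over through dw.
        w = m.step(w, 2 * w);
        w = m.step(w, 2 * w);
        System.out.println(w);
    }
}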

This one also just follows the formula, and there is nothing special to note, so here is the whole file at once. However, the forward method is completely unchanged, so I omitted it.

MomentumSGD.java
package optimizer;

import java.util.Random;
import java.util.ArrayList;
import network.*;
import costFunction.*;
import matrix.*;
import layer.*;
import nodes.*;

/**
 * Class for Momentum SGD.
 */
public class MomentumSGD extends Optimizer{
    Random rand;
    /** Value of momentum */
    double alpha = 0.9;
    /** Amount of change in weight */
    ArrayList<ArrayList<Matrix>> dw;

    /**
     * Constructor for this class.
     */
    public MomentumSGD(){
        ;
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     */
    public MomentumSGD(Network net, CostFunction f){
        this.net = net;
        this.cFunc = f;
        rand = new Random(0);
        this.setDw();
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     * @param eta Learning rate.
     * @param alpha Value of momentum.
     */
    public MomentumSGD(Network net, CostFunction f, double eta, double alpha){
        this.net = net;
        this.cFunc = f;
        this.eta = eta;
        this.alpha = alpha;
        rand = new Random(0);
        this.setDw();
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     * @param seed Seed of random.
     */
    public MomentumSGD(Network net, CostFunction f, int seed){
        this.net = net;
        this.cFunc = f;
        rand = new Random(seed);
        this.setDw();
    }

    /**
     * Constructor for this class.
     * @param net Network to which optimization is applied.
     * @param f Cost function in this net.
     * @param eta Learning rate.
     * @param alpha Value of momentum.
     * @param seed Seed of random.
     */
    public MomentumSGD(Network net, CostFunction f,
                       double eta, double alpha, int seed){
        this.net = net;
        this.cFunc = f;
        this.eta = eta;
        this.alpha = alpha;
        rand = new Random(seed);
        this.setDw();
    }

    /**
     * Set dw field.
     */
    private void setDw(){
        this.dw = new ArrayList<ArrayList<Matrix>>();
        for (int i = 0; i < this.net.layers_num; i++){
            this.dw.add(new ArrayList<Matrix>());
            for (int j = 0; j < this.net.layers[i].nodes_num; j++){
                this.dw.get(i).add(this.net.layers[i].nodes[j].w.clone());
                this.dw.get(i).get(j).fillNum(0.);
            }
        }
    }

    /**
     * Perform back propagation.
     * @param x Input layer.
     * @param y Result of forward propagation.
     * @param t Answer.
     */
    public void back(Matrix x, Matrix y, Matrix t){
        // last layer
        Layer nowLayer = this.net.layers[this.net.layers_num-1];
        Layer preLayer = this.net.layers[this.net.layers_num-2];
        ArrayList<Matrix> dw = this.dw.get(this.net.layers_num-1);
        for (int i = 0; i < nowLayer.nodes.length; i++){
            Node nowNode = nowLayer.nodes[i];
            Matrix cal;

            cal = this.cFunc.differential(nowNode.a, t.getCol(i));
            cal = Matrix.dot(cal.T(), nowNode.aFunc.differential(nowNode.x));
            nowNode.delta = cal.matrix[0][0];
            cal = Matrix.mult(this.calA(preLayer.nodes), nowNode.delta);
            cal.mult(-this.eta);
            dw.get(i).mult(this.alpha);
            dw.get(i).add(cal.meanCol().T());
            nowNode.w.add(dw.get(i));
        }

        // middle layer and input layer
        for (int i = this.net.layers_num-2; i >= 0; i--){
            Node[] nextNodes = this.net.layers[i+1].nodes;
            Node[] nowNodes = this.net.layers[i].nodes;
            Node[] preNodes;
            Matrix deltas = new Matrix(new double[1][nextNodes.length]);
            Matrix preA;

            if (i != 0){
                // middle layer
                preNodes = this.net.layers[i-1].nodes;
                preA = this.calA(preNodes);
            }else{
                // input layer
                preA = Matrix.appendCol(x, 1.0);
            }

            for (int j = 0; j < nextNodes.length; j++){
                deltas.matrix[0][j] = nextNodes[j].delta;
            }

            dw = this.dw.get(i);
            for (int j = 0; j < nowNodes.length; j++){
                Node nowNode = nowNodes[j];
                Matrix cal;

                nowNode.delta = Matrix.dot(deltas, this.calW(nextNodes, j)).matrix[0][0]
                                * nowNode.aFunc.differential(nowNode.x.meanCol()).matrix[0][0];
                cal = Matrix.mult(preA.meanCol(), -this.eta*nowNode.delta);
                dw.get(j).mult(this.alpha);
                dw.get(j).add(cal.T());
                nowNode.w.add(dw.get(j));
            }
        }
    }
}

I also ran the training done with GD and SGD using MomentumSGD. The batch size is 2.

Epoch 1/5
loss: 0.2173 - valLoss: 0.1467
Epoch 2/5
loss: 0.0106 - valLoss: 0.0004
Epoch 3/5
loss: 0.0000 - valLoss: 0.0092
Epoch 4/5
loss: 0.0181 - valLoss: 0.0426
Epoch 5/5
loss: 0.0105 - valLoss: 0.0021

The error on the training data is larger than SGD's, but the error on the validation data is smaller.

AdaGrad.java

AdaGrad adjusts the learning rate automatically. By making it smaller little by little, the aim is to head toward the solution without oscillating.

$$
h_{new}=h_{old}+\left(\frac{\partial E}{\partial w_{old}}\right)^2
$$

$$
w_{new} = w_{old} - \frac{\eta}{\sqrt{h_{new}}} \frac{\partial E}{\partial w_{old}}
$$
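
The formulas boil down to the scalar sketch below (made-up names). Textbook AdaGrad keeps one accumulator h per weight; the listing that follows instead keeps a single h shared across the whole network and refreshes it once per back call.

AdaGradStep.java
public class AdaGradStep {
    double eta = 0.001;   // base learning rate
    double h = 10e-8;     // accumulated squared gradients (small initial value)

    /** One AdaGrad step: h += (dE/dw)^2, then w -= eta / sqrt(h) * dE/dw. */
    double step(double w, double dEdw){
        h += dEdw * dEdw;
        return w - eta / Math.sqrt(h) * dEdw;
    }

    public static void main(String[] str){
        AdaGradStep a = new AdaGradStep();
        double w = 1.0;
        // Three steps on E(w) = w^2 (gradient 2w); the effective learning
        // rate eta / sqrt(h) shrinks as h accumulates.
        for (int i = 0; i < 3; i++){
            w = a.step(w, 2 * w);
            System.out.println(w);
        }
    }
}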

Only the back method is shown.
The constructors and so on are similar to what we've seen so far.

AdaGrad.java
    /**
     * Perform back propagation.
     * @param x Input layer.
     * @param y Result of forward propagation.
     * @param t Answer.
     */
    public void back(Matrix x, Matrix y, Matrix t){
        double sum = 0.;
        double eta = this.eta / Math.sqrt(this.h);

        // last layer
        Layer nowLayer = this.net.layers[this.net.layers_num-1];
        Layer preLayer = this.net.layers[this.net.layers_num-2];
        for (int i = 0; i < nowLayer.nodes.length; i++){
            Node nowNode = nowLayer.nodes[i];
            Matrix cal;

            cal = this.cFunc.differential(nowNode.a, t.getCol(i));
            cal = Matrix.dot(cal.T(), nowNode.aFunc.differential(nowNode.x));
            nowNode.delta = cal.matrix[0][0];
            cal = Matrix.mult(this.calA(preLayer.nodes), nowNode.delta);
            cal = cal.meanCol();
            sum += Matrix.sum(Matrix.pow(cal));
            cal.mult(-eta);
            nowNode.w.add(cal.T());
        }

        // middle layer and input layer
        for (int i = this.net.layers_num-2; i >= 0; i--){
            Node[] nextNodes = this.net.layers[i+1].nodes;
            Node[] nowNodes = this.net.layers[i].nodes;
            Node[] preNodes;
            Matrix deltas = new Matrix(new double[1][nextNodes.length]);
            Matrix preA;

            if (i != 0){
                // middle layer
                preNodes = this.net.layers[i-1].nodes;
                preA = this.calA(preNodes);
            }else{
                // input layer
                preA = Matrix.appendCol(x, 1.0);
            }

            for (int j = 0; j < nextNodes.length; j++){
                deltas.matrix[0][j] = nextNodes[j].delta;
            }

            for (int j = 0; j < nowNodes.length; j++){
                Node nowNode = nowNodes[j];
                Matrix cal;

                nowNode.delta = Matrix.dot(deltas, this.calW(nextNodes, j)).matrix[0][0]
                                * nowNode.aFunc.differential(nowNode.x.meanCol()).matrix[0][0];
                cal = Matrix.mult(preA.meanCol(), nowNode.delta);
                sum += Matrix.sum(Matrix.pow(cal));
                cal.mult(-eta);
                nowNode.w.add(cal.T());
            }
        }

        this.h += sum;
    }

Here are the results of the usual multiplication training: batch size 2, initial eta 0.001, initial h 10e-8.

Epoch 1/5
loss: 2473214.3959 - valLoss: 1357515.9110
Epoch 2/5
loss: 2471777.0569 - valLoss: 1356725.6043
Epoch 3/5
loss: 2470778.9335 - valLoss: 1356176.7903
Epoch 4/5
loss: 2469965.5666 - valLoss: 1355729.5627
Epoch 5/5
loss: 2469261.4202 - valLoss: 1355342.3890

The error is absurdly large.
After trying various initial values for h, changing it to 0.0005 produced very high accuracy.

Epoch 1/5
loss: 0.0016 - valLoss: 0.0001
Epoch 2/5
loss: 0.0014 - valLoss: 0.0002
Epoch 3/5
loss: 0.0012 - valLoss: 0.0002
Epoch 4/5
loss: 0.0011 - valLoss: 0.0003
Epoch 5/5
loss: 0.0010 - valLoss: 0.0003

Both the training and validation errors are the lowest so far.

RMSprop.java

An improvement on AdaGrad. It is an algorithm that, to some extent, discounts older information.

$$
h_{new}=\alpha h_{old}+(1-\alpha)\left(\frac{\partial E}{\partial w_{old}}\right)^2
$$

$$
w_{new} = w_{old} - \frac{\eta}{\sqrt{h_{new}}} \frac{\partial E}{\partial w_{old}}
$$
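
Again as a scalar sketch with made-up names; the only change from AdaGrad is that h decays with the factor $\alpha$ instead of accumulating forever (the listing below likewise shares a single h across the network).

RMSpropStep.java
public class RMSpropStep {
    double eta = 0.01;     // learning rate
    double alpha = 0.99;   // decay rate of the accumulator
    double h = 10e-8;      // decaying average of squared gradients

    /** One RMSprop step: h = alpha*h + (1-alpha)*(dE/dw)^2, then w -= eta / sqrt(h) * dE/dw. */
    double step(double w, double dEdw){
        h = alpha * h + (1 - alpha) * dEdw * dEdw;
        return w - eta / Math.sqrt(h) * dEdw;
    }

    public static void main(String[] str){
        RMSpropStep r = new RMSpropStep();
        double w = 1.0;
        // Three steps on E(w) = w^2 (gradient 2w).
        for (int i = 0; i < 3; i++){
            w = r.step(w, 2 * w);
            System.out.println(w);
        }
    }
}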

Only the back method is shown, though it differs from AdaGrad's only in the last line.

RMSprop.java
    /**
     * Perform back propagation.
     * @param x Input layer.
     * @param y Result of forward propagation.
     * @param t Answer.
     */
    public void back(Matrix x, Matrix y, Matrix t){
        double sum = 0.;
        double eta = this.eta / Math.sqrt(this.h);

        // last layer
        Layer nowLayer = this.net.layers[this.net.layers_num-1];
        Layer preLayer = this.net.layers[this.net.layers_num-2];
        for (int i = 0; i < nowLayer.nodes.length; i++){
            Node nowNode = nowLayer.nodes[i];
            Matrix cal;

            cal = this.cFunc.differential(nowNode.a, t.getCol(i));
            cal = Matrix.dot(cal.T(), nowNode.aFunc.differential(nowNode.x));
            nowNode.delta = cal.matrix[0][0];
            cal = Matrix.mult(this.calA(preLayer.nodes), nowNode.delta);
            cal = cal.meanCol();
            sum += Matrix.sum(Matrix.pow(cal));
            cal.mult(-eta);
            nowNode.w.add(cal.T());
        }

        // middle layer and input layer
        for (int i = this.net.layers_num-2; i >= 0; i--){
            Node[] nextNodes = this.net.layers[i+1].nodes;
            Node[] nowNodes = this.net.layers[i].nodes;
            Node[] preNodes;
            Matrix deltas = new Matrix(new double[1][nextNodes.length]);
            Matrix preA;

            if (i != 0){
                // middle layer
                preNodes = this.net.layers[i-1].nodes;
                preA = this.calA(preNodes);
            }else{
                // input layer
                preA = Matrix.appendCol(x, 1.0);
            }

            for (int j = 0; j < nextNodes.length; j++){
                deltas.matrix[0][j] = nextNodes[j].delta;
            }

            for (int j = 0; j < nowNodes.length; j++){
                Node nowNode = nowNodes[j];
                Matrix cal;

                nowNode.delta = Matrix.dot(deltas, this.calW(nextNodes, j)).matrix[0][0]
                                * nowNode.aFunc.differential(nowNode.x.meanCol()).matrix[0][0];
                cal = Matrix.mult(preA.meanCol(), nowNode.delta);
                sum += Matrix.sum(Matrix.pow(cal));
                cal.mult(-eta);
                nowNode.w.add(cal.T());
            }
        }

        this.h = this.alpha * this.h + (1 - this.alpha) * sum;
    }

With a learning rate of 0.01, alpha of 0.99, and an initial h of 10e-8, the multiplication training came out as follows.

Epoch 1/5
loss: 9264281289.1895 - valLoss: 5405078184.33163
Epoch 2/5
loss: 9261064856.0862 - valLoss: 5403223274.55897
Epoch 3/5
loss: 9258833049.5323 - valLoss: 5401935750.16145
Epoch 4/5
loss: 9256995415.2350 - valLoss: 5400875516.32297
Epoch 5/5
loss: 9255384580.6089 - valLoss: 5399946089.83136

An impressive amount of error.
With a learning rate of 0.01, alpha of 0.99, and an initial h of 0.00005, it learned quite well.

Epoch 1/5
loss: 0.1589 - valLoss: 0.0898
Epoch 2/5
loss: 0.0586 - valLoss: 0.0326
Epoch 3/5
loss: 0.0072 - valLoss: 0.0039
Epoch 4/5
loss: 0.0041 - valLoss: 0.0032
Epoch 5/5
loss: 0.0016 - valLoss: 0.0031

Adam.java

Currently the most widely used optimizer.

$$
m_{new}=\beta_1 m_{old}+(1-\beta_1)\frac{\partial E}{\partial w_{old}}
$$

$$
v_{new}=\beta_2 v_{old}+(1-\beta_2)\left(\frac{\partial E}{\partial w_{old}}\right)^2
$$

$$
\tilde{m}=\frac{m_{new}}{1-\beta_1}
$$

$$
\tilde{v}=\frac{v_{new}}{1-\beta_2}
$$

$$
w_{new}=w_{old}-\eta\frac{\tilde{m}}{\sqrt{\tilde{v}}}
$$
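
As a scalar sketch following the five formulas above (made-up names; the listing below keeps one m matrix per node, shares a single v across the network, and folds the two bias corrections into the update factor):

AdamStep.java
public class AdamStep {
    double eta = 0.001;    // learning rate
    double beta1 = 0.9;    // decay rate of the first moment m
    double beta2 = 0.999;  // decay rate of the second moment v
    double m = 0.0;
    double v = 10e-8;      // small initial value, as in the listings above

    /** One Adam step, following the five formulas above. */
    double step(double w, double dEdw){
        m = beta1 * m + (1 - beta1) * dEdw;
        v = beta2 * v + (1 - beta2) * dEdw * dEdw;
        double mHat = m / (1 - beta1);
        double vHat = v / (1 - beta2);
        return w - eta * mHat / Math.sqrt(vHat);
    }

    public static void main(String[] str){
        AdamStep a = new AdamStep();
        double w = 1.0;
        // Three steps on E(w) = w^2 (gradient 2w).
        for (int i = 0; i < 3; i++){
            w = a.step(w, 2 * w);
            System.out.println(w);
        }
    }
}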

Adam.java
    /**
     * Perform back propagation.
     * @param x Input layer.
     * @param y Result of forward propagation.
     * @param t Answer.
     */
    public void back(Matrix x, Matrix y, Matrix t){
        double sum = 0.;
        double v = 1 / Math.sqrt(this.v / (1-this.beta2));

        // last layer
        Layer nowLayer = this.net.layers[this.net.layers_num-1];
        Layer preLayer = this.net.layers[this.net.layers_num-2];
        ArrayList<Matrix> m = this.m.get(this.net.layers_num-1);
        for (int i = 0; i < nowLayer.nodes.length; i++){
            Node nowNode = nowLayer.nodes[i];
            Matrix cal;

            cal = this.cFunc.differential(nowNode.a, t.getCol(i));
            cal = Matrix.dot(cal.T(), nowNode.aFunc.differential(nowNode.x));
            nowNode.delta = cal.matrix[0][0];
            cal = Matrix.mult(this.calA(preLayer.nodes), nowNode.delta);
            cal = cal.meanCol();
            sum += Matrix.sum(Matrix.pow(cal));
            
            m.get(i).mult(this.beta1);
            m.get(i).add(Matrix.mult(cal.T(), (1-this.beta1)));

            nowNode.w.add(Matrix.mult(m.get(i), -this.eta*v/(1-this.beta1)));
        }

        // middle layer and input layer
        for (int i = this.net.layers_num-2; i >= 0; i--){
            Node[] nextNodes = this.net.layers[i+1].nodes;
            Node[] nowNodes = this.net.layers[i].nodes;
            Node[] preNodes;
            Matrix deltas = new Matrix(new double[1][nextNodes.length]);
            Matrix preA;

            if (i != 0){
                // middle layer
                preNodes = this.net.layers[i-1].nodes;
                preA = this.calA(preNodes);
            }else{
                // input layer
                preA = Matrix.appendCol(x, 1.0);
            }

            for (int j = 0; j < nextNodes.length; j++){
                deltas.matrix[0][j] = nextNodes[j].delta;
            }

            m = this.m.get(i);
            for (int j = 0; j < nowNodes.length; j++){
                Node nowNode = nowNodes[j];
                Matrix cal;

                nowNode.delta = Matrix.dot(deltas, this.calW(nextNodes, j)).matrix[0][0]
                                * nowNode.aFunc.differential(nowNode.x.meanCol()).matrix[0][0];
                cal = Matrix.mult(preA.meanCol(), nowNode.delta);
                sum += Matrix.sum(Matrix.pow(cal));

                m.get(j).mult(this.beta1);
                m.get(j).add(Matrix.mult(cal.T(), (1-this.beta1)));

                nowNode.w.add(Matrix.mult(m.get(j), -this.eta*v/(1-this.beta1)));
            }
        }

        this.v = this.beta2 * this.v + (1 - this.beta2) * sum;
    }

Running the multiplication learner gave the following.

Epoch 1/5
loss: 0.8741 - valLoss: 0.5682
Epoch 2/5
loss: 0.8119 - valLoss: 0.5318
Epoch 3/5
loss: 0.7491 - valLoss: 0.4948
Epoch 4/5
loss: 0.6919 - valLoss: 0.4610
Epoch 5/5
loss: 0.6414 - valLoss: 0.4310

Underwhelming.
With a learning rate of 0.001, beta1 of 0.9, beta2 of 0.999, and an initial v of 0.0000006, I got the following results.

Epoch 1/5
loss: 0.0079 - valLoss: 0.0008
Epoch 2/5
loss: 0.0045 - valLoss: 0.0002
Epoch 3/5
loss: 0.0020 - valLoss: 0.0001
Epoch 4/5
loss: 0.0006 - valLoss: 0.0005
Epoch 5/5
loss: 0.0000 - valLoss: 0.0012

The validation error did not reach AdaGrad's level, but the training error was an astonishing 0.

Next time

It was interesting how the course of training differed completely depending on the optimization function.
Next time, I want to make it possible to save and load models.

Next article

References

  1. Optimizer : 深層学習における勾配法について - Qiita
  2. 【前編】Pytorchの様々な最適化手法(torch.optim.Optimizer)の更新過程や性能を比較検証してみた! – 株式会社ライトコード
  3. 【決定版】スーパーわかりやすい最適化アルゴリズム -損失関数からAdamとニュートン法- - Qiita