Deliberately Learning "Deep Learning from Scratch" in Ruby, Chapter 4: A Two-Layer Neural Network with Numerical Partial Derivatives

Posted at 2020-02-24

An implementation of a two-layer neural network trained with numerical partial derivatives, a computation so slow it makes the CPU grind. Next time I plan a faster implementation with backpropagation.

Basic functions
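
The class below calls sigmoid, softmax, and cross_entropy_error out of functions.rb, carried over from earlier chapters and not reprinted in this post. Since the file isn't shown, here is a minimal sketch of what those helpers would look like; only the names and call signatures come from the code below, the bodies are my assumption.

functions.rb
require 'numo/narray'

# Element-wise sigmoid
def sigmoid(x)
  1 / (1 + Numo::NMath.exp(-x))
end

# Row-wise softmax, subtracting each row's max for numerical stability
def softmax(x)
  c = x.max(axis: 1, keepdims: true)
  e = Numo::NMath.exp(x - c)
  e / e.sum(axis: 1, keepdims: true)
end

# Mean cross-entropy over the batch; t holds integer class labels
def cross_entropy_error(y, t)
  n, k = y.shape
  # flat index of each row's correct-class probability
  idx = Numo::Int32.new(n).seq * k + t.cast_to(Numo::Int32)
  -Numo::NMath.log(y.flatten[idx] + 1e-7).sum / n
end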

Numerical gradient
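
numerical_gradient.rb is also required but not reprinted. Here is a sketch consistent with how it's called below: it perturbs each element of a parameter array in place by ±h, re-evaluates the loss via the central difference, and restores the element. The exact body is my assumption.

numerical_gradient.rb
require 'numo/narray'

# Central-difference gradient of f at x: df/dx_i ≈ (f(x+h) - f(x-h)) / 2h.
# x is mutated one element at a time and then restored, which is why the
# loss lambda in the class below can ignore its argument and re-read @params.
def numerical_gradient(f, x)
  h = 1e-4
  grad = Numo::DFloat.zeros(x.shape)

  x.size.times do |i|
    tmp = x[i]
    x[i] = tmp + h
    fxh1 = f.call(x)       # f(x + h)
    x[i] = tmp - h
    fxh2 = f.call(x)       # f(x - h)
    grad[i] = (fxh1 - fxh2) / (2 * h)
    x[i] = tmp             # restore
  end
  grad
end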

Two-layer neural network

two_layer_neuralnet.rb
require 'numo/narray'
require './functions'
require './numerical_gradient'

class TwoLayerNeuralNet
  # Initialize weights with small Gaussian noise and biases with zeros
  def initialize(input_size, hidden_size, output_size, weight_init_std = 0.01)
    @params = {}
    @params['w1'] = weight_init_std * Numo::DFloat.new(input_size, hidden_size).rand_norm
    @params['b1'] = Numo::DFloat.zeros(hidden_size)
    @params['w2'] = weight_init_std * Numo::DFloat.new(hidden_size, output_size).rand_norm
    @params['b2'] = Numo::DFloat.zeros(output_size)
  end

  # Expose the parameter hash as network.params
  attr_reader :params

  # Forward pass: affine -> sigmoid -> affine -> softmax
  def predict(x)
    w1, w2 = self.params['w1'], self.params['w2']
    b1, b2 = self.params['b1'], self.params['b2']

    a1 = x.dot(w1) + b1
    z1 = sigmoid(a1)
    a2 = z1.dot(w2) + b2
    softmax(a2)
  end

  # Cross-entropy loss
  def loss(x, t)
    y = self.predict(x)
    cross_entropy_error(y, t)
  end

  # Classification accuracy
  def accuracy(x, t)
    y = predict(x)
    # max_index(1) returns indices into the flattened array, so take
    # modulo the class count (10) to recover each row's argmax
    y = y.max_index(1) % 10

    y.eq(t).cast_to(Numo::UInt32).sum / y.shape[0].to_f
  end

  # Numerical gradient of the loss with respect to each parameter
  def numerical_gradients(x, t)
    # The lambda ignores w: numerical_gradient perturbs each parameter
    # array in place, and loss(x, t) re-reads it through @params
    loss_w = lambda { |w| loss(x, t) }

    grads = {}
    grads['w1'] = numerical_gradient(loss_w, self.params['w1'])
    grads['b1'] = numerical_gradient(loss_w, self.params['b1'])
    grads['w2'] = numerical_gradient(loss_w, self.params['w2'])
    grads['b2'] = numerical_gradient(loss_w, self.params['b2'])

    grads
  end
end
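
Before launching the full MNIST run, a tiny smoke test (hypothetical, not in the original post) helps confirm the shapes line up, assuming the functions.rb sketch above:

# Hypothetical smoke test: 4 inputs, 3 hidden units, 2 classes
net = TwoLayerNeuralNet.new(4, 3, 2)
x = Numo::DFloat.new(5, 4).rand      # 5 random samples
t = Numo::Int32[0, 1, 1, 0, 1]       # integer labels
p net.predict(x).shape               # => [5, 2]
p net.loss(x, t)                     # ~ln(2) ≈ 0.693 with near-uniform softmax at init

Note that accuracy is hard-wired to 10 classes via the % 10, so it only makes sense on MNIST itself.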

Training script

exec_numerical_gradient_neuralnet.rb
require 'numo/narray'
require './functions'
require './numerical_gradient'
require './two_layer_neuralnet'
require 'datasets'
require 'mini_magick'

train_size = 60000
test_size = 10000
batch_size = 100
iters_num = 10000
learning_rate = 0.1

# Load MNIST via red-datasets and flatten each 28x28 image to a 784-vector
train = Datasets::MNIST.new(type: :train)
x_train = Numo::NArray.concatenate(train.map { |t| t.pixels }).reshape(train_size, 784)
t_train = Numo::NArray.concatenate(train.map { |t| t.label })
test = Datasets::MNIST.new(type: :test)
x_test = Numo::NArray.concatenate(test.map { |t| t.pixels }).reshape(test_size, 784)
t_test = Numo::NArray.concatenate(test.map { |t| t.label })

train_loss_list = []
train_acc_list = []
test_acc_list = []
keys = ['w1', 'b1', 'w2', 'b2']

# Evaluate accuracy once per epoch
iter_per_epoch = [train_size / batch_size, 1].max

network = TwoLayerNeuralNet.new(784, 50, 10) # input_size, hidden_size, output_size

iters_num.times do |i|
  # Sample a random mini-batch of indices from the training set
  batch_mask = Array.new(batch_size) { rand train_size }
  x_batch = x_train[batch_mask, true]
  t_batch = t_train[batch_mask]

  grad = network.numerical_gradients(x_batch, t_batch)

  keys.each do |key|
    network.params[key] -= learning_rate * grad[key]
  end

  loss = network.loss(x_batch, t_batch)
  train_loss_list.append(loss)

  if i % iter_per_epoch == 0
    train_acc = network.accuracy(x_train, t_train)
    test_acc = network.accuracy(x_test, t_test)
    train_acc_list.append(train_acc)
    test_acc_list.append(test_acc)
    p "train acc, test acc | " + train_acc.to_s + ", " + test_acc.to_s
  end
end
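
A note on speed: the network has 784*50 + 50 + 50*10 + 10 = 39,760 parameters, and the central difference re-runs the full forward pass twice per parameter, so every call to numerical_gradients costs roughly 80,000 loss evaluations on the mini-batch. That is the CPU grinding mentioned at the top; the backpropagation version planned for the next post obtains the same gradients from a single forward and backward pass per batch.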
