
Neural Network > Neural Computation: Lecture 7 by John A. Bullinaria, 2015 > cost function | activation function | normalization of outputs

Posted at 2017-08-13
Operating environment
GeForce GTX 1070 (8GB)
ASRock Z170M Pro4S [Intel Z170 chipset]
Ubuntu 16.04 LTS desktop amd64
TensorFlow v1.1.0
cuDNN v5.1 for Linux
CUDA v8.0
Python 3.5.2
IPython 6.0.0 -- An enhanced Interactive Python.
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)

This is about learning a 5-dimensional function with TensorFlow.

What kinds of cost function and activation functions are good choices?

Neural Computation: Lecture 7
@ John A. Bullinaria, 2015
http://www.cs.bham.ac.uk/~jxb/INC/l7.pdf
@ L7-9

Regression/Function Approximation Problems
SSE cost function, linear output activations, sigmoid hidden activations

I am currently using the cost function, output activation, and hidden activations listed above.
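
As a reference, here is a minimal TensorFlow 1.x sketch of this combination (sigmoid hidden activations, linear output activation, SSE cost). The layer sizes (5 inputs, 32 hidden units, 1 output) are assumptions for illustration, not the exact network from this article.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 5])   # 5-dimensional input
t = tf.placeholder(tf.float32, [None, 1])   # target value

# hidden layer: sigmoid activation
W1 = tf.Variable(tf.truncated_normal([5, 32], stddev=0.1))
b1 = tf.Variable(tf.zeros([32]))
h = tf.nn.sigmoid(tf.matmul(x, W1) + b1)

# output layer: linear activation (no nonlinearity)
W2 = tf.Variable(tf.truncated_normal([32, 1], stddev=0.1))
b2 = tf.Variable(tf.zeros([1]))
y = tf.matmul(h, W2) + b2

# SSE cost function: sum of squared errors over the batch
sse = tf.reduce_sum(tf.square(y - t))
```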

In each case, application of the gradient descent learning algorithm (by computing the
partial derivatives) leads to appropriate back-propagation weight update equations.

I am currently using AdamOptimizer. When I switched to SGD, the loss did not progress well; this may depend on the choice of parameters.
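
Switching optimizers is a one-line change, as in the sketch below. The loss here is a dummy placeholder and the learning rates are illustrative assumptions, not tuned values from this article.

```python
import tensorflow as tf

# dummy loss just to illustrate the optimizer choice;
# in the actual setup this would be the SSE cost defined above
w = tf.Variable(1.0)
loss = tf.square(w)

# currently using Adam
train_step = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
# plain SGD for comparison:
# train_step = tf.train.GradientDescentOptimizer(learning_rate=1e-3).minimize(loss)
```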

...

@ L7-16

For regression problems, the SSE cost function itself is often the most useful measure of performance. Sometimes it is helpful to divide that by the number of patterns and take the square root to give the Root-Mean-Squared Error (RMSE). It might also help to normalize the outputs by dividing by the target mean or standard-deviation.

This passage describes normalizing the outputs.

I also tried standardizing the output-layer data, but the error is still large.
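
A minimal NumPy sketch of the RMSE computation and target standardization mentioned in the quote; the target and prediction values here are made-up examples, not data from this experiment.

```python
import numpy as np

t = np.array([[2.0], [4.0], [6.0]])       # targets (example values)
y_pred = np.array([[2.5], [3.5], [6.5]])  # network outputs (example values)

# SSE -> divide by number of patterns -> square root = RMSE
sse = np.sum((y_pred - t) ** 2)
rmse = np.sqrt(sse / len(t))

# standardize targets by their mean and standard deviation;
# the network is then trained on t_norm
t_mean, t_std = t.mean(), t.std()
t_norm = (t - t_mean) / t_std

# predictions made in the normalized scale are mapped back like this
y_denorm = y_pred * t_std + t_mean
```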
