GeForce GTX 1070 (8GB)
ASRock Z170M Pro4S [Intel Z170 chipset]
Ubuntu 16.04 LTS desktop amd64
TensorFlow v1.1.0
cuDNN v5.1 for Linux
CUDA v8.0
Python 3.5.2
IPython 6.0.0 -- An enhanced Interactive Python.
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Training code v0.1 http://qiita.com/7of9/items/5819f36e78cc4290614e
http://qiita.com/7of9/items/fce03c7bf508661de0da
This is a continuation of the articles above.
This article is related to ADDA (light scattering simulator based on the discrete dipole approximation).
- Read the TFRecords and train on them (a record-writing sketch follows this list)
- input: 5 nodes
- output: 6 nodes
- number of samples: 223,872
- training data: values computed by ADDA
- #input
  - x, y, z: dipole position
  - refractive index: real and imaginary part
- #output
  - initial values for the linear equation solution for (x,y,z), (real, imaginary)
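For reference, here is a minimal sketch of how one sample in this layout could be written to a .tfrecords file. It assumes each of the 11 features is stored as the raw bytes of a single float32, which is what read_and_decode() in the training code below expects; the helper name write_record() and the file name are made up for this example.

```python
import numpy as np
import tensorflow as tf


def write_record(writer, values):
    # values: dict mapping the 11 feature names ('xpos_raw', ..., 'ezi_raw')
    # to single float values
    def _bytes(v):
        # one float32, serialized as its raw byte string
        return tf.train.Feature(bytes_list=tf.train.BytesList(
            value=[np.asarray([v], dtype=np.float32).tobytes()]))

    example = tf.train.Example(features=tf.train.Features(
        feature={key: _bytes(val) for key, val in values.items()}))
    writer.write(example.SerializeToString())


# usage (hypothetical values)
with tf.python_io.TFRecordWriter('sample_170812.tfrecords') as writer:
    write_record(writer, {
        'xpos_raw': 0.1, 'ypos_raw': 0.2, 'zpos_raw': 0.3,
        'mr_raw': 1.5, 'mi_raw': 0.01,
        'exr_raw': 0.0, 'exi_raw': 0.0, 'eyr_raw': 0.0,
        'eyi_raw': 0.0, 'ezr_raw': 0.0, 'ezi_raw': 0.0})
```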
Changing the output layer > Standardizing the data
The network had been trained with six values in the output layer, but I wanted to make the post-training error even smaller. Here I use just one value in the output layer and watch how the loss evolves.
http://qiita.com/7of9/items/fce03c7bf508661de0da
Looking at the results above, the loss was not decreasing only for the EXR output.
Matplotlib | numpy > data standardization and plotting the result | Error: AttributeError while adding colorbar in matplotlib
As described in the article above, I tried "standardizing the data".
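Standardization here is the usual transform: subtract the mean and divide by the standard deviation, per output channel. A quick NumPy illustration (values made up):

```python
import numpy as np

x = np.array([0.3, -0.1, 0.5], dtype=np.float32)  # made-up sample values
z = (x - x.mean()) / x.std()  # standardized: mean ~0, stddev ~1
```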
Training code v0.13

```python
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
import sys
"""
v0.13 Aug. 12, 2017
- standardize output with (mean, stddev)
+ read_and_decode() handles standardization
+ add standardize_data()
v0.12 Aug. 08, 2017
- handles only one output
- delete dropout
- add dropout
v0.3 - v0.11 Jul. 22 - Aug. 08, 2017
- play around with network parameters
+ batch size
+ learning rate
+ hidden layer nodes' numbers
v0.2 Jul. 22, 2017
- increase step from [30000] to [90000]
- change [capacity]
v0.1 Jul. 22, 2017
- increase network structure from [7,7,7] to [100,100,100]
- increase dimension of [input_ph], [output_ph]
- alter read_and_decode() to treat 5 input-, 6 output- nodes
- alter [IN_FILE] to the symbolic linked file
:reference: [learnExr_170504.py] to expand dimensions to [input:3,output:6]
=== branched from [learn_sineCurve_170708.py] ===
v0.6 Jul. 09, 2017
- modify for PEP8
- print prediction after learning
v0.5 Jul. 09, 2017
- fix bug > [Attempting to use uninitialized value hidden/hidden_1/weights]
v0.4 Jul. 09, 2017
- fix bug > stops only for one epoch
+ set [num_epochs=None] for string_input_producer()
- change parameters for shuffle_batch()
- implement training
v0.3 Jul. 09, 2017
- fix warning > use tf.local_variables_initializer() instead of
initialize_local_variables()
- fix warning > use tf.global_variables_initializer() instead of
initialize_all_variables()
v0.2 Jul. 08, 2017
- fix bug > OutOfRangeError (current size 0)
+ use [tf.initialize_local_variables()]
v0.1 Jul. 08, 2017
- only read [.tfrecords]
+ add inputs_xy()
+ add read_and_decode()
"""
# codingrule: PEP8
IN_FILE = 'LN-IntField-Y_170722.tfrecords'


def standardize_data(ax, mean, stddev):
    return (ax - mean) / stddev


def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={
            'xpos_raw': tf.FixedLenFeature([], tf.string),
            'ypos_raw': tf.FixedLenFeature([], tf.string),
            'zpos_raw': tf.FixedLenFeature([], tf.string),
            'mr_raw': tf.FixedLenFeature([], tf.string),
            'mi_raw': tf.FixedLenFeature([], tf.string),
            'exr_raw': tf.FixedLenFeature([], tf.string),
            'exi_raw': tf.FixedLenFeature([], tf.string),
            'eyr_raw': tf.FixedLenFeature([], tf.string),
            'eyi_raw': tf.FixedLenFeature([], tf.string),
            'ezr_raw': tf.FixedLenFeature([], tf.string),
            'ezi_raw': tf.FixedLenFeature([], tf.string),
        })

    xpos_raw = tf.decode_raw(features['xpos_raw'], tf.float32)
    ypos_raw = tf.decode_raw(features['ypos_raw'], tf.float32)
    zpos_raw = tf.decode_raw(features['zpos_raw'], tf.float32)
    mr_raw = tf.decode_raw(features['mr_raw'], tf.float32)
    mi_raw = tf.decode_raw(features['mi_raw'], tf.float32)
    exr_raw = tf.decode_raw(features['exr_raw'], tf.float32)
    exi_raw = tf.decode_raw(features['exi_raw'], tf.float32)
    eyr_raw = tf.decode_raw(features['eyr_raw'], tf.float32)
    eyi_raw = tf.decode_raw(features['eyi_raw'], tf.float32)
    ezr_raw = tf.decode_raw(features['ezr_raw'], tf.float32)
    ezi_raw = tf.decode_raw(features['ezi_raw'], tf.float32)

    xpos_org = tf.reshape(xpos_raw, [1])
    ypos_org = tf.reshape(ypos_raw, [1])
    zpos_org = tf.reshape(zpos_raw, [1])
    mr_org = tf.reshape(mr_raw, [1])
    mi_org = tf.reshape(mi_raw, [1])
    exr_org = tf.reshape(exr_raw, [1])
    exi_org = tf.reshape(exi_raw, [1])
    eyr_org = tf.reshape(eyr_raw, [1])
    eyi_org = tf.reshape(eyi_raw, [1])
    ezr_org = tf.reshape(ezr_raw, [1])
    ezi_org = tf.reshape(ezi_raw, [1])

    # input
    wrk = [xpos_org[0], ypos_org[0], zpos_org[0], mr_org[0], mi_org[0]]
    inputs = tf.stack(wrk)

    # for standardization
    # obtained from [calc_mean_std_170812.py]
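    # order: (exr, exi, eyr, eyi, ezr, ezi) -- matches the indexing below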
    means = (0.000000, 0.000000, 0.018333, 0.011257, 0.000000, 0.000000)
    stddevs = (0.108670, 0.079414, 0.704788, 0.579868, 0.167189, 0.271590)
    # --- w/o standardization
    # out_exr = exr_org[0]
    # --- w/ standardization
    out_exr = standardize_data(exr_org[0], means[0], stddevs[0])
    out_exi = standardize_data(exi_org[0], means[1], stddevs[1])
    out_eyr = standardize_data(eyr_org[0], means[2], stddevs[2])
    out_eyi = standardize_data(eyi_org[0], means[3], stddevs[3])
    out_ezr = standardize_data(ezr_org[0], means[4], stddevs[4])
    out_ezi = standardize_data(ezi_org[0], means[5], stddevs[5])
    # --- six outputs
    # wrk = [out_exr, out_exi,
    #        out_eyr, out_eyi,
    #        out_ezr, out_ezi]
    # --- single output
    wrk = [out_ezi]
    #
    outputs = tf.stack(wrk)
    return inputs, outputs


def inputs_xy():
    filename = IN_FILE
    filequeue = tf.train.string_input_producer(
        [filename], num_epochs=None)
    in_org, out_org = read_and_decode(filequeue)
    return in_org, out_org


in_orgs, out_orgs = inputs_xy()
batch_size = 2  # [2]
# Ref: cifar10_input.py
min_fraction_of_examples_in_queue = 0.2  # 0.4
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 223872  # 223872 or 9328
min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN *
                         min_fraction_of_examples_in_queue)
cpcty = min_queue_examples + 3 * batch_size
in_batch, out_batch = tf.train.shuffle_batch([in_orgs, out_orgs],
                                             batch_size,
                                             capacity=cpcty,
                                             min_after_dequeue=batch_size)

input_ph = tf.placeholder("float", [None, 5])
# output_ph = tf.placeholder("float", [None, 6])  # [6]
output_ph = tf.placeholder("float", [None, 1])  # [6]

# network
hiddens = slim.stack(input_ph, slim.fully_connected, [30, 100, 100],
                     activation_fn=tf.nn.sigmoid, scope="hidden")
# --- six outputs
# prediction = slim.fully_connected(hiddens, 6,
#                                   activation_fn=None, scope="output")
# --- only one output
prediction = slim.fully_connected(hiddens, 1,
                                  activation_fn=None, scope="output")

loss = tf.contrib.losses.mean_squared_error(prediction, output_ph)
train_op = slim.learning.create_train_op(loss, tf.train.AdamOptimizer())
init_op = [tf.global_variables_initializer(),
           tf.local_variables_initializer()]

with tf.Session() as sess:
    sess.run(init_op)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        for idx in range(3000000):  # 3000000
            inpbt, outbt = sess.run([in_batch, out_batch])
            # print(outbt)  # debug
            _, t_loss = sess.run([train_op, loss],
                                 feed_dict={input_ph: inpbt,
                                            output_ph: outbt})
            if (idx + 1) % 100 == 0:
                print("%d,%f" % (idx + 1, t_loss))
                # sys.stdout.flush()  # not good for Matplotlib drawing
    finally:
        coord.request_stop()

    # output the model
    model_variables = slim.get_model_variables()
    res = sess.run(model_variables)
    np.save('model_variables_170722.npy', res)

    coord.join(threads)
```
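The means and stddevs hard-coded in read_and_decode() come from [calc_mean_std_170812.py], which is not listed in this article. A minimal sketch of how such per-channel statistics could be computed, assuming the six output columns (exr, exi, eyr, eyi, ezr, ezi) are available as a NumPy array (the file name is made up):

```python
import numpy as np

# hypothetical dump of the 223,872 x 6 training targets
outputs = np.load('outputs_170812.npy')

means = outputs.mean(axis=0)   # per-channel mean
stddevs = outputs.std(axis=0)  # per-channel standard deviation
print(', '.join('%f' % v for v in means))
print(', '.join('%f' % v for v in stddevs))
```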
Progression of the loss
For all six outputs (trained one at a time), the loss now decreases.
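One caveat: since the targets are standardized, the network's predictions are in standardized units. To compare them with the raw ADDA values, the transform has to be inverted; a sketch, where pred is the network output for the ezi channel:

```python
# inverse of standardize_data(): map a standardized value
# back to the original scale
def destandardize(z, mean, stddev):
    return z * stddev + mean

# e.g. for ezi (index 5 in the means/stddevs tuples)
# ezi = destandardize(pred, means[5], stddevs[5])
```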