
TensorFlow v1.1 / Migration > tf.pack() is now tf.stack()

Environment
GeForce GTX 1070 (8GB)
ASRock Z170M Pro4S [Intel Z170 chipset]
Ubuntu 16.04 LTS desktop amd64
TensorFlow v1.1.0 (hereafter TF)
cuDNN v5.1 for Linux
CUDA v8.0
Python 3.5.2

TensorFlow / ADDA > Learning data for initial values of linear equations > Training code: v0.3 / training results

I migrated from an Ubuntu 14.04 + TensorFlow v0.8 environment to Ubuntu 16.04 + TensorFlow v1.1.0.

Trying to run code written for TF v0.8 fails as follows:

$ python3 learnExr_170422.py 
Traceback (most recent call last):
  File "learnExr_170422.py", line 53, in <module>
    inputs = tf.pack([xpos, ypos, zpos])
AttributeError: module 'tensorflow' has no attribute 'pack'

https://github.com/tensorflow/tensorflow/issues/7550

As far as I know, tf.pack has been renamed as tf.stack.

So tf.pack() has been renamed to tf.stack().

After switching to tf.stack(), the code ran.
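
For reference, a minimal sketch of the rename (the constants here are only illustrative and are not part of the script below):

import tensorflow as tf

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
# TF v0.8 wrote: stacked = tf.pack([a, b])
stacked = tf.stack([a, b])  # stacks along a new axis -> shape (2, 2)

with tf.Session() as sess:
    print(sess.run(stacked))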

learnExr_170504.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np

'''
v0.5 May 04, 2017
  - use [tf.stack] instead of [tf.pack]
=== on Ubuntu 16.04 / CUDA8 / cuDNN5.1 / Python 3 ===
v0.4 Mar. 03, 2017
  - learn [Exr, Exi, Eyr, Eyi, Ezr, Ezi]
v0.3 Mar. 03, 2017
  - learn [Exr] and [Exi]
  - add [Eyr, Eyi, Ezr, Ezi] for decode_csv()
v0.2 Apr. 29, 2017
  - save to [model_variables_170429.npy]
  - learn [Exr] only, instead of [Exr, Exi]
v0.1 Apr. 23, 2017
  - change [NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN] from [100] to [9328]
  - change input layer's node from [2] to [3]
  - [input.csv] has 9 columns
=== branched from [learn_xxyyfunc_170321.py] to [learnExr_170422.py] ===
v0.5 Apr. 01, 2017
  - change network from [7,7,7] to [100, 100, 100]
v0.4 Mar. 31, 2017
  - calculate [capacity] from [min_queue_examples] and [batch_size]
v0.3 Mar. 24, 2017
  - change [capacity] from 100 to 40
v0.2 Mar. 24, 2017
  - change [capacity] from 40 to 100
  - output [model_variables] after training
v0.1 Mar. 22, 2017
  - learn mapping of R^2 input to R^2 output
     + using data prepared by [prep_data_170321.py]
  - branched from sine curve learning at
    http://qiita.com/7of9/items/ce58e66b040a0795b2ae
'''

# codingrule:PEP8


filename_queue = tf.train.string_input_producer(["input.csv"])

# parse CSV
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)
def_rec = [[0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.]]
wrk = tf.decode_csv(value, record_defaults=def_rec)
xpos, ypos, zpos, Exr, Exi, Eyr, Eyi, Ezr, Ezi = wrk
inputs = tf.stack([xpos, ypos, zpos])
output = tf.stack([Exr, Exi, Eyr, Eyi, Ezr, Ezi])

batch_size = 4  # [4]
# Ref: cifar10_input.py
min_fraction_of_examples_in_queue = 0.2  # 0.4
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 9328
min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN *
                         min_fraction_of_examples_in_queue)
#
inputs_batch, output_batch = tf.train.shuffle_batch(
    [inputs, output], batch_size, capacity=min_queue_examples + 3 * batch_size,
    min_after_dequeue=batch_size)

input_ph = tf.placeholder("float", [None, 3])
output_ph = tf.placeholder("float", [None, 6])

## network
hiddens = slim.stack(input_ph, slim.fully_connected, [100, 100, 100],
                     activation_fn=tf.nn.sigmoid, scope="hidden")
prediction = slim.fully_connected(
    hiddens, 6, activation_fn=None, scope="output")
loss = tf.contrib.losses.mean_squared_error(prediction, output_ph)

train_op = slim.learning.create_train_op(loss, tf.train.AdamOptimizer(0.001))

init_op = tf.initialize_all_variables()  # deprecated; tf.global_variables_initializer() is the TF v1.x name

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    try:
        sess.run(init_op)
        for i in range(90000):  # 30000
            inpbt, outbt = sess.run([inputs_batch, output_batch])
            _, t_loss = sess.run([train_op, loss],
                                 feed_dict={input_ph: inpbt, output_ph: outbt})

            if (i+1) % 100 == 0:
                print("%d,%f" % (i+1, t_loss))
                sys.stdout.flush()

    finally:
        coord.request_stop()

    # output the model
    model_variables = slim.get_model_variables()
    res = sess.run(model_variables)
    np.save('model_variables_170429.npy', res)

    coord.join(threads)
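
To inspect the saved weights afterwards, something like the following should work (a minimal sketch; allow_pickle=True is needed on newer NumPy versions because the file holds an object array of per-layer arrays):

import numpy as np

# load the list of weight/bias arrays saved by the training script
variables = np.load('model_variables_170429.npy', allow_pickle=True)
for i, v in enumerate(variables):
    print(i, v.shape)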