Operating environment
GeForce GTX 1070 (8GB)
ASRock Z170M Pro4S [Intel Z170chipset]
Ubuntu 14.04 LTS desktop amd64
TensorFlow v0.11
cuDNN v5.1 for Linux
CUDA v8.0
Python 2.7.6
IPython 5.1.0 -- An enhanced Interactive Python.
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
GNU bash, version 4.3.8(1)-release (x86_64-pc-linux-gnu)
test_numpy_170318a.py
import numpy as np

MAXVAL_PLUS_ONE = 5
# 5x2 array of random integers in [0, MAXVAL_PLUS_ONE)
ints = np.random.randint(MAXVAL_PLUS_ONE, size=(5, 2))
print(ints)
# normalize to floats in [0, 1)
flts = ints / float(MAXVAL_PLUS_ONE)
print(flts)
for xval, yval in flts:
    print(xval, yval)  # in Python 2 this prints a tuple
Result
$ python test_numpy_170318a.py
[[3 2]
 [0 4]
 [4 0]
 [1 4]
 [4 1]]
[[ 0.6  0.4]
 [ 0.   0.8]
 [ 0.8  0. ]
 [ 0.2  0.8]
 [ 0.8  0.2]]
(0.59999999999999998, 0.40000000000000002)
(0.0, 0.80000000000000004)
(0.80000000000000004, 0.0)
(0.20000000000000001, 0.80000000000000004)
(0.80000000000000004, 0.20000000000000001)
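The long decimals in the loop output are not an error: in Python 2, `print(xval, yval)` prints a tuple, and tuples display their elements with `repr()`, which shows up to 17 significant digits. Since 0.6 has no exact binary floating-point representation, those extra digits appear. As a small illustration, the same digits can be reproduced with an explicit 17-digit format:

```python
# 0.6 cannot be represented exactly as a binary double;
# formatting it to 17 significant digits exposes the
# nearest representable value.
print('%.17g' % 0.6)  # 0.59999999999999998
```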
I have not checked whether np.random.randint() has a float version.
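As a side note, np.random.randint() itself only generates integers; the usual NumPy way to draw random floats directly is np.random.uniform() (or np.random.random_sample()). A minimal sketch of the float counterpart of the script above:

```python
import numpy as np

# 5x2 array of random floats drawn uniformly from [0, 1),
# without going through integers and dividing.
flts = np.random.uniform(size=(5, 2))
print(flts)
```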