1. For those who want to get started right away
「Python Deep Learning」 By Valentino Zocca, Gianmario Spacagna, Daniel Slater, Peter Roelants
http://shop.oreilly.com/product/9781786464453.do
docker
Install docker; on Windows and Mac, start docker before proceeding.
On Windows, docker may fail to start unless Intel Virtualization is enabled in the BIOS.
Security warnings may also appear.
docker pull and run
$ docker pull kaizenjapan/anaconda-valentino
$ docker run -it -p 8888:8888 kaizenjapan/anaconda-valentino /bin/bash
In the shell sessions below,
(base) root@f19e2f06eabb:/# is the command prompt; the hexadecimal part will likely differ on your machine. On such lines, type what appears to the right of the #.
All other lines are output. If your output contains errors or differs, I would appreciate a note in the comments.
Move into each chapter's folder.
When the shell inside docker and the shell of the host OS look similar, it is easy to lose track of which one you are operating in. Pay attention to the command prompt.
File sharing or copying
To view generated files in a browser, either share files between docker and the host OS, or copy them out. URLs describing how to do this are listed in the references section.
For copying, I ran the command on the host OS side; replace the container ID with that of your own docker container. I copied the files out and checked their contents in a browser.
plt.show()
is commented out, and these four lines are added:
import matplotlib as mpl
mpl.use('Agg')
fig = plt.figure()
fig.savefig('img.png')
However, the saved file was about 2 KB and its contents could not be confirmed.
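A probable cause of the unreadable 2 KB file: plt.figure() called after plotting opens a new, blank figure, so fig.savefig writes that blank figure. A minimal sketch that saves what was actually drawn (assuming matplotlib is installed; the plotted data and the filename img.png are placeholders):

```python
import matplotlib as mpl
mpl.use('Agg')                    # choose the non-GUI backend before pyplot is imported
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])    # stand-in for whatever the script draws

# Save the figure that was drawn: take the current figure
# instead of creating a fresh (empty) one with plt.figure().
plt.gcf().savefig('img.png')      # equivalently: plt.savefig('img.png')
```

With the Agg backend, plt.show() only prints the "non-GUI backend" warning seen in the logs below, so it can stay commented out.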
Chapter 01
(base) root@b350954ba6b4:/# cd Python-Deep-Learning/
(base) root@b350954ba6b4:/Python-Deep-Learning# ls
Chapter 01 Chapter 02 Chapter 03 Chapter 04 Chapter 05 Chapter 06 Chapter 07 Chapter 08 Chapter 09 LICENSE README.md
(base) root@b350954ba6b4:/Python-Deep-Learning# cd Chapter\ 01
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# ls
Chapter1_ex1_v2.py Chapter1_ex2_v2.py Chapter1_ex3_v2.py
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# python Chapter1_ex1_v2.py
Accuracy: 0.98
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# python Chapter1_ex2_v2.py
Misclassified samples: 2
Accuracy: 0.97
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# python Chapter1_ex3_v2.py
Misclassified samples: 3
Accuracy: 0.96
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
/opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:448: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
% get_backend())
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# vi Chapter1_ex3_v2.py
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# python Chapter1_ex3_v2.py
Misclassified samples: 3
Accuracy: 0.96
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 01# ls
Chapter1_ex1_v2.py Chapter1_ex2_v2.py Chapter1_ex3_v2.py ex3.png
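The repeated 'c' argument warnings above are emitted when plt.scatter receives a single RGB tuple such as c=(1, 0, 0): when its length happens to match the number of points, matplotlib cannot tell one color from three values to color-map. The change the warning asks for, sketched minimally (assuming matplotlib is installed; the data and color are placeholders):

```python
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt

x, y = [0, 1, 2], [1, 2, 3]

# Wrap the RGB triple in a 2-D array with a single row so it is
# always interpreted as one color for all points, never value-mapped.
plt.scatter(x, y, c=[[1.0, 0.0, 0.0]])
plt.savefig('scatter_c_fix.png')
```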
Chapter 02
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 02# python Ch2Example.py
File "Ch2Example.py", line 139
print "Final prediction"
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Final prediction")?
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 02# vi Ch2Example.py
print "Final prediction"
was changed to
print("Final prediction")
(the source is Python 2; under Python 3, print must be called as a function).
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 02# python Ch2Example.py
epochs: 0.0
[0 0] 0.31634987228520156
[0 1] 0.38455314510086014
[1 0] 0.49960366414001517
[1 1] 0.5470092417007291
epochs: 1.0
[0 0] 0.10110562119575021
[0 1] 0.4983062530300437
[1 0] 0.5483740117095983
[1 1] 0.6358128781126651
epochs: 2.0
[0 0] 0.07164948329787552
[0 1] 0.8610758132814952
[1 0] 0.8502850626450229
[1 1] 0.07158530421971639
epochs: 3.0
[0 0] 0.01758656789925367
[0 1] 0.966663773401989
[1 0] 0.9651222166853127
[1 1] 0.011468668342141863
epochs: 4.0
[0 0] -0.0017118993569207345
[0 1] 0.9815292663780985
[1 0] 0.9828812324283913
[1 1] -0.0003037091869647862
epochs: 5.0
[0 0] 0.0026985374081322524
[0 1] 0.9885083594808965
[1 0] 0.9891298042443863
[1 1] 0.015552778145753128
epochs: 6.0
[0 0] 0.005625211435815758
[0 1] 0.9922099656276941
[1 0] 0.9915443580479175
[1 1] 0.01694332658239157
epochs: 7.0
[0 0] 0.0019544398675402134
[0 1] 0.9934850143000605
[1 0] 0.9934672674082785
[1 1] 0.0007886110283729943
epochs: 8.0
[0 0] 0.0036493566842660274
[0 1] 0.9950489745378326
[1 0] 0.9943655959618727
[1 1] 0.003149359618503074
epochs: 9.0
[0 0] 0.0023304944233086483
[0 1] 0.9958242355532402
[1 0] 0.9953474336963716
[1 1] 0.006106035948582399
Final prediction
[0 0] 0.003032173692500074
[0 1] 0.9963860761357731
[1 0] 0.9959034563937058
[1 1] 0.0006386449217581
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
/opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:448: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
% get_backend())
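Ch2Example.py trains a small two-layer network on XOR, and the log above shows the predictions converging toward 0 for [0 0] and [1 1] and toward 1 for [0 1] and [1 0]. The same idea as a standard-library-only sketch (not the book's code; the hidden-layer size, learning rate, and epoch count are arbitrary choices):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
T = [0, 1, 1, 0]                      # XOR targets

H = 4                                  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

for epoch in range(10000):
    for x, t in zip(X, T):
        h, y = forward(x)
        # Backpropagation of squared error through the sigmoid output.
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # before w2[j] is updated
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

for x in X:
    print(x, forward(x)[1])
```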
Chapter 03
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 03# vi mnist_chapter3_example.py
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 03# python mnist_chapter3_example.py
Using TensorFlow backend.
Epoch 1/30
2018-10-22 07:27:32.670071: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-10-22 07:27:32.673396: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
60000/60000 [==============================] - 2s 39us/step - loss: 0.9433 - acc: 0.7644
Epoch 2/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.4844 - acc: 0.8830
Epoch 3/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.3917 - acc: 0.9008
Epoch 4/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.3442 - acc: 0.9106
Epoch 5/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.3137 - acc: 0.9171
Epoch 6/30
60000/60000 [==============================] - 2s 36us/step - loss: 0.2917 - acc: 0.9224
Epoch 7/30
60000/60000 [==============================] - 2s 37us/step - loss: 0.2732 - acc: 0.9262
Epoch 8/30
60000/60000 [==============================] - 2s 36us/step - loss: 0.2603 - acc: 0.9296
Epoch 9/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.2478 - acc: 0.9317
Epoch 10/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.2396 - acc: 0.9347
Epoch 11/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.2291 - acc: 0.9377
Epoch 12/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.2248 - acc: 0.9369
Epoch 13/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.2145 - acc: 0.9417
Epoch 14/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.2116 - acc: 0.9418
Epoch 15/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.2029 - acc: 0.9440
Epoch 16/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1992 - acc: 0.9449
Epoch 17/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1918 - acc: 0.9469
Epoch 18/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1866 - acc: 0.9479
Epoch 19/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1835 - acc: 0.9493
Epoch 20/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1777 - acc: 0.9511
Epoch 21/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1747 - acc: 0.9525
Epoch 22/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1700 - acc: 0.9524
Epoch 23/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1688 - acc: 0.9530
Epoch 24/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1647 - acc: 0.9549
Epoch 25/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1612 - acc: 0.9556
Epoch 26/30
60000/60000 [==============================] - 2s 34us/step - loss: 0.1613 - acc: 0.9553
Epoch 27/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1558 - acc: 0.9571
Epoch 28/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1543 - acc: 0.9570
Epoch 29/30
60000/60000 [==============================] - 2s 35us/step - loss: 0.1513 - acc: 0.9583
Epoch 30/30
60000/60000 [==============================] - 2s 33us/step - loss: 0.1473 - acc: 0.9586
10000/10000 [==============================] - 0s 39us/step
Test accuracy: 0.9523
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 03# ls
ch3.png mnist_chapter3_example.py neuron_images.png
Chapter 04
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 03# cd ../Chapter\ 04
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 04# ls
restricted_boltzmann_machine.py
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 04# python restricted_boltzmann_machine.py
WARNING:tensorflow:From restricted_boltzmann_machine.py:8: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:252: _internal_retry.<locals>.wrap.<locals>.wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please use urllib or similar directly.
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
2018-10-22 07:33:19.940632: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-10-22 07:33:19.941045: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:189: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
epochs 0 loss 1108.4028
epochs 1 loss 713.5559
epochs 2 loss 647.7294
epochs 3 loss 620.2671
epochs 4 loss 484.40912
epochs 5 loss 500.04822
epochs 6 loss 394.70355
epochs 7 loss 344.99576
epochs 8 loss 386.39786
epochs 9 loss 331.01086
epochs 10 loss 314.53247
epochs 11 loss 318.10516
epochs 12 loss 354.9399
epochs 13 loss 282.9333
epochs 14 loss 302.8286
epochs 15 loss 289.76477
epochs 16 loss 316.52426
epochs 17 loss 301.83133
epochs 18 loss 318.17334
epochs 19 loss 280.11926
Chapter 05
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 05# python astro_chapter5.py
/opt/conda/lib/python3.6/site-packages/theano/tensor/nnet/conv.py:98: UserWarning: theano.tensor.nnet.conv.conv2d is deprecated. Use theano.tensor.nnet.conv2d instead.
warnings.warn("theano.tensor.nnet.conv.conv2d is deprecated."
/opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:448: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
% get_backend())
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 05# vi astro_chapter5.py
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 05# python astro_chapter5.py
/opt/conda/lib/python3.6/site-packages/theano/tensor/nnet/conv.py:98: UserWarning: theano.tensor.nnet.conv.conv2d is deprecated. Use theano.tensor.nnet.conv2d instead.
warnings.warn("theano.tensor.nnet.conv.conv2d is deprecated."
(base) root@b350954ba6b4:/Python-Deep-Learning/Chapter 05# ls
astro0.png astro2.png astro_chapter5.py img.png mnist_chapter5_example_convolution.py
astro1.png astro3.png cifar_chapter5_example_convolution.py mnist_chapter5_example.py
2. For those who want to build the docker image themselves
From here down, I record the policy and the steps by which the docker image pulled above was built.
This is reference material for using the docker image above; it is not needed for working through the rest of the book.
It describes the procedure for building docker/anaconda by hand.
It is not a method that uses a dockerfile. Sorry.
docker
A mechanism for using Linux distributions such as ubuntu and debian in the same way from linux, windows, and mac os.
A benefit is that it can be used without changing the host OS configuration.
A large number of people can work from an identical setup.
Both images officially provided by the software's developers and images conveniently packaged by users are available. Here I take an officially distributed image, customize it, and make it available to others.
python
I have been doing Deep Learning exercises in Python.
The reasons for using python are that many machine learning frameworks can be used from python, and that statistical tools such as R are also easy to call from python.
anaconda
python differs between versions 2 and 3, and in how it is distributed.
I have been using python3 via Anaconda for the past year and a half.
I chose Anaconda because statistical analysis libraries and Jupyter Notebook are included from the start.
docker official distributions
There are official distributions of OSes such as ubuntu and debian, and of languages such as gcc and anaconda.
By using these and registering the result on docker-hub, you can rely on the quality of the official distribution and share a wide range of information, including the right to modify. "Official" here means officially distributed by each software provider, not by docker itself.
docker pull
An official docker distribution is used by pulling it from its URL.
docker Anaconda
I use the image officially distributed by anaconda.
$ docker pull kaizenjapan/anaconda-keras
Using default tag: latest
latest: Pulling from continuumio/anaconda3
Digest: sha256:e07b9ca98ac1eeb1179dbf0e0bbcebd87701f8654878d6d8ce164d71746964d1
Status: Image is up to date for continuumio/anaconda3:latest
$ docker run -it -p 8888:8888 continuumio/anaconda3 /bin/bash
In practice, I pulled another pushed image that already had keras and tensorflow installed.
apt
(base) root@d8857ae56e69:/# apt update; apt -y upgrade
(base) root@d8857ae56e69:/# apt install -y procps vim apt-utils sudo
Source code (git)
(base) root@f19e2f06eabb:/# git clone https://github.com/PacktPublishing/Python-Deep-Learning
conda
# conda update --prefix /opt/conda anaconda
Solving environment: done
# conda install theano
pip
(base) root@f19e2f06eabb:/deep-learning-from-scratch-2/ch01# pip install --upgrade pip
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
100% |████████████████████████████████| 1.3MB 2.0MB/s
distributed 1.21.8 requires msgpack, which is not installed.
Installing collected packages: pip
Found existing installation: pip 10.0.1
Uninstalling pip-10.0.1:
Successfully uninstalled pip-10.0.1
Successfully installed pip-18.0
Registering on docker hub
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
caef766a99ff continuumio/anaconda3 "/usr/bin/tini -- /b…" 10 hours ago Up 10 hours 0.0.0.0:8888->8888/tcp sleepy_bassi
$ docker commit caef766a99ff kaizenjapan/anaconda-valentino
$ docker push kaizenjapan/anaconda-valentino
References
なぜdockerで機械学習するか 書籍・ソース一覧作成中 (目標100)
https://qiita.com/kaizen_nagoya/items/ddd12477544bf5ba85e2
dockerで機械学習(1) with anaconda(1)「ゼロから作るDeep Learning - Pythonで学ぶディープラーニングの理論と実装」斎藤 康毅 著
https://qiita.com/kaizen_nagoya/items/a7e94ef6dca128d035ab
dockerで機械学習(2)with anaconda(2)「ゼロから作るDeep Learning2自然言語処理編」斎藤 康毅 著
https://qiita.com/kaizen_nagoya/items/3b80dfc76933cea522c6
dockerで機械学習(3)with anaconda(3)「直感Deep Learning」Antonio Gulli、Sujit Pal 第1章,第2章
https://qiita.com/kaizen_nagoya/items/483ae708c71c88419c32
dockerで機械学習(71) 環境構築(1) docker どっかーら、どーやってもエラーばっかり。
https://qiita.com/kaizen_nagoya/items/690d806a4760d9b9e040
dockerで機械学習(72) 環境構築(2) Docker for Windows
https://qiita.com/kaizen_nagoya/items/c4daa5cf52e9f0c2c002
dockerで機械学習(73) 環境構築(3) docker/linux/macos bash スクリプト, ms-dos batchファイル
https://qiita.com/kaizen_nagoya/items/3f7b39110b7f303a5558
dockerで機械学習(74) 環境構築(4) R 難関いくつ?
https://qiita.com/kaizen_nagoya/items/5fb44773bc38574bcf1c
dockerで機械学習(75)環境構築(5)docker関連ファイルの管理
https://qiita.com/kaizen_nagoya/items/4f03df9a42c923087b5d
OpenCVをPythonで動かそうとしてlibGL.soが無いって言われたけど解決した。
https://qiita.com/toshitanian/items/5da24c0c0bd473d514c8
サーバサイドにおけるmatplotlibによる作図Tips
https://qiita.com/TomokIshii/items/3a26ee4453f535a69e9e
Dockerでホストとコンテナ間でのファイルコピー
https://qiita.com/gologo13/items/7e4e404af80377b48fd5
Docker for Mac でファイル共有を利用する
https://qiita.com/seijimomoto/items/1992d68de8baa7e29bb5
「名古屋のIoTは名古屋のOSで」Dockerをどっかーらどうやって使えばいいんでしょう。TOPPERS/FMP on RaspberryPi with Macintosh編 5つの関門
https://qiita.com/kaizen_nagoya/items/9c46c6da8ceb64d2d7af
64bitCPUへの道 and/or 64歳の決意
https://qiita.com/kaizen_nagoya/items/cfb5ffa24ded23ab3f60
ゼロから作るDeepLearning2自然言語処理編 読書会の進め方(例)
https://qiita.com/kaizen_nagoya/items/025eb3f701b36209302e
Ubuntu 16.04 LTS で NVIDIA Docker を使ってみる
https://blog.amedama.jp/entry/2017/04/03/235901
Ethernet 記事一覧 Ethernet(0)
https://qiita.com/kaizen_nagoya/items/88d35e99f74aefc98794
Wireshark 一覧 wireshark(0)、Ethernet(48)
https://qiita.com/kaizen_nagoya/items/fbed841f61875c4731d0
線網(Wi-Fi)空中線(antenna)(0) 記事一覧(118/300目標)
https://qiita.com/kaizen_nagoya/items/5e5464ac2b24bd4cd001
C++ Support(0)
https://qiita.com/kaizen_nagoya/items/8720d26f762369a80514
Coding Rules(0) C Secure , MISRA and so on
https://qiita.com/kaizen_nagoya/items/400725644a8a0e90fbb0
Autosar Guidelines C++14 example code compile list(1-169)
https://qiita.com/kaizen_nagoya/items/8ccbf6675c3494d57a76
Error一覧(C/C++, python, bash...) Error(0)
https://qiita.com/kaizen_nagoya/items/48b6cbc8d68eae2c42b8
言語処理100本ノックをdockerで。python覚えるのに最適。:10+12
https://qiita.com/kaizen_nagoya/items/7e7eb7c543e0c18438c4
プログラムちょい替え(0)一覧:4件
https://qiita.com/kaizen_nagoya/items/296d87ef4bfd516bc394
一覧の一覧( The directory of directories of mine.) Qiita(100)
https://qiita.com/kaizen_nagoya/items/7eb0e006543886138f39
官公庁・学校・公的団体(NPOを含む)システムの課題、官(0)
https://qiita.com/kaizen_nagoya/items/04ee6eaf7ec13d3af4c3
プログラマが知っていると良い「公序良俗」
https://qiita.com/kaizen_nagoya/items/9fe7c0dfac2fbd77a945
LaTeX(0) 一覧
https://qiita.com/kaizen_nagoya/items/e3f7dafacab58c499792
自動制御、制御工学一覧(0)
https://qiita.com/kaizen_nagoya/items/7767a4e19a6ae1479e6b
Rust(0) 一覧
https://qiita.com/kaizen_nagoya/items/5e8bb080ba6ca0281927
小川清最終講義、最終講義(再)計画, Ethernet(100) 英語(100) 安全(100)
https://qiita.com/kaizen_nagoya/items/e2df642e3951e35e6a53
Document history
ver. 0.10 first draft, 20181022
Thank you very much for reading to the last sentence.
Please press the like icon 💚 and follow me for your happy life.