
Machine learning with docker (30) with anaconda (30): "Advanced Deep Learning with Keras" by Philippe Remy

Posted at 2018-10-22

# 1. For those who want to use it right away

"Advanced Deep Learning with Keras" by Philippe Remy


## docker

Install docker; on Windows and Mac, start docker before proceeding.
On Windows, docker may fail to start unless Intel Virtualization is enabled in the BIOS.
Security warnings may also appear.

## docker pull and run

$ docker pull kaizenjapan/anaconda-philippe

$ docker run -it -p 8888:8888 kaizenjapan/anaconda-philippe /bin/bash

In the shell sessions below,
`(base) root@f19e2f06eabb:/#` is the command prompt; the hex string may differ in your environment. Type what appears to the right of the `#` on that line.
Every other line is output. If you see errors or differences in the output, I would appreciate a note in the comments.
Move into the folder for each chapter.

When the shell inside docker and the shell of the OS that launched docker look alike, it is easy to mistake which one you are typing into. Watch for the docker command prompt.
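One quick way to check which shell you are in (assumption: a container's hostname defaults to its 12-hex-digit container ID, as in the prompts shown in this article):

```shell
# Inside the container, the hostname is the container ID (e.g. f19e2f06eabb);
# on the host it is your machine's name.
hostname
```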

## File sharing or copying

Either share files between docker and the OS that launched it, or copy them across, so that generated files can be displayed in a browser or similar. URLs explaining how to do this are listed in the references section.

For copying, I ran the command on the side of the OS that launched docker; replace the container ID with your own. I copied the generated files and checked their contents in a browser.
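For the copy approach, docker's `cp` subcommand works from the host side. A sketch (the container ID and path here follow the session in this article and must be replaced with your own):

```shell
# On the host OS: copy a file generated inside the running container.
# Replace b350954ba6b4 with your own container ID (see `docker ps`).
docker cp b350954ba6b4:/advanced-deep-learning-keras/s1/1.2/img.png ./img.png
```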

`plt.show()` is commented out, and the following lines are added (note that `mpl.use('Agg')` must run before `matplotlib.pyplot` is imported):

import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
fig = plt.figure()
fig.savefig('img.png')

However, the saved file was about 2 KB and its contents were unclear. `plt.figure()` creates a new, empty figure, so saving it yields a blank image; saving the current figure with `plt.gcf().savefig('img.png')` should capture what the script actually drew.
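A minimal headless-save sketch (assuming matplotlib is installed; the filename `img.png` follows the article, and the plotted line is just an illustration):

```python
import matplotlib
matplotlib.use("Agg")  # select the non-GUI backend before pyplot is imported
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [10, 13, 16])   # e.g. a line like y = 3x + 10
plt.gcf().savefig("img.png")        # save the *current* figure, not a new empty one
```

A figure saved this way is typically tens of kilobytes; a near-2 KB PNG usually means an empty figure was saved.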

## Chapter 01

(base) root@b350954ba6b4:/# ls
DLwithPyTorch				 boot			      home					   mnt				run   usr
Practical-Convolutional-Neural-Networks  deep-learning-with-keras-ja  lib					   opt				sbin  var
Python-Deep-Learning			 dev			      lib64					   proc				srv
advanced-deep-learning-keras		 etc			      machine-learning-with-python-cookbook-notes  pytorch-nlp-tutorial-eu2018	sys
bin					 feature-engineering-book     media					   root				tmp
(base) root@b350954ba6b4:/# cd advanced-deep-learning-keras/
(base) root@b350954ba6b4:/advanced-deep-learning-keras# ls
README.md  s1  s2  s3  s4  s5  s6  s7
(base) root@b350954ba6b4:/advanced-deep-learning-keras# cd s1
(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1# ls
1.2  1.3  1.4
(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1# cd 1.2
(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# ls
1_linear_regression.py	2_cost_function.py  3_underfitting_overfitting.py  4_hyper_parameters.py
(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# python 1_linear_regression.py 

Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 1)                 2         
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
Train on 128 samples, validate on 128 samples
Epoch 1/100
2018-10-22 10:06:24.431168: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-10-22 10:06:24.435316: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
128/128 [==============================] - 0s 3ms/step - loss: 104.1362 - val_loss: 90.5734
Epoch 2/100
128/128 [==============================] - 0s 15us/step - loss: 92.5201 - val_loss: 80.6777
Epoch 3/100
128/128 [==============================] - 0s 16us/step - loss: 81.9747 - val_loss: 71.7360
Epoch 4/100
128/128 [==============================] - 0s 14us/step - loss: 72.4919 - val_loss: 63.6906
Epoch 5/100
128/128 [==============================] - 0s 15us/step - loss: 64.0199 - val_loss: 56.4262
Epoch 6/100
128/128 [==============================] - 0s 16us/step - loss: 56.4457 - val_loss: 49.7797
Epoch 7/100
128/128 [==============================] - 0s 19us/step - loss: 49.6001 - val_loss: 43.5884
Epoch 8/100
128/128 [==============================] - 0s 17us/step - loss: 43.3045 - val_loss: 37.7491
Epoch 9/100
128/128 [==============================] - 0s 13us/step - loss: 37.4325 - val_loss: 32.2385
Epoch 10/100
128/128 [==============================] - 0s 15us/step - loss: 31.9378 - val_loss: 27.0911
Epoch 11/100
128/128 [==============================] - 0s 13us/step - loss: 26.8356 - val_loss: 22.3654
Epoch 12/100
128/128 [==============================] - 0s 13us/step - loss: 22.1704 - val_loss: 18.1185
Epoch 13/100
128/128 [==============================] - 0s 39us/step - loss: 17.9898 - val_loss: 14.3914
Epoch 14/100
128/128 [==============================] - 0s 16us/step - loss: 14.3275 - val_loss: 11.2009
Epoch 15/100
128/128 [==============================] - 0s 28us/step - loss: 11.1955 - val_loss: 8.5368
Epoch 16/100
128/128 [==============================] - 0s 12us/step - loss: 8.5796 - val_loss: 6.3636
Epoch 17/100
128/128 [==============================] - 0s 15us/step - loss: 6.4412 - val_loss: 4.6260
Epoch 18/100
128/128 [==============================] - 0s 13us/step - loss: 4.7238 - val_loss: 3.2594
Epoch 19/100
128/128 [==============================] - 0s 14us/step - loss: 3.3627 - val_loss: 2.2007
Epoch 20/100
128/128 [==============================] - 0s 13us/step - loss: 2.2967 - val_loss: 1.3961
Epoch 21/100
128/128 [==============================] - 0s 14us/step - loss: 1.4755 - val_loss: 0.8049
Epoch 22/100
128/128 [==============================] - 0s 16us/step - loss: 0.8626 - val_loss: 0.3979
Epoch 23/100
128/128 [==============================] - 0s 14us/step - loss: 0.4333 - val_loss: 0.1532
Epoch 24/100
128/128 [==============================] - 0s 14us/step - loss: 0.1697 - val_loss: 0.0522
Epoch 25/100
128/128 [==============================] - 0s 16us/step - loss: 0.0560 - val_loss: 0.0746
Epoch 26/100
128/128 [==============================] - 0s 22us/step - loss: 0.0731 - val_loss: 0.1966
Epoch 27/100
128/128 [==============================] - 0s 17us/step - loss: 0.1971 - val_loss: 0.3903
Epoch 28/100
128/128 [==============================] - 0s 22us/step - loss: 0.3984 - val_loss: 0.6251
Epoch 29/100
128/128 [==============================] - 0s 17us/step - loss: 0.6441 - val_loss: 0.8716
Epoch 30/100
128/128 [==============================] - 0s 15us/step - loss: 0.9017 - val_loss: 1.1042
Epoch 31/100
128/128 [==============================] - 0s 13us/step - loss: 1.1432 - val_loss: 1.3047
Epoch 32/100
128/128 [==============================] - 0s 13us/step - loss: 1.3488 - val_loss: 1.4628
Epoch 33/100
128/128 [==============================] - 0s 17us/step - loss: 1.5073 - val_loss: 1.5758
Epoch 34/100
128/128 [==============================] - 0s 35us/step - loss: 1.6162 - val_loss: 1.6459
Epoch 35/100
128/128 [==============================] - 0s 15us/step - loss: 1.6789 - val_loss: 1.6780
Epoch 36/100
128/128 [==============================] - 0s 18us/step - loss: 1.7017 - val_loss: 1.6772
Epoch 37/100
128/128 [==============================] - 0s 14us/step - loss: 1.6911 - val_loss: 1.6472
Epoch 38/100
128/128 [==============================] - 0s 15us/step - loss: 1.6521 - val_loss: 1.5899
Epoch 39/100
128/128 [==============================] - 0s 29us/step - loss: 1.5875 - val_loss: 1.5064
Epoch 40/100
128/128 [==============================] - 0s 30us/step - loss: 1.4989 - val_loss: 1.3980
Epoch 41/100
128/128 [==============================] - 0s 15us/step - loss: 1.3877 - val_loss: 1.2676
Epoch 42/100
128/128 [==============================] - 0s 18us/step - loss: 1.2565 - val_loss: 1.1205
Epoch 43/100
128/128 [==============================] - 0s 33us/step - loss: 1.1101 - val_loss: 0.9638
Epoch 44/100
128/128 [==============================] - 0s 16us/step - loss: 0.9554 - val_loss: 0.8059
Epoch 45/100
128/128 [==============================] - 0s 16us/step - loss: 0.8001 - val_loss: 0.6548
Epoch 46/100
128/128 [==============================] - 0s 15us/step - loss: 0.6520 - val_loss: 0.5170
Epoch 47/100
128/128 [==============================] - 0s 15us/step - loss: 0.5170 - val_loss: 0.3966
Epoch 48/100
128/128 [==============================] - 0s 16us/step - loss: 0.3990 - val_loss: 0.2952
Epoch 49/100
128/128 [==============================] - 0s 15us/step - loss: 0.2992 - val_loss: 0.2121
Epoch 50/100
128/128 [==============================] - 0s 15us/step - loss: 0.2170 - val_loss: 0.1455
Epoch 51/100
128/128 [==============================] - 0s 93us/step - loss: 0.1505 - val_loss: 0.0935
Epoch 52/100
128/128 [==============================] - 0s 39us/step - loss: 0.0978 - val_loss: 0.0542
Epoch 53/100
128/128 [==============================] - 0s 12us/step - loss: 0.0574 - val_loss: 0.0263
Epoch 54/100
128/128 [==============================] - 0s 35us/step - loss: 0.0282 - val_loss: 0.0091
Epoch 55/100
128/128 [==============================] - 0s 44us/step - loss: 0.0099 - val_loss: 0.0015
Epoch 56/100
128/128 [==============================] - 0s 34us/step - loss: 0.0016 - val_loss: 0.0023
Epoch 57/100
128/128 [==============================] - 0s 17us/step - loss: 0.0023 - val_loss: 0.0098
Epoch 58/100
128/128 [==============================] - 0s 20us/step - loss: 0.0100 - val_loss: 0.0216
Epoch 59/100
128/128 [==============================] - 0s 38us/step - loss: 0.0224 - val_loss: 0.0355
Epoch 60/100
128/128 [==============================] - 0s 28us/step - loss: 0.0369 - val_loss: 0.0493
Epoch 61/100
128/128 [==============================] - 0s 24us/step - loss: 0.0512 - val_loss: 0.0615
Epoch 62/100
128/128 [==============================] - 0s 26us/step - loss: 0.0636 - val_loss: 0.0713
Epoch 63/100
128/128 [==============================] - 0s 14us/step - loss: 0.0734 - val_loss: 0.0785
Epoch 64/100
128/128 [==============================] - 0s 16us/step - loss: 0.0802 - val_loss: 0.0832
Epoch 65/100
128/128 [==============================] - 0s 15us/step - loss: 0.0844 - val_loss: 0.0857
Epoch 66/100
128/128 [==============================] - 0s 15us/step - loss: 0.0863 - val_loss: 0.0860
Epoch 67/100
128/128 [==============================] - 0s 14us/step - loss: 0.0861 - val_loss: 0.0842
Epoch 68/100
128/128 [==============================] - 0s 16us/step - loss: 0.0839 - val_loss: 0.0802
Epoch 69/100
128/128 [==============================] - 0s 41us/step - loss: 0.0797 - val_loss: 0.0741
Epoch 70/100
128/128 [==============================] - 0s 22us/step - loss: 0.0735 - val_loss: 0.0663
Epoch 71/100
128/128 [==============================] - 0s 15us/step - loss: 0.0657 - val_loss: 0.0573
Epoch 72/100
128/128 [==============================] - 0s 15us/step - loss: 0.0568 - val_loss: 0.0477
Epoch 73/100
128/128 [==============================] - 0s 19us/step - loss: 0.0474 - val_loss: 0.0383
Epoch 74/100
128/128 [==============================] - 0s 35us/step - loss: 0.0382 - val_loss: 0.0297
Epoch 75/100
128/128 [==============================] - 0s 18us/step - loss: 0.0298 - val_loss: 0.0222
Epoch 76/100
128/128 [==============================] - 0s 28us/step - loss: 0.0224 - val_loss: 0.0160
Epoch 77/100
128/128 [==============================] - 0s 19us/step - loss: 0.0163 - val_loss: 0.0109
Epoch 78/100
128/128 [==============================] - 0s 48us/step - loss: 0.0113 - val_loss: 0.0070
Epoch 79/100
128/128 [==============================] - 0s 16us/step - loss: 0.0073 - val_loss: 0.0040
Epoch 80/100
128/128 [==============================] - 0s 15us/step - loss: 0.0042 - val_loss: 0.0019
Epoch 81/100
128/128 [==============================] - 0s 13us/step - loss: 0.0020 - val_loss: 5.7623e-04
Epoch 82/100
128/128 [==============================] - 0s 19us/step - loss: 6.2660e-04 - val_loss: 6.0980e-05
Epoch 83/100
128/128 [==============================] - 0s 41us/step - loss: 6.6936e-05 - val_loss: 2.1531e-04
Epoch 84/100
128/128 [==============================] - 0s 15us/step - loss: 2.1649e-04 - val_loss: 8.7550e-04
Epoch 85/100
128/128 [==============================] - 0s 15us/step - loss: 9.0487e-04 - val_loss: 0.0018
Epoch 86/100
128/128 [==============================] - 0s 14us/step - loss: 0.0019 - val_loss: 0.0029
Epoch 87/100
128/128 [==============================] - 0s 26us/step - loss: 0.0030 - val_loss: 0.0039
Epoch 88/100
128/128 [==============================] - 0s 17us/step - loss: 0.0040 - val_loss: 0.0047
Epoch 89/100
128/128 [==============================] - 0s 28us/step - loss: 0.0049 - val_loss: 0.0054
Epoch 90/100
128/128 [==============================] - 0s 19us/step - loss: 0.0055 - val_loss: 0.0058
Epoch 91/100
128/128 [==============================] - 0s 14us/step - loss: 0.0059 - val_loss: 0.0060
Epoch 92/100
128/128 [==============================] - 0s 13us/step - loss: 0.0061 - val_loss: 0.0061
Epoch 93/100
128/128 [==============================] - 0s 16us/step - loss: 0.0061 - val_loss: 0.0060
Epoch 94/100
128/128 [==============================] - 0s 46us/step - loss: 0.0059 - val_loss: 0.0056
Epoch 95/100
128/128 [==============================] - 0s 16us/step - loss: 0.0056 - val_loss: 0.0052
Epoch 96/100
128/128 [==============================] - 0s 14us/step - loss: 0.0051 - val_loss: 0.0045
Epoch 97/100
128/128 [==============================] - 0s 14us/step - loss: 0.0045 - val_loss: 0.0038
Epoch 98/100
128/128 [==============================] - 0s 13us/step - loss: 0.0038 - val_loss: 0.0030
Epoch 99/100
128/128 [==============================] - 0s 17us/step - loss: 0.0030 - val_loss: 0.0023
Epoch 100/100
128/128 [==============================] - 0s 15us/step - loss: 0.0023 - val_loss: 0.0017
[array([[3.003418]], dtype=float32), array([10.041711], dtype=float32)]
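The learned parameters above (kernel ≈ 3.003, bias ≈ 10.042) suggest the script fits data of the form y ≈ 3x + 10. As a hypothetical cross-check (the ground-truth coefficients are an assumption read off from the output), ordinary least squares on such data recovers the same values:

```python
import numpy as np

# Hypothetical ground truth inferred from the fitted kernel (~3.0) and bias (~10.0).
x = np.linspace(-1.0, 1.0, 128)
y = 3.0 * x + 10.0

# Ordinary least squares for y = w*x + b using the design matrix [x, 1].
A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
```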

## 2_cost_function.py

(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# python 2_cost_function.py 
targets             = [1 2 3]
predictions         = [0 1 8]
Regression cost =  9.0
targets              = [0 1 1]
good predictions     = [0.1 0.9 0.9]
bad predictions      = [0.1 0.9 0.9]
Classification cost (good) =  0.07024034377188419
Classification cost (bad)  =  1.3040076684760484
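The printed costs can be reproduced with numpy: the regression cost is the mean squared error, and the "good" classification cost matches −mean(t·log p). The exact formulas inside 2_cost_function.py are assumptions inferred from the printed values, and the "bad predictions" line above appears to echo the good predictions, likely a printing quirk in the script.

```python
import numpy as np

targets = np.array([1, 2, 3])
predictions = np.array([0, 1, 8])
mse = np.mean((targets - predictions) ** 2)   # (1 + 1 + 25) / 3 = 9.0

t = np.array([0, 1, 1])
p = np.array([0.1, 0.9, 0.9])
ce = -np.mean(t * np.log(p))                  # -(log 0.9 + log 0.9) / 3 ≈ 0.0702
```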

3_underfitting_overfitting.py

(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# python 3_underfitting_overfitting.py 
None
/opt/conda/lib/python3.6/site-packages/matplotlib/figure.py:448: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
  % get_backend())

edit 3_underfitting_overfitting.py

(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# vi 3_underfitting_overfitting.py 
(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# python 3_underfitting_overfitting.py 
None
(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# ls
1_linear_regression.py	2_cost_function.py  3_underfitting_overfitting.py  4_hyper_parameters.py  img.png

(base) root@b350954ba6b4:/advanced-deep-learning-keras/s1/1.2# python 4_hyper_parameters.py 
(442, 10)
(442,)
Score with default parameters =  0.4512313946799056
Score with Grid Search parameters = 0.48879020446060156 best alpha = 0.001
Score with Random Search parameters = 0.48905379594162485 best alpha = 0.04036024496265811
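The shapes (442, 10) and (442,) and the alpha search are consistent with scikit-learn's diabetes dataset and Ridge regression; a minimal sketch of what 4_hyper_parameters.py likely does (the parameter grid here is an assumption):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = load_diabetes(return_X_y=True)   # X: (442, 10), y: (442,)

# Grid search over the regularization strength alpha.
grid = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 1, 5)}, cv=3)
grid.fit(X, y)
best_alpha = grid.best_params_["alpha"]
```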

image_ocr.py

(base) root@b350954ba6b4:/advanced-deep-learning-keras/s2/1.1/keras/examples# python image_ocr.py 
Traceback (most recent call last):
  File "image_ocr.py", line 40, in <module>
    import cairocffi as cairo
  File "/opt/conda/lib/python3.6/site-packages/cairocffi/__init__.py", line 41, in <module>
    cairo = dlopen(ffi, 'cairo', 'cairo-2', 'cairo-gobject-2')
  File "/opt/conda/lib/python3.6/site-packages/cairocffi/__init__.py", line 38, in dlopen
    raise OSError("dlopen() failed to load a library: %s" % ' / '.join(names))
OSError: dlopen() failed to load a library: cairo / cairo-2 / cairo-gobject-2
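cairocffi fails here because the cairo shared library itself is missing from the image. Installing it with apt should let the import succeed (an assumption: the base image is Debian-based, as continuumio/anaconda3 is):

```shell
# Inside the container: install the C library that cairocffi tries to dlopen()
apt update
apt install -y libcairo2
```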

(base) root@b350954ba6b4:/advanced-deep-learning-keras/s3/1.1# python img_classification_example.py 
Traceback (most recent call last):
  File "img_classification_example.py", line 7, in <module>
    import matplotlib.pyplot as plt
  File "/opt/conda/lib/python3.6/site-packages/matplotlib/pyplot.py", line 2371, in <module>
    switch_backend(rcParams["backend"])
  File "/opt/conda/lib/python3.6/site-packages/matplotlib/__init__.py", line 892, in __getitem__
    plt.switch_backend(rcsetup._auto_backend_sentinel)
  File "/opt/conda/lib/python3.6/site-packages/matplotlib/pyplot.py", line 196, in switch_backend
    switch_backend(candidate)
  File "/opt/conda/lib/python3.6/site-packages/matplotlib/pyplot.py", line 207, in switch_backend
    backend_mod = importlib.import_module(backend_name)
  File "/opt/conda/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/conda/lib/python3.6/site-packages/matplotlib/backends/backend_gtk3agg.py", line 6, in <module>
    from . import backend_agg, backend_cairo, backend_gtk3
  File "/opt/conda/lib/python3.6/site-packages/matplotlib/backends/backend_cairo.py", line 19, in <module>
    import cairocffi as cairo
  File "/opt/conda/lib/python3.6/site-packages/cairocffi/__init__.py", line 41, in <module>
    cairo = dlopen(ffi, 'cairo', 'cairo-2', 'cairo-gobject-2')
  File "/opt/conda/lib/python3.6/site-packages/cairocffi/__init__.py", line 38, in dlopen
    raise OSError("dlopen() failed to load a library: %s" % ' / '.join(names))
OSError: dlopen() failed to load a library: cairo / cairo-2 / cairo-gobject-2
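In this second traceback, matplotlib's automatic backend selection probes GTK3Agg, which pulls in the same broken cairo import. Forcing the non-GUI Agg backend before matplotlib is first imported sidesteps the probe; a sketch using the MPLBACKEND environment variable:

```python
import os
os.environ["MPLBACKEND"] = "Agg"   # must be set before matplotlib is imported
import matplotlib
import matplotlib.pyplot as plt    # loads the Agg backend; no cairo needed
```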
## jupyter notebook

jupyter notebook --ip=0.0.0.0 --allow-root


<img width="994" alt="py30-1.png" src="https://qiita-image-store.s3.amazonaws.com/0/51423/f6e0fb31-6c23-fb6c-bdc4-6fc2a572746c.png">

<img width="980" alt="py30-2.png" src="https://qiita-image-store.s3.amazonaws.com/0/51423/6b9fefa1-6033-111b-d0b5-de7f0d5bfbea.png">

![py30-3.png](https://qiita-image-store.s3.amazonaws.com/0/51423/d29e15e3-ca1b-ace2-28b6-ac2fbb806b2e.png)

<img width="978" alt="py30-4.png" src="https://qiita-image-store.s3.amazonaws.com/0/51423/448ecc02-c1c3-9203-ff36-eda781c3d45d.png">

<img width="971" alt="py30-5.png" src="https://qiita-image-store.s3.amazonaws.com/0/51423/8aeb7a2d-0248-71c3-3c5c-7f885c87b4a2.png">



# 2. For those who want to build the docker image themselves

The rest of this article records the policy and steps used to create the docker image you pulled above.
It is reference material for using that image; it is not needed to work through the rest of the book.
These are the steps for building docker/anaconda yourself.
It is not a method based on writing a dockerfile. Sorry.
## docker

A mechanism for using Linux systems such as ubuntu and debian in the same way from linux, windows, and mac os.
Its advantage is that it works without changing the settings of the host OS.
A large number of people can use the same specification.
Both images officially supported by the software's developers and images prepared for convenience by users are available. Here, I take an officially distributed image, customize it myself, and make it available to others.
## python

I have been doing deep-learning exercises in Python.
The reasons for using python are that many machine-learning frameworks are available in python, and that statistical tools such as R can also be used easily from python.
### anaconda

python has version differences (2 vs 3) and differences in distribution method.
I have been using python3 via Anaconda for the past year and a half.
I chose Anaconda because statistical-analysis libraries and Jupyter Notebook are included from the start.
## Official docker distributions

There are official distributions of OSes such as ubuntu and debian, and of language stacks such as gcc and anaconda.
By using these and registering the result on docker-hub, you get quality assurance from the official distribution and broad information sharing, including the right to modify. "Official" here means official from each software provider, not something distributed by docker itself.
### docker pull

Official docker distributions are used by pulling them from their URL.
### docker Anaconda

I use the image officially distributed by anaconda.

$ docker pull kaizenjapan/anaconda-keras
Using default tag: latest
latest: Pulling from continuumio/anaconda3
Digest: sha256:e07b9ca98ac1eeb1179dbf0e0bbcebd87701f8654878d6d8ce164d71746964d1
Status: Image is up to date for continuumio/anaconda3:latest

$ docker run -it -p 8888:8888 continuumio/anaconda3 /bin/bash

In practice, I pulled another image I had previously pushed, which already had keras and tensorflow installed.

## apt

```shell-session
(base) root@d8857ae56e69:/# apt update; apt -y upgrade

(base) root@d8857ae56e69:/# apt install -y procps vim apt-utils sudo
```

## Source (git)

```
(base) root@f19e2f06eabb:/# git clone https://github.com/philipperemy/advanced-deep-learning-keras
```

## conda

```
# conda update --prefix /opt/conda anaconda
Solving environment: done

# conda install theano
```

## pip

```
(base) root@f19e2f06eabb:/deep-learning-from-scratch-2/ch01# pip install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
    100% |████████████████████████████████| 1.3MB 2.0MB/s 
distributed 1.21.8 requires msgpack, which is not installed.
Installing collected packages: pip
  Found existing installation: pip 10.0.1
    Uninstalling pip-10.0.1:
      Successfully uninstalled pip-10.0.1
Successfully installed pip-18.0
```

# Registering on docker hub

```
$ docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
caef766a99ff        continuumio/anaconda3   "/usr/bin/tini -- /b…"   10 hours ago        Up 10 hours         0.0.0.0:8888->8888/tcp   sleepy_bassi

$ docker commit caef766a99ff kaizenjapan/anaconda-philippe

$ docker push kaizenjapan/anaconda-philippe
```

# References

Why machine learning with docker? List of books and sources (in progress, target: 100)
https://qiita.com/kaizen_nagoya/items/ddd12477544bf5ba85e2

Machine learning with docker (1) with anaconda (1): "Deep Learning from Scratch" (ゼロから作るDeep Learning) by Koki Saitoh
https://qiita.com/kaizen_nagoya/items/a7e94ef6dca128d035ab

Machine learning with docker (2) with anaconda (2): "Deep Learning from Scratch 2: Natural Language Processing" by Koki Saitoh
https://qiita.com/kaizen_nagoya/items/3b80dfc76933cea522c6

Machine learning with docker (3) with anaconda (3): "直感Deep Learning" by Antonio Gulli and Sujit Pal, chapters 1 and 2
https://qiita.com/kaizen_nagoya/items/483ae708c71c88419c32

Machine learning with docker (71) environment setup (1): docker, nothing but errors no matter what I tried
https://qiita.com/kaizen_nagoya/items/690d806a4760d9b9e040

Machine learning with docker (72) environment setup (2): Docker for Windows
https://qiita.com/kaizen_nagoya/items/c4daa5cf52e9f0c2c002

Machine learning with docker (73) environment setup (3): docker/linux/macos bash scripts, ms-dos batch files
https://qiita.com/kaizen_nagoya/items/3f7b39110b7f303a5558

Machine learning with docker (74) environment setup (4): R, how many hurdles?
https://qiita.com/kaizen_nagoya/items/5fb44773bc38574bcf1c

Machine learning with docker (75) environment setup (5): managing docker-related files
https://qiita.com/kaizen_nagoya/items/4f03df9a42c923087b5d

Tried to run OpenCV from Python, was told libGL.so was missing, and solved it
https://qiita.com/toshitanian/items/5da24c0c0bd473d514c8

Tips for plotting with matplotlib on the server side
https://qiita.com/TomokIshii/items/3a26ee4453f535a69e9e

Copying files between host and container with Docker
https://qiita.com/gologo13/items/7e4e404af80377b48fd5

Using file sharing with Docker for Mac
https://qiita.com/seijimomoto/items/1992d68de8baa7e29bb5

"Nagoya's IoT on Nagoya's OS": how on earth to use Docker. TOPPERS/FMP on RaspberryPi with Macintosh edition, five hurdles
https://qiita.com/kaizen_nagoya/items/9c46c6da8ceb64d2d7af

The road to 64-bit CPUs and/or a resolution at age 64
https://qiita.com/kaizen_nagoya/items/cfb5ffa24ded23ab3f60

How to run a reading circle for "Deep Learning from Scratch 2: Natural Language Processing" (an example)
https://qiita.com/kaizen_nagoya/items/025eb3f701b36209302e

Trying NVIDIA Docker on Ubuntu 16.04 LTS
https://blog.amedama.jp/entry/2017/04/03/235901

# Document history
ver. 0.10 first draft 2018-10-22
### Thank you very much for reading to the last sentence.

Please press the like icon 💚 and follow me for your happy life.