
TensorFlow environment rebuild notes (Ubuntu 16.04, CUDA 9.0, built from source)

Posted at 2018-05-05

TensorFlow build as of 2018-10-31 (Halloween)

Here is a record of installing v1.11.0.

$ git checkout refs/tags/v1.11.0
$ ./configure
(see below)
$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package 
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install /tmp/tensorflow_pkg/tensorflow-1.11.0-cp36-cp36m-linux_x86_64.whl  # the wheel filename matches the built version
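The wheel filename under /tmp/tensorflow_pkg depends on the version you checked out and on your Python ABI. A purely illustrative sketch of how the name is assembled (the real name is produced by build_pip_package; the tags here assume Python 3.6 on 64-bit Linux):

```python
# Illustrative only: how the pip wheel filename is put together.
# Adjust the tags to your own interpreter and platform.

def wheel_name(version, py_tag="cp36", abi_tag="cp36m", platform="linux_x86_64"):
    """Return the expected wheel filename for a given TensorFlow version."""
    return "tensorflow-{}-{}-{}-{}.whl".format(version, py_tag, abi_tag, platform)

print(wheel_name("1.11.0"))
# -> tensorflow-1.11.0-cp36-cp36m-linux_x86_64.whl
```

If the `pip install` path does not match what ended up in /tmp/tensorflow_pkg, check the actual filename with `ls` first.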

In ./configure I answered as follows. This setup uses the GPU but none of the optional integrations such as GCP or AWS.

$ ./configure 
WARNING: Running Bazel server needs to be killed, because the startup options are different.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.16.0 installed.
Please specify the location of python. [Default is /home/niz/anaconda3/bin/python]: (Enter)


Found possible Python library paths:
  /home/niz/anaconda3/lib/python3.6/site-packages
  /home/niz/lab/ext/tensorflow/models/research
  /home/niz/lab/ext/tensorflow/models/research/slim
Please input the desired Python library path to use.  Default is [/home/niz/anaconda3/lib/python3.6/site-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: (Enter)
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: n
No Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: (Enter)
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: (Enter)
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: (Enter)
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with nGraph support? [y/N]: (Enter)
No nGraph support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: (Enter)
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Do you wish to build TensorFlow with TensorRT support? [y/N]: (Enter)
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3
(In one case this step stalled for about an hour.)

Do you want to use clang as CUDA compiler? [y/N]: (Enter)
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: (Enter)


Do you wish to build TensorFlow with MPI support? [y/N]: (Enter)
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: (Enter)


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: (Enter)
Not configuring the WORKSPACE for Android builds.
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
        --config=mkl            # Build with MKL support.
        --config=monolithic     # Config for mostly static monolithic build.
Configuration finished

(Old) Introduction

Wanting to try the new models announced day after day, I upgraded my machine learning framework, and ended up in this cursed state:

2018-05-02 18:32:35.789145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0, 1
   : (snip)
2018-05-02 18:32:36.865023: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10222 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0, compute capability: 6.1)
2018-05-02 18:32:36.913837: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10222 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
2018-05-02 18:32:36.914640: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 9.98G (10719158528 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-05-02 18:32:36.915037: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 8.98G (9647242240 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-05-02 18:32:36.915467: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 8.09G (8682517504 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-05-02 18:32:36.915867: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 7.28G (7814265344 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
   : (snip)
2018-05-02 18:32:36.925404: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 594.44M (623316480 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-05-02 18:32:36.928568: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 535.00M (560984832 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
INFO:tensorflow:Restoring parameters from model2/model.ckpt-215411
INFO:tensorflow:Restoring parameters from model2/model.ckpt-215411
   : (snip)
2018-05-02 18:33:06.947264: E tensorflow/stream_executor/cuda/cuda_dnn.cc:403] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2018-05-02 18:33:06.947325: E tensorflow/stream_executor/cuda/cuda_dnn.cc:370] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
2018-05-02 18:33:06.947337: F tensorflow/core/kernels/conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms)
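One thing visible in the log above: each failed allocation is roughly 0.9x the previous one, i.e. the allocator backs off and retries with smaller requests before giving up. A throwaway parser (sample lines copied from the log above) makes the pattern visible:

```python
import re

# Extract the failed allocation sizes (bytes) from the
# CUDA_ERROR_OUT_OF_MEMORY lines quoted above.
log = """\
failed to allocate 9.98G (10719158528 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
failed to allocate 8.98G (9647242240 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
failed to allocate 8.09G (8682517504 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
"""

sizes = [int(m.group(1)) for m in re.finditer(r"\((\d+) bytes\)", log)]
ratios = [b / a for a, b in zip(sizes, sizes[1:])]
print([round(r, 2) for r in ratios])  # each retry is ~0.9x the previous request
```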

Whatever I tried, it aborted at the "F" (fatal) check failure.

  • Rebooting didn't help.
  • The real cause of the CUDA_ERROR_OUT_OF_MEMORY apparently lies elsewhere, but trying the suggested fix didn't help either.
  • Reinstalling just TensorFlow (via pip) failed in exactly the same way…

In the end,

  • reinstalling the CUDA Toolkit fixed it,

but while I was at it, prompted by this message:

W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.

As this message suggests, I went ahead and built from source so that the library is optimized for my own machine.

This article is a record of the process, in the hope that it helps others in the same situation.


Scope

This post covers three things:

  1. Installing CUDA 9.0 and cuDNN 7.0
  2. Building and installing TensorFlow from source
  3. Building TensorBoard and a simple install

Note: TensorFlow/TensorBoard here are built from the then-current source, not from a stable branch.

1. Installing CUDA 9.0 and cuDNN 7.0

# Download CUDA.
wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda_9.0.176_384.81_linux-run
wget https://developer.nvidia.com/compute/cuda/9.0/Prod/patches/1/cuda_9.0.176.1_linux-run
wget https://developer.nvidia.com/compute/cuda/9.0/Prod/patches/2/cuda_9.0.176.2_linux-run

# If the NVIDIA driver was installed via apt, purge it first.
sudo apt purge nvidia*
# Stop X.
sudo service lightdm stop
# Install the CUDA Toolkit.
sudo sh ./cuda_9.0.176_384.81_linux-run
sudo sh ./cuda_9.0.176.1_linux-run
sudo sh ./cuda_9.0.176.2_linux-run

(A few reboots were needed at points along the way.)

# Confirm that this displays correctly.
nvidia-smi

# If the driver just won't install:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-384
# After installing via apt as above, check
nvidia-smi
# and once the output looks right, repeat from "Stop X" and install the CUDA Toolkit again.

# Install cuDNN.
tar xf cudnn-9.0-linux-x64-v7.tgz
sudo mv cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo mv cuda/include/cudnn.h /usr/local/cuda/include
rm -fr cuda
# This part needs root.
(sudo bash) root$ echo "/usr/local/cuda/lib64" > /etc/ld.so.conf.d/cuda.conf
# Back out of the root shell, then:
sudo ldconfig
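To double-check which cuDNN actually ended up under /usr/local/cuda, a common trick is to read the CUDNN_MAJOR/MINOR/PATCHLEVEL defines out of cudnn.h. A small parser sketch; the header snippet below is a mock stand-in for the real file, and 7.0.5 is just an example patch level:

```python
import re

# Mock of the version defines found in /usr/local/cuda/include/cudnn.h.
header = """\
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 0
#define CUDNN_PATCHLEVEL 5
"""

def cudnn_version(text):
    """Parse '#define CUDNN_MAJOR/MINOR/PATCHLEVEL' into a dotted version string."""
    fields = dict(re.findall(r"#define CUDNN_(MAJOR|MINOR|PATCHLEVEL) (\d+)", text))
    return "{}.{}.{}".format(fields["MAJOR"], fields["MINOR"], fields["PATCHLEVEL"])

print(cudnn_version(header))  # -> 7.0.5
```

In practice you would read the real header, e.g. `open("/usr/local/cuda/include/cudnn.h").read()`, instead of the mock string.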

# Verify: if MNIST trains on the GPU without errors, everything is fine.
cd to/your/tensorflow/tensorflow/examples/tutorials/mnist
python mnist_deep.py

(Old) 2. Building and installing TensorFlow from source

# Install bazel.
wget https://github.com/bazelbuild/bazel/releases/download/0.13.0/bazel-0.13.0-installer-linux-x86_64.sh
chmod +x bazel-0.13.0-installer-linux-x86_64.sh
./bazel-0.13.0-installer-linux-x86_64.sh --user

# Configure tensorflow.
./configure
# Answer the prompts as below; see the "***" notes for where I deviated from the defaults.
You have bazel 0.13.0 installed.
Please specify the location of python. [Default is /home/foo/anaconda3/bin/python]:
Found possible Python library paths:
  /home/foo/lab/ext/tensorflow/models/research
  /home/foo/lab/ext/tensorflow/models/research/slim
  /home/foo/anaconda3/lib/python3.6/site-packages
Please input the desired Python library path to use.  Default is [/home/foo/lab/ext/tensorflow/models/research]
/home/foo/anaconda3/lib/python3.6/site-packages <=== *** because the default somehow comes out wrong
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: 
jemalloc as malloc support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: 
Google Cloud Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: 
Hadoop File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n <=== *** clearly not needed
No Amazon S3 File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n <=== *** clearly not needed
No Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]: 
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]: 
No GDR support will be enabled for TensorFlow.
Do you wish to build TensorFlow with VERBS support? [y/N]: 
No VERBS support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: 
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y <=== *** we want CUDA
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]:
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Do you wish to build TensorFlow with TensorRT support? [y/N]: 
No TensorRT support will be enabled for TensorFlow.
Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]
Do you want to use clang as CUDA compiler? [y/N]: 
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]: 
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: 
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl             # Build with MKL support.
    --config=monolithic      # Config for mostly static monolithic build.
Configuration finished
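For the compute-capability prompt above, you give a comma-separated list matching your cards; the error log earlier shows the GTX 1080 Ti reporting compute capability 6.1. A tiny helper with a hand-made lookup table (the table entries are examples I filled in, not an official list; check https://developer.nvidia.com/cuda-gpus for your actual card):

```python
# Hypothetical lookup table: a few example GPUs -> compute capability.
COMPUTE_CAPABILITY = {
    "GeForce GTX 1080 Ti": "6.1",
    "GeForce GTX 1080": "6.1",
    "Tesla K40": "3.5",
}

def configure_answer(gpus):
    """Build the comma-separated, deduplicated answer for the ./configure prompt."""
    caps = sorted({COMPUTE_CAPABILITY[g] for g in gpus})
    return ",".join(caps)

# Two identical cards still need only one entry:
print(configure_answer(["GeForce GTX 1080 Ti", "GeForce GTX 1080 Ti"]))  # -> 6.1
```

Building only for your own card's capability keeps the build time and binary size down, per the note in the prompt.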

# Build tensorflow; --config=cuda is specified just to be safe.
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
(Output at the end of the build:)
Starting local Bazel server and connecting to it...
......
    : (massive messages …)
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
  bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 4412.763s, Critical Path: 227.12s
INFO: 10224 processes, local.
INFO: Build completed successfully, 13154 total actions

During the build you may hit an error saying a CUDA library cannot be found. In that case the following fixed it for me. (It is already included in the CUDA install steps above, but forgetting it caused the error.)

# This part needs root.
(sudo bash) root$ echo "/usr/local/cuda/lib64" > /etc/ld.so.conf.d/cuda.conf
# Back out of the root shell, then:
sudo ldconfig

Once the build finishes, you can create a package and install it with pip.

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-1.8.0rc1-cp36-cp36m-linux_x86_64.whl

3. Building TensorBoard and a simple install

Simply installing TensorBoard with pip might be fine, but to keep its version matched to TensorFlow I built it from source as well; here are those notes.

git clone https://github.com/tensorflow/tensorboard.git
cd tensorboard/
bazel build //tensorboard
(Output at the end of the build:)
tensorflow/tensorboard$ bazel build //tensorboard
Starting local Bazel server and connecting to it...
......................................
         : (log messages here)
Target //tensorboard:tensorboard up-to-date:
  bazel-bin/tensorboard/tensorboard
INFO: Elapsed time: 326.553s, Critical Path: 187.92s
INFO: 619 processes: 456 linux-sandbox, 12 local, 151 worker.
INFO: Build completed successfully, 1268 total actions

If TensorFlow built and installed successfully, this build should go without trouble too. Unlike TensorFlow, however, there seems to be no standard way to package the result. As a workaround, you can create links at a location on your PATH, like this:

ln -s `pwd`/bazel-bin/tensorboard/tensorboard ~/bin
ln -s `pwd`/bazel-bin/tensorboard/tensorboard.runfiles ~/bin
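The two `ln -s` lines above just place links to the built binary and its runfiles somewhere on your PATH. The same idea in a short sketch, using temporary directories as stand-ins for bazel-bin and ~/bin so it is safe to run anywhere (all paths here are illustrative):

```python
import os
import tempfile

# Mimic: ln -s $PWD/bazel-bin/tensorboard/tensorboard ~/bin
build_dir = tempfile.mkdtemp()  # stands in for bazel-bin/tensorboard
bin_dir = tempfile.mkdtemp()    # stands in for ~/bin

target = os.path.join(build_dir, "tensorboard")
open(target, "w").close()       # fake built binary

link = os.path.join(bin_dir, "tensorboard")
os.symlink(target, link)        # the link now resolves to the "binary"

print(os.path.realpath(link) == os.path.realpath(target))  # -> True
```

Note that the `tensorboard.runfiles` link must sit next to the binary link, which is why both `ln -s` lines point into the same directory.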

Closing remarks

With this, you can run TensorFlow tuned to your own machine's environment.

Also, when working with TensorFlow you sometimes need to convert checkpoints to .pb files, or freeze graphs for use in production, and that requires building tools with bazel.

Building from source puts those prerequisites in place as well, so heavy users may be better off building it themselves.
