AWS
CUDA
DeepLearning
Deep Learning
TensorFlow

TensorFlow 1.4.0 + CUDA 9.0 + cuDNN 7.0.3 [Deep Learning with TensorFlow 17]

(Table of contents here)

Introduction

In September 2017, GCP finally made the Tesla P100 available:
https://cloudplatform.googleblog.com/2017/09/introducing-faster-GPUs-for-Google-Compute-Engine.html
A month later, in October, AWS introduced the Tesla V100:
https://aws.amazon.com/about-aws/whats-new/2017/10/introducing-amazon-ec2-p3-instances/

Using the Tesla V100 requires CUDA 9.0, which in turn means TensorFlow has to be rebuilt from source. This post is a record of that process.

Environment

  • Ubuntu 16.04
  • Python 3.5.2
  • TensorFlow 1.4.0
  • CUDA 9.0
  • cuDNN 7.0.3
  • Bazel 0.7.0

Installing CUDA

Download
cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
from https://developer.nvidia.com/cuda-downloads, then:

$ sudo dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get install cuda
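After installing, you'll typically want the CUDA binaries and libraries on your paths; a sketch of the usual additions to ~/.bashrc, assuming the toolkit landed in the default /usr/local/cuda prefix:

```shell
# Assumes the default /usr/local/cuda install prefix.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```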

Installing cuDNN

Download
libcudnn7_7.0.3.11-1+cuda9.0_amd64.deb
libcudnn7-dev_7.0.3.11-1+cuda9.0_amd64.deb
from https://developer.nvidia.com/cudnn, then:

$ sudo dpkg -i libcudnn7_7.0.3.11-1+cuda9.0_amd64.deb libcudnn7-dev_7.0.3.11-1+cuda9.0_amd64.deb

Installing Bazel

https://docs.bazel.build/versions/master/install-ubuntu.html
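That page also has you register Bazel's apt repository and key before the install below; roughly as follows, per the Bazel install docs of the time (repository details may since have changed):

```shell
# Repo setup per the Bazel docs circa 2017 -- details may have changed.
sudo apt-get install -y openjdk-8-jdk curl
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" \
  | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
```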

$ sudo apt-get update && sudo apt-get install -y bazel

At the time of writing, this installs v0.7.0.

Building TensorFlow

Which compute capabilities should we target?

GPU          AWS Instance Type    Compute Capability
Tesla M60    g3                   5.2
Tesla K80    p2                   3.7
Tesla V100   p3                   7.0

No way around it; let's target all three for now.
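The configure step below expects these as a single comma-separated string; a trivial illustration of assembling it (the dictionary here is just a label, not any API):

```python
# Illustrative only: map each target GPU to its CUDA compute capability
# and build the comma-separated string that ./configure asks for.
targets = {
    "Tesla K80 (p2)": "3.7",
    "Tesla M60 (g3)": "5.2",
    "Tesla V100 (p3)": "7.0",
}
cc_list = ",".join(sorted(targets.values()))
print(cc_list)  # -> 3.7,5.2,7.0
```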

Configure

$ git clone git@github.com:tensorflow/tensorflow.git -b r1.4
$ cd tensorflow
$ ./configure
You have bazel 0.7.0 installed.
...
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]: 9.0
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]: 7.0.3
Please specify the location where cuDNN 7.0.3 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]3.7,5.2,7.0
...
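The interactive prompts can also be answered up front via environment variables read by r1.4's configure script; a non-interactive sketch (treat the exact variable names as an assumption to verify against your checkout):

```shell
# Non-interactive configure sketch; variable names as understood from
# TensorFlow r1.4's configure script -- verify against your checkout.
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=9.0
export TF_CUDNN_VERSION=7
export TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2,7.0
export CUDA_TOOLKIT_PATH=/usr/local/cuda
export CUDNN_INSTALL_PATH=/usr/local/cuda
./configure
```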

Build

$ bazel build -c opt --copt=-march="haswell" --config=cuda //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ ls /tmp/tensorflow_pkg
tensorflow-1.4.0-cp35-cp35m-linux_x86_64.whl
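The wheel's filename encodes what it was built for (PEP 427): cp35 means CPython 3.5, so the interpreter you install it into must match. A quick parse of the name above:

```python
# Wheel filename fields per PEP 427: name-version-python_tag-abi_tag-platform.
wheel = "tensorflow-1.4.0-cp35-cp35m-linux_x86_64.whl"
name, version, py_tag, abi_tag, platform = wheel[:-len(".whl")].split("-")
print(name, version, py_tag, platform)  # tensorflow 1.4.0 cp35 linux_x86_64
```

With Python 3.5's pip, installation is then just: pip3 install /tmp/tensorflow_pkg/tensorflow-1.4.0-cp35-cp35m-linux_x86_64.whl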

Incidentally, there's no need to run the build itself on a pricey GPU instance.

Prebuilt package

If you'd rather not build it yourself, feel free to use this:
https://drive.google.com/file/d/1I8dJQc0PCRHv_fWzFVPumI0zN2gbVLuv/view?usp=sharing

Even if you don't use a GPU, building from source makes the messages below disappear and brings a decent CPU speedup.

The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.