Google Colaboratory is now on CUDA 9.2

Posted at 2018-10-05

What I want to say is exactly what the title says.

  • For the last few days I keep getting connected to Ubuntu 18 / CUDA 9.2 environments
  • and Chainer has now officially added support for it

!curl https://colab.chainer.org/install | sh -
  • That alone is enough; right after it you can already confirm that the GPU is enabled
import chainer
print('GPU availability:', chainer.cuda.available)
print('cuDNN availability:', chainer.cuda.cudnn_enabled)
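
As an extra check (my own minimal sketch, not part of the official instructions), you can run one tiny forward pass on the GPU to confirm the whole stack works end to end; the `L.Linear(3, 2)` layer and the input shape are arbitrary:

import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

model = L.Linear(3, 2)                 # tiny layer, just for the smoke test
model.to_gpu()                         # move its parameters to GPU 0
x = chainer.cuda.to_gpu(np.ones((1, 3), dtype=np.float32))
with chainer.no_backprop_mode():
    y = F.relu(model(x))               # actually runs a GPU kernel
print(type(y.data))                    # should be a cupy ndarray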

That's all.

This article became outdated almost immediately; you do not need to read anything below.

Checking the versions

os
cat /etc/issue
os_output
Ubuntu 18.04.1 LTS \n \l
driver
ls -l /usr/lib/x86_64-linux-gnu/libcuda*
driver_output
lrwxrwxrwx 1 root root       12 Aug 21 18:16 /usr/lib/x86_64-linux-gnu/libcuda.so -> libcuda.so.1
lrwxrwxrwx 1 root root       17 Aug 21 18:16 /usr/lib/x86_64-linux-gnu/libcuda.so.1 -> libcuda.so.396.54
-rw-r--r-- 1 root root 14074232 Aug 15 06:17 /usr/lib/x86_64-linux-gnu/libcuda.so.396.54
cuda
ls -l /usr/local/cuda
cuda_output
lrwxrwxrwx 1 root root 8 Sep 28 22:08 /usr/local/cuda -> cuda-9.2/
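
The same information can be pulled from inside the notebook; a small convenience sketch of my own that just resolves the symlinks shown above:

import glob
import os
import platform

print(platform.platform())                            # which OS the VM runs
print(os.path.realpath('/usr/local/cuda'))            # which toolkit /usr/local/cuda points at
for p in sorted(glob.glob('/usr/lib/x86_64-linux-gnu/libcuda.so*')):
    print(p, '->', os.path.realpath(p))               # driver library version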

If you are using Chainer (or rather CuPy), update your setup incantation

Thankfully, the instructions in kmaehashi's repository have been updated (apparently on 10/4)

install cupy
!apt -y -q install cuda-libraries-dev-9-2
!pip install -q cupy-cuda92 chainer
check_gpu_from_chainer
import chainer
print('GPU availability:', chainer.cuda.available)
print('cuDNN availability:', chainer.cuda.cudnn_enabled)
check_gpu_from_chainer_output
GPU availability: True
cuDNN availability: True
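
In hindsight, chainer.cuda.available only tells you that CuPy imports and sees a device; it does not exercise kernel compilation. A slightly stronger check (my own sketch, assuming the cupy-cuda92 wheel installed above) would have surfaced the compile error shown in the next section:

import cupy

print('CUDA runtime linked into CuPy:', cupy.cuda.runtime.runtimeGetVersion())  # e.g. 9020 for CUDA 9.2
print('CUDA driver on this VM       :', cupy.cuda.runtime.driverGetVersion())

x = cupy.arange(6, dtype=cupy.float32).reshape(2, 3)
y = cupy.maximum(x, 1.0)    # an elementwise op forces NVRTC to compile a kernel
print(y.sum())              # if this prints, kernel compilation works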

Addendum: apparently the steps above are not enough

error on usi's go command
NVRTCError                                Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/cupy/cuda/compiler.py in compile(self, options)
    240         try:
--> 241             nvrtc.compileProgram(self.ptr, options)
    242             return nvrtc.getPTX(self.ptr)

cupy/cuda/nvrtc.pyx in cupy.cuda.nvrtc.compileProgram()

cupy/cuda/nvrtc.pyx in cupy.cuda.nvrtc.compileProgram()

cupy/cuda/nvrtc.pyx in cupy.cuda.nvrtc.check_status()

NVRTCError: NVRTC_ERROR_COMPILATION (6)

During handling of the above exception, another exception occurred:

CompileException                          Traceback (most recent call last)
<ipython-input-24-3641f643e1de> in <module>()
      3 
      4 player = PolicyPlayer()
----> 5 usi(player)

/content/drive/My Drive/python-dlshogi/pydlshogi/usi/usi.py in usi(player)
     17             player.position(moves)
     18         elif cmd[0] == 'go':
---> 19             player.go()
     20         elif cmd[0] == 'quit':
     21             player.quit()

/content/drive/My Drive/python-dlshogi/pydlshogi/player/policy_player.py in go(self)
     53 
     54         with chainer.no_backprop_mode():
---> 55             y = self.model(x)
     56 
     57             logits = cuda.to_cpu(y.data)[0]

/content/drive/My Drive/python-dlshogi/pydlshogi/network/policy.py in __call__(self, x)
     26 
     27     def __call__(self, x):
---> 28         h1 = F.relu(self.l1(x))
     29         h2 = F.relu(self.l2(h1))
     30         h3 = F.relu(self.l3(h2))

/usr/local/lib/python3.6/dist-packages/chainer/functions/activation/relu.py in relu(x)
    175 
    176     """
--> 177     y, = ReLU().apply((x,))
    178     return y

/usr/local/lib/python3.6/dist-packages/chainer/function_node.py in apply(self, inputs)
    261                 outputs = static_forward_optimizations(self, in_data)
    262             else:
--> 263                 outputs = self.forward(in_data)
    264 
    265         # Check for output array types

/usr/local/lib/python3.6/dist-packages/chainer/function_node.py in forward(self, inputs)
    367         assert len(inputs) > 0
    368         if isinstance(inputs[0], cuda.ndarray):
--> 369             return self.forward_gpu(inputs)
    370         return self.forward_cpu(inputs)
    371 

/usr/local/lib/python3.6/dist-packages/chainer/functions/activation/relu.py in forward_gpu(self, inputs)
     57             y = cudnn.activation_forward(x, _mode)
     58         else:
---> 59             y = cuda.cupy.maximum(x, 0)
     60         self.retain_outputs((0,))
     61         return y,

cupy/core/_kernel.pyx in cupy.core._kernel.ufunc.__call__()

cupy/util.pyx in cupy.util.memoize.decorator.ret()

cupy/core/_kernel.pyx in cupy.core._kernel._get_ufunc_kernel()

cupy/core/_kernel.pyx in cupy.core._kernel._get_simple_elementwise_kernel()

cupy/core/carray.pxi in cupy.core.core.compile_with_cache()

/usr/local/lib/python3.6/dist-packages/cupy/cuda/compiler.py in compile_with_cache(source, options, arch, cache_dir, extra_source)
    162                 return mod
    163 
--> 164     ptx = compile_using_nvrtc(source, options, arch)
    165     ls = function.LinkState()
    166     ls.add_ptr_data(ptx, six.u('cupy.ptx'))

/usr/local/lib/python3.6/dist-packages/cupy/cuda/compiler.py in compile_using_nvrtc(source, options, arch)
     80         prog = _NVRTCProgram(source, cu_path)
     81         try:
---> 82             ptx = prog.compile(options)
     83         except CompileException as e:
     84             dump = _get_bool_env_variable(

/usr/local/lib/python3.6/dist-packages/cupy/cuda/compiler.py in compile(self, options)
    243         except nvrtc.NVRTCError:
    244             log = nvrtc.getProgramLog(self.ptr)
--> 245             raise CompileException(log, self.src, self.name, options)
    246 
    247 

CompileException: /usr/local/lib/python3.6/dist-packages/cupy/core/include/cupy/carray.cuh(10): catastrophic error: cannot open source file "cuda_fp16.h"

1 catastrophic error detected in the compilation of "/tmp/tmpbymobvig/kern.cu".
Compilation terminated.
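
The catastrophic error means NVRTC could not find cuda_fp16.h under the include path CuPy passes to it: either the CUDA 9.2 development headers are missing on that VM, or CuPy is looking at the wrong CUDA installation. A quick diagnostic of my own (paths assume the /usr/local/cuda symlink shown earlier):

import os

include_dir = os.path.join(os.path.realpath('/usr/local/cuda'), 'include')
header = os.path.join(include_dir, 'cuda_fp16.h')
print(include_dir, 'exists:', os.path.isdir(include_dir))
print(header, 'exists:', os.path.isfile(header))
# If the header is missing, the apt command above did not install the full set
# of CUDA 9.2 development headers on this particular VM.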

Addendum to the addendum

  • They got me!
  • After resetting the runtime a few times,
  • I got connected to an Ubuntu 17 / CUDA 8.0 environment
  • If the environment is that unstable, there is no way to guarantee library version compatibility, so please stop!
  • It's free, though, so I can't complain too loudly

Addendum to the addendum to the addendum

  • Since then I have never been connected to an Ubuntu 18 environment again, so I have not been able to confirm whether the shogi AI actually runs after changing the incantation (the version-detection sketch below is only an untested idea).
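
Since the backend kept switching between CUDA 8.0 and 9.2, one defensive idea (my own untested sketch, assuming /usr/local/cuda points at a versioned directory as in the listing above) is to detect the toolkit version first and then install the matching prebuilt wheel:

import os

cuda_dir = os.path.realpath('/usr/local/cuda')        # e.g. /usr/local/cuda-9.2
version = cuda_dir.rsplit('-', 1)[-1]                 # '9.2' or '8.0'
wheel = 'cupy-cuda' + version.replace('.', '')        # 'cupy-cuda92' or 'cupy-cuda80'
print('install this wheel:', wheel)                   # then: !pip install -q <wheel> chainer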

That's all.
