Building XLA for CUDA (unfinished)

Posted at 2022-10-09

The build fails with errors, so this is not working yet.

Environment

Windows 11 + WSL2
Ubuntu 20.04 LTS
GPU: RTX 3060

The vGPU driver and the cuda package are installed in WSL2 (details).

For the cuda package I tried with both 11.8.0-1 and 11.1.0-1 installed, and got the same result either way.

$ apt -a  list cuda 
Listing... Done
cuda/unknown 11.8.0-1 amd64 [upgradable from: 11.1.0-1]
cuda/unknown 11.7.1-1 amd64
cuda/unknown 11.7.0-1 amd64
cuda/unknown 11.6.2-1 amd64
cuda/unknown 11.6.1-1 amd64
cuda/unknown 11.6.0-1 amd64
cuda/unknown 11.5.2-1 amd64
cuda/unknown 11.5.1-1 amd64
cuda/unknown 11.5.0-1 amd64
cuda/unknown 11.4.4-1 amd64
cuda/unknown 11.4.3-1 amd64
cuda/unknown 11.4.2-1 amd64
cuda/unknown 11.4.1-1 amd64
cuda/unknown 11.4.0-1 amd64
cuda/unknown 11.3.1-1 amd64
cuda/unknown 11.3.0-1 amd64
cuda/unknown 11.2.2-1 amd64
cuda/unknown 11.2.1-1 amd64
cuda/unknown 11.2.0-1 amd64
cuda/unknown 11.1.1-1 amd64
cuda/unknown,now 11.1.0-1 amd64 [installed,upgradable to: 11.8.0-1]
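
Before building, it is worth confirming that the toolkit installed by the cuda package is actually visible. A minimal sanity check (a sketch; nvcc is only found if /usr/local/cuda/bin is on the PATH):

# Driver visible through the WSL2 vGPU path
nvidia-smi
# Toolkit version (requires /usr/local/cuda/bin on PATH)
nvcc --version
# Bazel's CUDA autoconf looks under /usr/local/cuda by default
ls /usr/local/cuda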

Building XLA

Proceed according to the XLA documentation.

asdf plugin-add bazel
asdf install bazel 4.2.1
asdf global bazel 4.2.1
asdf plugin-add python
asdf install python 3.10.7
asdf global python 3.10.7
pip install NumPy

The XLA documentation says "note the build process looks for python, not python3",
so first confirm that the python command launches Python 3.

$ python
Python 3.10.7 (main, Oct  9 2022, 09:18:14) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 
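
The Bazel installed through asdf can be checked the same way (a quick sanity check that the shim resolves to 4.2.1, the version installed above):

# Should report the asdf-installed version (4.2.1)
bazel --version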

Run the following commands:

export XLA_BUILD=true
export XLA_TARGET=cuda
mix deps.clean --all
mix deps.get

An error occurs:

$ iex -S mix
Erlang/OTP 25 [erts-13.0.4] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1] [jit:ns]

==> xla
Compiling 2 files (.ex)
Generated xla app
mkdir -p /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
        cd /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
        git init && \
        git remote add origin https://github.com/tensorflow/tensorflow.git && \
        git fetch --depth 1 origin 3f878cff5b698b82eea85db2b60d65a2e320850e && \
        git checkout FETCH_HEAD && \
        rm /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelversion
Initialized empty Git repository in /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.git/
From https://github.com/tensorflow/tensorflow
 * branch              3f878cff5b698b82eea85db2b60d65a2e320850e -> FETCH_HEAD
Note: switching to 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 3f878cff Merge pull request #54226 from tensorflow-jenkins/version-numbers-2.8.0-22199
rm -f /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/compiler/xla/extension && \
        ln -s "/home/masa/deeplearning4/4.2/gridworld/deps/xla/extension" /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/compiler/xla/extension && \
        cd /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
        bazel build --define "framework_shared_object=false" -c opt   --config=cuda //tensorflow/compiler/xla/extension:xla_extension && \
        mkdir -p /home/masa/.cache/xla/0.3.0/cache/build/ && \
        cp -f /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/bazel-bin/tensorflow/compiler/xla/extension/xla_extension.tar.gz /home/masa/.cache/xla/0.3.0/cache/build/xla_extension-x86_64-linux-cuda.tar.gz
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=0 --terminal_columns=80
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
  'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:linux in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository local_config_cuda instantiated at:
  /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/WORKSPACE:15:14: in <toplevel>
  /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:878:19: in workspace
  /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:96:19: in _tf_toolchains
Repository rule cuda_configure defined at:
  /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl:1448:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
   Traceback (most recent call last):
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
                _create_local_cuda_repository(repository_ctx)
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 978, column 35, in _create_local_cuda_repository
                cuda_config = _get_cuda_config(repository_ctx, find_cuda_config_script)
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 666, column 30, in _get_cuda_config
                config = find_cuda_config(repository_ctx, find_cuda_config_script, ["cuda", "cudnn"])
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 643, column 41, in find_cuda_config
                exec_result = _exec_find_cuda_config(repository_ctx, script_path, cuda_libraries)
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 637, column 19, in _exec_find_cuda_config
                return execute(repository_ctx, [python_bin, "-c", decompress_and_execute_cmd])
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
                fail(
Error in fail: Repository command failed
Could not find any cudnn.h, cudnn_version.h matching version '' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:
        '/lib'
        '/lib/x86_64-linux-gnu'
        '/usr'
        '/usr/lib/wsl/lib'
        '/usr/lib/x86_64-linux-gnu/libfakeroot'
        '/usr/local/cuda'
        '/usr/local/cuda/targets/x86_64-linux/lib'
ERROR: Error fetching repository: Traceback (most recent call last):
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
                _create_local_cuda_repository(repository_ctx)
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 978, column 35, in _create_local_cuda_repository
                cuda_config = _get_cuda_config(repository_ctx, find_cuda_config_script)
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 666, column 30, in _get_cuda_config
                config = find_cuda_config(repository_ctx, find_cuda_config_script, ["cuda", "cudnn"])
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 643, column 41, in find_cuda_config
                exec_result = _exec_find_cuda_config(repository_ctx, script_path, cuda_libraries)
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 637, column 19, in _exec_find_cuda_config
                return execute(repository_ctx, [python_bin, "-c", decompress_and_execute_cmd])
        File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
                fail(
Error in fail: Repository command failed
Could not find any cudnn.h, cudnn_version.h matching version '' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:
        '/lib'
        '/lib/x86_64-linux-gnu'
        '/usr'
        '/usr/lib/wsl/lib'
        '/usr/lib/x86_64-linux-gnu/libfakeroot'
        '/usr/local/cuda'
        '/usr/local/cuda/targets/x86_64-linux/lib'
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Could not find any cudnn.h, cudnn_version.h matching version '' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:
        '/lib'
        '/lib/x86_64-linux-gnu'
        '/usr'
        '/usr/lib/wsl/lib'
        '/usr/lib/x86_64-linux-gnu/libfakeroot'
        '/usr/local/cuda'
        '/usr/local/cuda/targets/x86_64-linux/lib'

make: *** [Makefile:27: /home/masa/.cache/xla/0.3.0/cache/build/xla_extension-x86_64-linux-cuda.tar.gz] Error 2
could not compile dependency :xla, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile xla", update it with "mix deps.update xla" or clean it with "mix deps.clean xla"
==> gridworld
** (Mix) Could not compile with "make" (exit status: 2).
You need to have gcc and make installed. If you are using
Ubuntu or any other Debian-based system, install the packages
"build-essential". Also install "erlang-dev" package if not
included in your Erlang/OTP version. If you're on Fedora, run
"dnf group install 'Development Tools'".

As @zacky1972 pointed out in a comment, the cause seems to be that cuDNN is not installed.

Searching around, I found people pointing out that cuDNN cannot be installed on WSL.
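
This can be confirmed directly: the Bazel rule is looking for cudnn.h / cudnn_version.h under the directories listed in the error message. A quick check with standard tools (a sketch):

# Look for cuDNN headers under the paths the error message lists
find /usr /usr/local/cuda -name 'cudnn*.h' 2>/dev/null
# Check whether the shared library or the Debian package is present
ldconfig -p | grep -i cudnn
dpkg -l | grep -i cudnn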

Looking at NVIDIA's repositories:
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/
has libcudnn8, but the WSL repository
https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/
does not.
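
One possible workaround would be to pull only the cuDNN packages from the ubuntu2004 repository and install them by hand, without switching the configured repository. A sketch (the .deb file names below are placeholders; the real names and versions have to be taken from the directory listing above):

# Placeholders: pick the actual file names from the ubuntu2004 repo index
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/libcudnn8_<version>_amd64.deb
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/libcudnn8-dev_<version>_amd64.deb
sudo dpkg -i ./libcudnn8_<version>_amd64.deb ./libcudnn8-dev_<version>_amd64.deb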

So, what if I install cuda from the ubuntu2004 repository and just build the XLA extension for now?

I tried the installation in a Docker environment on WSL2.

$ sudo apt-get -y install cuda
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 cuda-drivers-520 : Depends: nvidia-settings (>= 520.61.05) but it is not installable
W: Target Packages (Packages) is configured multiple times in /etc/apt/sources.list:7 and /etc/apt/sources.list.d/cuda-ubuntu2004-x86_64.list:1
E: Unable to correct problems, you have held broken packages.

Does this mean it fails because the GPU is not visible, so nvidia-settings cannot be installed?
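
The unmet dependency is on the driver side (cuda-drivers-520 wants nvidia-settings), and inside WSL2 or a container the driver is supplied by the Windows host anyway, so installing only the toolkit metapackage instead of the full cuda metapackage might get around this. A sketch, assuming the ubuntu2004 repository from the attempt above is already configured (note that cuDNN itself is still a separate package, libcudnn8, as above):

# cuda-toolkit-11-8 does not depend on cuda-drivers / nvidia-settings,
# so it should install without a driver inside the container
sudo apt-get -y install cuda-toolkit-11-8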
