Since the XLA build simply would not succeed in my WSL2 environment, I tried it on plain Ubuntu 20.04 instead.
The result: the build still failed, erroring out while building TensorFlow.
Install the CUDA packages following NVIDIA's documentation
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda
Reboot the server.
After rebooting, confirm with nvidia-smi that the GPU is detected
$ nvidia-smi
Sat Oct 15 15:04:56 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 Off | N/A |
| 0% 40C P8 17W / 170W | 749MiB / 12288MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1145 G /usr/lib/xorg/Xorg 150MiB |
| 0 N/A N/A 1710 G /usr/lib/xorg/Xorg 391MiB |
| 0 N/A N/A 1860 G /usr/bin/gnome-shell 195MiB |
+-----------------------------------------------------------------------------+
Install asdf
sudo apt -y install git
sudo apt -y install curl
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.10.2
echo ". $HOME/.asdf/asdf.sh" >> ~/.bashrc
Reopen the terminal
asdf plugin add erlang
asdf plugin add elixir
sudo apt-get -y install build-essential autoconf m4 libncurses5-dev libwxgtk3.0-gtk3-dev libwxgtk-webview3.0-gtk3-dev libgl1-mesa-dev libglu1-mesa-dev libpng-dev libssh-dev unixodbc-dev xsltproc fop libxml2-utils libncurses-dev openjdk-11-jdk
asdf install erlang 25.1.1
asdf install elixir 1.14.0-otp-25
asdf global erlang 25.1.1
asdf global elixir 1.14.0-otp-25
Prepare the environment needed to build XLA
asdf plugin add bazel
asdf install bazel 4.2.1
asdf global bazel 4.2.1
asdf plugin add python
asdf install python 3.10.7
asdf global python 3.10.7
pip install numpy
Prepare a Mix project with EXLA included, then run the build (a sketch of the assumed dependencies appears after the commands below)
export XLA_BUILD=true
export XLA_TARGET=cuda
mix deps.clean --all
mix deps.get
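For reference, the mix.exs of the nxtest project is not shown in the log. This is only a minimal sketch of what the dependencies are assumed to look like (an Nx/EXLA 0.3-series setup, matching the xla 0.3.0 cache path that appears in the build log):

defmodule Nxtest.MixProject do
  use Mix.Project

  def project do
    [
      app: :nxtest,
      version: "0.1.0",
      elixir: "~> 1.14",
      deps: deps()
    ]
  end

  def application do
    [extra_applications: [:logger]]
  end

  defp deps do
    [
      # Nx provides tensors; EXLA pulls in :xla and :elixir_make,
      # which build XLA from source because XLA_BUILD=true and
      # XLA_TARGET=cuda are exported above.
      {:nx, "~> 0.3"},
      {:exla, "~> 0.3"}
    ]
  end
end

With those environment variables set, fetching the deps and compiling the project is what triggers the from-source XLA build shown below.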
Try compiling the Nx program
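The Nx program itself is also not shown in the log. Purely as a hypothetical illustration of the kind of code this project is meant to compile, a tiny numerical definition (the softmax example from the Nx docs) might look like this:

defmodule Nxtest do
  import Nx.Defn

  # A numerical definition that EXLA would JIT-compile for the GPU
  # once the xla dependency builds successfully.
  defn softmax(t) do
    Nx.exp(t) / Nx.sum(Nx.exp(t))
  end
end

Starting iex -S mix then compiles the project and its dependencies, which is where the XLA build kicks in: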
$ iex -S mix
Erlang/OTP 25 [erts-13.1.1] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1] [jit:ns]
==> complex
Compiling 2 files (.ex)
Generated complex app
==> nx
Compiling 24 files (.ex)
Generated nx app
==> elixir_make
Compiling 1 file (.ex)
Generated elixir_make app
==> xla
Compiling 2 files (.ex)
Generated xla app
mkdir -p /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
cd /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
git init && \
git remote add origin https://github.com/tensorflow/tensorflow.git && \
git fetch --depth 1 origin 3f878cff5b698b82eea85db2b60d65a2e320850e && \
git checkout FETCH_HEAD && \
rm /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelversion
Initialized empty Git repository in /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.git/
From https://github.com/tensorflow/tensorflow
* branch 3f878cff5b698b82eea85db2b60d65a2e320850e -> FETCH_HEAD
Note: switching to 'FETCH_HEAD'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at 3f878cff Merge pull request #54226 from tensorflow-jenkins/version-numbers-2.8.0-22199
rm -f /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/compiler/xla/extension && \
ln -s "/home/masa/elixir/nxtest/deps/xla/extension" /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/compiler/xla/extension && \
cd /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
bazel build --define "framework_shared_object=false" -c opt --config=cuda //tensorflow/compiler/xla/extension:xla_extension && \
mkdir -p /home/masa/.cache/xla/0.3.0/cache/build/ && \
cp -f /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/bazel-bin/tensorflow/compiler/xla/extension/xla_extension.tar.gz /home/masa/.cache/xla/0.3.0/cache/build/xla_extension-x86_64-linux-cuda.tar.gz
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
Inherited 'common' options: --isatty=0 --terminal_columns=80
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:linux in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/c3e082762b7664bbc7ffd2c39e86464928e27c0c.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
INFO: Repository local_config_cuda instantiated at:
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/WORKSPACE:15:14: in <toplevel>
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:878:19: in workspace
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:96:19: in _tf_toolchains
Repository rule cuda_configure defined at:
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl:1448:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
Traceback (most recent call last):
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 978, column 35, in _create_local_cuda_repository
cuda_config = _get_cuda_config(repository_ctx, find_cuda_config_script)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 666, column 30, in _get_cuda_config
config = find_cuda_config(repository_ctx, find_cuda_config_script, ["cuda", "cudnn"])
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 643, column 41, in find_cuda_config
exec_result = _exec_find_cuda_config(repository_ctx, script_path, cuda_libraries)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 637, column 19, in _exec_find_cuda_config
return execute(repository_ctx, [python_bin, "-c", decompress_and_execute_cmd])
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Could not find any cudnn.h, cudnn_version.h matching version '' in any subdirectory:
''
'include'
'include/cuda'
'include/*-linux-gnu'
'extras/CUPTI/include'
'include/cuda/CUPTI'
of:
'/lib'
'/lib/i386-linux-gnu'
'/lib/x86_64-linux-gnu'
'/usr'
'/usr/lib/x86_64-linux-gnu/libfakeroot'
'/usr/local/cuda'
'/usr/local/cuda/targets/x86_64-linux/lib'
ERROR: Error fetching repository: Traceback (most recent call last):
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 978, column 35, in _create_local_cuda_repository
cuda_config = _get_cuda_config(repository_ctx, find_cuda_config_script)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 666, column 30, in _get_cuda_config
config = find_cuda_config(repository_ctx, find_cuda_config_script, ["cuda", "cudnn"])
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 643, column 41, in find_cuda_config
exec_result = _exec_find_cuda_config(repository_ctx, script_path, cuda_libraries)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 637, column 19, in _exec_find_cuda_config
return execute(repository_ctx, [python_bin, "-c", decompress_and_execute_cmd])
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Could not find any cudnn.h, cudnn_version.h matching version '' in any subdirectory:
''
'include'
'include/cuda'
'include/*-linux-gnu'
'extras/CUPTI/include'
'include/cuda/CUPTI'
of:
'/lib'
'/lib/i386-linux-gnu'
'/lib/x86_64-linux-gnu'
'/usr'
'/usr/lib/x86_64-linux-gnu/libfakeroot'
'/usr/local/cuda'
'/usr/local/cuda/targets/x86_64-linux/lib'
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Could not find any cudnn.h, cudnn_version.h matching version '' in any subdirectory:
''
'include'
'include/cuda'
'include/*-linux-gnu'
'extras/CUPTI/include'
'include/cuda/CUPTI'
of:
'/lib'
'/lib/i386-linux-gnu'
'/lib/x86_64-linux-gnu'
'/usr'
'/usr/lib/x86_64-linux-gnu/libfakeroot'
'/usr/local/cuda'
'/usr/local/cuda/targets/x86_64-linux/lib'
make: *** [Makefile:27: /home/masa/.cache/xla/0.3.0/cache/build/xla_extension-x86_64-linux-cuda.tar.gz] Error 2
could not compile dependency :xla, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile xla", update it with "mix deps.update xla" or clean it with "mix deps.clean xla"
==> nxtest
** (Mix) Could not compile with "make" (exit status: 2).
You need to have gcc and make installed. If you are using
Ubuntu or any other Debian-based system, install the packages
"build-essential". Also install "erlang-dev" package if not
included in your Erlang/OTP version. If you're on Fedora, run
"dnf group install 'Development Tools'".
Install cuDNN (the error above complains that no cudnn.h / cudnn_version.h could be found)
cudnn_version="8.6.0.*"
cuda_version="cuda11.8"
sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}
sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}
Try compiling again (the log below starts with the tail of the libcudnn8-dev installation, followed by iex -S mix)
masa@masa-HP-ProDesk-405-G8-Small-Form-Factor-PC:~/elixir/nxtest$ sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}
Reading package lists... Done
Building dependency tree
Reading state information... Done
Selected version '8.6.0.163-1+cuda11.8' (NVIDIA CUDA:developer.download.nvidia.com [amd64]) for 'libcudnn8-dev'
The following NEW packages will be installed:
libcudnn8-dev
0 upgraded, 1 newly installed, 0 to remove and 59 not upgraded.
Need to get 437 MB of archives.
After this operation, 1,365 MB of additional disk space will be used.
Get:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 libcudnn8-dev 8.6.0.163-1+cuda11.8 [437 MB]
Fetched 437 MB in 11s (40.7 MB/s)
Selecting previously unselected package libcudnn8-dev.
(Reading database ... 176731 files and directories currently installed.)
Preparing to unpack .../libcudnn8-dev_8.6.0.163-1+cuda11.8_amd64.deb ...
Unpacking libcudnn8-dev (8.6.0.163-1+cuda11.8) ...
Setting up libcudnn8-dev (8.6.0.163-1+cuda11.8) ...
update-alternatives: using /usr/include/x86_64-linux-gnu/cudnn_v8.h to provide /usr/include/cudnn.h (libcudnn) in auto mode
masa@masa-HP-ProDesk-405-G8-Small-Form-Factor-PC:~/elixir/nxtest$ iex -S mix
Erlang/OTP 25 [erts-13.1.1] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1] [jit:ns]
==> xla
Compiling 2 files (.ex)
Generated xla app
rm -f /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/compiler/xla/extension && \
ln -s "/home/masa/elixir/nxtest/deps/xla/extension" /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/compiler/xla/extension && \
cd /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e && \
bazel build --define "framework_shared_object=false" -c opt --config=cuda //tensorflow/compiler/xla/extension:xla_extension && \
mkdir -p /home/masa/.cache/xla/0.3.0/cache/build/ && \
cp -f /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/bazel-bin/tensorflow/compiler/xla/extension/xla_extension.tar.gz /home/masa/.cache/xla/0.3.0/cache/build/xla_extension-x86_64-linux-cuda.tar.gz
INFO: Options provided by the client:
Inherited 'common' options: --isatty=0 --terminal_columns=80
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:linux in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository local_config_cuda instantiated at:
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/WORKSPACE:15:14: in <toplevel>
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:878:19: in workspace
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:96:19: in _tf_toolchains
Repository rule cuda_configure defined at:
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl:1448:33: in <toplevel>
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/c3e082762b7664bbc7ffd2c39e86464928e27c0c.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
Traceback (most recent call last):
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1076, column 27, in _create_local_cuda_repository
cuda_libs = _find_libs(repository_ctx, check_cuda_libs_script, cuda_config)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 606, column 21, in _find_libs
_check_cuda_libs(repository_ctx, check_cuda_libs_script, check_cuda_libs_params.values())
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 501, column 28, in _check_cuda_libs
checked_paths = execute(repository_ctx, [python_bin, "-c", cmd]).stdout.splitlines()
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Expected even number of arguments
ERROR: Error fetching repository: Traceback (most recent call last):
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1076, column 27, in _create_local_cuda_repository
cuda_libs = _find_libs(repository_ctx, check_cuda_libs_script, cuda_config)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 606, column 21, in _find_libs
_check_cuda_libs(repository_ctx, check_cuda_libs_script, check_cuda_libs_params.values())
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 501, column 28, in _check_cuda_libs
checked_paths = execute(repository_ctx, [python_bin, "-c", cmd]).stdout.splitlines()
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Expected even number of arguments
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Expected even number of arguments
make: *** [Makefile:27: /home/masa/.cache/xla/0.3.0/cache/build/xla_extension-x86_64-linux-cuda.tar.gz] Error 2
could not compile dependency :xla, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile xla", update it with "mix deps.update xla" or clean it with "mix deps.clean xla"
==> nxtest
** (Mix) Could not compile with "make" (exit status: 2).
You need to have gcc and make installed. If you are using
Ubuntu or any other Debian-based system, install the packages
"build-essential". Also install "erlang-dev" package if not
included in your Erlang/OTP version. If you're on Fedora, run
"dnf group install 'Development Tools'".
Running just the bazel build command by itself produces the same error
cd ~/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e
$ bazel build --define "framework_shared_object=false" -c opt --config=cuda //tensorflow/compiler/xla/extension:xla_extension
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=120
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc:
'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:linux in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository local_config_cuda instantiated at:
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/WORKSPACE:15:14: in <toplevel>
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:878:19: in workspace
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/tensorflow/workspace2.bzl:96:19: in _tf_toolchains
Repository rule cuda_configure defined at:
/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl:1448:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
Traceback (most recent call last):
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1076, column 27, in _create_local_cuda_repository
cuda_libs = _find_libs(repository_ctx, check_cuda_libs_script, cuda_config)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 606, column 21, in _find_libs
_check_cuda_libs(repository_ctx, check_cuda_libs_script, check_cuda_libs_params.values())
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 501, column 28, in _check_cuda_libs
checked_paths = execute(repository_ctx, [python_bin, "-c", cmd]).stdout.splitlines()
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Expected even number of arguments
ERROR: Error fetching repository: Traceback (most recent call last):
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 1076, column 27, in _create_local_cuda_repository
cuda_libs = _find_libs(repository_ctx, check_cuda_libs_script, cuda_config)
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 606, column 21, in _find_libs
_check_cuda_libs(repository_ctx, check_cuda_libs_script, check_cuda_libs_params.values())
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/gpus/cuda_configure.bzl", line 501, column 28, in _check_cuda_libs
checked_paths = execute(repository_ctx, [python_bin, "-c", cmd]).stdout.splitlines()
File "/home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Expected even number of arguments
INFO: Found applicable config definition build:cuda in file /home/masa/.cache/xla_extension/tf-3f878cff5b698b82eea85db2b60d65a2e320850e/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Expected even number of arguments