
I tried running the trending DeepSeek R1 on a Jetson AGX ORIN: Docker edition, part 2

Posted at 2025-02-05

Continued from the previous article:
https://qiita.com/g667408/items/2c09950c481a138bde59

Compilation inside Docker behaves strangely

Since that problem came up last time, this time I compile llama.cpp outside the container first and then copy the result into Docker to finish the image.
I'm also switching CUDA to 12.8, the latest version at the time of writing.

CUDA 12.8 @ Jetson

sudo apt install wget

Since we're at it, let's use the latest CUDA.
First, get CUDA 12.8 for Jetson from the CUDA Toolkit Archive:
https://developer.nvidia.com/cuda-toolkit-archive

Options: Linux / aarch64-jetson / Ubuntu 22.04 / deb (local) (screenshot of the selector omitted)

Run:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda-tegra-repo-ubuntu2204-12-8-local_12.8.0-1_arm64.deb
sudo dpkg -i cuda-tegra-repo-ubuntu2204-12-8-local_12.8.0-1_arm64.deb
sudo cp /var/cuda-tegra-repo-ubuntu2204-12-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-8 cuda-compat-12-8
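
As a quick sanity check, and assuming the local deb installs the toolkit in the default /usr/local/cuda-12.8 location, you can export the paths and confirm nvcc is visible. This also sets the CUDA_HOME used by the cmake step later.

# Assumption: the local deb installs under /usr/local/cuda-12.8
export CUDA_HOME=/usr/local/cuda-12.8
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

# Should report "Cuda compilation tools, release 12.8"
nvcc --version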

Requirements

I think this should cover everything you need.

sudo apt install libcublas-12-8 \
    build-essential \
    cmake \
    git \
    ca-certificates \
    pkg-config \
    libblas-dev \
    ccache \
    docker.io

NVIDIA Container Toolkit

It seems things run even without this.
I'm not sure what difference it makes, but let's install it anyway.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
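
Note that the commands above only register the apt repository. A minimal sketch of the install and Docker-runtime configuration that would normally follow (the package and tool names are the standard NVIDIA Container Toolkit ones, not something shown above):

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the nvidia runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker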

llama.cpp

Fetch llama.cpp

git clone https://github.com/ggerganov/llama.cpp.git

build

cd llama.cpp
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DGGML_CPU_AARCH64=1 \
         -DGGML_CUDA=1 \
         -DGGML_CUDA_F16=1 \
         -DGGML_F16C=1 -DGGML_FMA=1 \
         -DCMAKE_CUDA_COMPILER=$CUDA_HOME/bin/nvcc \
         -DCUDAToolkit_ROOT=$CUDA_HOME \
         -DGGML_BLAS=1

make -j$(nproc) llama-server
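
The resulting llama-server links against shared libraries (libllama, libggml and friends) that end up next to it under build/bin; those are what the Dockerfile below picks up with COPY lib*. A quick way to see what has to travel with the binary:

# Shared objects produced by the build, shipped alongside the binary
ls build/bin/lib*

# Full list of runtime dependencies (CUDA libraries included)
ldd build/bin/llama-server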

Dockerfile

FROM nvcr.io/nvidia/l4t-base:r36.2.0 AS download_stage

# Copy CUDA installer files from local
COPY cuda-ubuntu2204.pin . 
COPY cuda-tegra-repo-ubuntu2204-12-8-local_12.8.0-1_arm64.deb .

# Install CUDA Toolkit
RUN mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
RUN dpkg -i cuda-tegra-repo-ubuntu2204-12-8-local_12.8.0-1_arm64.deb
RUN cp /var/cuda-tegra-repo-ubuntu2204-12-8-local/cuda-*-keyring.gpg /usr/share/keyrings/

FROM download_stage AS install_stage

RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-runtime-12-8

ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}

RUN rm cuda-tegra-repo-ubuntu2204-12-8-local_12.8.0-1_arm64.deb
RUN rm -rf /var/lib/apt/lists/*

FROM install_stage AS final_stage

COPY llama-server .
COPY lib* .

CMD ["sh", "-c", "./llama-server -m /models/$model --host 0.0.0.0 --port 8080 --gpu-layers 49"]

docker build

# Copy the llama binaries produced by the build
cp /path_to_llama/build/bin/* .
docker build -t llama-server .
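
The Dockerfile also COPYs the CUDA .pin and local-repo .deb, so those two files have to sit in the build context before running docker build (the download directory below is just a placeholder for wherever you ran the earlier wget):

# The Dockerfile expects these alongside the llama binaries
cp /path_to_downloads/cuda-ubuntu2204.pin .
cp /path_to_downloads/cuda-tegra-repo-ubuntu2204-12-8-local_12.8.0-1_arm64.deb .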

Run


Run in Docker

MODEL=DeepSeek-R1-Distill-Qwen-14B-Q6_K.gguf
docker run  --runtime=nvidia --gpus all -it -e model=$MODEL --rm --network=host --volume ~/llm_models:/models llama-server
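
The timing below comes from sending the server a Japanese greeting. For reference, a minimal request against llama-server's OpenAI-compatible endpoint looks roughly like this (the prompt is the one used in this test):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "こんにちは"}]}'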

Response time for 「こんにちは」: 42 seconds

slot launch_slot_: id  0 | task 0 | processing task
slot update_slots: id  0 | task 0 | new prompt, n_ctx_slot = 4096, n_keep = 0, n_prompt_tokens = 10
slot update_slots: id  0 | task 0 | kv cache rm [0, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 10, n_tokens = 10, progress = 1.000000
slot update_slots: id  0 | task 0 | prompt done, n_past = 10, n_tokens = 10
slot      release: id  0 | task 0 | stop processing: n_past = 358, truncated = 0
slot print_timing: id  0 | task 0 |
prompt eval time =     340.14 ms /    10 tokens (   34.01 ms per token,    29.40 tokens per second)
       eval time =   41528.10 ms /   349 tokens (  118.99 ms per token,     8.40 tokens per second)
      total time =   41868.24 ms /   359 tokens

Run directly

./llama-server -m ~/llm_models/DeepSeek-R1-Distill-Qwen-14B-Q6_K.gguf --host 0.0.0.0 --port 8080 --gpu-layers 49
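
If the freshly built binary can't find the CUDA 12.8 or ggml shared libraries when launched directly, exporting their locations first should help (the paths here assume the default CUDA install and the llama.cpp build tree from above):

# Assumed paths: default CUDA 12.8 install plus the llama.cpp build output
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:/path_to_llama/build/bin:$LD_LIBRARY_PATH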

It's the same binary, but direct execution is roughly twice as fast as running it in Docker.
Repeating the test several times shows the same tendency.
Why...?
(That said, the per-token eval times in the two logs are nearly identical, about 119 ms vs 115 ms, so at least part of the wall-clock gap may simply come from how long the generated replies happened to be.)

Response time for 「こんにちは」: 24 seconds

slot launch_slot_: id  0 | task 0 | processing task
slot update_slots: id  0 | task 0 | new prompt, n_ctx_slot = 4096, n_keep = 0, n_prompt_tokens = 10
slot update_slots: id  0 | task 0 | kv cache rm [0, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 10, n_tokens = 10, progress = 1.000000
slot update_slots: id  0 | task 0 | prompt done, n_past = 10, n_tokens = 10
slot      release: id  0 | task 0 | stop processing: n_past = 27, truncated = 0
slot print_timing: id  0 | task 0 |
prompt eval time =     350.45 ms /    10 tokens (   35.05 ms per token,    28.53 tokens per second)
       eval time =    2072.09 ms /    18 tokens (  115.12 ms per token,     8.69 tokens per second)
      total time =    2422.54 ms /    28 tokens

Impressions

Even with the same llama-server binary, direct execution was faster than running it in Docker.
I don't really know why.
Run directly, the response came back in about 24 seconds, which is a reasonably decent speed.
