
kubernetes v1.23.6 / Playing with Nvidia GPUs, Part 4


This article continues from the previous one.

Work steps

The overall flow is as follows.

  1. Prepare the work environment
  2. Nvidia CUDA Toolkit, Docker, and Nvidia Container Toolkit
  3. Build the cluster with kubeadm
  4. Helm and GPU Operator ← this article
  5. Istio and MetalLB
  6. Jupyterhub

4. Helm and GPU Operator

4.1 Helm

  • Why install Helm?

    • Because I want to make use of the assets on NGC (Nvidia GPU Cloud). Many of the Helm charts published on NGC assume that the GPU Operator is in place, and the GPU Operator itself is meant to be installed with Helm.
  • Why install in the order Helm → GPU Operator → Istio → MetalLB?

    • Because when I installed the GPU Operator after Helm, Istio, and MetalLB, tracking down the cause was painful whenever the deployment did not come up cleanly (Pods went into CrashLoopBackOff or Error). Inspecting the Pods with kubectl logs or kubectl describe pod did not tell me much at my skill level (see the sketch after this list).
      • The GPU Operator's default install command failed to deploy in my environment (described later).
      • In the end, I decided to install the GPU Operator before Istio and MetalLB so that I could watch how it behaves as I go.
  • For installing Helm itself, I follow the official manual. It is essentially the same procedure as the Helm installation steps in the GPU Operator manual.

  • The execution results are shown below, after a short troubleshooting sketch.
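For reference, when a GPU Operator Pod went into CrashLoopBackOff or Error, the kind of inspection mentioned above looks roughly like the following. This is a minimal sketch; the Pod name is hypothetical and would be taken from the kubectl get pod output.

Pod troubleshooting commands (example)
# List the GPU Operator Pods and their current states
kubectl get pod -n gpu-operator
# Show events and container states for a misbehaving Pod (hypothetical name)
kubectl describe pod -n gpu-operator nvidia-operator-validator-xxxxx
# Show the current and the previous container logs
kubectl logs -n gpu-operator nvidia-operator-validator-xxxxx
kubectl logs -n gpu-operator nvidia-operator-validator-xxxxx --previous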

Installing Helm
root@k8s01:~# mkdir helm
root@k8s01:~# cd helm/
root@k8s01:~/helm# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
root@k8s01:~/helm# chmod 700 get_helm.sh
root@k8s01:~/helm# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
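
Once the script finishes, a quick sanity check confirms that the helm binary is installed and on the PATH; a minimal sketch (the exact version string will differ from environment to environment):

Verifying the Helm installation (example)
# Print the installed Helm client version
helm version --short
# List configured chart repositories (errors out until a repository has been added)
helm repo list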

4.2 GPU Operator

  • The GPU Operator installation guide is here. The default command is as follows.
Installing the GPU Operator
helm install --wait --generate-name \
     -n gpu-operator --create-namespace \
     nvidia/gpu-operator
  • However, this fails in my environment.
    • The default command also tries to install the Nvidia driver and the Container Toolkit, but as described in the previous article, both are already installed on the nodes. Once I added options to disable those components, the chart deployed correctly.
      • It might also deploy correctly in an environment where MetalLB is already installed, as long as the Nvidia driver and Container Toolkit are present and the same helm install options are used, but I have not tried it. The output is shown below, after a note on how the chart options can be listed.
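As an aside, the full set of chart options, including the driver.enabled and toolkit.enabled flags used below, can be listed from the chart itself; a minimal sketch (this assumes the nvidia Helm repository has already been added, which is done in the transcript below):

Listing the GPU Operator chart options (example)
# Dump the chart's default values; driver.enabled and toolkit.enabled are among them
helm show values nvidia/gpu-operator | less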
GPU Operator
root@k8s01:~/helm# // Check the state of the k8s nodes used for this work
root@k8s01:~/helm# kubectl get node -o wide 
NAME                  STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s01.dcws.dell.com   Ready    control-plane,master   22m     v1.23.6   172.31.35.131   <none>        Ubuntu 20.04.4 LTS   5.13.0-40-generic   docker://20.10.14
k8s02.dcws.dell.com   Ready    <none>                 6m13s   v1.23.6   172.31.35.132   <none>        Ubuntu 20.04.4 LTS   5.13.0-40-generic   docker://20.10.14
root@k8s01:~/helm# // Check the state of the Pods
root@k8s01:~/helm# kubectl get pod -A 
NAMESPACE          NAME                                          READY   STATUS    RESTARTS        AGE
calico-apiserver   calico-apiserver-66489864db-dgr49             1/1     Running   1 (2m26s ago)   10m
calico-apiserver   calico-apiserver-66489864db-nt569             1/1     Running   1 (2m26s ago)   10m
calico-system      calico-kube-controllers-557cb7fd8b-bw2sr      1/1     Running   1 (2m26s ago)   12m
calico-system      calico-node-lrx66                             1/1     Running   1 (2m26s ago)   12m
calico-system      calico-node-vrzm5                             1/1     Running   1 (2m44s ago)   6m17s
calico-system      calico-typha-5d4d7646fb-wv4vx                 1/1     Running   2 (35s ago)     12m
kube-system        coredns-64897985d-n49tt                       1/1     Running   1 (2m21s ago)   21m
kube-system        coredns-64897985d-schzf                       1/1     Running   1 (2m21s ago)   21m
kube-system        etcd-k8s01.dcws.dell.com                      1/1     Running   1 (2m25s ago)   22m
kube-system        kube-apiserver-k8s01.dcws.dell.com            1/1     Running   1 (2m15s ago)   22m
kube-system        kube-controller-manager-k8s01.dcws.dell.com   1/1     Running   1 (2m25s ago)   22m
kube-system        kube-proxy-4r9k4                              1/1     Running   1 (2m45s ago)   6m17s
kube-system        kube-proxy-xqwbd                              1/1     Running   1 (2m26s ago)   21m
kube-system        kube-scheduler-k8s01.dcws.dell.com            1/1     Running   1 (2m25s ago)   22m
tigera-operator    tigera-operator-75b96586c9-szbh5              1/1     Running   2 (33s ago)     12m
root@k8s01:~/helm# // Confirm that the Nvidia driver is working correctly at the OS level
root@k8s01:~/helm# nvidia-smi 
Tue May  3 22:43:38 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla M60           On   | 00000000:0B:00.0 Off |                  Off |
| N/A   31C    P8    15W / 150W |      3MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M60           On   | 00000000:13:00.0 Off |                  Off |
| N/A   20C    P8    14W / 150W |      3MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1273      G   /usr/lib/xorg/Xorg                  3MiB |
|    1   N/A  N/A      1273      G   /usr/lib/xorg/Xorg                  3MiB |
+-----------------------------------------------------------------------------+
root@k8s01:~/helm# 
root@k8s01:~/helm# cd
root@k8s01:~/helm# // Add the Helm repository for the GPU Operator
root@k8s01:~# helm repo add nvidia https://helm.ngc.nvidia.com/nvidia \
>    && helm repo update
"nvidia" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nvidia" chart repository
Update Complete. ⎈Happy Helming!⎈
root@k8s01:~# // Install the GPU Operator
root@k8s01:~# helm install --wait --generate-name \
>      -n gpu-operator --create-namespace \
>       nvidia/gpu-operator \
>       --set driver.enabled=false \
>       --set toolkit.enabled=false
NAME: gpu-operator-1651585557
LAST DEPLOYED: Tue May  3 22:46:00 2022
NAMESPACE: gpu-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s01:~# // Check the GPU Operator related Pods. Some are still in PodInitializing.
root@k8s01:~# kubectl get pod -A
NAMESPACE          NAME                                                              READY   STATUS            RESTARTS        AGE
calico-apiserver   calico-apiserver-66489864db-dgr49                                 1/1     Running           1 (7m50s ago)   16m
calico-apiserver   calico-apiserver-66489864db-nt569                                 1/1     Running           1 (7m50s ago)   16m
calico-system      calico-kube-controllers-557cb7fd8b-bw2sr                          1/1     Running           1 (7m50s ago)   17m
calico-system      calico-node-lrx66                                                 1/1     Running           1 (7m50s ago)   17m
calico-system      calico-node-vrzm5                                                 1/1     Running           1 (8m8s ago)    11m
calico-system      calico-typha-5d4d7646fb-wv4vx                                     1/1     Running           2 (5m59s ago)   17m
gpu-operator       gpu-feature-discovery-j2qkl                                       1/1     Running           0               76s
gpu-operator       gpu-feature-discovery-x6n4z                                       1/1     Running           0               76s
gpu-operator       gpu-operator-1651585557-node-feature-discovery-master-7dc5cnks5   1/1     Running           0               2m12s
gpu-operator       gpu-operator-1651585557-node-feature-discovery-worker-bg44g       1/1     Running           0               2m12s
gpu-operator       gpu-operator-1651585557-node-feature-discovery-worker-lsj2d       1/1     Running           0               2m12s
gpu-operator       gpu-operator-7bfc5f55-rq98b                                       1/1     Running           0               2m12s
gpu-operator       nvidia-cuda-validator-62mls                                       0/1     Completed         0               61s
gpu-operator       nvidia-cuda-validator-q9w4d                                       0/1     Completed         0               43s
gpu-operator       nvidia-dcgm-exporter-rsxjw                                        0/1     PodInitializing   0               76s
gpu-operator       nvidia-dcgm-exporter-sd6j7                                        0/1     PodInitializing   0               76s
gpu-operator       nvidia-device-plugin-daemonset-6bvpg                              1/1     Running           0               77s
gpu-operator       nvidia-device-plugin-daemonset-zlmhw                              1/1     Running           0               77s
gpu-operator       nvidia-device-plugin-validator-k2m7l                              0/1     Init:0/1          0               1s
gpu-operator       nvidia-device-plugin-validator-q4h85                              0/1     Completed         0               19s
gpu-operator       nvidia-operator-validator-bcq7s                                   1/1     Running           0               77s
gpu-operator       nvidia-operator-validator-q257m                                   0/1     Init:3/4          0               77s
kube-system        coredns-64897985d-n49tt                                           1/1     Running           1 (7m45s ago)   27m
kube-system        coredns-64897985d-schzf                                           1/1     Running           1 (7m45s ago)   27m
kube-system        etcd-k8s01.dcws.dell.com                                          1/1     Running           1 (7m49s ago)   27m
kube-system        kube-apiserver-k8s01.dcws.dell.com                                1/1     Running           1 (7m39s ago)   27m
kube-system        kube-controller-manager-k8s01.dcws.dell.com                       1/1     Running           1 (7m49s ago)   27m
kube-system        kube-proxy-4r9k4                                                  1/1     Running           1 (8m9s ago)    11m
kube-system        kube-proxy-xqwbd                                                  1/1     Running           1 (7m50s ago)   27m
kube-system        kube-scheduler-k8s01.dcws.dell.com                                1/1     Running           1 (7m49s ago)   27m
tigera-operator    tigera-operator-75b96586c9-szbh5                                  1/1     Running           2 (5m57s ago)   17m
root@k8s01:~# // A few minutes later they are Running or Completed, so the deployment looks healthy.
root@k8s01:~# kubectl get pod -A
NAMESPACE          NAME                                                              READY   STATUS      RESTARTS        AGE
calico-apiserver   calico-apiserver-66489864db-dgr49                                 1/1     Running     1 (8m39s ago)   16m
calico-apiserver   calico-apiserver-66489864db-nt569                                 1/1     Running     1 (8m39s ago)   16m
calico-system      calico-kube-controllers-557cb7fd8b-bw2sr                          1/1     Running     1 (8m39s ago)   18m
calico-system      calico-node-lrx66                                                 1/1     Running     1 (8m39s ago)   18m
calico-system      calico-node-vrzm5                                                 1/1     Running     1 (8m57s ago)   12m
calico-system      calico-typha-5d4d7646fb-wv4vx                                     1/1     Running     2 (6m48s ago)   18m
gpu-operator       gpu-feature-discovery-j2qkl                                       1/1     Running     0               2m5s
gpu-operator       gpu-feature-discovery-x6n4z                                       1/1     Running     0               2m5s
gpu-operator       gpu-operator-1651585557-node-feature-discovery-master-7dc5cnks5   1/1     Running     0               3m1s
gpu-operator       gpu-operator-1651585557-node-feature-discovery-worker-bg44g       1/1     Running     0               3m1s
gpu-operator       gpu-operator-1651585557-node-feature-discovery-worker-lsj2d       1/1     Running     0               3m1s
gpu-operator       gpu-operator-7bfc5f55-rq98b                                       1/1     Running     0               3m1s
gpu-operator       nvidia-cuda-validator-62mls                                       0/1     Completed   0               110s
gpu-operator       nvidia-cuda-validator-q9w4d                                       0/1     Completed   0               92s
gpu-operator       nvidia-dcgm-exporter-rsxjw                                        1/1     Running     0               2m5s
gpu-operator       nvidia-dcgm-exporter-sd6j7                                        1/1     Running     0               2m5s
gpu-operator       nvidia-device-plugin-daemonset-6bvpg                              1/1     Running     0               2m6s
gpu-operator       nvidia-device-plugin-daemonset-zlmhw                              1/1     Running     0               2m6s
gpu-operator       nvidia-device-plugin-validator-k2m7l                              0/1     Completed   0               50s
gpu-operator       nvidia-device-plugin-validator-q4h85                              0/1     Completed   0               68s
gpu-operator       nvidia-operator-validator-bcq7s                                   1/1     Running     0               2m6s
gpu-operator       nvidia-operator-validator-q257m                                   1/1     Running     0               2m6s
kube-system        coredns-64897985d-n49tt                                           1/1     Running     1 (8m34s ago)   28m
kube-system        coredns-64897985d-schzf                                           1/1     Running     1 (8m34s ago)   28m
kube-system        etcd-k8s01.dcws.dell.com                                          1/1     Running     1 (8m38s ago)   28m
kube-system        kube-apiserver-k8s01.dcws.dell.com                                1/1     Running     1 (8m28s ago)   28m
kube-system        kube-controller-manager-k8s01.dcws.dell.com                       1/1     Running     1 (8m38s ago)   28m
kube-system        kube-proxy-4r9k4                                                  1/1     Running     1 (8m58s ago)   12m
kube-system        kube-proxy-xqwbd                                                  1/1     Running     1 (8m39s ago)   28m
kube-system        kube-scheduler-k8s01.dcws.dell.com                                1/1     Running     1 (8m38s ago)   28m
tigera-operator    tigera-operator-75b96586c9-szbh5                                  1/1     Running     2 (6m46s ago)   18m
root@k8s01:~# // To verify operation, deploy a GPU-using Pod called cuda-vectoradd, following the manual.
root@k8s01:~# cat << EOF | kubectl create -f -
> apiVersion: v1
> kind: Pod
> metadata:
>   name: cuda-vectoradd
> spec:
>   restartPolicy: OnFailure
>   containers:
>   - name: cuda-vectoradd
>     image: "nvidia/samples:vectoradd-cuda11.2.1"
>     resources:
>       limits:
>          nvidia.com/gpu: 1
> EOF
pod/cuda-vectoradd created
root@k8s01:~# // Confirm that the Pod has reached Completed
root@k8s01:~# kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
cuda-vectoradd   0/1     ContainerCreating   0          9s
root@k8s01:~# kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
cuda-vectoradd   0/1     Completed   0          25s
root@k8s01:~# // The Pod's log confirms that the GPU Operator is working correctly
root@k8s01:~# kubectl logs cuda-vectoradd
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
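
As a final check, not part of the transcript above, the GPU resources advertised by the device plugin can be confirmed on the nodes, and the test Pod can be removed afterwards; a minimal sketch:

Checking advertised GPUs and cleaning up (example)
# Show the nvidia.com/gpu resources reported by each node
kubectl describe node | grep -i 'nvidia.com/gpu'
# Delete the test Pod once it has completed
kubectl delete pod cuda-vectoradd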

Continued in the next article.
