
kubernetes v1.23.6 / Playing with Nvidia GPUs, Part 5


This is a continuation of the previous article.

Work Procedure

The steps are as follows.

  1. Preparing the work environment
  2. Nvidia CUDA Toolkit, docker, and Nvidia Container Toolkit
  3. Building the cluster with kubeadm
  4. helm and GPU Operator
  5. Istio and MetalLB ← this article
  6. Jupyterhub

5. Istio and MetalLB

  • Instructions for installing Istio with Helm are here.
  • Instructions for installing MetalLB with Helm are here.
    • However, for some reason neither Istio nor MetalLB could be installed with Helm (the first attempt succeeded, but on the second attempt the Pods would not start), so the actual work below installs both from manifests.
  • The actual installation order was:
    • ①Install Istio (but without yet running kubectl get svc istio-ingressgateway -n istio-system for the Ingress gateway)
    • ②Install MetalLB
    • ③kubectl get svc istio-ingressgateway -n istio-system
      →Without MetalLB, the External IP of Istio's sample application stays Pending. That just didn't sit right with me, so this order was chosen.

5.1 Installing Istio

  • The following is an excerpt from the installation manual. To simplify verification, the Sample Application is also deployed.
Istio installation commands
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.13.3
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl get services
kubectl get pods
  • The actual session follows.
Istio installation session
root@k8s01:~#
root@k8s01:~# curl -L https://istio.io/downloadIstio | sh -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   101  100   101    0     0    196      0 --:--:-- --:--:-- --:--:--   196
100  4926  100  4926    0     0   6372      0 --:--:-- --:--:-- --:--:--  6372

Downloading istio-1.13.3 from https://github.com/istio/istio/releases/download/1.13.3/istio-1.13.3-linux-amd64.tar.gz ...

Istio 1.13.3 Download Complete!

Istio has been successfully downloaded into the istio-1.13.3 folder on your system.

Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.

To configure the istioctl client tool for your workstation,
add the /root/istio-1.13.3/bin directory to your environment path variable with:
         export PATH="$PATH:/root/istio-1.13.3/bin"

Begin the Istio pre-installation check by running:
         istioctl x precheck

Need more information? Visit https://istio.io/latest/docs/setup/install/
root@k8s01:~#
root@k8s01:~# cd istio-1.13.3
root@k8s01:~/istio-1.13.3# 
root@k8s01:~/istio-1.13.3# export PATH=$PWD/bin:$PATH
root@k8s01:~/istio-1.13.3# 
root@k8s01:~/istio-1.13.3# istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.13.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/pzWZpAvMVBecaQ9h9
root@k8s01:~/istio-1.13.3# 
root@k8s01:~/istio-1.13.3# kubectl label namespace default istio-injection=enabled
namespace/default labeled
root@k8s01:~/istio-1.13.3# 
root@k8s01:~/istio-1.13.3# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
root@k8s01:~/istio-1.13.3# kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.108.221.79    <none>        9080/TCP   7s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    14h
productpage   ClusterIP   10.111.38.119    <none>        9080/TCP   6s
ratings       ClusterIP   10.111.171.162   <none>        9080/TCP   6s
reviews       ClusterIP   10.101.247.61    <none>        9080/TCP   6s
root@k8s01:~/istio-1.13.3# 
root@k8s01:~/istio-1.13.3# kubectl get pods
NAME                              READY   STATUS            RESTARTS   AGE
cuda-vectoradd                    0/1     Completed         0          14h
details-v1-5498c86cf5-x7zfx       0/2     PodInitializing   0          14s
productpage-v1-65b75f6885-q2swr   0/2     PodInitializing   0          14s
ratings-v1-b477cf6cf-6zz98        0/2     PodInitializing   0          14s
reviews-v1-79d546878f-8qbxr       0/2     PodInitializing   0          14s
reviews-v2-548c57f459-jdddv       0/2     PodInitializing   0          14s
reviews-v3-6dd79655b9-pzfgh       0/2     PodInitializing   0          14s
root@k8s01:~/istio-1.13.3# watch kubectl get pods

Every 2.0s: kubectl get pods                                                                                                                                                                    k8s01.dcws.dell.com: Wed May  4 13:14:10 2022

NAME                              READY   STATUS      RESTARTS   AGE
cuda-vectoradd                    0/1     Completed   0          14h
details-v1-5498c86cf5-x7zfx       2/2     Running     0          6m21s
productpage-v1-65b75f6885-q2swr   2/2     Running     0          6m21s
ratings-v1-b477cf6cf-6zz98        2/2     Running     0          6m21s
reviews-v1-79d546878f-8qbxr       2/2     Running     0          6m21s
reviews-v2-548c57f459-jdddv       2/2     Running     0          6m21s
reviews-v3-6dd79655b9-pzfgh       2/2     Running     0          6m21s

root@k8s01:~/istio-1.13.3# 

5.2 MetalLB

  • Instructions for installing MetalLB from manifests are here.
    • When running kube-proxy in ipvs mode, strictARP must be set to true.
    • The ipvs mode and strictARP settings sit in slightly separate places in the file, which makes them easy to miss.
    • The following is an excerpt from the manual.
MetalLB installation steps
kubectl edit configmap -n kube-system kube-proxy
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
mkdir metallb
cd metallb/
vi config.yaml
kubectl apply -f config.yaml
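Since the two settings are easy to miss inside the editor, the strictARP change can also be applied non-interactively, as the MetalLB docs suggest. The following is a sketch of that approach; it assumes the field currently reads strictARP: false:

```shell
# Flip strictARP to true in the kube-proxy ConfigMap without opening an editor.
# Assumes the configmap currently contains "strictARP: false".
kubectl get configmap kube-proxy -n kube-system -o yaml \
  | sed -e "s/strictARP: false/strictARP: true/" \
  | kubectl apply -f - -n kube-system
```

If you also need to switch mode to "ipvs", kubectl edit remains the safer choice, since that field is further down in the same file.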
Contents of config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.31.35.140 - 172.31.35.149
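Note that this ConfigMap format only applies to MetalLB v0.12.x and earlier, as used here. From v0.13 onward, MetalLB moved its configuration to CRDs; a roughly equivalent pool (a sketch, not used in this article) would look like:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.31.35.140-172.31.35.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
```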
Settings made in kubectl edit configmap -n kube-system kube-proxy
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 192.168.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: true  # strictARP setting
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"   # mode setting
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
  • The results of the installation work follow.
MetalLB installation session
root@k8s01:~/istio-1.13.3# kubectl edit configmap -n kube-system kube-proxy

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 192.168.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: true
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
"/tmp/kubectl-edit-762605064.yaml" 83 lines, 2220 characters written

configmap/kube-proxy edited
root@k8s01:~/istio-1.13.3# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
namespace/metallb-system created
root@k8s01:~/istio-1.13.3# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
role.rbac.authorization.k8s.io/controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
rolebinding.rbac.authorization.k8s.io/controller created
daemonset.apps/speaker created
deployment.apps/controller created
root@k8s01:~/istio-1.13.3# ls
LICENSE  README.md  bin  manifest.yaml  manifests  samples  tools
root@k8s01:~/istio-1.13.3# cd ..
root@k8s01:~# ls
helm  istio-1.13.3  nvidia  share  snap
root@k8s01:~# mkdir metallb
root@k8s01:~# cd metallb/
root@k8s01:~/metallb# ls
root@k8s01:~/metallb# vi config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.31.35.140 - 172.31.35.149
~
"config.yaml" [New File] 12 lines, 219 characters written


root@k8s01:~/metallb# kubectl apply -f config.yaml
configmap/config created

root@k8s01:~/metallb#
root@k8s01:~/metallb# kubectl get pod -A
NAMESPACE          NAME                                                              READY   STATUS      RESTARTS      AGE
calico-apiserver   calico-apiserver-66489864db-dgr49                                 1/1     Running     2 (14h ago)   14h
calico-apiserver   calico-apiserver-66489864db-nt569                                 1/1     Running     2 (14h ago)   14h
calico-system      calico-kube-controllers-557cb7fd8b-bw2sr                          1/1     Running     2 (14h ago)   14h
calico-system      calico-node-lrx66                                                 1/1     Running     2 (14h ago)   14h
calico-system      calico-node-vrzm5                                                 1/1     Running     2 (14h ago)   14h
calico-system      calico-typha-5d4d7646fb-wv4vx                                     1/1     Running     4 (14m ago)   14h
default            cuda-vectoradd                                                    0/1     Completed   0             14h
default            details-v1-5498c86cf5-x7zfx                                       2/2     Running     0             11m
default            productpage-v1-65b75f6885-q2swr                                   2/2     Running     0             11m
default            ratings-v1-b477cf6cf-6zz98                                        2/2     Running     0             11m
default            reviews-v1-79d546878f-8qbxr                                       2/2     Running     0             11m
default            reviews-v2-548c57f459-jdddv                                       2/2     Running     0             11m
default            reviews-v3-6dd79655b9-pzfgh                                       2/2     Running     0             11m
gpu-operator       gpu-feature-discovery-j2qkl                                       1/1     Running     1 (14h ago)   14h
gpu-operator       gpu-feature-discovery-x6n4z                                       1/1     Running     1 (14h ago)   14h
gpu-operator       gpu-operator-1651585557-node-feature-discovery-master-7dc5cnks5   1/1     Running     1 (14h ago)   14h
gpu-operator       gpu-operator-1651585557-node-feature-discovery-worker-bg44g       1/1     Running     1 (14h ago)   14h
gpu-operator       gpu-operator-1651585557-node-feature-discovery-worker-lsj2d       1/1     Running     1 (14h ago)   14h
gpu-operator       gpu-operator-7bfc5f55-rq98b                                       1/1     Running     1 (14h ago)   14h
gpu-operator       nvidia-cuda-validator-9bkw4                                       0/1     Completed   0             14m
gpu-operator       nvidia-cuda-validator-9qvpz                                       0/1     Completed   0             14m
gpu-operator       nvidia-dcgm-exporter-rsxjw                                        1/1     Running     1 (14h ago)   14h
gpu-operator       nvidia-dcgm-exporter-sd6j7                                        1/1     Running     1 (14h ago)   14h
gpu-operator       nvidia-device-plugin-daemonset-6bvpg                              1/1     Running     1 (14h ago)   14h
gpu-operator       nvidia-device-plugin-daemonset-zlmhw                              1/1     Running     1 (14h ago)   14h
gpu-operator       nvidia-device-plugin-validator-d2jwj                              0/1     Completed   0             13m
gpu-operator       nvidia-device-plugin-validator-gfkhv                              0/1     Completed   0             13m
gpu-operator       nvidia-operator-validator-bcq7s                                   1/1     Running     1 (14h ago)   14h
gpu-operator       nvidia-operator-validator-q257m                                   1/1     Running     1 (14h ago)   14h
istio-system       istio-egressgateway-7569bf4864-2pl66                              1/1     Running     0             12m
istio-system       istio-ingressgateway-5d6f5f9d78-w7zlm                             1/1     Running     0             12m
istio-system       istiod-d56576b74-zcptd                                            1/1     Running     0             12m
kube-system        coredns-64897985d-n49tt                                           1/1     Running     2 (14h ago)   14h
kube-system        coredns-64897985d-schzf                                           1/1     Running     2 (14h ago)   14h
kube-system        etcd-k8s01.dcws.dell.com                                          1/1     Running     2 (14h ago)   14h
kube-system        kube-apiserver-k8s01.dcws.dell.com                                1/1     Running     3 (15m ago)   14h
kube-system        kube-controller-manager-k8s01.dcws.dell.com                       1/1     Running     2 (14h ago)   14h
kube-system        kube-proxy-4r9k4                                                  1/1     Running     2 (14h ago)   14h
kube-system        kube-proxy-xqwbd                                                  1/1     Running     2 (14h ago)   14h
kube-system        kube-scheduler-k8s01.dcws.dell.com                                1/1     Running     2 (14h ago)   14h
metallb-system     controller-57fd9c5bb-jnd2n                                        1/1     Running     0             4m17s
metallb-system     speaker-hwlvl                                                     1/1     Running     0             4m17s
metallb-system     speaker-lsljc                                                     1/1     Running     0             4m17s
tigera-operator    tigera-operator-75b96586c9-szbh5                                  1/1     Running     3 (14h ago)   14h
root@k8s01:~/metallb#
root@k8s01:~/metallb# cd ../istio-1.13.3/
root@k8s01:~/istio-1.13.3# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
root@k8s01:~/istio-1.13.3# watch kubectl get pod -A
root@k8s01:~/istio-1.13.3# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.108.143.143   172.31.35.140   15021:32359/TCP,80:32236/TCP,443:30863/TCP,31400:32358/TCP,15443:30608/TCP   19m
root@k8s01:~/istio-1.13.3# export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
root@k8s01:~/istio-1.13.3# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
root@k8s01:~/istio-1.13.3# export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
root@k8s01:~/istio-1.13.3# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
root@k8s01:~/istio-1.13.3# echo "$GATEWAY_URL"
172.31.35.140:80
root@k8s01:~/istio-1.13.3# echo "http://$GATEWAY_URL/productpage"
http://172.31.35.140:80/productpage
root@k8s01:~/istio-1.13.3#
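To confirm that traffic actually flows through the External IP, the Istio docs verify the Bookinfo product page title with curl. A sketch, assuming the GATEWAY_URL variable from above is still set:

```shell
# Fetch the Bookinfo product page through the ingress gateway and
# extract the HTML <title> to confirm end-to-end connectivity.
curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
# Expected (per the Istio docs): <title>Simple Bookstore App</title>
```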

5.3 Miscellaneous

With the work so far, a k8s cluster has been built, and deployed Pods can now communicate with the outside world.
To be continued.
