[Oracle Cloud] Verifying Persistent Volume Claim behavior on OKE


Introduction

Using OKE, the managed Kubernetes service offered by Oracle Cloud, we will verify how Persistent Volumes (PV) and Persistent Volume Claims (PVC) behave. The points to check are:

  • How PVs bind to Deployments and StatefulSets
  • That creating a PVC provisions a backing Block Volume
  • Whether a PV can be created from a Block Volume Backup, so that data can be loaded in advance

Binding PVs to Deployments and StatefulSets

Starting with the conclusion: a Deployment shares a single Persistent Volume among multiple Pods, while a StatefulSet gets one Persistent Volume per Pod.

If you want multiple Pods in a Deployment to share storage, use the File Storage Service, which supports ReadWriteMany over the NFS protocol. A Block Volume is ReadWriteOnce, so it cannot be shared by multiple Pods.
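For reference, a ReadWriteMany claim against File Storage Service might look roughly like this. This is a hypothetical sketch, not tested in this article; the oci-fss StorageClass name and its setup are assumptions that depend on how the FSS provisioner is configured in your cluster.

```yaml
# Hypothetical sketch: an RWX claim backed by File Storage Service.
# Assumes an "oci-fss" StorageClass has been configured for the cluster.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-volume          # hypothetical name
spec:
  storageClassName: "oci-fss"
  accessModes:
    - ReadWriteMany            # NFS allows sharing across Pods and nodes
  resources:
    requests:
      storage: 50Gi
```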

Let's verify the actual behavior.

1594548110521.png

Checking the Storage Class

In the default state right after creating an OKE cluster, a Storage Class is already provided:

suguru_sug@cloudshell:.kube (ap-tokyo-1)$ kubectl get storageclass
NAME            PROVISIONER      AGE
oci (default)   oracle.com/oci   19d
suguru_sug@cloudshell:.kube (ap-tokyo-1)$ 

Check it in YAML:

suguru_sug@cloudshell:.kube (ap-tokyo-1)$ kubectl get storageclass -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"},"name":"oci"},"provisioner":"oracle.com/oci"}
      storageclass.beta.kubernetes.io/is-default-class: "true"
    creationTimestamp: "2020-06-22T16:05:43Z"
    name: oci
    resourceVersion: "414"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/oci
    uid: 02aeb840-55be-4de2-adaa-10ad3ba3b6e2
  provisioner: oracle.com/oci
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
suguru_sug@cloudshell:.kube (ap-tokyo-1)$ 

PVC + Deployment

Create

Using the oci Storage Class confirmed above, we will attach a volume to a Deployment.
First, create the PVC manifest.
Note that even if you request a smaller size such as 10Gi, the Block Volume actually created will be 50GB.

cat <<'EOF' > ~/workdir/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cowweb-volume
spec:
  storageClassName: "oci"
  selector:
    matchLabels:
      failure-domain.beta.kubernetes.io/zone: "AP-TOKYO-1-AD-1"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
EOF

apply

kubectl apply -f ~/workdir/pvc.yaml

Check

suguru_sug@cloudshell:workdir (ap-tokyo-1)$ kubectl get pvc -o wide
NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
cowweb-volume   Pending                                      oci            13s   Filesystem
suguru_sug@cloudshell:workdir (ap-tokyo-1)$ 

After a short while, the claim is Bound:

suguru_sug@cloudshell:workdir (ap-tokyo-1)$ kubectl get pvc -o wide
NAME            STATUS   VOLUME                                                                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
cowweb-volume   Bound    ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja   50Gi       RWO            oci            39s   Filesystem
suguru_sug@cloudshell:workdir (ap-tokyo-1)$ 

A PV has also been generated automatically.
Its Name is the OCID of the Block Volume:

suguru_sug@cloudshell:workdir (ap-tokyo-1)$ kubectl get pv -o wide
NAME                                                                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE   VOLUMEMODE
ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja   50Gi       RWO            Delete           Bound    default/cowweb-volume   oci                     44s   Filesystem
suguru_sug@cloudshell:workdir (ap-tokyo-1)$ 

Check it in YAML.
The ociVolumeID annotation appears to be what ties it to the Block Volume:

suguru_sug@cloudshell:workdir (ap-tokyo-1)$ kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      ociAvailabilityDomain: AP-TOKYO-1-AD-1
      ociCompartment: ocid1.compartment.oc1..aaaaaaaalxxd67fpsduvby7s3cj2ykm3bbl4myg37atkky27ipvfxv5iepda
      ociProvisionerIdentity: ociProvisionerIdentity
      ociVolumeID: ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja
      pv.kubernetes.io/provisioned-by: oracle.com/oci
    creationTimestamp: "2020-07-11T19:07:08Z"
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      failure-domain.beta.kubernetes.io/region: ap-tokyo-1
      failure-domain.beta.kubernetes.io/zone: AP-TOKYO-1-AD-1
    name: ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja
    resourceVersion: "3997342"
    selfLink: /api/v1/persistentvolumes/ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja
    uid: fb926db2-8538-48ed-b05f-fe8f5dbf97c3
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 50Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: cowweb-volume
      namespace: default
      resourceVersion: "3997295"
      uid: a4a5dfdb-8de4-4b69-85c1-4e7ad9363209
    flexVolume:
      driver: oracle/oci
      fsType: ext4
    persistentVolumeReclaimPolicy: Delete
    storageClassName: oci
    volumeMode: Filesystem
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
suguru_sug@cloudshell:workdir (ap-tokyo-1)$ 
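Since the PV name and the ociVolumeID annotation are the Block Volume OCID itself, cross-referencing against OCI is easy to script. A small sketch: the sample variable below stands in for one line of the output above; in practice you would pipe `kubectl get pv -o yaml` into the same awk filter.

```shell
# Extract the value of the ociVolumeID annotation from (saved) PV YAML output.
# The sample variable mimics one line of `kubectl get pv -o yaml`.
sample='      ociVolumeID: ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja'
volume_id=$(printf '%s\n' "$sample" | awk '$1 == "ociVolumeID:" {print $2}')
echo "$volume_id"
```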

On the OCI side, too, a Block Volume has been created automatically in the same Compartment:

1594494529314.png

Detail view

1594494590098.png

Now let's bind the created PVC into a Deployment. To start, replicas is 1:

cat <<'EOF' > ~/workdir/deployment-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cowweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cowweb
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: cowweb
    spec:
      containers:
      - name: cowweb
        image: sugimount/cowweb:v1.0
        ports:
        - name: api
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 15
          periodSeconds: 20
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /persistent-volume
          name: cowweb-storage
      volumes:
      - name: cowweb-storage
        persistentVolumeClaim:
          claimName: cowweb-volume
      securityContext:
        fsGroup: 1000
EOF

apply

kubectl apply -f ~/workdir/deployment-pvc.yaml

deployment

suguru_sug@cloudshell:~ (ap-tokyo-1)$ kubectl get deployment -o wide
NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
cowweb   1/1     1            1           35m   cowweb       sugimount/cowweb:v1.0   app=cowweb
suguru_sug@cloudshell:~ (ap-tokyo-1)$ 

pod

suguru_sug@cloudshell:~ (ap-tokyo-1)$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
cowweb-644d5fc5c6-qrxv4   1/1     Running   0          36m   10.244.0.5   10.0.10.8   <none>           <none>
suguru_sug@cloudshell:~ (ap-tokyo-1)$ 

Enter the created Pod:

kubectl exec -it cowweb-644d5fc5c6-qrxv4 sh

The Persistent Volume is mounted, at exactly the path specified in the manifest:

~ $ df -hT
Filesystem           Type            Size      Used Available Use% Mounted on
overlay              overlay        38.4G      3.8G     34.6G  10% /
tmpfs                tmpfs          64.0M         0     64.0M   0% /dev
tmpfs                tmpfs           7.2G         0      7.2G   0% /sys/fs/cgroup
/dev/sdb             ext4           49.1G     52.0M     46.5G   0% /persistent-volume <=================== mounted here
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /dev/termination-log
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /etc/resolv.conf
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /etc/hostname
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /etc/hosts
shm                  tmpfs          64.0M         0     64.0M   0% /dev/shm
tmpfs                tmpfs           7.2G     12.0K      7.2G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                tmpfs           7.2G         0      7.2G   0% /proc/acpi
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/kcore
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/keys
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/timer_list
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                tmpfs           7.2G         0      7.2G   0% /proc/scsi
tmpfs                tmpfs           7.2G         0      7.2G   0% /sys/firmware

Right after the Deployment is created, you can see the Block Volume attached to the Worker Node where the Pod is running:

1594544660769.png

View from the Worker Node

Since containers are just processes, the Block Volume recognized over iSCSI is visible from the Worker Node itself.
List the SCSI devices:

[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]# lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sdb  3:0:0:1    disk ORACLE   BlockVolume      1.0  iscsi <======= Persistent Volume
sda  1:0:0:1    disk ORACLE   BlockVolume      1.0
[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]#

Looking at lsblk, the MOUNTPOINT shows that the volume is mounted under the kubelet directory:

[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0   50G  0 disk /var/lib/kubelet/pods/aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37/volumes/oracle~oci/ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja  <======= Persistent Volume
sda      8:0    0 46.6G  0 disk
├─sda2   8:2    0    8G  0 part
├─sda3   8:3    0 38.4G  0 part /
└─sda1   8:1    0  200M  0 part /boot/efi
[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]#

Show the same records as KEY="value" pairs, which include the full paths:

[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]# lsblk -P
NAME="sdb" MAJ:MIN="8:16" RM="0" SIZE="50G" RO="0" TYPE="disk" MOUNTPOINT="/var/lib/kubelet/pods/aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37/volumes/oracle~oci/ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja" <======= Persistent Volume
NAME="sda" MAJ:MIN="8:0" RM="0" SIZE="46.6G" RO="0" TYPE="disk" MOUNTPOINT=""
NAME="sda2" MAJ:MIN="8:2" RM="0" SIZE="8G" RO="0" TYPE="part" MOUNTPOINT=""
NAME="sda3" MAJ:MIN="8:3" RM="0" SIZE="38.4G" RO="0" TYPE="part" MOUNTPOINT="/"
NAME="sda1" MAJ:MIN="8:1" RM="0" SIZE="200M" RO="0" TYPE="part" MOUNTPOINT="/boot/efi"
[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]#
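The KEY="value" records from `lsblk -P` are convenient for scripting. A sketch that pulls the MOUNTPOINT out of one record; the sample line is copied from the output above, and in practice you would pipe `lsblk -P` itself:

```shell
# Parse a single `lsblk -P` record and print its MOUNTPOINT value.
record='NAME="sdb" MAJ:MIN="8:16" RM="0" SIZE="50G" RO="0" TYPE="disk" MOUNTPOINT="/var/lib/kubelet/pods/aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37/volumes/oracle~oci/ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja"'
mountpoint=$(printf '%s\n' "$record" | sed -n 's/.*MOUNTPOINT="\([^"]*\)".*/\1/p')
echo "$mountpoint"
```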

Checking with df, you can see a 50GB filesystem mounted as ext4:

[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  7.2G     0  7.2G   0% /dev
tmpfs          tmpfs     7.3G     0  7.3G   0% /dev/shm
tmpfs          tmpfs     7.3G   33M  7.2G   1% /run
tmpfs          tmpfs     7.3G     0  7.3G   0% /sys/fs/cgroup
/dev/sda3      xfs        39G  3.8G   35G  10% /
/dev/sda1      vfat      200M  9.7M  191M   5% /boot/efi
tmpfs          tmpfs     7.3G   12K  7.3G   1% /var/lib/kubelet/pods/224cded9-1b6b-45a5-89f2-6e0c0fc642ab/volumes/kubernetes.io~secret/kube-proxy-token-f9hnj
tmpfs          tmpfs     7.3G   12K  7.3G   1% /var/lib/kubelet/pods/40a85423-7cc8-45ac-b628-a34cee14aa5b/volumes/kubernetes.io~secret/flannel-token-swf2n
tmpfs          tmpfs     7.3G   12K  7.3G   1% /var/lib/kubelet/pods/8ef6eaa9-418c-4a01-9ef2-17cd54c86ac2/volumes/kubernetes.io~secret/default-token-bj8nz
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/6010dde33448e6cf70b136f8085c86ef35e37c01cc7122a22fec451c223219a1/merged
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/a76f81dc09a6441e74633001abe48d86216904e7c18f1a741df145281d590560/merged
shm            tmpfs      64M     0   64M   0% /u01/data/docker/containers/121e5664df21f3b3e7dea1d7b035b062538efb2f03810a49df87b19d01e7fb79/mounts/shm
shm            tmpfs      64M     0   64M   0% /u01/data/docker/containers/c241fea2620190f34b4ee0216d8a71da96d12dabec9cf91e4fa0a864e72d3d3b/mounts/shm
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/8ec841feaa2b22f303fb527f8f2a28a24cdb3a3aac3432c2ad483be4b7fc0f8b/merged
shm            tmpfs      64M     0   64M   0% /u01/data/docker/containers/bc3b5a5ed6d763e7cee803c14d4fba8baf5c278849bf4b5e0d072d71cb2d8d7f/mounts/shm
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/41ecb906a4ec14fb1e4834ed4e18a672e1cf60328054264af531f061806681f7/merged
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/0714ab0710e5c855e1c3ae54bae78160c1da5c29be821b6a6d981d18922b5a91/merged
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/ceed125e97ff0ad5ce2b68e8650761a05cef9020495ecde0fb83edb06a3a4c92/merged
tmpfs          tmpfs     7.3G   12K  7.3G   1% /var/lib/kubelet/pods/aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37/volumes/kubernetes.io~secret/default-token-wqzmq
/dev/sdb       ext4       50G   53M   47G   1% /var/lib/kubelet/plugins/kubernetes.io/flexvolume/oracle/oci/mounts/ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja <======= Persistent Volume
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/38c5c69d4147da1136032480b97778c44a687b58007d37e2fd21601dc8436fa3/merged
shm            tmpfs      64M     0   64M   0% /u01/data/docker/containers/c5714fdf8673d4e2775f6b76c07a034d5acd44263f74509ee38356fe26a0842a/mounts/shm
overlay        overlay    39G  3.8G   35G  10% /u01/data/docker/overlay2/c2286133eedd9516d0e68d3e4bd0c62888d82c75b4e24bd10226ebdd1577c224/merged
tmpfs          tmpfs     1.5G     0  1.5G   0% /run/user/1000
[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 ~]#

Identify the relevant container with docker ps; it is the sugimount/cowweb one:

[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 etc]# docker ps
CONTAINER ID        IMAGE                                                   COMMAND                  CREATED             STATUS              PORTS               NAMES
7cf1807afdca        sugimount/cowweb                                        "java -jar /home/app…"   2 hours ago         Up 2 hours                              k8s_cowweb_cowweb-644d5fc5c6-qrxv4_default_aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37_0
c5714fdf8673        ap-tokyo-1.ocir.io/odx-oke/oke-public/pause-amd64:3.1   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_cowweb-644d5fc5c6-qrxv4_default_aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37_0
f9f87a531a9e        f0fad859c909                                            "/opt/bin/flanneld -…"   15 hours ago        Up 15 hours                             k8s_kube-flannel_kube-flannel-ds-bqsdc_kube-system_40a85423-7cc8-45ac-b628-a34cee14aa5b_3
cd90f9f0fd7f        849af609e0c6                                            "/usr/local/bin/kube…"   15 hours ago        Up 15 hours                             k8s_kube-proxy_kube-proxy-g47m7_kube-system_224cded9-1b6b-45a5-89f2-6e0c0fc642ab_1
d300cee122d0        c31b051add64                                            "/bin/proxymux.sh --…"   15 hours ago        Up 15 hours                             k8s_proxymux-client_proxymux-client-47fpz_kube-system_8ef6eaa9-418c-4a01-9ef2-17cd54c86ac2_1
bc3b5a5ed6d7        ap-tokyo-1.ocir.io/odx-oke/oke-public/pause-amd64:3.1   "/pause"                 15 hours ago        Up 15 hours                             k8s_POD_proxymux-client-47fpz_kube-system_8ef6eaa9-418c-4a01-9ef2-17cd54c86ac2_1
c241fea26201        ap-tokyo-1.ocir.io/odx-oke/oke-public/pause-amd64:3.1   "/pause"                 15 hours ago        Up 15 hours                             k8s_POD_kube-flannel-ds-bqsdc_kube-system_40a85423-7cc8-45ac-b628-a34cee14aa5b_1
121e5664df21        ap-tokyo-1.ocir.io/odx-oke/oke-public/pause-amd64:3.1   "/pause"                 15 hours ago        Up 15 hours                             k8s_POD_kube-proxy-g47m7_kube-system_224cded9-1b6b-45a5-89f2-6e0c0fc642ab_1
[root@oke-crdazdbgvsw-n3geolfmm3d-s6bnuvahoda-0 etc]#

Check the details with docker inspect:

# docker inspect 7cf1807afdca

The information appears under Mounts: the directory where the Worker Node has the Block Volume mounted as ext4 is bind-mounted into the container.

Details
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/aa72eb6f-edf9-4cd4-ab4a-caf98e7e7a37/volumes/oracle~oci/ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja",
                "Destination": "/persistent-volume",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }

Behavior with 2 Pods

Up to this point the Deployment ran a single Pod. Let's change it to 2 and observe the behavior; the expected outcome is that the second Pod does not run:

cat <<'EOF' > ~/workdir/deployment-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cowweb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cowweb
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: cowweb
    spec:
      containers:
      - name: cowweb
        image: sugimount/cowweb:v1.0
        ports:
        - name: api
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 15
          periodSeconds: 20
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /persistent-volume
          name: cowweb-storage
      volumes:
      - name: cowweb-storage
        persistentVolumeClaim:
          claimName: cowweb-volume
EOF

apply

kubectl apply -f ~/workdir/deployment-pvc.yaml

deployment

suguru_sug@cloudshell:~ (ap-tokyo-1)$ kubectl get deployment -o wide
NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
cowweb   1/2     2            1           72m   cowweb       sugimount/cowweb:v1.0   app=cowweb

pod
The second Pod never reaches Running:

suguru_sug@cloudshell:~ (ap-tokyo-1)$ kubectl get pods -o wide
NAME                      READY   STATUS              RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
cowweb-644d5fc5c6-dshhf   0/1     ContainerCreating   0          73s   <none>       10.0.10.7   <none>           <none>
cowweb-644d5fc5c6-qrxv4   1/1     Running             0          72m   10.244.0.5   10.0.10.8   <none>           <none>
suguru_sug@cloudshell:~ (ap-tokyo-1)$ 

Looking at the Pod's Events, an error appears:

It says Multi-Attach error for volume "ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja" Volume is already used by pod(s) cowweb-644d5fc5c6-qrxv4, which shows that because the access mode is ReadWriteOnce, the volume cannot be accessed from multiple Pods.

suguru_sug@cloudshell:~ (ap-tokyo-1)$ kubectl describe pod cowweb-644d5fc5c6-dshhf
Name:           cowweb-644d5fc5c6-dshhf
Namespace:      default
Priority:       0
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 12 Jul 2020 08:55:39 +0000
Labels:         app=cowweb
                pod-template-hash=644d5fc5c6
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/cowweb-644d5fc5c6
Containers:
  cowweb:
    Container ID:   
    Image:          sugimount/cowweb:v1.0
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:api/cowsay/ping delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:      http-get http://:api/cowsay/ping delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /persistent-volume from cowweb-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wqzmq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cowweb-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cowweb-volume
    ReadOnly:   false
  default-token-wqzmq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wqzmq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age        From                     Message
  ----     ------              ----       ----                     -------
  Normal   Scheduled           <unknown>  default-scheduler        Successfully assigned default/cowweb-644d5fc5c6-dshhf to 10.0.10.7
  Warning  FailedAttachVolume  93s        attachdetach-controller  Multi-Attach error for volume "ocid1.volume.oc1.ap-tokyo-1.abxhiljr76vvzu6b4xyfut7ftxjscg4gurlfeawdyfkgtsa6dpdzzkbjr3ja" Volume is already used by pod(s) cowweb-644d5fc5c6-qrxv4
suguru_sug@cloudshell:~ (ap-tokyo-1)$ 

PVC from Backup

In OKE, you can create a PVC from a Block Volume Backup. Depending on the application, it can be useful to bring required data in from outside. There are various approaches; storing the required data in a Block Volume Backup in advance is one of them. (Another would be to download it from Object Storage in an initContainer.)
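The initContainer alternative mentioned above could be sketched roughly as follows. This is hypothetical: the image and bucket/object names are assumptions, and the image would need the OCI CLI plus credentials (e.g. instance-principal auth) configured.

```yaml
# Hypothetical sketch: seed the volume from Object Storage before the app starts.
      initContainers:
      - name: fetch-seed-data                    # hypothetical name
        image: example/oci-cli:latest            # hypothetical image containing the OCI CLI
        command: ["oci", "os", "object", "get",
                  "--bucket-name", "seed-data",  # hypothetical bucket
                  "--name", "seed.dat",
                  "--file", "/persistent-volume/seed.dat"]
        volumeMounts:
        - mountPath: /persistent-volume
          name: cowweb-storage
```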

Create

Put some throwaway data into the existing Persistent Volume:

[opc@bastion ~]$ kubectl exec -it cowweb-7f8d748898-wsh92 sh
~ $ echo "mieru?" > /persistent-volume/CouldYouSeeMe?
~ $ cat /persistent-volume/CouldYouSeeMe?
mieru?
~ $

Check the OCID of the PV:

[opc@bastion ~]$ kubectl get pv -o wide
NAME                                                                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE     VOLUMEMODE
ocid1.volume.oc1.ap-tokyo-1.abxhiljr3anwkx6ye6i5eeyijzqapqqdr5esnrz6zouvfphpgcaitg32pkta   50Gi       RWO            Delete           Bound    default/cowweb-volume   oci                     9m55s   Filesystem

Identify the Block Volume with the matching OCID:

1594553343745.png

Create a Backup

1594553912300.png

Creating the Block Volume Backup

1594554021931.png

After a while, the Backup becomes Available.
Copy its OCID:

ocid1.volumebackup.oc1.ap-tokyo-1.abxhiljrokdlb3nungbzuh25bcnliwm2rwle3pui3ft27soy6tqor2xep4ha

1594554093605.png

Create the manifest for the PVC from Backup, specifying the Backup's OCID in the volume.beta.kubernetes.io/oci-volume-source annotation:

cat <<'EOF' > ~/workdir/pvcfrombackup.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcfrombackup
  annotations:
    volume.beta.kubernetes.io/oci-volume-source: ocid1.volumebackup.oc1.ap-tokyo-1.abxhiljrokdlb3nungbzuh25bcnliwm2rwle3pui3ft27soy6tqor2xep4ha
spec:
  selector:
    matchLabels:
      failure-domain.beta.kubernetes.io/zone: "AP-TOKYO-1-AD-1"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
EOF

PVC Create

kubectl apply -f ~/workdir/pvcfrombackup.yaml

Pending

[opc@bastion ~]$ kubectl get pvc -o wide
NAME            STATUS    VOLUME                                                                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
cowweb-volume   Bound     ocid1.volume.oc1.ap-tokyo-1.abxhiljr3anwkx6ye6i5eeyijzqapqqdr5esnrz6zouvfphpgcaitg32pkta   50Gi       RWO            oci            25m   Filesystem
pvcfrombackup   Pending                                                                                                                        oci            9s    Filesystem
[opc@bastion ~]$

Bound

[opc@bastion ~]$ kubectl get pvc -o wide
NAME            STATUS   VOLUME                                                                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
cowweb-volume   Bound    ocid1.volume.oc1.ap-tokyo-1.abxhiljr3anwkx6ye6i5eeyijzqapqqdr5esnrz6zouvfphpgcaitg32pkta   50Gi       RWO            oci            26m   Filesystem
pvcfrombackup   Bound    ocid1.volume.oc1.ap-tokyo-1.abxhiljrvjisjbpz2p3kihjmhyttgpffmpf4txpmuterw3lrr2ihpd5znefa   50Gi       RWO            oci            39s   Filesystem
[opc@bastion ~]$

PV

[opc@bastion ~]$ kubectl get pv -o wide
NAME                                                                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE   VOLUMEMODE
ocid1.volume.oc1.ap-tokyo-1.abxhiljr3anwkx6ye6i5eeyijzqapqqdr5esnrz6zouvfphpgcaitg32pkta   50Gi       RWO            Delete           Bound    default/cowweb-volume   oci                     25m   Filesystem
ocid1.volume.oc1.ap-tokyo-1.abxhiljrvjisjbpz2p3kihjmhyttgpffmpf4txpmuterw3lrr2ihpd5znefa   50Gi       RWO            Delete           Bound    default/pvcfrombackup   oci                     39s   Filesystem

Deployment
Replicas : 1

cat <<'EOF' > ~/workdir/deployment-pvcfrombackup.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cowweb2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cowweb2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: cowweb2
    spec:
      containers:
      - name: cowweb
        image: sugimount/cowweb:v1.0
        ports:
        - name: api
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 15
          periodSeconds: 20
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /persistent-volume
          name: cowweb-storage
      volumes:
      - name: cowweb-storage
        persistentVolumeClaim:
          claimName: pvcfrombackup
      securityContext:
        fsGroup: 1000
EOF

apply

kubectl apply -f ~/workdir/deployment-pvcfrombackup.yaml

deployment

[opc@bastion ~]$ kubectl get deployment -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
cowweb    1/1     1            1           30m   cowweb       sugimount/cowweb:v1.0   app=cowweb
cowweb2   0/1     1            0           30s   cowweb       sugimount/cowweb:v1.0   app=cowweb2

pod

$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
cowweb-7f8d748898-wsh92   1/1     Running   0          30m   10.244.0.7   10.0.10.8   <none>           <none>
cowweb2-7f74d8cf-gxwpd    1/1     Running   0          68s   10.244.0.8   10.0.10.8   <none>           <none>

Access the Pod and check whether the data created before the Backup is present:

kubectl exec -it cowweb2-7f74d8cf-gxwpd sh

The file is there, as expected:

/persistent-volume $ ls -la /persistent-volume/
total 24
drwxrwsr-x    3 root     app           4096 Jul 12 11:26 .
drwxr-xr-x    1 root     root            65 Jul 12 11:49 ..
-rw-rw-r--    1 app      app              7 Jul 12 11:26 CouldYouSeeMe?
drwxrws---    2 root     app          16384 Jul 12 11:20 lost+found
/persistent-volume $

The contents are intact as well:

/persistent-volume $ cat /persistent-volume/CouldYouSeeMe?
mieru?

PVC + StatefulSet

Let's check how PVCs behave with a StatefulSet.

Create

Create the StatefulSet manifest; the key point is volumeClaimTemplates. replicas is 3:

cat <<'EOF' > statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cowweb-statefulset
spec:
  serviceName: cowweb
  replicas: 3
  selector:
    matchLabels:
      app: cowweb-statefulset
  template:
    metadata:
      labels:
        app: cowweb-statefulset
    spec:
      containers:
      - name: cowweb
        image: sugimount/cowweb:v1.0
        ports:
        - name: api
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /cowsay/ping
            port: api
          initialDelaySeconds: 15
          periodSeconds: 20
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /persistent-volume
          name: cowweb-storage
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: cowweb-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "oci"
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50G
EOF

apply

kubectl apply -f statefulset.yaml

Check

[opc@bastion workdir]$ kubectl get sts -o wide
NAME                 READY   AGE    CONTAINERS   IMAGES
cowweb-statefulset   1/3     105s   cowweb       sugimount/cowweb:v1.0

Because PodManagementPolicy defaults to OrderedReady, the Pods are created one at a time, in order:

[opc@bastion workdir]$ kubectl get pod -o wide
NAME                   READY   STATUS              RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
cowweb-statefulset-0   1/1     Running             0          3m7s   10.244.1.9   10.0.10.7   <none>           <none>
cowweb-statefulset-1   1/1     Running             0          91s    10.244.0.9   10.0.10.8   <none>           <none>
cowweb-statefulset-2   0/1     ContainerCreating   0          15s    <none>       10.0.10.7   <none>           <none>
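If ordered startup is not required, the StatefulSet spec could set the policy to Parallel instead. A sketch showing only the changed field:

```yaml
spec:
  podManagementPolicy: Parallel   # create/delete all Pods at once instead of one by one
```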

Three PVCs are created automatically:

[opc@bastion workdir]$ kubectl get pvc -o wide
NAME                                  STATUS   VOLUME                                                                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
cowweb-storage-cowweb-statefulset-0   Bound    ocid1.volume.oc1.ap-tokyo-1.abxhiljrnrmimc6jjgrcjdyhxbge6cyvi2ekq3qljs36vbd3dzh2j77c4lrq   50Gi       RWO            oci            8m36s   Filesystem
cowweb-storage-cowweb-statefulset-1   Bound    ocid1.volume.oc1.ap-tokyo-1.abxhiljrrhgra64eelaepnmlhrq6tgs7ooajxq4rrtnf5ikmshraoa67eqra   50Gi       RWO            oci            7m      Filesystem
cowweb-storage-cowweb-statefulset-2   Bound    ocid1.volume.oc1.ap-tokyo-1.abxhiljrkhglvrysh4aevdcrnnajmk42ha3rjozgeo76i2bybodgajxvfesa   50Gi       RWO            oci            5m44s   Filesystem

Three PVs are created automatically:

[opc@bastion workdir]$ kubectl get pv -o wide
NAME                                                                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                         STORAGECLASS   REASON   AGE     VOLUMEMODE
ocid1.volume.oc1.ap-tokyo-1.abxhiljrkhglvrysh4aevdcrnnajmk42ha3rjozgeo76i2bybodgajxvfesa   50Gi       RWO            Delete           Bound    default/cowweb-storage-cowweb-statefulset-2   oci                     5m50s   Filesystem
ocid1.volume.oc1.ap-tokyo-1.abxhiljrnrmimc6jjgrcjdyhxbge6cyvi2ekq3qljs36vbd3dzh2j77c4lrq   50Gi       RWO            Delete           Bound    default/cowweb-storage-cowweb-statefulset-0   oci                     8m42s   Filesystem
ocid1.volume.oc1.ap-tokyo-1.abxhiljrrhgra64eelaepnmlhrq6tgs7ooajxq4rrtnf5ikmshraoa67eqra   50Gi       RWO            Delete           Bound    default/cowweb-storage-cowweb-statefulset-1   oci                     6m57s   Filesystem

OCI

1594557147069.png

pods

[opc@bastion workdir]$ kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
cowweb-statefulset-0   1/1     Running   0          25m   10.244.1.9    10.0.10.7   <none>           <none>
cowweb-statefulset-1   1/1     Running   0          23m   10.244.0.9    10.0.10.8   <none>           <none>
cowweb-statefulset-2   1/1     Running   0          22m   10.244.1.10   10.0.10.7   <none>           <none>

The volume is mounted inside the Pod:

[opc@bastion workdir]$ kubectl exec -it cowweb-statefulset-0 sh
~ $
~ $
~ $ df -hT
Filesystem           Type            Size      Used Available Use% Mounted on
overlay              overlay        38.4G      3.8G     34.6G  10% /
tmpfs                tmpfs          64.0M         0     64.0M   0% /dev
tmpfs                tmpfs           7.2G         0      7.2G   0% /sys/fs/cgroup
/dev/sdb             ext4           49.1G     52.0M     46.5G   0% /persistent-volume
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /dev/termination-log
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /etc/resolv.conf
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /etc/hostname
/dev/sda3            xfs            38.4G      3.8G     34.6G  10% /etc/hosts
shm                  tmpfs          64.0M         0     64.0M   0% /dev/shm
tmpfs                tmpfs           7.2G     12.0K      7.2G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                tmpfs           7.2G         0      7.2G   0% /proc/acpi
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/kcore
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/keys
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/timer_list
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                tmpfs           7.2G         0      7.2G   0% /proc/scsi
tmpfs                tmpfs           7.2G         0      7.2G   0% /sys/firmware
