
Building a Kubernetes Cluster on VMware Workstation as an On-Premises Setup (RHEL 9) - (2) Installing Rook-Ceph -

Posted at 2025-05-13

Introduction

Notes on installing Rook-Ceph on a Kubernetes cluster. The deployment environment is set up assuming an on-premises configuration.

The following will be set up:

  • Ceph File System
  • Ceph Block Device
  • Ceph Object Store

Software used

  • VMware Workstation 17 Pro (Windows 11 / X86_64)
  • RHEL 9.5 (VM)
  • Dnsmasq (2.79)
  • HA-Proxy (1.8.27)
  • Kubernetes (v1.32)
  • CRI-O (v1.32)
  • Calico (v3.29.3)
  • MetalLB (v0.14.9)
  • Rook-Ceph (v1.17.1)
  • Ceph (v19.2.2)
  • amazon/aws-cli

Kubernetes cluster configuration

The Kubernetes cluster already built in the following article is used.

Adding virtual hard disks to the worker nodes

The target nodes are as follows:

  • k8s-worker0.test.k8s.local
  • k8s-worker1.test.k8s.local
  • k8s-worker2.test.k8s.local

Example settings for the added disk

  • Disk type: SCSI
  • Size: 5 GB

(Bonus) Fixing the boot disk order

Adding a disk can change the boot order, so here is a note on how to fix it if that happens.

(1) After powering on the VM, press the "Esc" key to enter the BIOS screen
(2) "Boot Menu" => [Enter Setup]
(3) "Boot" => "Hard Drive"
(4) Select the (original) boot disk and move it up with <+/->
(5) "Exit" => "Exit Saving Changes"
(6) "Setup Confirmation" => [Yes]

Verification

Check the added disk
[k8s-worker0 ~]$ lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0  100G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   99G  0 part
  ├─rhel-root 253:0    0 65.2G  0 lvm  /var/lib/containers/storage/overlay
  │                                    /
  ├─rhel-swap 253:1    0    2G  0 lvm
  └─rhel-home 253:2    0 31.8G  0 lvm  /home
sdb             8:16   0    5G  0 disk ★ added disk
sr0            11:0    1 1024M  0 rom

Check k8s-worker1 and k8s-worker2 in the same way.

※ The added disk (sdb) must not be mounted.
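A quick way to double-check this is to query sdb directly; a minimal sketch (an empty FSTYPE/MOUNTPOINTS column and no findmnt output are the expected result):

[k8s-worker0 ~]$ lsblk -f /dev/sdb
[k8s-worker0 ~]$ findmnt /dev/sdb || echo "sdb is not mounted"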

Preliminary checks

Confirm that lvm2 is installed
[k8s-worker0 ~]$ rpm -qa |grep lvm2
lvm2-libs-2.03.24-2.el9.x86_64
lvm2-2.03.24-2.el9.x86_64
udisks2-lvm2-2.9.4-11.el9.x86_64

Installing Rook-Ceph

Install Rook
[mng ~]$ sudo dnf install -y git

[mng ~]$ git clone --single-branch --branch v1.17.1 https://github.com/rook/rook.git

[mng ~]$ cd rook/deploy/examples/
 
[mng examples]$ kubectl apply -f crds.yaml -f common.yaml -f operator.yaml
Check
[mng examples]$ kubectl -n rook-ceph get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-676f8f8ffd-vp6gs   1/1     Running   0          2m41s
[mng ~]$ cd rook/deploy/examples/

[mng example]$ vi cluster.yaml
cluster.yaml
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
...
  mon:
    count: 3
    allowMultiplePerNode: false
...
  mgr:
    count: 2
...
  storage:
    useAllNodes: true   # target all nodes
    useAllDevices: true # automatically use every unused disk (sdb in this case)
...
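For reference, if you do not want Rook to claim every unused disk, the storage section also accepts a deviceFilter regex; a minimal excerpt (an assumption based on the Rook documentation, not applied in this build):

cluster.yaml (alternative, not used here)
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sdb$"  # only devices whose names match this regex are used for OSDs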
Install the Ceph cluster
[mng example]$ kubectl apply -f cluster.yaml
Check (after a few minutes... this takes a while)
[mng example]$ kubectl -n rook-ceph get pod -o wide
NAME                                                    READY   STATUS      RESTARTS        AGE   IP               NODE          NOMINATED NODE   READINESS GATES
csi-cephfsplugin-4kdfc                                  3/3     Running     1 (27m ago)     27m   10.0.0.157       k8s-worker2   <none>           <none>
csi-cephfsplugin-b7652                                  3/3     Running     1 (27m ago)     27m   10.0.0.155       k8s-worker0   <none>           <none>
csi-cephfsplugin-provisioner-64ff4dcc86-fcwvb           6/6     Running     2 (25m ago)     27m   172.20.194.67    k8s-worker1   <none>           <none>
csi-cephfsplugin-provisioner-64ff4dcc86-p7fsh           6/6     Running     1 (27m ago)     27m   172.30.126.1     k8s-worker2   <none>           <none>
csi-cephfsplugin-x567t                                  3/3     Running     1 (27m ago)     27m   10.0.0.156       k8s-worker1   <none>           <none>
csi-rbdplugin-d7sgh                                     3/3     Running     1 (27m ago)     27m   10.0.0.156       k8s-worker1   <none>           <none>
csi-rbdplugin-ldkrg                                     3/3     Running     1 (27m ago)     27m   10.0.0.157       k8s-worker2   <none>           <none>
csi-rbdplugin-provisioner-7cbd54db94-g49fq              6/6     Running     1 (27m ago)     27m   172.20.194.68    k8s-worker1   <none>           <none>
csi-rbdplugin-provisioner-7cbd54db94-l6k2h              6/6     Running     3 (23m ago)     27m   172.23.229.131   k8s-worker0   <none>           <none>
csi-rbdplugin-qsvr9                                     3/3     Running     1 (27m ago)     27m   10.0.0.155       k8s-worker0   <none>           <none>
rook-ceph-crashcollector-k8s-worker0-7ff6f9d799-hhs25   1/1     Running     0               19m   172.23.229.134   k8s-worker0   <none>           <none>
rook-ceph-crashcollector-k8s-worker1-55cfdf86f4-2ctfp   1/1     Running     0               18m   172.20.194.76    k8s-worker1   <none>           <none>
rook-ceph-crashcollector-k8s-worker2-77f749fc89-hsfbn   1/1     Running     0               19m   172.30.126.10    k8s-worker2   <none>           <none>
rook-ceph-exporter-k8s-worker0-7456d45dd9-psgjm         1/1     Running     0               19m   172.23.229.135   k8s-worker0   <none>           <none>
rook-ceph-exporter-k8s-worker1-5bc668cb8-b89x6          1/1     Running     0               18m   172.20.194.77    k8s-worker1   <none>           <none>
rook-ceph-exporter-k8s-worker2-554cdc747f-wznbf         1/1     Running     0               19m   172.30.126.11    k8s-worker2   <none>           <none>
rook-ceph-mgr-a-9c478d87-swllz                          3/3     Running     0               98s   172.23.229.138   k8s-worker0   <none>           <none>
rook-ceph-mgr-b-8b8dbd5d4-2pddq                         3/3     Running     0               20m   172.30.126.5     k8s-worker2   <none>           <none>
rook-ceph-mon-a-67bdf7bfc7-gsv7x                        2/2     Running     0               26m   172.30.126.4     k8s-worker2   <none>           <none>
rook-ceph-mon-b-5f64d68b48-fzgv8                        2/2     Running     0               20m   172.23.229.133   k8s-worker0   <none>           <none>
rook-ceph-mon-c-5966ff8f88-jngq8                        2/2     Running     7 (110s ago)    20m   172.20.194.70    k8s-worker1   <none>           <none>
rook-ceph-operator-676f8f8ffd-vp6gs                     1/1     Running     0               61m   172.30.126.0     k8s-worker2   <none>           <none>
rook-ceph-osd-0-56ff568c7-z5fmx                         2/2     Running     0               19m   172.23.229.137   k8s-worker0   <none>           <none>
rook-ceph-osd-1-7b6844dcf8-kg4hc                        2/2     Running     0               19m   172.30.126.9     k8s-worker2   <none>           <none>
rook-ceph-osd-2-c9c9b4fd6-s6lbw                         2/2     Running     4 (5m54s ago)   18m   172.20.194.75    k8s-worker1   <none>           <none>
rook-ceph-osd-prepare-k8s-worker0-bmmwf                 0/1     Completed   0               19m   172.23.229.136   k8s-worker0   <none>           <none>
rook-ceph-osd-prepare-k8s-worker1-w5dml                 0/1     Completed   0               19m   172.20.194.74    k8s-worker1   <none>           <none>
rook-ceph-osd-prepare-k8s-worker2-bpj5x                 0/1     Completed   0               19m   172.30.126.8     k8s-worker2   <none>           <none>

Check the installation log (target disk: sdb)
[mng example]$ kubectl -n rook-ceph get pod -l app=rook-ceph-osd-prepare -o wide
NAME                                      READY   STATUS      RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
rook-ceph-osd-prepare-k8s-worker0-bmmwf   0/1     Completed   0          21m   172.23.229.136   k8s-worker0   <none>           <none>
rook-ceph-osd-prepare-k8s-worker1-w5dml   0/1     Completed   0          20m   172.20.194.74    k8s-worker1   <none>           <none>
rook-ceph-osd-prepare-k8s-worker2-bpj5x   0/1     Completed   0          20m   172.30.126.8     k8s-worker2   <none>           <none>

[mng rook-ceph-cluster]$ kubectl -n rook-ceph logs rook-ceph-osd-prepare-k8s-worker0-bmmwf provision |less
...
2025-04-07 17:07:39.288163 I | cephosd: device "sdb" is available.
...
Check the Services
kubectl -n rook-ceph get svc -o wide
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE   SELECTOR
rook-ceph-exporter        ClusterIP      10.98.241.113   <none>        9926/TCP            35h   app=rook-ceph-exporter,rook_cluster=rook-ceph
rook-ceph-mgr             ClusterIP      10.102.117.52   <none>        9283/TCP            35h   app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mgr-dashboard   ClusterIP      10.101.88.188   <none>        8443/TCP            35h   app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mon-a           ClusterIP      10.97.133.0     <none>        6789/TCP,3300/TCP   35h   app=rook-ceph-mon,ceph_daemon_id=a,mon=a,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-mon-b           ClusterIP      10.99.168.208   <none>        6789/TCP,3300/TCP   35h   app=rook-ceph-mon,ceph_daemon_id=b,mon=b,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-mon-c           ClusterIP      10.97.123.26    <none>        6789/TCP,3300/TCP   35h   app=rook-ceph-mon,ceph_daemon_id=c,mon=c,mon_cluster=rook-ceph,rook_cluster=rook-ceph

Installing the Rook Toolbox

Deploy a Pod for running tools such as the ceph CLI. For now it is deployed with replicas set to 1 (toolbox.yaml).

Install the Toolbox
[mng ~]$ cd rook/deploy/examples/
 
[mng examples]$ kubectl apply -f toolbox.yaml

[mng examples]$ kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
deployment "rook-ceph-tools" successfully rolled out

[mng examples]$ kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE          NOMINATED NODE   READINESS GATES
rook-ceph-tools-7b75b967db-h65ww   1/1     Running   0          2m2s   172.20.194.82   k8s-worker1   <none>           <none>
Connect to the toolbox Pod and check the cluster status
[mng ~]$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-5.1$ ceph status
  cluster:
    id:     7eda17b1-3eba-44a2-9b23-a68b71b5fe4a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 13m)
    mgr: b(active, since 12m), standbys: a
    osd: 3 osds: 3 up (since 11m), 3 in (since 92m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   109 MiB used, 15 GiB / 15 GiB avail
    pgs:     1 active+clean

bash-5.1$ ceph osd status
ID  HOST          USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-worker0  34.1M  5085M      0        0       0        0   exists,up
 1  k8s-worker2  37.0M  5082M      0        0       0        0   exists,up
 2  k8s-worker1  38.1M  5081M      0        0       0        0   exists,up

bash-5.1$ ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
hdd    15 GiB  15 GiB  109 MiB   109 MiB       0.71
TOTAL  15 GiB  15 GiB  109 MiB   109 MiB       0.71

--- POOLS ---
POOL  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr   1    1  449 KiB        2  1.3 MiB      0    4.7 GiB

bash-5.1$ ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME             STATUS  REWEIGHT  PRI-AFF
-1         0.01469  root default
-3         0.00490      host k8s-worker0
 0    hdd  0.00490          osd.0             up   1.00000  1.00000
-7         0.00490      host k8s-worker1
 2    hdd  0.00490          osd.2             up   1.00000  1.00000
-5         0.00490      host k8s-worker2
 1    hdd  0.00490          osd.1             up   1.00000  1.00000

bash-5.1$ exit

Installing the Ceph File System

Define the Ceph File System
[mng ~]$ cd rook/deploy/examples/

[mng examples]$ vi filesystem.yaml
filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
...
  dataPools:
    - replicated:
        size: 3
...
  metadataServer:
    activeCount: 1
    activeStandby: true
...

※ MDS (Metadata Server): manages metadata for files and directories (e.g., the directory hierarchy)

Install
[mng examples]$ kubectl apply -f filesystem.yaml
Check the MDS Pods
[mng examples]$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-76fd55bff9-z28rb   2/2     Running   0          17s
rook-ceph-mds-myfs-b-7bfdd88975-52rzw   2/2     Running   0          15s
Check the MDS status
[mng examples]$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-5.1$ ceph status
  cluster:
    id:     7eda17b1-3eba-44a2-9b23-a68b71b5fe4a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 26m)
    mgr: a(active, since 26m), standbys: b
    mds: 1/1 daemons up, 1 hot standby ★
    osd: 3 osds: 3 up (since 26m), 3 in (since 31h)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 233 pgs
    objects: 412 objects, 526 KiB
    usage:   174 MiB used, 15 GiB / 15 GiB avail
    pgs:     233 active+clean

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
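The MDS layout of a specific filesystem can also be inspected from the toolbox; a minimal sketch, assuming the filesystem name myfs defined above:

bash-5.1$ ceph fs status myfs   # shows the active MDS, its standby, and the metadata/data pools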

Create a StorageClass for the Ceph File System

Define the StorageClass
[mng ~]$ cd rook/deploy/examples/csi/cephfs

[mng cephfs]$ vi storageclass.yaml
storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
...
parameters:
  fsName: myfs
  clusterID: rook-ceph
...
Apply
[mng cephfs]$ kubectl apply -f storageclass.yaml
Check
[mng cephfs]$ kubectl -n rook-ceph get cephfilesystem
NAME   ACTIVEMDS   AGE   PHASE
myfs   1           13m   Ready

Verifying the Ceph File System

Run a file read/write test against a directory mounted as a shared filesystem, between Pods running on different worker nodes.

[Figure: cephfs1.png]

Create a PVC (Persistent Volume Claim)

cephfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 1Gi
Apply
[mng ceph-test]$ kubectl apply -f cephfs-pvc.yaml

Deploy a test Pod to the k8s-worker0.test.k8s.local node

The shared filesystem is mounted under /mnt/cephfs.

cephfs-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
  namespace: default
spec:
  nodeName: k8s-worker0
  containers:
    - name: tester
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 3600"]
      volumeMounts:
        - name: cephfs-vol
          mountPath: /mnt/cephfs
  volumes:
    - name: cephfs-vol
      persistentVolumeClaim:
        claimName: cephfs-pvc
Apply
[mng ceph-test]$ kubectl apply -f cephfs-test-pod.yaml

Deploy a test Pod to the k8s-worker1.test.k8s.local node

cephfs-test-pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod-2
  namespace: default
spec:
  nodeName: k8s-worker1
  containers:
    - name: tester
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 3600"]
      volumeMounts:
        - name: cephfs-vol
          mountPath: /mnt/cephfs
  volumes:
    - name: cephfs-vol
      persistentVolumeClaim:
        claimName: cephfs-pvc
Apply
[mng ceph-test]$  kubectl apply -f cephfs-test-pod-2.yaml

File read/write test on the CephFS-mounted directory

Write from the k8s-worker0 node
[mng ceph-test]$ kubectl exec -it cephfs-test-pod -- /bin/sh

/ # hostname
cephfs-test-pod
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether 5a:ed:65:c0:96:8d brd ff:ff:ff:ff:ff:ff
    inet 172.23.229.174/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::58ed:65ff:fec0:968d/64 scope link
       valid_lft forever preferred_lft forever

/ # cd /mnt/cephfs/

/mnt/cephfs # echo "hello cephfs" > hello.txt

/mnt/cephfs # exit
Read from the k8s-worker1 node
[mng ceph-test]$ kubectl exec -it cephfs-test-pod-2 -- /bin/sh
/ # hostname
cephfs-test-pod-2

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether 0e:2d:a8:84:a6:9f brd ff:ff:ff:ff:ff:ff
    inet 172.20.194.101/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c2d:a8ff:fe84:a69f/64 scope link
       valid_lft forever preferred_lft forever

/ # cd /mnt/cephfs/

/mnt/cephfs # ls
hello.txt

/mnt/cephfs # cat hello.txt
hello cephfs

/mnt/cephfs # exit
Delete the test Pods
[mng ceph-test]$ kubectl delete pod cephfs-test-pod
[mng ceph-test]$ kubectl delete pod cephfs-test-pod-2

Installing the Ceph Block Device

Define the Ceph block pool and StorageClass
[mng ~]$ cd rook/deploy/examples/csi/rbd

[mng rbd]$ vi storageclass.yaml
storageclass.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
...
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
...
Install
[mng rbd]$ kubectl apply -f storageclass.yaml
Check
[mng rbd]$ kubectl get cephblockpool -n rook-ceph -o wide
NAME          PHASE   TYPE         FAILUREDOMAIN   REPLICATION   EC-CODINGCHUNKS   EC-DATACHUNKS   AGE
replicapool   Ready   Replicated   host            3             0                 0               85s

[mng rbd]$ kubectl get storageclass -n rook-ceph
NAME                      PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block           rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   2m17s
rook-cephfs               rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   3d19h

Verifying the Ceph Block Device

Create a PVC (Persistent Volume Claim)

rbd-test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
Apply
[mng ceph-test]$ kubectl apply -f rbd-test-pvc.yaml
Check
[mng ceph-test]$ kubectl get pvc -o wide
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE   VOLUMEMODE
cephfs-pvc     Bound    pvc-f48cc70f-e92d-474d-82f0-10e6cc0a3084   1Gi        RWX            rook-cephfs       <unset>                 31h   Filesystem
rbd-test-pvc   Bound    pvc-4a77879c-66e7-4616-a92f-9f591fb9d668   1Gi        RWO            rook-ceph-block   <unset>                 48s   Filesystem

Create a test Pod

rbd-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: [ "sh", "-c", "sleep 3600" ]
      volumeMounts:
        - mountPath: "/data"
          name: rbd-storage
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: rbd-test-pvc
Apply
[mng ceph-test]$ kubectl apply -f rbd-test-pod.yaml
Check
[mng ceph-test]$ kubectl get pod rbd-test-pod -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
rbd-test-pod   1/1     Running   0          32s   172.23.229.134   k8s-worker0   <none>           <none>

Verification

Run the test
[mng ceph-test]$ kubectl exec -it rbd-test-pod -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  65.1G      9.4G     55.8G  14% /
tmpfs                    64.0M         0     64.0M   0% /dev
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                   725.2M     24.9M    700.3M   3% /etc/resolv.conf
tmpfs                   725.2M     24.9M    700.3M   3% /etc/hostname
tmpfs                   725.2M     24.9M    700.3M   3% /run/.containerenv
/dev/rbd0               973.4M     28.0K    957.4M   0% /data ★
/dev/mapper/rhel-root
                         65.1G      9.4G     55.8G  14% /etc/hosts
/dev/mapper/rhel-root
                         65.1G      9.4G     55.8G  14% /dev/termination-log
tmpfs                   725.2M     24.9M    700.3M   3% /run/secrets
tmpfs                     3.4G     12.0K      3.4G   0% /var/run/secrets/kubernetes.io/serviceaccount
devtmpfs                  4.0M         0      4.0M   0% /proc/kcore
devtmpfs                  4.0M         0      4.0M   0% /proc/keys
devtmpfs                  4.0M         0      4.0M   0% /proc/timer_lis

/ # cd /data/

/data # cd

/data # echo hello rbd > hello.txt

/data # cat hello.txt
hello rbd
Delete the test Pod and PVC
[mng ceph-test]$ kubectl delete -f rbd-test-pod.yaml
[mng ceph-test]$ kubectl delete -f rbd-test-pvc.yaml

Installing the Ceph Object Store

Build object storage that can be accessed via the Amazon S3 API.

Define the Ceph Object Store
[mng ~]$ cd rook/deploy/examples/

[mng examples]$ vi object.yaml
object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
...
  dataPool:
    failureDomain: host
    replicated:
      size: 3
...
  gateway:
    # sslCertificateRef:
    port: 80
    # securePort: 443
    instances: 3
...

※ RGW (RADOS Gateway): the API gateway for the object store; it provides an Amazon S3-compatible interface.

※ The number of RGW instances is set to 3 so that one runs on every worker node.

Install
[mng examples]$ kubectl apply -f object.yaml
Check
[mng examples]$ kubectl -n rook-ceph get pod -l app=rook-ceph-rgw -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-76c74fbfc7-gdfcj   2/2     Running   0          13m   172.23.229.181   k8s-worker0   <none>           <none>
rook-ceph-rgw-my-store-a-76c74fbfc7-kb7lr   2/2     Running   2          19h   172.20.194.105   k8s-worker1   <none>           <none>
rook-ceph-rgw-my-store-a-76c74fbfc7-qnmdd   2/2     Running   0          13m   172.30.126.49    k8s-worker2   <none>           <none>
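The CephObjectStore resource itself can also be checked; a minimal sketch (the PHASE column is expected to reach Ready once the RGW daemons are up):

[mng examples]$ kubectl -n rook-ceph get cephobjectstore my-store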
Check the RGW status
[mng examples]$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-5.1$ ceph status
  cluster:
    id:     7eda17b1-3eba-44a2-9b23-a68b71b5fe4a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 4h)
    mgr: a(active, since 4h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 4h), 3 in (since 35h)
    rgw: 3 daemons active (3 hosts, 1 zones) ★

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 233 pgs
    objects: 412 objects, 526 KiB
    usage:   215 MiB used, 15 GiB / 15 GiB avail
    pgs:     233 active+clean

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr

Create a StorageClass for the Ceph Object Store

Define the StorageClass
[mng ~]$ cd rook/deploy/examples/

[mng examples]$ vi storageclass-bucket-delete.yaml
storageclass-bucket-delete.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
...
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
...

※ The sample is used as-is for now.

Apply
[mng examples]$ kubectl apply -f storageclass-bucket-delete.yaml
Check
[mng examples]$ kubectl -n rook-ceph get storageclass -o wide
NAME                      PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-delete-bucket   rook-ceph.ceph.rook.io/bucket   Delete          Immediate           false                  21h
rook-cephfs               rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   33h
[mng examples]$ kubectl -n rook-ceph get svc -o wide
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE   SELECTOR
rook-ceph-exporter        ClusterIP      10.98.241.113   <none>        9926/TCP            35h   app=rook-ceph-exporter,rook_cluster=rook-ceph
rook-ceph-mgr             ClusterIP      10.102.117.52   <none>        9283/TCP            35h   app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mgr-dashboard   ClusterIP      10.101.88.188   <none>        8443/TCP            35h   app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mon-a           ClusterIP      10.97.133.0     <none>        6789/TCP,3300/TCP   35h   app=rook-ceph-mon,ceph_daemon_id=a,mon=a,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-mon-b           ClusterIP      10.99.168.208   <none>        6789/TCP,3300/TCP   35h   app=rook-ceph-mon,ceph_daemon_id=b,mon=b,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-mon-c           ClusterIP      10.97.123.26    <none>        6789/TCP,3300/TCP   35h   app=rook-ceph-mon,ceph_daemon_id=c,mon=c,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-rgw-my-store    ClusterIP      10.99.31.50     <none>        80/TCP              21h   app=rook-ceph-rgw,ceph_daemon_id=my-store,rgw=my-store,rook_cluster=rook-ceph,rook_object_store=my-store

Create a Service for RGW (MetalLB)

Define the Service for RGW
[mng ~]$ cd ceph-test

[mng ceph-test]$ vi rgw-lb-service.yaml
rgw-lb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rgw-lb
  namespace: rook-ceph
spec:
  type: LoadBalancer
  selector:
    rgw: my-store
  ports:
    - name: http
      port: 80
      targetPort: 8080
Apply
[mng ceph-test]$ kubectl apply -f rgw-lb-service.yaml
Check the RGW Service
[mng ceph-test]$ kubectl -n rook-ceph get svc rgw-lb -o wide
NAME     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
rgw-lb   LoadBalancer   10.98.122.211   10.0.0.60     80:32224/TCP   27m   rgw=my-store
Check the RGW Pod endpoints
[mng ceph-test]$ kubectl -n rook-ceph get endpoints rgw-lb -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2025-04-09T02:29:35Z"
  creationTimestamp: "2025-04-08T07:33:46Z"
  name: rgw-lb
  namespace: rook-ceph
  resourceVersion: "102352"
  uid: 0e7e3086-520c-43fa-ac61-30c76f69360c
subsets:
- addresses:
  - ip: 172.20.194.105
    nodeName: k8s-worker1
    targetRef:
      kind: Pod
      name: rook-ceph-rgw-my-store-a-76c74fbfc7-kb7lr
      namespace: rook-ceph
      uid: e1c4c510-edda-43b6-8899-b514dcd6e5d6
  - ip: 172.23.229.181
    nodeName: k8s-worker0
    targetRef:
      kind: Pod
      name: rook-ceph-rgw-my-store-a-76c74fbfc7-gdfcj
      namespace: rook-ceph
      uid: c67f193a-09d1-41eb-aa1f-0862554fc886
  - ip: 172.30.126.49
    nodeName: k8s-worker2
    targetRef:
      kind: Pod
      name: rook-ceph-rgw-my-store-a-76c74fbfc7-qnmdd
      namespace: rook-ceph
      uid: 739c9783-95a3-4d1b-b48b-99e5da089b2e
  ports:
  - name: http
    port: 8080 ★ the RGW Pod's listening port
    protocol: TCP
Connectivity check
[mng ceph-test]$ curl http://apps.test.k8s.local/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Create a test bucket with an ObjectBucketClaim

Define the ObjectBucketClaim
[mng ~]$ cd rook/deploy/examples/

[mng examples]$ vi object-bucket-claim-delete.yaml
object-bucket-claim-delete.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  #bucketName:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
  additionalConfig:
    # To set for quota for OBC
    #maxObjects: "1000"
    #maxSize: "2G"

※ The sample is used as-is for now.

Apply
[mng examples]$ kubectl apply -f object-bucket-claim-delete.yaml
Check (ConfigMap)
[mng examples]$ kubectl -n default get cm ceph-delete-bucket -o yaml
apiVersion: v1
data:
  BUCKET_HOST: rook-ceph-rgw-my-store.rook-ceph.svc
  BUCKET_NAME: ceph-bkt-85b77650-f56a-476a-85ce-db4660979da6
  BUCKET_PORT: "80"
  BUCKET_REGION: ""
  BUCKET_SUBREGION: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2025-04-08T07:26:22Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
    bucket-provisioner: rook-ceph.ceph.rook.io-bucket
  name: ceph-delete-bucket
  namespace: default
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ObjectBucketClaim
    name: ceph-delete-bucket
    uid: 1d4301db-c55d-4074-8fbc-4424005d8bfa
  resourceVersion: "66208"
  uid: 225f672b-c65a-48aa-a67e-069434aa5a45

Credentials for accessing the bucket are also created automatically, so check them as well.

Check (Secret)
[mng examples]$ kubectl -n default get secret ceph-delete-bucket -o yaml
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: MEI5SkY0xxxxxxxxxxxxxxxxUUY=
  AWS_SECRET_ACCESS_KEY: TThRdlpxxxxxxxxxxxxxxxxrYmswM2VrUxxxxxxxxxxxxxxxxmxDenFGZA==
kind: Secret
metadata:
  creationTimestamp: "2025-04-08T07:26:22Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
    bucket-provisioner: rook-ceph.ceph.rook.io-bucket
  name: ceph-delete-bucket
  namespace: default
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ObjectBucketClaim
    name: ceph-delete-bucket
    uid: 1d4301db-c55d-4074-8fbc-4424005d8bfa
  resourceVersion: "66207"
  uid: 841c63cf-8550-4b50-8d1e-9def007f90cb
type: Opaque

Verifying bucket access

Verify bucket access using the aws cli.

Install the aws cli
[mng ~]$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

[mng ~]$ unzip awscliv2.zip

[mng ~]$ sudo ./aws/install

[mng ~]$ aws --version
aws-cli/2.27.10 Python/3.13.2 Linux/4.18.0-553.50.1.el8_10.x86_64 exe/x86_64.rhel.8
Retrieve the RGW access information
# Bucket name
[mng ~]$ kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_NAME}'
ceph-bkt-85b77650-f56a-476a-85ce-db4660979da

# Access key ID
[mng ~]$ kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
0B9JxxxxxxxxxxxxDQF

# Secret access key
[mng ~]$ kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
M8Qvxxxxxxxxxxxxxxxx03ekSgYxxxxxxxxCzqFd
Create an aws cli profile for RGW access
[mng ~]$ aws configure --profile rook
AWS Access Key ID [None]: 0B9JxxxxxxxxxxxxDQF
AWS Secret Access Key [None]: M8Qvxxxxxxxxxxxxxxxx03ekSgYxxxxxxxxCzqFd
Default region name [None]: us-east-1
Default output format [None]: json

※ For the region name, anything like "us-east-1" is fine.
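As an alternative to the interactive aws configure, the same values can be exported as environment variables taken straight from the Secret above; a minimal sketch:

[mng ~]$ export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
[mng ~]$ export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
[mng ~]$ export AWS_DEFAULT_REGION=us-east-1
[mng ~]$ aws --endpoint-url http://apps.test.k8s.local s3 ls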

Check access to RGW with the aws cli

# List buckets
[mng ~]$ aws --profile rook --endpoint-url http://apps.test.k8s.local s3 ls
2025-04-08 16:26:22 ceph-bkt-85b77650-f56a-476a-85ce-db4660979da6

# Upload a file
[mng ~]$ echo "hello, world" > hello_world.txt
[mng ~]$ aws --profile rook --endpoint-url http://apps.test.k8s.local s3 cp ./hello_world.txt s3://ceph-bkt-85b77650-f56a-476a-85ce-db4660979da6/

# List objects
[mng ~]$ aws --profile rook --endpoint-url http://apps.test.k8s.local s3 ls s3://ceph-bkt-85b77650-f56a-476a-85ce-db4660979da6
2025-04-08 16:47:30         13 hello_world.txt

Create a new user and add a bucket

Define the user
[mng ~]$ cd rook/deploy/examples/

[mng examples]$ vi object-user.yaml
object-user.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: testuser1
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "test user 1"
  # quotas:
  #   maxBuckets: 100
  #   maxSize: 10G
  #   maxObjects: 10000
  # capabilities:
  #   user: "*"
  #   bucket: "*"
  #   metadata: "*"
  #   usage: "*"
  #   zone: "*"
  # clusterNamespace: rook-ceph
Apply
[mng examples]$ kubectl apply -f object-user.yaml
Check
[mng examples]$ kubectl -n rook-ceph get secret
NAME                                       TYPE                 DATA   AGE
...
rook-ceph-object-user-my-store-testuser1   kubernetes.io/rook   3      8m9s
...

[mng examples]$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-testuser1 -o yaml
apiVersion: v1
data:
  AccessKey: VTFMxxxxxxxxxxxxxxxxTkJMRkE=
  Endpoint: aHR0cxxxxxxxxxxxxxxxxC1yZ3ctxxxxxxxxxxxxxxxxay1jZXBoLnN2Yzo4MA==
  SecretKey: RjVxxxxxxxxxxxxxxxxwSU5pWUI0bxxxxxxxxxxxxlN1cQ==
kind: Secret
metadata:
  creationTimestamp: "2025-04-09T05:32:50Z"
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
    user: testuser1
  name: rook-ceph-object-user-my-store-testuser1
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: ceph.rook.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: CephObjectStoreUser
    name: testuser1
    uid: 3185aa8b-bab6-4935-8247-6de6bfed5055
  resourceVersion: "131611"
  uid: 41aa934b-3239-4e98-970a-5289bf543938
type: kubernetes.io/rook
Retrieve the RGW access information
# Access key ID
[mng ~]$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-testuser1 -o jsonpath='{.data.AccessKey}' | base64 --decode
0B9JxxxxxxxxxxxxDQF

# Secret access key
[mng ~]$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-testuser1 -o jsonpath='{.data.SecretKey}' | base64 --decode
M8Qvxxxxxxxxxxxxxxxx03ekSgYxxxxxxxxCzqFd
Create an aws cli profile for RGW access
[mng ~]$ aws configure --profile testuser1
AWS Access Key ID [None]: 0B9JxxxxxxxxxxxxDQF
AWS Secret Access Key [None]: M8Qvxxxxxxxxxxxxxxxx03ekSgYxxxxxxxxCzqFd
Default region name [None]: us-east-1
Default output format [None]: json

※ For the region name, anything like "us-east-1" is fine.

Check access to RGW with the aws cli
# Create a test bucket
[mng ~]$ aws --profile testuser1 --endpoint-url http://apps.test.k8s.local s3 mb s3://bucket-for-testuser1

# List buckets
[mng ~]$ aws --profile testuser1 --endpoint-url http://apps.test.k8s.local s3 ls
2025-04-09 15:13:39 bucket-for-testuser1

Web Dashboard configuration

The Web Dashboard appears to be available by default. Add a Service (MetalLB) so that it can be reached from external clients.
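For reference, the dashboard being available out of the box comes from the dashboard block in cluster.yaml; a hedged excerpt of the Rook sample defaults (nothing was changed here for this build):

cluster.yaml (excerpt)
spec:
...
  dashboard:
    enabled: true   # serve the Ceph mgr dashboard
    ssl: true       # serve it over HTTPS with a self-signed certificate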

Define the Service
[mng ~]$ cd rook/deploy/examples/

[mng examples]$ vi dashboard-loadbalancer.yaml
dashboard-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-loadbalancer
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
    mgr_role: active
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: LoadBalancer

※ Apply the sample without any changes.

Apply
[mng examples]$  kubectl apply -f dashboard-loadbalancer.yaml
Check
[mng examples]$ kubectl -n rook-ceph get service
NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
...
rook-ceph-mgr-dashboard                ClusterIP      10.101.88.188   <none>        8443/TCP            38h
rook-ceph-mgr-dashboard-loadbalancer   LoadBalancer   10.111.122.68   10.0.0.61     8443:31034/TCP      17m
...

※ MetalLB has assigned 10.0.0.61 as the VIP.

Check the initial password for the administrator (admin)
[mng examples]$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode

From a management terminal or similar, open https://10.0.0.61:8443 in a browser.
username: admin
password: the initial password obtained above
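The VIP can also be checked quickly from the command line; a minimal sketch (-k is used because the dashboard serves a self-signed certificate by default):

[mng examples]$ curl -k -I https://10.0.0.61:8443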

[Figure: ceph_dashboard1.png]

(Bonus) Too many PGs per OSD

The Web Dashboard status showed a "Too many PGs per OSD" warning. Since no additional OSDs can be added in this environment, the number of PGs was reduced instead.

[mng ~]$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

bash-5.1$ ceph health detail
HEALTH_WARN too many PGs per OSD (318 > max 250)

bash-5.1$ ceph osd pool ls detail -f json-pretty | jq -r '.[] | "NAME: \(.pool_name)\tPG_NUM: \(.pg_num)\tPGP_NUM: \(.pgp_num)"'
NAME: .mgr      PG_NUM: 1       PGP_NUM: null
NAME: myfs-metadata     PG_NUM: 16      PGP_NUM: null
NAME: myfs-replicated   PG_NUM: 32      PGP_NUM: null
NAME: my-store.rgw.otp  PG_NUM: 8       PGP_NUM: null
NAME: .rgw.root PG_NUM: 8       PGP_NUM: null
NAME: my-store.rgw.buckets.non-ec       PG_NUM: 8       PGP_NUM: null
NAME: my-store.rgw.control      PG_NUM: 8       PGP_NUM: null
NAME: my-store.rgw.meta PG_NUM: 8       PGP_NUM: null
NAME: my-store.rgw.log  PG_NUM: 8       PGP_NUM: null
NAME: my-store.rgw.buckets.index        PG_NUM: 8       PGP_NUM: null
NAME: my-store.rgw.buckets.data PG_NUM: 128     PGP_NUM: null
NAME: default.rgw.log   PG_NUM: 32      PGP_NUM: null
NAME: default.rgw.control       PG_NUM: 32      PGP_NUM: null
NAME: default.rgw.meta  PG_NUM: 32      PGP_NUM: null

bash-5.1$ ceph osd pool set my-store.rgw.buckets.data pg_num 64
set pool 11 pg_num to 64

bash-5.1$ ceph osd pool set my-store.rgw.buckets.data pgp_num 64
set pool 11 pgp_num to 64

bash-5.1$ ceph osd pool set default.rgw.log pg_num 16
set pool 12 pg_num to 16

bash-5.1$ ceph osd pool set default.rgw.log pgp_num 16
set pool 12 pgp_num to 16

bash-5.1$ ceph health detail
HEALTH_WARN too many PGs per OSD (253 > max 250)
[WRN] TOO_MANY_PGS: too many PGs per OSD (253 > max 250)

bash-5.1$ ceph health detail
HEALTH_WARN too many PGs per OSD (252 > max 250)
[WRN] TOO_MANY_PGS: too many PGs per OSD (252 > max 250)

bash-5.1$ ceph health detail
HEALTH_WARN too many PGs per OSD (251 > max 250)
[WRN] TOO_MANY_PGS: too many PGs per OSD (251 > max 250)

bash-5.1$ ceph health detail
HEALTH_OK

※ When an OSD fails, its data is rebalanced onto the remaining OSDs. Reducing the PG count makes the distribution unit coarser, so each PG can hold more data; rebalance traffic increases and recovery may take longer, which needs careful consideration in production. Also note that changing pg_num as above triggers data re-placement, so it should be done during low-load periods.
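As an alternative to tuning pg_num by hand, Ceph's pg_autoscaler can be left to manage the PG count per pool; a minimal sketch of that option (not what was done above):

bash-5.1$ ceph osd pool set my-store.rgw.buckets.data pg_autoscale_mode on
bash-5.1$ ceph osd pool autoscale-status   # shows current vs. suggested PG counts per pool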

Updating the Dnsmasq (DNS service) configuration

Assign hostnames to the Ceph RGW and Dashboard VIPs. This work is done on the lb.test.k8s.local node.

Edit the configuration file
[lb ~]$ sudo vi /etc/dnsmasq.d/k8s.conf
k8s.conf
# VIP by MetalLB
#address=/apps.test.k8s.local/10.0.0.60
address=/rgw.test.k8s.local/10.0.0.60   # Ceph RGW
address=/ceph.test.k8s.local/10.0.0.61  # Ceph Dashboard
...
Restart the Dnsmasq service
[lb ~]$ sudo systemctl restart dnsmasq

Checking connectivity to RGW by hostname

Check the RGW Service
[mng ~]$ kubectl get svc -n rook-ceph rgw-lb
NAME     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
rgw-lb   LoadBalancer   10.98.122.211   10.0.0.60     80:32224/TCP   2d
From an external terminal
[mng ~]$ curl http://rgw.test.k8s.local
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>
From a Pod
# use the cephfs-test-pod from the CephFS test
[mng ~]$ kubectl exec -it cephfs-test-pod -- /bin/sh

~ # nslookup rgw.test.k8s.local
Server:         10.96.0.10
Address:        10.96.0.10:53
Name:   rgw.test.k8s.local
Address: 10.0.0.60

Non-authoritative answer:

~ # wget http://rgw.test.k8s.local
Connecting to rgw.test.k8s.local (10.0.0.60:80)
saving to 'index.html'
index.html           100% |*************************************************************************************|   187  0:00:00 ETA
'index.html' saved
(Bonus) Access via the RGW Service name as well
~ # nslookup rgw-lb.rook-ceph.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53
Name:   rgw-lb.rook-ceph.svc.cluster.local
Address: 10.98.122.211

~ # wget http://rgw-lb.rook-ceph.svc.cluster.local
Connecting to rgw-lb.rook-ceph.svc.cluster.local (10.98.122.211:80)
saving to 'index.html'
index.html           100% |*************************************************************************************|   187  0:00:00 ETA
'index.html' saved

TLS configuration for RGW (S3 API) access

Creating the certificates

Create a private CA and issue a server certificate for RGW.

For RHEL environments, the following is a useful reference.

Creating the private CA

Private CA
[mng ~]$ mkdir tls
[mng ~]$ cd tls

# Create the private key
[mng tls]$ openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out private_ca.key

# Create the CA certificate
[mng tls]$ openssl req -key private_ca.key -new -x509 -days 3650 -addext keyUsage=critical,keyCertSign,cRLSign -subj "/CN=private_ca" -out private_ca.crt

[mng tls]$ openssl x509 -in private_ca.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
...
        Subject: CN = private_ca
...
        X509v3 extensions:
...
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Key Usage: critical
                Certificate Sign, CRL Sign
    Signature Algorithm: ecdsa-with-SHA256
...

Creating the server certificate for RGW

Server certificate for RGW
# Create the certificate configuration
[mng tls]$ vi openssl.cnf

# Create the private key
[mng tls]$  openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out rgw.key

# Create the CSR
[mng tls]$  openssl req -new -key rgw.key -out rgw.csr -config openssl.cnf

# Create the certificate
[mng tls]$  openssl x509 -req -in rgw.csr -CA private_ca.crt -CAkey private_ca.key -CAcreateserial -out rgw.crt -days 365 -sha256 -extfile openssl.cnf -extensions server-cert
Signature ok
subject=C = JP, O = k8s, OU = test, CN = rgw
Getting CA Private Key

[mng tls]$  openssl x509 -in rgw.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
...
        Issuer: CN = private_ca
...
        Subject: C = JP, O = k8s, OU = test, CN = rgw
...
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Key Agreement
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:rgw.test.k8s.local, DNS:rgw-lb.rook-ceph.svc.cluster.local, DNS:rgw-lb, IP Address:10.0.0.60, IP Address:10.98.122.211, IP Address:127.0.0.1

Specify the SANs (alt_name) to match the Service (MetalLB) information.

openssl.cnf
[server-cert]
keyUsage = critical, digitalSignature, keyEncipherment, keyAgreement
extendedKeyUsage = serverAuth
subjectAltName = @alt_name

[req]
distinguished_name = dn
prompt = no

[dn]
C = JP
O = k8s
OU = test
CN = rgw

[alt_name]
DNS.1 = rgw.test.k8s.local # hostname for external access
DNS.2 = rgw-lb.rook-ceph.svc.cluster.local # hostname inside the cluster
DNS.3 = rgw-lb # hostname inside the cluster (same namespace)
IP.1 = 10.0.0.60 # IP for external access
IP.2 = 10.98.122.211 # ClusterIP
IP.3 = 127.0.0.1 # for access from localhost
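Before loading the certificate into Kubernetes, the chain and the SANs can be verified locally; a minimal sketch:

[mng tls]$ openssl verify -CAfile private_ca.crt rgw.crt
[mng tls]$ openssl x509 -in rgw.crt -noout -ext subjectAltName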

Create a Secret for the RGW server certificate

Create
[mng tls]$ kubectl create secret tls rgw-tls-cert --cert=rgw.crt --key=rgw.key -n rook-ceph
secret/rgw-tls-cert created
Check
[mng tls]$ kubectl get secret -n rook-ceph rgw-tls-cert -o yaml
apiVersion: v1
data:
  tls.crt: xxxx...
...
  tls.key: yyyy...
...
kind: Secret
metadata:
  creationTimestamp: "2025-05-11T07:37:25Z"
  name: rgw-tls-cert
  namespace: rook-ceph
  resourceVersion: "266739"
  uid: 36b16128-8545-41be-aaef-9588e4e29a55
type: kubernetes.io/tls

Updating the Ceph Object Store configuration

Add the TLS settings.

Add TLS settings
[mng tls]$ kubectl edit cephobjectstore my-store -n rook-ceph -o yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
...
  name: my-store
  namespace: rook-ceph
...
spec:
...
  gateway:
    instances: 3
    port: 80
    securePort: 443                   # enable port 443
    sslCertificateRef: rgw-tls-cert   # specify the server certificate Secret
...
Check
[mng tls]$ kubectl get pods -n rook-ceph -o wide |grep my-store
NAME                                                    READY   STATUS      RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-68f5777c66-m2p52               2/2     Running     0          1m14s   172.20.194.64    k8s-worker1   <none>           <none>
rook-ceph-rgw-my-store-a-68f5777c66-t4xxd               2/2     Running     0          1m46s   172.23.229.190   k8s-worker0   <none>           <none>
rook-ceph-rgw-my-store-a-68f5777c66-v22h9               2/2     Running     0          1m31s   172.30.126.40    k8s-worker2   <none>           <none>

Updating the RGW Service configuration

Add the TLS settings.

[mng tls]$ kubectl edit svc -n rook-ceph rgw-lb
apiVersion: v1
kind: Service
metadata:
...
  name: rgw-lb
  namespace: rook-ceph
...
spec:
...
  ports:
  - name: http
    nodePort: 32224
    port: 80
    protocol: TCP
    targetPort: 8080

  # Add a port entry for TLS
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
...
Check
[mng tls]$ kubectl get svc -n rook-ceph rgw-lb -o wide
NAME     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
rgw-lb   LoadBalancer   10.98.122.211   10.0.0.60     80:32224/TCP,443:31140/TCP   3d1h   rgw=my-store

Checking HTTPS access from an external client

# HTTPS access to the VIP
[mng tls]$ curl -k https://10.0.0.60
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>

# HTTPS access via the hostname
[mng tls]$ curl -k https://rgw.test.k8s.local
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>

# HTTPS access specifying the private CA certificate
[mng tls]$ curl --cacert ./private_ca.crt https://rgw.test.k8s.local
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>

# HTTPS access via the aws cli
[mng tls]$ aws --profile testuser1 --endpoint-url https://rgw.test.k8s.local --ca-bundle ./private_ca.crt s3 ls
2025-05-09 15:13:39 bucket-for-testuser1
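The certificate actually served by RGW can also be inspected directly; a minimal sketch (SNI is passed so the result matches the hostname-based access above):

[mng tls]$ openssl s_client -connect rgw.test.k8s.local:443 -servername rgw.test.k8s.local -CAfile ./private_ca.crt </dev/null | openssl x509 -noout -subject -ext subjectAltName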

Checking HTTPS access from a Pod

Create a Pod from the amazon/aws-cli image and verify access.

Create a Secret with the testuser1 user's credentials
[mng ~]$ kubectl apply -f rgw-s3-credentials.yaml
rgw-s3-credentials.yaml
apiVersion: v1
kind: Secret
metadata:
  name: rgw-s3-credentials
  namespace: rook-ceph
type: Opaque
stringData:
  credentials: |
    [default]
    aws_access_key_id = 0B9JxxxxxxxxxxxxDQF
    aws_secret_access_key = M8Qvxxxxxxxxxxxxxxxx03ekSgYxxxxxxxxCzqFd
    region = us-east-1
Create a ConfigMap for the private CA certificate
[mng ~]$ kubectl apply -f rgw-ca-cert.yaml
rgw-ca-cert.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rgw-ca-cert
  namespace: rook-ceph
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
...
    -----END CERTIFICATE-----
Create the amazon/aws-cli Pod
[mng ~]$ kubectl apply -f awscli.yaml

The Pod is created with the RGW user credentials and the private CA certificate mounted.

awscli.yaml
apiVersion: v1
kind: Pod
metadata:
  name: awscli
  namespace: rook-ceph
spec:
  containers:
  - name: aws
    image: amazon/aws-cli:latest
    command: ["sleep", "3600"]
    volumeMounts:
    - name: creds
      mountPath: /root/.aws
    - name: ca
      mountPath: /etc/ssl/certs/rgw-private-ca.crt
      subPath: ca.crt
  volumes:
  - name: creds
    secret:
      secretName: rgw-s3-credentials
  - name: ca
    configMap:
      name: rgw-ca-cert
      items:
      - key: ca.crt
        path: ca.crt
  restartPolicy: Never

Check HTTPS access from the Pod to RGW

Access RGW from the AWS CLI Pod
$ kubectl exec -n rook-ceph -it awscli -- /bin/sh
sh-4.2# whoami
root

sh-4.2# pwd
/root

sh-4.2# aws --endpoint-url https://rgw.test.k8s.local --ca-bundle /etc/ssl/certs/rgw-private-ca.crt s3 ls
2025-05-09 06:13:39 bucket-for-testuser1