Installing the MinIO Object Storage on Kubernetes in Distributed Mode

Posted at 2022-05-26

Overview

These are the steps for building the S3-compatible MinIO in a distributed configuration on Kubernetes.
For the Kubernetes StorageClass, RBD (Ceph Block Device) is created with Rook.

Aside

MinIO is not pronounced "minio".
Since it is "Min IO", it is apparently pronounced "min-eye-oh".

What is MinIO

High Performance,
Kubernetes-Friendly
Object Storage

In other words, it is a high-performance object storage that plays nicely with Kubernetes.

Environment

Multiple EC2 instances are placed in the same subnet on AWS.
This time there are three, but the procedure does not change if the number of instances grows.
All operations are performed on the Master.

Hosts

hostname  IP address   Role
minio1    192.168.1.1  Master
minio2    192.168.1.2  Worker
minio3    192.168.1.3  Worker

Versions

Name        Version
OS          Linux version 4.14.154-128.181.amzn2.x86_64
Docker      18.09.9-ce
Kubernetes  1.17.2-0
Flannel     0.11.0
Helm        3.0.3
Rook        1.2.3
Ceph        14.2.6
MinIO       2020-01-16T22:40:29Z

Prerequisites

Docker and Kubernetes must already be installed.

Reference: Kubernetes 1.17 on Amazon Linux 2
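
As a quick sanity check before starting (a minimal sketch; the exact output depends on your environment), confirm that all three nodes have joined the cluster and that Docker is running:

$ kubectl get nodes -o wide
$ docker version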

Procedure

Install Helm


$ wget https://get.helm.sh/helm-v3.0.3-linux-amd64.tar.gz
~
~ (omitted)
~
Saving to: ‘helm-v3.0.3-linux-amd64.tar.gz’
100%[=====================================>] 12,102,556  64.2MB/s   in 0.2s 

$ tar xvfz helm-v3.0.3-linux-amd64.tar.gz 
$ sudo mv linux-amd64/helm /usr/local/bin/
$ helm version
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories

$ helm search repo stable |grep minio
stable/minio    5.0.6     master     MinIO is a high performance data infrastructure...

Create a StorageClass with Rook and Ceph

Create a StorageClass that uses RBD (Ceph Block Device).

Preparation

Attach an EBS volume to each Worker node.
Formatting and mounting are not required.
Reference: Attach an Amazon EBS volume to your instance
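
For reference, attaching the volume can also be done from the AWS CLI (a rough sketch; the availability zone, size, volume ID, and instance ID below are placeholders for your environment):

# Create an empty EBS volume in the same AZ as the worker node
$ aws ec2 create-volume --availability-zone ap-northeast-1a --size 10 --volume-type gp2
# Attach it to the worker instance; Rook discovers the raw, unformatted device
$ aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxxxxxxx --instance-id i-xxxxxxxxxxxxxxxxx --device /dev/sdf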

Install Rook


$ sudo yum install -y git
$ git clone --single-branch --branch master https://github.com/rook/rook.git
$ cd /home/ec2-user/rook/cluster/examples/kubernetes/ceph

See here for details.

Install RBD (Ceph Block Device) on Kubernetes

$ pwd
/home/ec2-user/rook/cluster/examples/kubernetes/ceph
$ kubectl create -f common.yaml
namespace/rook-ceph created
~
~ (omitted)
~
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created

$ kubectl create -f operator.yaml
deployment.apps/rook-ceph-operator created

$ cp -p cluster.yaml my-cluster.yaml

Edit my-cluster.yaml to match your environment and configuration.

This time it is run with the following file.

my-cluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.6
    allowUnsupported: true
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mon:
    count: 1
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  crashCollector:
    disable: false
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
      journalSizeMB: "1024"  # this value can be removed for environments with normal sized disks (20 GB or larger)
      osdsPerDevice: "1" # this value can be overridden at the node or device level
    directories:
    - path: /var/lib/rook


$ kubectl apply -f my-cluster.yaml 
cephcluster.ceph.rook.io/rook-ceph created

$ watch kubectl get pod -n rook-ceph
Every 2.0s: kubectl get pod -n rook-ceph
csi-cephfsplugin-bbtkx                                  3/3     Running     0          5m13s
csi-cephfsplugin-nrfqk                                  3/3     Running     0          5m13s
csi-cephfsplugin-provisioner-7c7f7f9d5f-4tsk8           5/5     Running     0          5m13s
csi-cephfsplugin-provisioner-7c7f7f9d5f-zlftt           5/5     Running     0          5m13s
csi-cephfsplugin-v7kzn                                  3/3     Running     0          5m13s
csi-rbdplugin-bqzxz                                     3/3     Running     0          5m13s
csi-rbdplugin-nfxct                                     3/3     Running     0          5m13s
csi-rbdplugin-provisioner-696474ffdb-6hrk6              6/6     Running     0          5m13s
csi-rbdplugin-provisioner-696474ffdb-q5qh9              6/6     Running     0          5m13s
csi-rbdplugin-wd7pm                                     3/3     Running     0          5m13s
rook-ceph-crashcollector-postgresql2-7d75945f8f-t7l9w   1/1     Running     0          3m34s
rook-ceph-crashcollector-postgresql3-6c87d48d6b-xmcb2   1/1     Running     0          4m8s
rook-ceph-crashcollector-postgresql4-ff79fbdf9-sb8qh    1/1     Running     0          39s
rook-ceph-mgr-a-765cc44b98-mxs7z                        1/1     Running     0          3m59s
rook-ceph-mon-a-6754f4b4f8-759jx                        1/1     Running     0          4m8s
rook-ceph-operator-678887c8d-qmcrg                      1/1     Running     0          17m
rook-ceph-osd-0-74478d7865-647gx                        1/1     Running     0          3m34s
rook-ceph-osd-1-684b5bdf65-9lfgn                        1/1     Running     0          3m33s
rook-ceph-osd-2-5df6878f89-jdpqf                        1/1     Running     0          39s
rook-ceph-osd-prepare-postgresql2-26lsc                 0/1     Completed   0          52s
rook-ceph-osd-prepare-postgresql3-scvp8                 0/1     Completed   0          50s
rook-ceph-osd-prepare-postgresql4-828pd                 0/1     Completed   0          48s
rook-discover-5kk9t                                     1/1     Running     0          17m
rook-discover-6jxhk                                     1/1     Running     0          17m
rook-discover-wwvht                                     1/1     Running     0          17m

A large number of pods are created.
One [rook-ceph-osd-] pod should be created for each attached EBS volume,
so wait until they are all Running.
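
If you would rather not watch by hand, something like the following should also work (a sketch; the label selector app=rook-ceph-osd and the timeout are assumptions about this Rook version):

$ kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-osd --timeout=600s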

Verify Ceph (install the Toolbox)

$ pwd
/home/ec2-user/rook/cluster/examples/kubernetes/ceph
$ kubectl create -f toolbox.yaml
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
[root@rook-ceph-tools-7f96779fb9-pwf4p /]#

By running Ceph commands inside the Toolbox pod, you can check the cluster status and capacity.
The ones used most often are [ceph df] and [ceph status].
See here for command details.
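
For example, inside the Toolbox pod (commands only; output omitted):

# Cluster health and a summary of mons/osds/pools
ceph status
# Overall and per-pool capacity usage
ceph df
# Per-OSD usage and placement
ceph osd df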

Create the StorageClass

Create storageclass.yaml by referring to this.

storageclass.yaml

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    clusterID: rook-ceph
    pool: replicapool
    imageFormat: "2"
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete


$ kubectl apply -f storageclass.yaml
$ kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           false                  10s
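
To confirm that the StorageClass can actually provision volumes, a quick throwaway PVC can be created (a sketch; the name test-pvc is just an example and the claim can be deleted afterwards):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi
EOF
$ kubectl get pvc test-pvc
$ kubectl delete pvc test-pvc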

Install MinIO

Create values.yaml by referring to this.

values.yaml

clusterDomain: cluster.local
image:
  repository: minio/minio
  pullPolicy: IfNotPresent
mcImage:
  repository: minio/mc
  pullPolicy: IfNotPresent
mode: distributed
drivesPerNode: 1
replicas: 4
persistence:
  enabled: true
  storageClass: "rook-ceph-block"
  accessMode: ReadWriteOnce
  size: 1Gi
service:
  type: NodePort
  port: 9000
s3gateway:
  enabled: true
  replicas: 4

Create the Namespace


$ kubectl create ns minio
namespace/minio created

Install MinIO with Helm

$ helm install -f values.yaml --namespace minio --generate-name stable/minio
NAME: minio-1580963826
LAST DEPLOYED: Thu Feb  6 04:37:08 2020
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio-1580963826.minio.svc.cluster.local

To access Minio from localhost, run the below commands:

  1. export POD_NAME=$(kubectl get pods --namespace minio -l "release=minio-1580963826" -o jsonpath="{.items[0].metadata.name}")

  2. kubectl port-forward $POD_NAME 9000 --namespace minio

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access Minio server on http://localhost:9000. Follow the below steps to connect to Minio server with mc client:

  1. Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide

  2. mc config host add minio-1580963826-local http://localhost:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY S3v4

  3. mc ls minio-1580963826-local

Alternately, you can use your browser or the Minio SDK to access the server - https://docs.minio.io/categories/17

The command output shows how to connect with mc (the MinIO Client), so run those commands as-is.
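
Once the host is registered, basic operations look like this (a sketch; the bucket name test-bucket and the file test.txt are just examples):

# Create a bucket
$ mc mb minio-1580963826-local/test-bucket
# Upload a file
$ mc cp test.txt minio-1580963826-local/test-bucket/
# List the bucket contents
$ mc ls minio-1580963826-local/test-bucket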

Check in the browser


$ kubectl get svc -n minio
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio-1580963826       NodePort    10.101.47.122   <none>        9000:31311/TCP   4h5m
minio-1580963826-svc   ClusterIP   None            <none>        9000/TCP         4h5m

After confirming that TYPE is NodePort, you can reach the MinIO Browser by opening http://<IP of the Master or any Worker>:31311 in a browser.
The MinIO Browser also lets you create buckets and upload/download files.
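
If you do not want to read the port off the table above, the assigned NodePort can also be pulled out directly (a sketch; the service name comes from the Helm release name and will differ in your environment):

$ kubectl get svc -n minio minio-1580963826 -o jsonpath='{.spec.ports[0].nodePort}'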

Check with the AWS CLI

Because MinIO is S3-compatible, you can also connect with the AWS CLI. Run aws configure with the following credentials, then verify.

Access Key: AKIAIOSFODNN7EXAMPLE
Secret Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY


$ aws --endpoint-url http://<IP of the Master or any Worker>:31311 s3 ls
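
From there, the usual S3 operations work against the same endpoint (a sketch; test-bucket and test.txt are example names):

# Create a bucket
$ aws --endpoint-url http://<IP of the Master or any Worker>:31311 s3 mb s3://test-bucket
# Upload a file
$ aws --endpoint-url http://<IP of the Master or any Worker>:31311 s3 cp test.txt s3://test-bucket/
# List its contents
$ aws --endpoint-url http://<IP of the Master or any Worker>:31311 s3 ls s3://test-bucket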
