Kubernetes in docker (kind) on Mac: add MetalLB and play with type: LoadBalancer and multi-node clusters the easy way

Preface

Why not Docker Desktop's Kubernetes?

  • It feels heavy. (subjective)
  • It's hard to keep the environment clean. (possibly just my own lack of skill)
  • I want to play with multi-node clusters.
  • I want to play with multiple clusters.
  • And, with luck, use it as a test environment.

Environment

  • macOS 10.14.5
  • Docker Desktop 2.1.0.3
  • go version go1.11.1 darwin/amd64

What is kind?

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

  • kind realizes a Kubernetes cluster by running each node as a Docker container.
  • How far it can go, and whether kind-specific quirks become a problem, is something to verify through continued use.
    • I haven't used it heavily yet, so I can't say.
  • There are similar tools:
    • Kubeadm-dind
      • Already retired.
    • k3d
      • I haven't used it much, but compared to kind it feels somewhat lacking. (subjective)
      • Or maybe the concept is simply different.

Installing kind alone doesn't make Services of type LoadBalancer work

  • Install MetalLB (from Google) so that type: LoadBalancer works as well.

Installing kind

  • You're good if the final kind version prints v0.5.1. (as of 2019-10-20)
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.5.1/kind-$(uname)-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/
kind version
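  • If Go is available (the environment above has go1.11.1), kind v0.5.x could also be installed with go get; a sketch of the alternative documented at the time (it installs into $(go env GOPATH)/bin, which must be on PATH):
GO111MODULE="on" go get sigs.k8s.io/kind@v0.5.1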

Create a Kubernetes cluster

  • Write a cluster config: one control-plane node and two workers.
cat <<EOF > config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
  • Create the cluster with the command below. It takes a while.
kind create cluster --name kind1 --config config.yaml
  • When the cluster is ready, it prints the following instructions; run them as told.
    • Before doing so, note the current value of $KUBECONFIG (echo $KUBECONFIG) so you can restore it later.
export KUBECONFIG="$(kind get kubeconfig-path --name="kind1")"
kubectl cluster-info
  • Get the nodes.
$ kubectl get node
NAME                  STATUS   ROLES    AGE     VERSION
kind1-control-plane   Ready    master   3m57s   v1.15.3
kind1-worker          Ready    <none>   3m17s   v1.15.3
kind1-worker2         Ready    <none>   3m17s   v1.15.3
  • List the Docker containers. Note that only the control-plane container publishes a host port (127.0.0.1:63002->6443 here); that is how kubectl on the Mac reaches the API server.
$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                  NAMES
d01aa2c77cc2        kindest/node:v1.15.3   "/usr/local/bin/entr…"   9 minutes ago       Up 9 minutes                                               kind1-worker2
a9395bd71807        kindest/node:v1.15.3   "/usr/local/bin/entr…"   9 minutes ago       Up 9 minutes                                               kind1-worker
f91f98a8f6b4        kindest/node:v1.15.3   "/usr/local/bin/entr…"   9 minutes ago       Up 9 minutes        63002/tcp, 127.0.0.1:63002->6443/tcp   kind1-control-plane
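  • Multi-cluster play, one of the goals in the preface, is just another create away. A minimal sketch (kind v0.5.x kubeconfig handling; check the actual context names yourself):
# Create a second cluster alongside kind1
kind create cluster --name kind2 --config config.yaml
# KUBECONFIG accepts a colon-separated list, so kubectl can see both clusters
export KUBECONFIG="$(kind get kubeconfig-path --name="kind1"):$(kind get kubeconfig-path --name="kind2")"
kubectl config get-contexts
# Switch clusters using a context name shown by get-contexts
kubectl config use-context <second-cluster-context>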

Check how to clean up the cluster

  • List and delete the cluster, then set KUBECONFIG back to its original value.
kind get clusters
kind delete cluster --name kind1
export KUBECONFIG=~/.kube/config

Deploying MetalLB

  • Apply the MetalLB manifests, then configure a layer2 address pool with a ConfigMap.
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
kubectl apply -f - -o yaml << 'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.2-192.168.1.254
EOF
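  • Caveat: the pool addresses must be routable from the kind nodes, i.e. they should come from the Docker network the node containers sit on (the default bridge for kind v0.5.x). A sketch under that assumption:
# Confirm the MetalLB controller and speakers are running
kubectl get pods -n metallb-system
# Find the subnet the kind nodes use, then pick an unused range from it
docker network inspect bridge -f '{{(index .IPAM.Config 0).Subnet}}'
# e.g. 172.17.0.0/16 -> a pool like 172.17.255.1-172.17.255.250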

Trying a Kubernetes tutorial

  • Deploy the sample hello-world application (5 replicas).
kubectl apply -f - -o yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: hello-world
        ports:
        - containerPort: 8080
EOF
  • Get the Pods. It takes a little while for them to become READY.
$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
hello-world-bbbb4c85d-572xj   1/1     Running   0          83s
hello-world-bbbb4c85d-bdwgb   1/1     Running   0          83s
hello-world-bbbb4c85d-fhx9z   1/1     Running   0          83s
hello-world-bbbb4c85d-gc5rb   1/1     Running   0          83s
hello-world-bbbb4c85d-gndwl   1/1     Running   0          83s
  • Check the current Services.
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21m
  • Create a Service to expose the Deployment.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
  • Confirm the Service was added.
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
my-service   LoadBalancer   10.104.220.75   192.168.1.2   8080:31116/TCP   2s
  • Get the Service details.
$ kubectl describe services my-service
Name:                     my-service
Namespace:                default
Labels:                   app.kubernetes.io/name=load-balancer-example
Annotations:              <none>
Selector:                 app.kubernetes.io/name=load-balancer-example
Type:                     LoadBalancer
IP:                       10.100.88.21
LoadBalancer Ingress:     172.17.255.1
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30268/TCP
Endpoints:                10.244.1.3:8080,10.244.1.4:8080,10.244.2.2:8080 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   2m25s  metallb-controller  Assigned IP "172.17.255.1"
  Normal  nodeAssigned  2m24s  metallb-speaker     announcing from node "kind1-worker"
  • Check it with port-forward.
kubectl port-forward svc/my-service 8080:8080
open http://localhost:8080
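  • Port-forward works, but the whole point of MetalLB is the external IP. On Docker Desktop for Mac the Docker network lives inside a VM, so that IP is not reachable from the host; one workaround is to curl it from a container on the same network. A sketch (assumes the default bridge network and the public curlimages/curl image):
# Grab the address MetalLB assigned to the Service
EXTERNAL_IP=$(kubectl get svc my-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Hit it from inside the Docker network, where it is routable
docker run --rm curlimages/curl:latest "http://${EXTERNAL_IP}:8080"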
  • Clean up.
kubectl delete services my-service
kubectl delete deployment hello-world

Trying a Kubernetes tutorial (2)

  • Deploy the Redis master.
kubectl apply -f - -o yaml << 'EOF'
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
EOF
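  • Optionally, wait for the master to finish rolling out before creating its Service:
kubectl rollout status deployment/redis-master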
  • Create the Redis master Service.
kubectl apply -f - -o yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
EOF
  • Deploy the Redis slave Pods.
kubectl apply -f - -o yaml << 'EOF'
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
EOF
  • Create the Redis slave Service.
kubectl apply -f - -o yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
EOF
  • Deploy the frontend.
kubectl apply -f - -o yaml << 'EOF'
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
EOF
  • Set type: LoadBalancer and create the frontend Service.
kubectl apply -f - -o yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  # type: NodePort 
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF
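  • The upstream tutorial suggests watching the Service until an external IP appears; MetalLB usually assigns one within seconds:
kubectl get service frontend --watch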
  • Check it with port-forward.
kubectl port-forward svc/frontend 8080:80
open http://localhost:8080
  • It's up.

(screenshot: Guestbook frontend)

  • Clean up.
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook

Trying a Kubernetes tutorial (3)

  • Create kustomization.yaml with a Secret generator for the MySQL password.
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
EOF
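  • YOUR_PASSWORD is a placeholder. To avoid typing a literal password into the file, one option is to generate it; a sketch using openssl (which ships with macOS):
# Generate a random password and write the kustomization with it
PASSWORD=$(openssl rand -base64 12)
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: mysql-pass
  literals:
  - password=${PASSWORD}
EOF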
  • Create mysql-deployment.yaml.
cat <<EOF >./mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
EOF
  • Create wordpress-deployment.yaml.
cat <<EOF >./wordpress-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
EOF
  • Append the resources to kustomization.yaml.
cat <<EOF >>./kustomization.yaml
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
EOF
  • Apply the resources from the directory containing kustomization.yaml.
kubectl apply -k ./
  • It outputs that the following resources were created.
secret/mysql-pass-bk8h8tgt5t created
service/wordpress-mysql created
service/wordpress created
deployment.apps/wordpress-mysql created
deployment.apps/wordpress created
persistentvolumeclaim/mysql-pv-claim created
persistentvolumeclaim/wp-pv-claim created
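  • For reference, kubectl kustomize (kubectl 1.14+) renders the same resources without applying them, which is handy for checking things like the generated Secret name (the bk8h8tgt5t suffix above is a content hash):
kubectl kustomize ./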
  • Check the Services and PersistentVolumeClaims.
kubectl get pvc
kubectl get svc
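  • If the PVCs were stuck in Pending, the thing to check would be the default StorageClass; kind ships one named standard, which is why the claims bind here:
kubectl get storageclass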
  • Check WordPress with port-forward.
kubectl port-forward svc/wordpress 8080:80
  • It's running.

(screenshots: WordPress setup screens and the running site)

That's it.
For now, I'd say the basics are working.

I still need to try helm and operators, too.

(Addendum) Install the Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

(screenshot: Kubernetes Dashboard sign-in page)

kubectl get secret -n kube-system
  • Specify the one whose name starts with deployment-controller-token-.
kubectl describe secret deployment-controller-token-vtddf -n kube-system
  • Log in with the token.
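  • Extracting the token can also be scripted; a sketch (the secret name suffix differs per cluster, and base64 -D is the macOS spelling, -d on Linux):
SECRET=$(kubectl -n kube-system get secret -o name | grep deployment-controller-token)
kubectl -n kube-system get "$SECRET" -o jsonpath='{.data.token}' | base64 -D; echo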

(screenshot: Kubernetes Dashboard)
