Building Kubernetes on Raspberry Pi, Better Late Than Never (Cluster Edition)

Posted at 2019-09-19

This article reflects the state of things as of 2019/09/19.

Last time, I set up Kubernetes on a single Raspberry Pi.
The parts for the remaining two nodes of the three-node setup have arrived, so it's time to build the cluster.
The final configuration will look like this:

Hostname  Role    IP
k8s01     Master  192.168.11.10
k8s02     Worker  192.168.11.11
k8s03     Worker  192.168.11.12
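(Not covered in the article itself, but since all three nodes use static IPs, I assume an /etc/hosts entry like the following on each machine keeps name resolution simple.)

192.168.11.10 k8s01
192.168.11.11 k8s02
192.168.11.12 k8s03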

(Follow-ups)
Installing metrics-server on Kubernetes on Raspberry Pi
Installing the Dashboard on Kubernetes on Raspberry Pi

Build procedure

The node built last time becomes the Master; the nodes built this time become Workers.

OS installation and other setup

OS installation and basic setup are the same as last time, so I'll skip them here.
I simply repeated the previous procedure.
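For reference, the per-node prep roughly boils down to the following. This is only a from-memory sketch, so see the previous article for the exact steps and versions.

# Disable swap -- kubelet refuses to start while swap is enabled
dphys-swapfile swapoff && dphys-swapfile uninstall
update-rc.d dphys-swapfile remove

# Enable memory cgroups: append the following to /boot/cmdline.txt, then reboot
#   cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

# Install Docker and the kubeadm toolchain
curl -sSL https://get.docker.com | sh
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl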

Joining the cluster with kubeadm join

Once everything needed is installed on the Worker nodes, generate a join token on the Master.

root@k8s01:~# kubeadm token create --print-join-command
kubeadm join 192.168.11.10:6443 --token 1flgzl.4x70omgdnigie8yn     --discovery-token-ca-cert-hash sha256:76ea8013fa291ca1350d633343f2cfba9b4ece246f39fb3048ac4458c1f3e96c 

Copy the output above and run it as-is on the Worker side.

root@k8s02:~#  kubeadm join 192.168.11.10:6443 --token 1flgzl.4x70omgdnigie8yn     --discovery-token-ca-cert-hash sha256:76ea8013fa291ca1350d633343f2cfba9b4ece246f39fb3048ac4458c1f3e96c 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s02:~# 

Check whether the nodes were added successfully.

root@k8s01:~# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
k8s01   Ready      master   27h     v1.15.3
k8s02   NotReady   <none>   5m20s   v1.16.0
k8s03   NotReady   <none>   19s     v1.16.0
root@k8s01:~# 

Huh???
The versions don't match, and no matter how long I wait the nodes never go from NotReady to Ready...
As luck would have it, the latest available version had changed since I built the Master.
Detach the Worker nodes from the cluster for now.

root@k8s02:~# kubeadm reset

After some trial and error, I standardized on v1.15.3

Ran the following on the Worker nodes to bring kubelet down to v1.15.3.

root@k8s02:~# apt-get install kubelet=1.15.3*
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Selected version '1.15.3-00' (kubernetes-xenial:kubernetes-xenial [armhf]) for 'kubelet'
The following packages will be DOWNGRADED:
  kubelet
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 56 not upgraded.
Need to get 17.7 MB of archives.
After this operation, 2,905 kB disk space will be freed.
Do you want to continue? [Y/n] y
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf kubelet armhf 1.15.3-00 [17.7 MB]
Fetched 17.7 MB in 4s (4,115 kB/s)  
dpkg: warning: downgrading kubelet from 1.16.0-00 to 1.15.3-00
(Reading database ... 92015 files and directories currently installed.)
Preparing to unpack .../kubelet_1.15.3-00_armhf.deb ...
Unpacking kubelet (1.15.3-00) over (1.16.0-00) ...
Setting up kubelet (1.15.3-00) ...
root@k8s02:~# 
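By the way, to keep a later apt-get upgrade from silently pulling kubelet back up to v1.16.0, it is probably worth holding the packages as well (an extra step that was not part of my original procedure):

root@k8s02:~# apt-mark hold kubelet kubeadm kubectl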

Just to be safe I rebooted the OS, ran kubeadm join again, and this time it worked.

root@k8s01:~# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   7m18s   v1.15.3
k8s02   Ready    <none>   2m54s   v1.15.3
k8s03   Ready    <none>   2m54s   v1.15.3
root@k8s01:~# 
root@k8s01:~# kubectl get pods --all-namespaces
NAMESPACE        NAME                            READY   STATUS    RESTARTS   AGE
kube-system      coredns-5644d7b6d9-sz2t8        1/1     Running   0          19m
kube-system      coredns-5644d7b6d9-vnxxf        1/1     Running   0          19m
kube-system      etcd-k8s01                      1/1     Running   0          18m
kube-system      kube-apiserver-k8s01            1/1     Running   0          19m
kube-system      kube-controller-manager-k8s01   1/1     Running   1          19m
kube-system      kube-flannel-ds-arm-426sl       1/1     Running   0          17m
kube-system      kube-flannel-ds-arm-c7gpj       1/1     Running   8          15m
kube-system      kube-flannel-ds-arm-v5w64       1/1     Running   8          15m
kube-system      kube-proxy-2j2ds                1/1     Running   1          15m
kube-system      kube-proxy-75lpk                1/1     Running   0          19m
kube-system      kube-proxy-8dsf2                1/1     Running   1          15m
kube-system      kube-scheduler-k8s01            1/1     Running   0          19m
metallb-system   controller-6bcfdfd677-7dvpv     1/1     Running   0          19m
metallb-system   speaker-5pdrx                   1/1     Running   1          15m
metallb-system   speaker-62zk4                   1/1     Running   0          17m
metallb-system   speaker-fh48h                   1/1     Running   1          15m
root@k8s01:~# 

Checking that it works

Now that there are three nodes, deploy an nginx service with replicas set to 3.

nginx3.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer

Deploy the service.

root@k8s01:~# kubectl create -f nginx3.deployment.yaml 

After a short while, you can see that nginx containers have been created on all three nodes.

root@k8s01:~# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-85ff79dd56-8rdr2   1/1     Running   0          19m   10.244.1.2   k8s03   <none>           <none>
nginx-deployment-85ff79dd56-cq6gw   1/1     Running   0          19m   10.244.2.2   k8s02   <none>           <none>
nginx-deployment-85ff79dd56-lmdm5   1/1     Running   0          19m   10.244.0.5   k8s01   <none>           <none>
root@k8s01:~# 
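The Service side can be checked the same way. The EXTERNAL-IP is handed out by MetalLB (installed in the previous article); in this cluster it came out as 192.168.11.200, which is the address curl-ed further down.

root@k8s01:~# kubectl get svc nginx-service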

This is a bit boring as-is, so let's go into each container and change nginx's index.html.

root@k8s01:~# kubectl exec -it nginx-deployment-85ff79dd56-lmdm5 -- /bin/bash
root@nginx-deployment-85ff79dd56-lmdm5:/# 
root@nginx-deployment-85ff79dd56-lmdm5:/# cd /usr/share/nginx/html/
root@nginx-deployment-85ff79dd56-lmdm5:/usr/share/nginx/html# ls
50x.html  index.html
root@nginx-deployment-85ff79dd56-lmdm5:/usr/share/nginx/html# echo "node1" > index.html 
root@nginx-deployment-85ff79dd56-lmdm5:/usr/share/nginx/html# exit

Do the same for the other pods.
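(Exec-ing into each pod by hand works, but a loop like the following does the same job in one shot; note it writes each pod's own hostname into index.html instead of the hand-typed node1/node2/node3.)

for pod in $(kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'); do
  kubectl exec "$pod" -- /bin/sh -c 'hostname > /usr/share/nginx/html/index.html'
done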
Running curl shows that requests are being load-balanced.

root@k8s01:~# curl 192.168.11.200
node3
root@k8s01:~# curl 192.168.11.200
node1
root@k8s01:~# curl 192.168.11.200
node2

I almost forgot: let's also give the Worker nodes role labels.

root@k8s01:~# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   12m     v1.15.3
k8s02   Ready    <none>   4m19s   v1.15.3
k8s03   Ready    <none>   4m4s    v1.15.3
root@k8s01:~# 
root@k8s01:~# kubectl label node k8s02 node-role.kubernetes.io/worker=worker
node/k8s02 labeled
root@k8s01:~# kubectl label node k8s03 node-role.kubernetes.io/worker=worker
node/k8s03 labeled
root@k8s01:~# 
root@k8s01:~# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   14m     v1.15.3
k8s02   Ready    worker   6m22s   v1.15.3
k8s03   Ready    worker   6m7s    v1.15.3
root@k8s01:~# 

(Reference) What happened when I tried moving the Master to v1.16.0

This procedure ultimately failed, but I'm leaving it here purely for reference, to record what happened.
To match the versions, I first updated the Master node.

root@k8s01:~# apt-get update
root@k8s01:~# apt-get install -y kubelet kubeadm kubectl 

Reset and rebuild

root@k8s01:~# kubeadm reset
root@k8s01:~# kubeadm init --pod-network-cidr=10.244.0.0/16
root@k8s01:~# cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s01:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@k8s01:~# kubectl taint nodes --all node-role.kubernetes.io/master-

flannel just won't staaaart

root@k8s01:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS             RESTARTS   AGE
kube-system   coredns-5644d7b6d9-k8qgx        0/1     Pending            0          13m
kube-system   coredns-5644d7b6d9-xg7nt        0/1     Pending            0          13m
kube-system   etcd-k8s01                      1/1     Running            0          13m
kube-system   kube-apiserver-k8s01            1/1     Running            0          14m
kube-system   kube-controller-manager-k8s01   1/1     Running            0          13m
kube-system   kube-flannel-ds-arm-8vfb8       0/1     CrashLoopBackOff   3          80s
kube-system   kube-flannel-ds-arm-fjl64       0/1     CrashLoopBackOff   3          80s
kube-system   kube-flannel-ds-arm-l5h42       0/1     CrashLoopBackOff   3          80s
kube-system   kube-proxy-2zrl6                1/1     Running            0          13m
kube-system   kube-proxy-49qsj                1/1     Running            0          11m
kube-system   kube-proxy-fzj6h                1/1     Running            0          12m
kube-system   kube-scheduler-k8s01            1/1     Running            0          13m
root@k8s01:~# 

Checked with describe, but I couldn't really tell why it was failing.

root@k8s01:~# kubectl describe pod  kube-flannel-ds-arm-8vfb8   -n kube-system 
Name:         kube-flannel-ds-arm-8vfb8
Namespace:    kube-system
Priority:     0
Node:         k8s03/192.168.11.12
Start Time:   Thu, 19 Sep 2019 09:13:30 +0100
Labels:       app=flannel
              controller-revision-hash=7d7b8f7d47
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Running
IP:           192.168.11.12
IPs:
  IP:           192.168.11.12
Controlled By:  DaemonSet/kube-flannel-ds-arm
Init Containers:
  install-cni:
    Container ID:  docker://6d53a05527e7cbfe07b1876360578e7464b61a5fa8a7830eb6853191fa670a27
    Image:         quay.io/coreos/flannel:v0.11.0-arm
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:c3d2c9c54eadeacd9d74717cfcd73a12782773d864e7be4686fa8bd9ae2c4f42
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 19 Sep 2019 09:13:32 +0100
      Finished:     Thu, 19 Sep 2019 09:13:32 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-46hvn (ro)
Containers:
  kube-flannel:
    Container ID:  docker://e5fe0253df90e0b7343707b0c19487eed36fa70f9bc7d75be31dacd78de2ebe7
    Image:         quay.io/coreos/flannel:v0.11.0-arm
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:c3d2c9c54eadeacd9d74717cfcd73a12782773d864e7be4686fa8bd9ae2c4f42
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 19 Sep 2019 09:15:12 +0100
      Finished:     Thu, 19 Sep 2019 09:15:14 +0100
    Ready:          False
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-arm-8vfb8 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-46hvn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-46hvn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-46hvn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/arch=arm
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  <unknown>            default-scheduler  Successfully assigned kube-system/kube-flannel-ds-arm-8vfb8 to k8s03
  Normal   Pulled     2m54s                kubelet, k8s03     Container image "quay.io/coreos/flannel:v0.11.0-arm" already present on machine
  Normal   Created    2m54s                kubelet, k8s03     Created container install-cni
  Normal   Started    2m53s                kubelet, k8s03     Started container install-cni
  Normal   Pulled     73s (x5 over 2m53s)  kubelet, k8s03     Container image "quay.io/coreos/flannel:v0.11.0-arm" already present on machine
  Normal   Created    73s (x5 over 2m53s)  kubelet, k8s03     Created container kube-flannel
  Normal   Started    73s (x5 over 2m52s)  kubelet, k8s03     Started container kube-flannel
  Warning  BackOff    70s (x7 over 2m47s)  kubelet, k8s03     Back-off restarting failed container
root@k8s01:~# 
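In hindsight, the flannel container's own logs would probably have been a more direct place to look than describe, with something like the command below, but I didn't dig that far at the time.

root@k8s01:~# kubectl logs -n kube-system kube-flannel-ds-arm-8vfb8 -c kube-flannel --previous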

In the end, it seemed easier to go to v1.15.3 than to dig into this, so I downgraded to v1.15.3.
flannel will presumably work with v1.16.0 sooner or later. Probably.
