Building Kubernetes on a Raspberry Pi, Belatedly (Single-Node Edition)

Posted: 2019-09-18

This reflects the state of things as of 2019-09-18.

In an age when a quick Google search turns up piles of articles about building a Kubernetes cluster on Raspberry Pis, I'm not sure how useful this post will be, but I'll leave a record anyway.
There were a few minor troubles along the way, after all.

What this post covers

As the title says, this is just about installing Kubernetes on a single Raspberry Pi.
The end goal is a three-Pi cluster, but all I have on hand right now is one Raspberry Pi 3 Model B, so I'm starting with a single-node setup on that.
The remaining two Pis plus the rest of the build materials are already on order and should arrive within two or three days.
Once they do, I'll tackle the three-node cluster.

Follow-up

Setup steps

OS installation

I installed Raspbian Buster Lite via NOOBS.
Grab NOOBS from the download page and copy it onto a MicroSD card.
As an aside, my MicroSD card was a hand-me-down with some leftover partitions on it, so I deleted them all, reformatted the card as FAT32, and then wrote NOOBS onto it.
Reference: https://qiita.com/alter095/items/799c13636acbe72a3a3c
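
As a rough sketch, the clean-up on a Linux PC can look like the following. The root@pc prompt and /dev/sdX are placeholders, not from my actual run; check the real device name with lsblk before running anything destructive.

root@pc:~# wipefs --all /dev/sdX                                                  # drop all old partition signatures
root@pc:~# parted /dev/sdX --script mklabel msdos mkpart primary fat32 1MiB 100%  # one big partition
root@pc:~# mkfs.vfat -F 32 /dev/sdX1                                              # FAT32 for NOOBS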

Insert the MicroSD card into the Pi and power it on; the NOOBS screen comes up, so select Raspbian Lite and install it. (Do this somewhere with an internet connection.)
Once the OS install finishes, you should be able to log in with:

  • user: pi
  • password: raspberry

Basic configuration

ssh isn't running out of the box, so wake it up:

pi@raspberry:~$ sudo systemctl enable ssh
pi@raspberry:~$ sudo systemctl start ssh
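
A quick sanity check that it really is enabled and running (Raspbian Buster uses systemd, so these should just work):

pi@raspberry:~$ systemctl is-enabled ssh    # should print "enabled"
pi@raspberry:~$ systemctl is-active ssh     # should print "active"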

I want to change the node name, so edit the following two files:

  • /etc/hostname
  • /etc/hosts

This time I changed the node name to k8s01 and rebooted.
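
Editing the two files by hand is fine, but for the record, something like this also does the trick (hostnamectl ships with systemd on Buster; note the sed runs first, while $(hostname) still returns the old name):

pi@raspberry:~$ sudo sed -i "s/$(hostname)/k8s01/g" /etc/hosts   # rewrite the old name in /etc/hosts
pi@raspberry:~$ sudo hostnamectl set-hostname k8s01              # set the new static hostname
pi@raspberry:~$ sudo reboot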

Package installation

Running sudo every time is a pain, so from here on I just work as root.

root@k8s01:~# apt-get install apt-transport-https ca-certificates curl software-properties-common
root@k8s01:~# curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -
root@k8s01:~# echo "deb [arch=armhf] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
     $(lsb_release -cs) stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list
root@k8s01:~# apt-get update
root@k8s01:~# apt-get install docker-ce

And here the first trouble hit.
The Docker installation failed with an error like this:

dpkg: error processing package aufs-dkms (--configure):
 installed aufs-dkms package post-installation script subprocess returned error exit status 10

Following an article I found, I worked around it with the commands below.

root@k8s01:~# rm /var/lib/dpkg/info/aufs-dkms.postinst
root@k8s01:~# rm /var/lib/dpkg/info/aufs-dkms.prerm
root@k8s01:~# dpkg --configure aufs-dkms
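
With the broken maintainer scripts out of the way, re-running the Docker install should go through. A quick sanity check might look like this (hello-world needs internet access to pull the image):

root@k8s01:~# apt-get install docker-ce               # re-run; should now configure cleanly
root@k8s01:~# docker info | grep -i 'server version'
root@k8s01:~# docker run --rm hello-world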

Install the Kubernetes packages:

root@k8s01:~# curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg|sudo apt-key add -
root@k8s01:~# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kube.list
root@k8s01:~# apt-get update
root@k8s01:~# apt-get install -y kubelet kubeadm kubectl
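
The upstream kubeadm install docs also recommend pinning these packages so a routine apt-get upgrade doesn't bump the cluster components by surprise:

root@k8s01:~# apt-mark hold kubelet kubeadm kubectl
root@k8s01:~# kubeadm version -o short    # confirm what got installed (v1.15.3 here)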

Kubernetes setup

Build the master node with kubeadm. The cluster will use the following components:

  • Pod network: flannel
  • LoadBalancer: MetalLB

First, kubeadm init, with the pod CIDR that flannel's default manifest expects (10.244.0.0/16):
root@k8s01:~# sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
root@k8s01:~# 

And more trouble: kubeadm complains that running with swap on is not supported.
So, kill the swap.

root@k8s01:~# dphys-swapfile swapoff
root@k8s01:~# dphys-swapfile uninstall
root@k8s01:~# update-rc.d dphys-swapfile remove
root@k8s01:~# systemctl stop  dphys-swapfile
root@k8s01:~# systemctl disable  dphys-swapfile
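
Whether swap is really gone can be checked with swapon/free; if it comes back after a reboot, kubeadm will trip over the same preflight check again:

root@k8s01:~# swapon --show              # no output means no active swap
root@k8s01:~# free -m | grep -i swap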

With that sorted, try again.
When kubeadm init succeeds, the output looks something like this:

root@k8s01:~# sudo kubeadm init --pod-network-cidr=10.244.0.0/16                                                                                                                                                                   
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.11.10 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.11.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 110.009699 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: arfdpt.kjfnf4acxdxhiibr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.10:6443 --token arfdpt.kjfnf4acxdxhiibr \
    --discovery-token-ca-cert-hash sha256:f23570483e84944034e254df3c2132beabbaf9be05b30aea3413a157f716de78 
root@k8s01:~# 

As the output instructs, run the following:

root@k8s01:~# mkdir -p $HOME/.kube
root@k8s01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
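
At this point kubectl can talk to the API server. The node will typically show up as NotReady until a pod network is deployed, which is the next step:

root@k8s01:~# kubectl get nodes
root@k8s01:~# kubectl get pods -n kube-system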

Installing flannel

Reference:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

root@k8s01:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@k8s01:~# kubectl get pods -n kube-system -o wide | grep flannel
kube-flannel-ds-arm-5qp2x       0/1     Init:0/1   0          33s     192.168.11.10   k8s01    <none>           <none>
root@k8s01:~# kubectl describe node
Name:               k8s01
Roles:              master
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=k8s01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"8e:24:7c:fe:82:2f"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.11.10
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 17 Sep 2019 11:48:46 +0100
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 17 Sep 2019 12:25:32 +0100   Tue, 17 Sep 2019 11:48:44 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 17 Sep 2019 12:25:32 +0100   Tue, 17 Sep 2019 11:48:44 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 17 Sep 2019 12:25:32 +0100   Tue, 17 Sep 2019 11:48:44 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 17 Sep 2019 12:25:32 +0100   Tue, 17 Sep 2019 11:54:54 +0100   KubeletReady                 kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.11.10
  Hostname:    k8s01
Capacity:
 cpu:                4
 ephemeral-storage:  30497732Ki
 memory:             948304Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  28106709765
 memory:             845904Ki
 pods:               110
System Info:
 Machine ID:                 a260550c5829479ea01ed1c3b9618f4f
 System UUID:                a260550c5829479ea01ed1c3b9618f4f
 Boot ID:                    d12ad2d0-d18a-46fb-8f31-49db3ade09d9
 Kernel Version:             4.19.57-v7+
 OS Image:                   Raspbian GNU/Linux 10 (buster)
 Operating System:           linux
 Architecture:               arm
 Container Runtime Version:  docker://19.3.2
 Kubelet Version:            v1.15.3
 Kube-Proxy Version:         v1.15.3
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                             ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-5c98db65d4-dkdg8         100m (2%)     0 (0%)      70Mi (8%)        170Mi (20%)    37m
  kube-system                coredns-5c98db65d4-nj4kr         100m (2%)     0 (0%)      70Mi (8%)        170Mi (20%)    37m
  kube-system                etcd-k8s01                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
  kube-system                kube-apiserver-k8s01             250m (6%)     0 (0%)      0 (0%)           0 (0%)         37m
  kube-system                kube-controller-manager-k8s01    200m (5%)     0 (0%)      0 (0%)           0 (0%)         37m
  kube-system                kube-flannel-ds-arm-5qp2x        100m (2%)     100m (2%)   50Mi (6%)        50Mi (6%)      32m
  kube-system                kube-proxy-x8t59                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         37m
  kube-system                kube-scheduler-k8s01             100m (2%)     0 (0%)      0 (0%)           0 (0%)         37m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (21%)   100m (2%)
  memory             190Mi (23%)  390Mi (47%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From               Message
  ----    ------                   ----               ----               -------
  Normal  NodeHasSufficientPID     38m (x7 over 38m)  kubelet, k8s01     Node k8s01 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  38m (x8 over 38m)  kubelet, k8s01     Node k8s01 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    38m (x8 over 38m)  kubelet, k8s01     Node k8s01 status is now: NodeHasNoDiskPressure
  Normal  Starting                 36m                kube-proxy, k8s01  Starting kube-proxy.
  Normal  NodeReady                31m                kubelet, k8s01     Node k8s01 status is now: NodeReady
root@k8s01:~# 
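
Pulling the flannel image takes a while on the Pi; instead of re-running describe over and over, waiting on the Ready condition also works (kubectl wait is available in v1.15):

root@k8s01:~# kubectl wait --for=condition=Ready node/k8s01 --timeout=10m
root@k8s01:~# kubectl get pods -n kube-system -o wide | grep flannel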

Running pods on the master node

kubectl describe node shows the following, meaning no pods will be scheduled on this node:

Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is still a single-node cluster, I want pods to run on this node too, so lift the restriction with the following incantation:

root@k8s01:~# kubectl taint nodes --all node-role.kubernetes.io/master-

After that, kubectl describe node should show:

Taints:      <none>
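
A one-liner to check just the taints, instead of scrolling through the full describe output:

root@k8s01:~# kubectl get node k8s01 -o jsonpath='{.spec.taints}{"\n"}'    # empty output = no taints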

Installing MetalLB

Reference:
https://metallb.universe.tf/installation/

root@k8s01:~# kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
root@k8s01:~# kubectl apply -f metallb.config.yaml
root@k8s01:~# kubectl get pod -n metallb-system
NAME                        READY   STATUS    RESTARTS   AGE
controller-55d74449-zg9wd   1/1     Running   2          14m
speaker-jksdf               1/1     Running   2          14m

The contents of metallb.config.yaml are as follows.

metallb.config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.11.200-192.168.11.240

Adjust the address-pools section to match your own network.
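
To double-check what MetalLB actually picked up, looking at the ConfigMap and the pods from the cluster side is handy:

root@k8s01:~# kubectl -n metallb-system get configmap config -o yaml
root@k8s01:~# kubectl -n metallb-system get pods -o wide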

Testing it out

Deploy everyone's favorite nginx. Create a deployment manifest like this:

nginx.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer

Create the deployment.

root@k8s01:~# kubectl create -f nginx.deployment.yaml 
deployment.apps/nginx-deployment created
service/nginx-service created
root@k8s01:~# 
root@k8s01:~# kubectl get svc                                                                                                                                                                                                      
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1    <none>           443/TCP        3h21m
nginx-service   LoadBalancer   10.97.23.9   192.168.11.200   80:30779/TCP   30s
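
Before hitting the external IP, it doesn't hurt to confirm the pod is Running and the service actually has an endpoint behind it:

root@k8s01:~# kubectl get pods -l app=nginx -o wide
root@k8s01:~# kubectl get endpoints nginx-service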

Try accessing the EXTERNAL-IP assigned to nginx-service:

root@k8s01:~# curl 192.168.11.200
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Once you get this far, the page should also be viewable from a browser on another machine...
It should have been, anyway, but at first it wasn't.
It seems the MetalLB speaker doesn't answer until ARP has been resolved once, for example with a ping, before you try the browser.
Once ARP is resolved, the page becomes reachable from an external browser as well.
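
Concretely, a single ping from a machine on the same LAN (192.168.11.0/24 here) was enough to get the ARP entry populated, after which the browser could reach the page:

ping -c 3 192.168.11.200
ip neigh | grep 192.168.11.200    # on Linux; the MetalLB address should now have a MAC entry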

Other troubles

Checking the iptables state with iptables -L, I found that a huge number of rules like the following had been created:

target     prot opt source               destination
DROP       all  --  anywhere             anywhere             mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
DROP       all  --  anywhere             anywhere             mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
DROP       all  --  anywhere             anywhere             mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
DROP       all  --  anywhere             anywhere             mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
DROP       all  --  anywhere             anywhere             mark match 0x8000/0x8000 /* kubernetes firewall for dropping marked packets */
(and it goes on like this for a very long time)

Some googling suggests this is apparently an iptables bug:
https://github.com/kubernetes/kubernetes/issues/82361

Upgrading iptables to 1.8.3 or later is supposed to fix it, but Raspbian Buster still only provides 1.8.2, so I applied the workaround instead:

root@k8s01:~# update-alternatives --set iptables /usr/sbin/iptables-legacy

After doing this and rebooting, the garbage rules were gone. All's well that ends well.
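
For completeness, the kubeadm docs suggest switching ip6tables (and ebtables/arptables, if installed) to the legacy backend as well; the version string tells you which backend is active:

root@k8s01:~# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
root@k8s01:~# iptables --version    # should say "(legacy)" rather than "(nf_tables)"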

Rebuilding

I've actually rebuilt the cluster several times, and typing the commands every time got tedious, so I turned them into a script:

reset.sh
#!/bin/bash
# wipe everything
kubeadm reset
# initial setup
kubeadm init --pod-network-cidr=10.244.0.0/16
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# allow pods on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
# install MetalLB
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
kubectl apply -f metallb.config.yaml

I'm considering turning the whole thing, package installation and workarounds included, into Ansible.
