
Building Kubernetes on AlmaLinux in an HA configuration (2): single control node


Introduction

This article is a continuation of the one at the URL below.
Before moving on to the HA configuration, I first built a single-controller setup.
The later HA setup will be built by modifying the config created here.

Creating the Kubernetes config

Have kubeadm print its default config, then edit it.
Replace the parts in < > with values from your own environment.
Lines annotated with a # comment do not necessarily need to be changed.

$ kubeadm config print init-defaults > kubeadm-config.yaml
$ vim ~/kubeadm-config.yaml

# Edited here and there; the end result is as follows.
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef # leave the token at the default; it will be replaced later
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: <control node IP>
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: control-node0 # node name; set it however you like
  taints: null
  kubeletExtraArgs:
    node-ip: <control node IP>
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.29.0 # change to your own version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.2.0.0/16 # subnet that Services are served from
  podSubnet: 10.1.0.0/16 # subnet used for pod (container) traffic
scheduler: {}
# The following block was added in its entirety.
# It explicitly declares systemd as the cgroup driver.
# However, as I found out later, systemd is apparently already the default in current versions,
# so this is probably unnecessary.
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
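
As an optional check before running init (not part of the original steps): a fresh token can be generated to replace the default one, and recent kubeadm releases (v1.26 and later, so including the 1.29 used here) can validate the config file. Skip this if your kubeadm lacks the validate subcommand.

$ kubeadm token generate                                  # prints a random token to paste over the default one
$ kubeadm config validate --config ~/kubeadm-config.yaml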

Building Kubernetes

$ sudo kubeadm init --config ~/kubeadm-config.yaml
[init] Using Kubernetes version: v1.29.6
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING Hostname]: hostname "k8s-control-node0" could not be reached
        [WARNING Hostname]: hostname "k8s-control-node0": lookup k8s-control-node0 on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0620 19:45:56.742455   20140 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-control-node0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-control-node0 localhost] and IPs [192.168.110.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-control-node0 localhost] and IPs [192.168.110.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controler-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controler-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002055 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-control-node0 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-control-node0 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 29yn2t.2f2lphauo4b5ypow
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.10:6443 --token 29yn2t.2f2lphauo4b5ypow \
        --discovery-token-ca-cert-hash sha256:47fd261edc6b8a693088c726cbaccda783603bc1218d349c0891c0997569c24b

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
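
As a quick sanity check (not in the original steps), kubectl should now be able to reach the API server. The node will typically report NotReady until a CNI plugin is installed in the next step.

$ kubectl get nodes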

Installing flannel

flannel is used as the CNI.
wget is not installed on AlmaLinux by default, so install it with dnf first.
kubectl apply can also take a URL directly, but I wanted to make a small edit, so I downloaded the manifest.
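
Installing wget would look something like this (package name as found in the standard AlmaLinux repositories):

$ sudo dnf install -y wget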

$ wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
$ vim kube-flannel.yml

  net-conf.json: |
    {
      "Network": "10.1.0.0/16", # ここをpodsubnetに揃える
      "Backend": {
        "Type": "vxlan"
      }
    }
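
After editing, apply the manifest (file name as downloaded above):

$ kubectl apply -f kube-flannel.yml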


$ kubectl get pod -A # OK if the flannel pod is Running
# However, running kubectl describe node control-node0 still shows "Network plugin returns error: cni plugin not initialized".
# To fix that, restart containerd.
$ sudo systemctl restart containerd
$ kubectl get pod -A
NAMESPACE      NAME                                     READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-fn965                    1/1     Running   0          8m
kube-system    coredns-76f75df574-2f4d5                 1/1     Running   0          8m23s
kube-system    coredns-76f75df574-twqnz                 1/1     Running   0          8m23s
kube-system    etcd-control-node0                       1/1     Running   4          8m38s
kube-system    kube-apiserver-control-node0             1/1     Running   4          8m37s
kube-system    kube-controller-manager-control-node0    1/1     Running   1          8m37s
kube-system    kube-proxy-9b495                         1/1     Running   0          8m23s
kube-system    kube-scheduler-control-node0             1/1     Running   4          8m37s
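
To confirm that the "cni plugin not initialized" error mentioned above is gone, describe the node again (node name as used in this environment); the Ready condition should now be True.

$ kubectl describe node control-node0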

This completes the setup of the control node.

References
