Building Kubernetes with containerd and kubeadm (Part 2)

Posted at 2023-11-07

Environment

  • Ubuntu: 22.04.1 LTS
  • Kubernetes: 1.28.2

Introduction

This article assumes that the preparation needed to use kubeadm is already complete.
I covered that preparation in the article below, for reference.

Preparing kubeadm Configurations (InitConfiguration/JoinConfiguration)

The server prepared for this build has two NICs. By default, Kubernetes seems to use the NIC that holds the default route.
I added the second NIC for the management network, but when I tried to build the cluster as-is, node-to-node traffic did not go over the intended NIC. So I will prepare configuration files and feed them to kubeadm at run time.
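For reference, you can check which NIC holds the default route as follows; the interface listed after "dev" in the output is the one kubeadm would pick by default.

ip route show default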

Checking the default configs

The default configs for init and join can be generated with the following commands.

  • InitConfiguration
kubeadm config print init-defaults
Example output
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
  • JoinConfiguration
kubeadm config print join-defaults
Example output
apiVersion: kubeadm.k8s.io/v1beta3
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: kube-apiserver:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: kube-controlplane-001
  taints: null
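As a convenient starting point, these defaults can be redirected straight into files and then edited in place; the paths below match the files used in the next section.

kubeadm config print init-defaults > ~/init_config.yaml
kubeadm config print join-defaults > ~/join_config.yaml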

Creating the configs

Based on these defaults, create a config file on each target node. In this environment I created the following configs.

  • InitConfiguration
~/init_config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.21.0.8
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: kube-controlplane-001
  taints: null
  kubeletExtraArgs:
    node-ip: 172.21.0.8
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: example.local
  serviceSubnet: 192.16.0.0/12
  podSubnet: 192.168.0.0/16
scheduler: {}
  • JoinConfiguration
~/join_config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.21.0.8:6443
    token: abcdef.0123456789abcdef
    caCertHashes:
    - sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: kube-worker-001
  taints: null
  kubeletExtraArgs:
    node-ip: 172.21.0.18
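Before feeding these files to kubeadm, their syntax can be checked first. If I recall correctly, kubeadm has shipped a validate subcommand since v1.26:

kubeadm config validate --config ~/init_config.yaml
kubeadm config validate --config ~/join_config.yaml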

If you try to register a nodeRegistration.name that differs from the actual hostname, kubeadm prints the following WARNING at run time, so add the name to the hosts file, as in the example after the warning.

[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "kube-worker-001" could not be reached
        [WARNING Hostname]: hostname "kube-worker-001": lookup kube-worker-001 on 127.0.0.53:53: server misbehaving

Also, discovery.bootstrapToken.caCertHashes[] needs to be fixed separately (described later).

Running kubeadm init

Now, let's run kubeadm init using the config file created above.

control-plane-node
sudo kubeadm init --config ~/init_config.yaml
Example output
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-controlplane-001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [192.16.0.1 172.21.0.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-controlplane-001 localhost] and IPs [172.21.0.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-controlplane-001 localhost] and IPs [172.21.0.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.506164 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-controlplane-001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-controlplane-001 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.21.0.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:dc254f77ea2ad56f5c2e2d4db4ab3a3c4859cece253198640a2a4bc8c2b0f6a7

The hash value beginning with "sha256:" on the last line is the information needed in the JoinConfiguration above, so update the JoinConfiguration accordingly.

~/join_config.yaml
    caCertHashes:
-   - sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
+   - sha256:dc254f77ea2ad56f5c2e2d4db4ab3a3c4859cece253198640a2a4bc8c2b0f6a7

If you forgot to note down the hash above, it can be printed again with the following command.

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

Running kubeadm join

Now that the worker node is ready, let's join it to the cluster.

sudo kubeadm join --config ~/join_config.yaml
Example output
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The cluster has now been created.
To add a second or subsequent worker node, simply repeat the steps above.
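One caveat: the bootstrap token's ttl is set to 24h0m0s in the config above, so it expires after a day. If a worker is added after that, a fresh token and the matching join parameters can be generated on the control plane with:

kubeadm token create --print-join-command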

Setting up kubectl

From here on, the cluster will be managed with kubectl, but at this stage it cannot be used yet.
Configuration is needed to connect to the kube-apiserver.
Taking another look at the kubeadm init output log from earlier...

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands set up the connection to the kube-apiserver, so run them in order.
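After running them, a quick way to confirm that kubectl can actually reach the kube-apiserver is:

kubectl cluster-info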

Checking the status

Once the setup above is complete, kubectl can be used. As a test, let's look at the node information registered in the cluster.

# kubectl get nodes -owide
NAME                    STATUS     ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-controlplane-001   NotReady   control-plane   104s   v1.28.2   172.21.0.8    <none>        Ubuntu 22.04.1 LTS   5.15.0-88-generic   containerd://1.6.24
kube-worker-001         NotReady   <none>          39s    v1.28.2   172.21.0.18   <none>        Ubuntu 22.04.1 LTS   5.15.0-88-generic   containerd://1.6.24

The cluster is now in place. STATUS shows "NotReady" for the moment, but once a CNI plugin is installed it will finally become "Ready".

HostName, INTERNAL-IP, and EXTERNAL-IP are explained in the Kubernetes documentation as follows.

HostName: The hostname as reported by the node's kernel.
ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
InternalIP: Typically the IP address of the node that is routable only within the cluster.
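Incidentally, the worker's ROLES column shows <none>. If you want a role to be displayed there, a label can be attached purely for display purposes (the label key below follows common convention but is optional):

kubectl label node kube-worker-001 node-role.kubernetes.io/worker=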

That's all for this installment.

Next time, I will install a CNI plugin using Calico.
