Building a k8s Cluster with kubeadm: CRI-O Edition

Posted 2023-01-19

Kubernetes 1.24 removed the Dockershim, so a CRI runtime is now required.
Building the container images you run on top can still be done with Docker, but the cluster itself needs to be built on cri-dockerd or CRI-O.
Incidentally, when updating a separate environment running 1.23 on containerd, I tried switching to CRI-O at the same time, but it didn't go well. (No idea why.)

This time I did a fresh setup of Kubernetes v1.25.5 with CRI-O on Rocky Linux 9.

Here we set up a DIY k8s environment on a server using kubeadm, without minikube or the like.
Let's proceed with the official guide as a reference:
https://kubernetes.io/ja/docs/setup/independent/install-kubeadm/

[root@localhost tmp]# uname -a
Linux localhost.localdomain 5.14.0-162.6.1.el9_1.0.1.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Nov 28 18:44:09 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost tmp]# cat /etc/os-release 
NAME="Rocky Linux"
VERSION="9.1 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.1"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.1 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.1"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.1"

TL;DR

Stand up a DIY k8s cluster (single machine) and get to the point where kubectl apply works.

Preparing the OS

Let's proceed with the official guide as a reference:
https://kubernetes.io/ja/docs/setup/independent/install-kubeadm/

Turning swap off

The "Before you begin" section tells you to turn swap off, so let's do that:
"Swap must be off. Swap must be disabled for the kubelet to work properly."

[root@localhost tmp]# sudo swapoff -a
[root@localhost tmp]# free -h
               total        used        free      shared  buff/cache   available
Mem:           7.5Gi       1.4Gi       3.9Gi        10Mi       2.5Gi       6.1Gi
Swap:             0B          0B          0B

Run sudo swapoff -a and check that the swap space is gone, and you're good.
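
Note that swapoff -a only lasts until the next reboot. To make it stick, the usual approach is to also comment out the swap entry in /etc/fstab (a sketch of that approach, not part of the original article):

# comment out every swap line in /etc/fstab so the setting survives reboots
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab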

Other checks

The docs also say to check that MAC addresses and hostnames are unique, and that product_uuid doesn't collide with other servers.
They also say to allow the required ports through the firewall. Since this is just a trial run, I simply did systemctl stop firewalld. (Never do this on a server exposed to the internet.)
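
If you'd rather keep firewalld running, opening just the ports kubeadm warns about later should be enough for this single-node trial (a hedged sketch; 6443 and 10250 are the ports the preflight check mentions below, and a multi-node cluster needs more):

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload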

Install CRI-O

We proceed with this as a reference:
https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/#cri-o

Setting up network bridging

Copy and paste exactly as written there. This allows network packets to be bridged and routed.
k8s forwards packets using iptables.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
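
To double-check that the modules are loaded and the sysctl values are in effect (my own sanity check, not part of the original steps):

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward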

Adding the repositories

There's no repository for Rocky at the location below, so we substitute the CentOS_8_Stream one. (It might be better to build it ourselves.)

export OS=CentOS_8_Stream
export VERSION=1.25
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo

dnf install cri-o
dnf install containernetworking-plugins

Adjusting crio.conf

Add the following to /etc/crio/crio.conf (it's already there commented out, so you can just uncomment it):

[crio.runtime.runtimes.runc]
runtime_path = "" 
runtime_type = "oci" 
runtime_root = "/run/runc" 

Starting CRI-O

Start it up with systemctl.

sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio

[root@k8s-node01 crio]# systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2022-12-15 03:55:24 EST; 5s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 12123 (crio)
      Tasks: 28
     Memory: 38.2M
        CPU: 351ms
     CGroup: /system.slice/crio.service
             └─12123 /usr/bin/crio

Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.006674834-05:00" level=info msg="RDT not available in the host system" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.011036842-05:00" level=info msg="Conmon does support the --sync option" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.011088556-05:00" level=info msg="Conmon does support the --log-global-size-max option" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.018631810-05:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge>
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.025073153-05:00" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/>
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.025121761-05:00" level=info msg="Updated default CNI network name to crio" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.044838541-05:00" level=info msg="Serving metrics on :9537 via HTTP" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.045259409-05:00" level=error msg="Writing clean shutdown supported file: open /var/lib/crio/clean.shutd>
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.045361471-05:00" level=error msg="Failed to sync parent directory of clean shutdown file: open /var/lib>
Dec 15 03:55:24 k8s-node01.ceres.local systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
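
As an extra check, you can talk to the runtime directly with crictl if cri-tools is installed (it may already be present as a dependency; otherwise dnf install cri-tools — this check is my addition, not in the original run):

crictl --runtime-endpoint unix:///var/run/crio/crio.sock version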

Installation

Add the yum repository, following "Installing kubeadm, kubelet and kubectl" in the docs.

Straight from the official docs.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Adjust SELinux and install. The official docs use yum, but in good Fedora-family fashion we install with dnf.

# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# dnf install -y kubelet-1.25.5-0 kubeadm-1.25.5-0 kubectl-1.25.5-0 --disableexcludes=kubernetes
systemctl enable --now kubelet

Once the packages are installed, reboot the machine once; we want SELinux out of enforcing mode.
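
After the reboot, a quick version check doesn't hurt (my addition, not in the original):

kubeadm version -o short
kubectl version --client
crio --version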

Initial setup with kubeadm

Creating a single control-plane cluster with kubeadm

dnf update and the like have already been done, so I'll skip that.

Run kubeadm init <args>. Options go in <args>. Since the next section uses the Calico container network plugin, add --pod-network-cidr=10.244.0.0/16. (Alternatively, you can edit the Calico manifest instead, as described later.)
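
As an aside, the same settings can be given declaratively through a kubeadm config file (a minimal sketch; the file name and extra fields are my own, with podSubnet mirroring the flag above):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.5
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml

Here I simply passed the flag directly: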

[root@localhost ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
I0119 02:55:04.007666   12858 version.go:256] remote version is much newer: v1.26.1; falling back to: stable-1.25
[init] Using Kubernetes version: v1.25.6
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.96.0.1 172.xx.12.62]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [172.xx.12.62 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [172.xx.12.62 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.501654 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 815j1e.wy5xkrhs0fkwkkcx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.xx.12.62:6443 --token 815j1e.wy5xkrhs0fkwkkcx \
        --discovery-token-ca-cert-hash sha256:88abcd03f98035ef780d5a2455d89c6a3c8fc860bf2baf285396a78e349499f2 

As shown above, copy the config file into your home directory.

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: (long, omitted)
    server: https://172.xx.12.62:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: (long, omitted)
    client-key-data: (long, omitted)

Now you should be able to see things with kubectl.

[root@localhost lib]# kubectl get all -A
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-565d847f94-b9p5t                        0/1     Pending   0          38s
kube-system   pod/coredns-565d847f94-tm6lw                        0/1     Pending   0          38s
kube-system   pod/etcd-localhost.localdomain                      1/1     Running   2          52s
kube-system   pod/kube-apiserver-localhost.localdomain            1/1     Running   2          54s
kube-system   pod/kube-controller-manager-localhost.localdomain   1/1     Running   2          53s
kube-system   pod/kube-proxy-64t26                                1/1     Running   0          38s
kube-system   pod/kube-scheduler-localhost.localdomain            1/1     Running   2          52s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  54s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   53s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   53s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           53s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       39s

At this point, the coredns pods won't run yet.
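
That's expected: they stay Pending until a CNI plugin is installed in the next step. If you want to confirm the cause yourself (a hedged extra check, not in the original run):

kubectl get nodes                                        # node will typically show NotReady
kubectl -n kube-system describe pod -l k8s-app=kube-dns  # events should point at the missing pod network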

Setting up a CNI

Pod-to-pod networking has to be provided by a plugin, so we install Calico.
Several kinds exist. CNI is short for Container Network Interface.

Just follow this procedure:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
There are two approaches, loading the manifest directly or having an operator pod do it for you; here we use the former.

Note (2023/04/11): the tigera-operator.yaml approach also worked without problems. Set ipPools in custom-resources.yaml. (calico: v3.25.1, k8s: v1.26.3)

There is also canal, which combines Calico and Flannel, but since VXLAN support is now built into Calico, there seems to be no reason to use it unless you have a specific need.

calico.yaml

Download calico.yaml and edit the CIDR part.
I seem to remember the example used to be 10.244.0.0/16, but it's now 192.168.0.0/16, so download and edit it.

# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

Uncomment the CALICO_IPV4POOL_CIDR part and change 192.168.0.0/16 to 10.244.0.0/16:

            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

so that it becomes:

            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

Apply the edited manifest.

[root@localhost ~]# kubectl apply -f ./calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Creation in progress:

[root@localhost ~]# kubectl get all -A
NAMESPACE     NAME                                                READY   STATUS              RESTARTS   AGE
kube-system   pod/calico-kube-controllers-74677b4c5f-7mm8p        0/1     ContainerCreating   0          17s
kube-system   pod/calico-node-bpznj                               0/1     Init:2/3            0          17s
kube-system   pod/coredns-565d847f94-b9p5t                        0/1     ContainerCreating   0          8m51s
kube-system   pod/coredns-565d847f94-tm6lw                        0/1     ContainerCreating   0          8m51s
kube-system   pod/etcd-localhost.localdomain                      1/1     Running             2          9m5s
kube-system   pod/kube-apiserver-localhost.localdomain            1/1     Running             2          9m7s
kube-system   pod/kube-controller-manager-localhost.localdomain   1/1     Running             2          9m6s
kube-system   pod/kube-proxy-64t26                                1/1     Running             0          8m51s
kube-system   pod/kube-scheduler-localhost.localdomain            1/1     Running             2          9m5s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  9m7s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9m6s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         0       1            0           kubernetes.io/os=linux   17s
kube-system   daemonset.apps/kube-proxy    1         1         1       1            1           kubernetes.io/os=linux   9m6s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   0/1     1            0           17s
kube-system   deployment.apps/coredns                   0/2     2            0           9m6s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-74677b4c5f   1         1         0       17s
kube-system   replicaset.apps/coredns-565d847f94                   2         2         0       8m52s

Eventually every pod becomes Running.

[root@localhost ~]# kubectl get all -A
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-74677b4c5f-7mm8p        1/1     Running   0          45s
kube-system   pod/calico-node-bpznj                               1/1     Running   0          45s
kube-system   pod/coredns-565d847f94-b9p5t                        1/1     Running   0          9m19s
kube-system   pod/coredns-565d847f94-tm6lw                        1/1     Running   0          9m19s
kube-system   pod/etcd-localhost.localdomain                      1/1     Running   2          9m33s
kube-system   pod/kube-apiserver-localhost.localdomain            1/1     Running   2          9m35s
kube-system   pod/kube-controller-manager-localhost.localdomain   1/1     Running   2          9m34s
kube-system   pod/kube-proxy-64t26                                1/1     Running   0          9m19s
kube-system   pod/kube-scheduler-localhost.localdomain            1/1     Running   2          9m33s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  9m35s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9m34s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   45s
kube-system   daemonset.apps/kube-proxy    1         1         1       1            1           kubernetes.io/os=linux   9m34s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           45s
kube-system   deployment.apps/coredns                   2/2     2            2           9m34s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-74677b4c5f   1         1         1       45s
kube-system   replicaset.apps/coredns-565d847f94                   2         2         2       9m20s

Done.

Running the ip a command shows that NICs have been added: tunl0 and the cali-something interfaces are the new ones.

[root@localhost etc]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ae:20:df brd ff:ff:ff:ff:ff:ff
    inet 172.xx.12.62/24 brd 172.xx.12.255 scope global noprefixroute enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feae:20df/64 scope link 
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.102.128/32 scope global tunl0
       valid_lft forever preferred_lft forever
6: cali9cc271e60ca@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns 38b565cd-5099-43f6-a21e-82ccd68eda6c
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
7: cali015e1fca632@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns 86d781d2-7ad6-4f84-9a1b-febd17732b48
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
8: cali2db521aeade@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns c1ac37af-9a8d-45a0-93ae-4f6a0cdda07f
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

If you want to run pods on the control plane, remove the node's taint so that pods can be scheduled there. (By default, pods don't run on the control plane.)

[root@localhost ~]# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
node/localhost.localdomain untainted

Checking that it works

Let's try deploying nginx.

cat <<EOF | kubectl apply -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - image: nginx:latest
          imagePullPolicy: IfNotPresent
          name: nginx-test
          ports:
            - containerPort: 80
              protocol: TCP

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-test-svc
spec:
  ports:
  - name: "http-port"
    protocol: TCP
    port: 8080
    targetPort: 80
  selector:
    app: nginx-test

EOF
deployment.apps/nginx-test created
service/nginx-test-svc created

It's up.

[root@localhost ~]# kubectl get all 
NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-test-54cdc496f7-zbg6p   1/1     Running   0          40s

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP    17m
service/nginx-test-svc   ClusterIP   10.96.28.133   <none>        8080/TCP   5m45s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-test   1/1     1            1           5m45s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-test-54cdc496f7   1         1         1       40s
replicaset.apps/nginx-test-64d5bd95d7   0         0         0       5m45s

Verification

[murata@localhost ~]$ curl 10.96.28.133:8080 -I
HTTP/1.1 200 OK
Server: nginx/1.23.3
Date: Thu, 19 Jan 2023 09:41:05 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Dec 2022 15:53:53 GMT
Connection: keep-alive
ETag: "6398a011-267"
Accept-Ranges: bytes

And the response comes back properly.
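
A ClusterIP is only reachable from inside the cluster (or, as here, from the node itself). To reach the service from the machine running kubectl, you could tunnel instead (an extra aside, not part of the original run):

kubectl port-forward svc/nginx-test-svc 8080:8080
# then, in another terminal: curl -I http://localhost:8080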

Summary

We haven't set up Ingress (a load balancer) yet, so nothing can reach the cluster from outside, but this is roughly what building with kubeadm looks like.
When adding nodes (servers) to the cluster, just run the kubeadm join command shown earlier and they connect automatically.
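
One caveat: the bootstrap token in that join command expires (24 hours by default), so if you add a node later, regenerate the full command on the control plane:

kubeadm token create --print-join-command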
