
Building a Kubernetes cluster with 1 control plane and 3 worker nodes on GCE using kubeadm

Posted at 2021-02-25

We provision one GCE instance as the control plane and three as worker nodes,
and build a Kubernetes cluster on them with kubeadm.
containerd is used as the container runtime.

Note that the local machine runs fish shell, while the GCE instances we SSH into run bash.
To avoid confusion, the shell name is given in the title of each code block.

Versions

bash
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:09:38Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
$ containerd -v
containerd github.com/containerd/containerd 1.3.3-0ubuntu2.2
$ kubelet --version
Kubernetes v1.20.4
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Container images present once the smoke test below was complete
$ k get po -A -o jsonpath="{..image}" --kubeconfig ./admin.conf |\
  tr -s '[[:space:]]' '\n' |\
  sort |\
  uniq -c
  16 docker.io/calico/cni:v3.18.0
   2 docker.io/calico/kube-controllers:v3.18.0
   8 docker.io/calico/node:v3.18.0
   8 docker.io/calico/pod2daemon-flexvol:v3.18.0
   3 docker.io/library/nginx:latest
   4 k8s.gcr.io/coredns:1.7.0
   2 k8s.gcr.io/etcd:3.4.13-0
   2 k8s.gcr.io/kube-apiserver:v1.20.4
   2 k8s.gcr.io/kube-controller-manager:v1.20.4
   8 k8s.gcr.io/kube-proxy:v1.20.4
   2 k8s.gcr.io/kube-scheduler:v1.20.4

Creating the compute resources

This part follows "Provisioning Compute Resources" from Kubernetes the Hard Way and Calico's "Self-managed Kubernetes in Google Compute Engine (GCE)".

Creating the VPC

Create the VPC that will host the cluster.

fish
$ gcloud compute networks create kubernetes-by-kubeadm --subnet-mode custom
Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/networks/kubernetes-by-kubeadm].
NAME                   SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
kubernetes-by-kubeadm  CUSTOM       REGIONAL
fish
$ gcloud compute networks subnets create kubernetes \
  --network kubernetes-by-kubeadm \
  --range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/subnetworks/kubernetes].
NAME        REGION           NETWORK                RANGE
kubernetes  asia-northeast1  kubernetes-by-kubeadm  10.240.0.0/24

Creating the firewall rules

Inside the cluster we allow tcp, udp, icmp, and ipip traffic; from outside the cluster we allow tcp:22, tcp:6443, and icmp.

fish
$ gcloud compute firewall-rules create kubernetes-by-kubeadm-allow-internal \
  --allow tcp,udp,icmp,ipip \
  --network kubernetes-by-kubeadm \
  --source-ranges 10.240.0.0/24,10.200.0.0/16
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-internal].
Creating firewall...done.
NAME                                  NETWORK                DIRECTION  PRIORITY  ALLOW              DENY  DISABLED
kubernetes-by-kubeadm-allow-internal  kubernetes-by-kubeadm  INGRESS    1000      tcp,udp,icmp,ipip        False
$ gcloud compute firewall-rules create kubernetes-by-kubeadm-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-by-kubeadm \
  --source-ranges 0.0.0.0/0
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-external].
Creating firewall...done.
NAME                                  NETWORK                DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
kubernetes-by-kubeadm-allow-external  kubernetes-by-kubeadm  INGRESS    1000      tcp:22,tcp:443,tcp:6443,icmp        False

Reserving a public IP

Since there is only one control plane this time, we could simply use the external IP of the control-plane instance as the API endpoint, but we reserve a public IP so that a load balancer can distribute traffic if the cluster is later made highly available.

fish
$ gcloud compute addresses create kubernetes-by-kubeadm \
  --region (gcloud config get-value compute/region)
Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/addresses/kubernetes-by-kubeadm].
$ gcloud compute addresses list
NAME                   ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION           SUBNET  STATUS
kubernetes-by-kubeadm  34.85.15.20    EXTERNAL                    asia-northeast1          RESERVED

Creating the compute instances

First, the instance for the control plane.

fish
$ gcloud compute instances create controller \
  --async \
  --boot-disk-size 200GB \
  --can-ip-forward \
  --image-family ubuntu-2004-lts \
  --image-project ubuntu-os-cloud \
  --machine-type n1-standard-4 \
  --private-network-ip 10.240.0.10 \
  --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
  --subnet kubernetes \
  --tags kubernetes-by-kubeadm,controller
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [controller]: https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/operations/operation-1613193837534-5bb30f5a3dc98-ba859467-b3c39485
Use [gcloud compute operations describe URI] command to check the status of the operation(s).

Next, create three instances for the worker nodes.

fish
$ for i in 0 1 2
    gcloud compute instances create worker-{$i} \
      --async \
      --boot-disk-size 200GB \
      --can-ip-forward \
      --image-family ubuntu-2004-lts \
      --image-project ubuntu-os-cloud \
      --machine-type n1-standard-4 \
      --private-network-ip 10.240.0.2{$i} \
      --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
      --subnet kubernetes \
      --tags kubernetes-by-kubeadm,worker
  end
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [worker-0]: https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/operations/operation-1613193959993-5bb30fcf070d9-aa954478-17a67d14
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [worker-1]: https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/operations/operation-1613193964271-5bb30fd31b636-5bc6d372-19d8209d
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [worker-2]: https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/operations/operation-1613193968163-5bb30fd6d1b22-133c16bc-8135db2a
Use [gcloud compute operations describe URI] command to check the status of the operation(s).

Configuring the external LB

At this point, configure the load balancer for the public IP reserved earlier:
create a health check and add the control-plane instance to the target pool.

fish
$ set KUBERNETES_PUBLIC_ADDRESS (gcloud compute addresses describe kubernetes-by-kubeadm \
    --region (gcloud config get-value compute/region) \
    --format 'value(address)')

$ gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"
Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/httpHealthChecks/kubernetes].
NAME        HOST                                  PORT  REQUEST_PATH
kubernetes  kubernetes.default.svc.cluster.local  80    /healthz

$ gcloud compute firewall-rules create kubernetes-by-kubeadm-allow-health-check \
    --network kubernetes-by-kubeadm \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-health-check].
Creating firewall...done.
NAME                                      NETWORK                DIRECTION  PRIORITY  ALLOW  DENY  DISABLED
kubernetes-by-kubeadm-allow-health-check  kubernetes-by-kubeadm  INGRESS    1000      tcp          False

$ gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes
Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/targetPools/kubernetes-target-pool].
NAME                    REGION           SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
kubernetes-target-pool  asia-northeast1  NONE                      kubernetes

$ gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller
Updated [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/targetPools/kubernetes-target-pool].

$ gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address $KUBERNETES_PUBLIC_ADDRESS \
    --ports 6443 \
    --region (gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/forwardingRules/kubernetes-forwarding-rule].

Preparing each instance

This part follows the official documentation, Installing kubeadm.

Checking the target instances

fish
$ gcloud compute instances list --filter="tags.items=kubernetes-by-kubeadm"
NAME        ZONE               MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller  asia-northeast1-a  n1-standard-4               10.240.0.10  35.221.119.152  RUNNING
worker-0    asia-northeast1-a  n1-standard-4               10.240.0.20  35.221.99.135   RUNNING
worker-1    asia-northeast1-a  n1-standard-4               10.240.0.21  34.84.119.161   RUNNING
worker-2    asia-northeast1-a  n1-standard-4               10.240.0.22  34.85.61.122    RUNNING

Connect to each instance with gcloud compute ssh $INSTANCE_NAME and work through the preparation steps.
As a reminder, the shell on the instances is bash.
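
As a convenience, the per-instance steps can also be driven from the local machine by looping over the instances and passing each command through gcloud compute ssh --command. A rough sketch (instance names taken from the list above):

fish
for host in controller worker-0 worker-1 worker-2
    # Example: refresh the package index on every instance; the other
    # preparation commands below can be sent the same way.
    gcloud compute ssh $host --command "sudo apt-get update"
end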

Letting iptables see bridged traffic

bash
hitsumabushi845@controller:~$ sudo modprobe br_netfilter
hitsumabushi845@controller:~$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter
hitsumabushi845@controller:~$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
hitsumabushi845@controller:~$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/60-gce-network-security.conf ...
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
kernel.randomize_va_space = 2
kernel.panic = 10
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...

Installing containerd

This follows the containerd part of the Container runtimes documentation.

bash
hitsumabushi845@controller:~$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF
overlay
br_netfilter
hitsumabushi845@controller:~$ sudo modprobe overlay
hitsumabushi845@controller:~$ sudo modprobe br_netfilter
hitsumabushi845@controller:~$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables  = 1
> net.ipv4.ip_forward                 = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
hitsumabushi845@controller:~$ sudo sysctl --system
(output omitted)
hitsumabushi845@controller:~$ sudo apt-get update && sudo apt-get install -y containerd
(output omitted)
hitsumabushi845@controller:~$ sudo mkdir -p /etc/containerd
hitsumabushi845@controller:~$ containerd config default | sudo tee /etc/containerd/config.toml
(output omitted)
hitsumabushi845@controller:~$ sudo systemctl restart containerd
hitsumabushi845@controller:~$ sudo systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-02-21 10:51:28 UTC; 10s ago
       Docs: https://containerd.io
    Process: 14555 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 14562 (containerd)
      Tasks: 14
     Memory: 21.3M
     CGroup: /system.slice/containerd.service
             └─14562 /usr/bin/containerd

Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.302856785Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.303184344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.303712626Z" level=info msg="Start subscribing containerd event"
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.303792746Z" level=info msg="Start recovering state"
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.303902734Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.303999145Z" level=info msg="Start event monitor"
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.304029295Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.304054635Z" level=info msg="containerd successfully booted in 0.042356s"
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.304030429Z" level=info msg="Start snapshots syncer"
Feb 21 10:51:28 controller containerd[14562]: time="2021-02-21T10:51:28.304557561Z" level=info msg="Start streaming server"

Installing kubeadm, kubelet, and kubectl

Back to the official Installing kubeadm documentation.

bash
hitsumabushi845@controller:~$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
(output omitted)
hitsumabushi845@controller:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
hitsumabushi845@controller:~$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
hitsumabushi845@controller:~$ sudo apt-get update
(output omitted)
hitsumabushi845@controller:~$ sudo apt-get install -y kubelet kubeadm kubectl
(output omitted)
hitsumabushi845@controller:~$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

The steps up to this point are performed on each of the controller, worker-0, worker-1, and worker-2 instances.

Creating the control plane

From here on we follow Creating a cluster with kubeadm.

Running kubeadm init

Because we reserved an external IP and will use it as the API server endpoint, that external IP is passed to --control-plane-endpoint. The Pod CIDR is set to 10.200.0.0/16.¹
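
The same settings could also be expressed as a kubeadm configuration file instead of flags. The following is a rough, untested sketch (the file name kubeadm-config.yaml is arbitrary); the flag-based invocation below is what was actually run:

kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
# The external IP reserved earlier, used as the API server endpoint.
controlPlaneEndpoint: "34.85.15.20:6443"
networking:
  # Same value as --pod-network-cidr.
  podSubnet: "10.200.0.0/16"

Such a file would be passed as sudo kubeadm init --config kubeadm-config.yaml.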

bash
hitsumabushi845@controller:~$ sudo kubeadm init --control-plane-endpoint=34.85.15.20 --pod-network-cidr=10.200.0.0/16
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controller kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.10 34.85.15.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controller localhost] and IPs [10.240.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controller localhost] and IPs [10.240.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 104.006101 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node controller as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node controller as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3cznxo.v1ax148y0hjdzail
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 34.85.15.20:6443 --token 3cznxo.v1ax148y0hjdzail \
    --discovery-token-ca-cert-hash sha256:d778f85f07c092a196b77e1669dfceed74b9092587293274fcc8652a9936511f \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 34.85.15.20:6443 --token 3cznxo.v1ax148y0hjdzail \
    --discovery-token-ca-cert-hash sha256:d778f85f07c092a196b77e1669dfceed74b9092587293274fcc8652a9936511f

When init succeeds, several commands are printed at the bottom of the output.
To run kubectl as a non-root user, run:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

To add more control-plane nodes, run:

  kubeadm join 34.85.15.20:6443 --token 3cznxo.v1ax148y0hjdzail \
    --discovery-token-ca-cert-hash sha256:d778f85f07c092a196b77e1669dfceed74b9092587293274fcc8652a9936511f \
    --control-plane

To add a worker node, run:

kubeadm join 34.85.15.20:6443 --token 3cznxo.v1ax148y0hjdzail \
    --discovery-token-ca-cert-hash sha256:d778f85f07c092a196b77e1669dfceed74b9092587293274fcc8652a9936511f

The worker-node join command in particular is needed in later steps, so make a note of it.
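
If the token later expires or the command is misplaced, an equivalent worker join command can be regenerated on the control-plane node, for example:

bash
# Creates a fresh bootstrap token and prints the matching `kubeadm join` command.
sudo kubeadm token create --print-join-command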

Verifying kubectl

Using the commands above, make kubectl usable by the regular user on the controller instance.

bash
hitsumabushi845@controller:~$ mkdir -p $HOME/.kube
hitsumabushi845@controller:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
hitsumabushi845@controller:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
hitsumabushi845@controller:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Check that kubectl get nodes returns a result.

bash
hitsumabushi845@controller:~$ kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
controller   NotReady   control-plane,master   47m   v1.20.4

The controller node is listed, but it carries a NoSchedule taint, so under normal circumstances no Pods will be scheduled onto it.

bash
hitsumabushi845@controller:~$ kubectl describe node controller | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
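
If you did want to schedule workloads onto the control plane as well (for example in a throwaway test cluster), the taint could be removed along these lines; this walkthrough leaves it in place:

bash
# Remove the NoSchedule taint from the controller node (the trailing "-" deletes it).
kubectl taint nodes controller node-role.kubernetes.io/master:NoSchedule-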

Installing Calico

Download calico.yaml.

bash
hitsumabushi845@controller:~$ sudo curl -OL https://docs.projectcalico.org/manifests/calico.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 20847  100 20847    0     0  50112      0 --:--:-- --:--:-- --:--:-- 50233

Because --pod-network-cidr was specified at kubeadm init, rewrite the CALICO_IPV4POOL_CIDR value in calico.yaml.
To be safe, also set FELIX_IPTABLESBACKEND explicitly to NFT (the default is Auto).

calico.yaml
-            # - name: CALICO_IPV4POOL_CIDR
-            #   value: "192.168.0.0/16"
+            - name: CALICO_IPV4POOL_CIDR
+              value: "10.200.0.0/16"
+            - name: FELIX_IPTABLESBACKEND
+              value: NFT
bash
hitsumabushi845@controller:~$ kubectl apply -f calico.yaml
configmap/calico-config created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
hitsumabushi845@controller:~$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6b8f6f78dc-k84wt   1/1     Running   0          5m
kube-system   calico-node-4jrkc                          1/1     Running   0          5m
kube-system   coredns-74ff55c5b-5ctxr                    1/1     Running   0          52m
kube-system   coredns-74ff55c5b-95lxc                    1/1     Running   0          52m
kube-system   etcd-controller                            1/1     Running   0          52m
kube-system   kube-apiserver-controller                  1/1     Running   1          52m
kube-system   kube-controller-manager-controller         1/1     Running   0          52m
kube-system   kube-proxy-2sgv7                           1/1     Running   0          52m
kube-system   kube-scheduler-controller                  1/1     Running   0          52m
hitsumabushi845@controller:~$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
controller   Ready    control-plane,master   53m   v1.20.4

Creating the worker nodes

The control plane is now up, so from here we add the worker nodes.

Running kubeadm join

On a worker-node instance, run the kubeadm join command noted earlier.

bash
hitsumabushi845@worker-0:~$ sudo kubeadm join 34.85.15.20:6443 --token 3cznxo.v1ax148y0hjdzail     --discovery-token-ca-cert-hash sha256:d778f85f07c092a196b77e1669dfceed74b9092587293274fcc8652a9936511f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verification

On the control-plane instance, run kubectl get nodes.

bash
hitsumabushi845@controller:~$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
controller   Ready    control-plane,master   57m   v1.20.4
worker-0     Ready    <none>                 73s   v1.20.4

The worker node has been added.
Run the same join on the worker-1 and worker-2 instances as well, so that there are three worker nodes.
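
One way to script the remaining joins from the local machine, reusing the token and hash printed by kubeadm init above (a rough sketch):

fish
for host in worker-1 worker-2
    # Run the worker join command on each remaining instance.
    gcloud compute ssh $host --command "sudo kubeadm join 34.85.15.20:6443 --token 3cznxo.v1ax148y0hjdzail --discovery-token-ca-cert-hash sha256:d778f85f07c092a196b77e1669dfceed74b9092587293274fcc8652a9936511f"
end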

bash
hitsumabushi845@controller:~$ kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
controller   Ready    control-plane,master   72m     v1.20.4
worker-0     Ready    <none>                 16m     v1.20.4
worker-1     Ready    <none>                 4m49s   v1.20.4
worker-2     Ready    <none>                 26s     v1.20.4

Running kubectl from the local machine

Copy the kubeconfig file from the control-plane instance.

fish
$ gcloud compute scp root@controller:/etc/kubernetes/admin.conf .
admin.conf

Confirm that kubectl works when the copied admin.conf is passed via the --kubeconfig option.
If specifying --kubeconfig on every invocation is tedious, you can set the $KUBECONFIG environment variable instead; a quick sketch follows the check below.

fish
$ k get nodes --kubeconfig ./admin.conf
NAME         STATUS   ROLES                  AGE     VERSION
controller   Ready    control-plane,master   74m     v1.20.4
worker-0     Ready    <none>                 18m     v1.20.4
worker-1     Ready    <none>                 6m56s   v1.20.4
worker-2     Ready    <none>                 2m33s   v1.20.4
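
For example, in fish, one way to set it (a minimal sketch, assuming admin.conf sits in the current directory):

fish
# Export KUBECONFIG for this session so kubectl picks up the copied credentials.
set -gx KUBECONFIG (pwd)/admin.conf
kubectl get nodes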

Smoke test

At this point we have a cluster with 1 control plane and 3 worker nodes.
Following the Smoke Test from kubernetes-the-hard-way, we check that the cluster works.

Creating a Deployment

Confirm that a Deployment resource can be created.

fish
$ k create deploy nginx --image=nginx --replicas=3 --kubeconfig ./admin.conf
deployment.apps/nginx created
$ k get deploy --kubeconfig ./admin.conf
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           47s
$ k get po -owide --kubeconfig ./admin.conf
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-69bjt   1/1     Running   0          29m   10.200.43.2      worker-0   <none>           <none>
nginx-6799fc88d8-8gdqj   1/1     Running   0          29m   10.200.133.194   worker-2   <none>           <none>
nginx-6799fc88d8-d92bc   1/1     Running   0          29m   10.200.226.66    worker-1   <none>           <none>

Verifying port forwarding

Try port-forwarding to one of the nginx Pods just created.

fish
$ set POD_NAME (k get po -l app=nginx -o jsonpath="{.items[0].metadata.name}" --kubeconfig ./admin.conf)
$ echo $POD_NAME
nginx-6799fc88d8-4ttf9
$ k port-forward $POD_NAME 8080:80 --kubeconfig ./admin.conf
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

From another terminal window, confirm that curl can reach the port-forwarded Pod.

fish
$ curl --head http://127.0.0.1:8080
HTTP/1.1 200 OK
Server: nginx/1.19.7
Date: Sun, 21 Feb 2021 13:46:42 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 16 Feb 2021 15:57:18 GMT
Connection: keep-alive
ETag: "602beb5e-264"
Accept-Ranges: bytes

Just to be sure, look at the logs of that Pod.
The curl request from a moment ago shows up at the bottom of the log.

fish
$ k logs $POD_NAME --kubeconfig ./admin.conf
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
127.0.0.1 - - [21/Feb/2021:13:46:42 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.1" "-"

Exposing the Deployment with a NodePort Service

Use a NodePort Service to expose the Deployment created earlier.

fish
$ k expose deploy nginx --port 80 --type NodePort --kubeconfig  ./admin.conf
service/nginx exposed
$ set NODE_PORT (k get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}' --kubeconfig ./admin.conf)
$ echo $NODE_PORT
31120

The Deployment is now exposed on port 31120, but external access to that port is not yet allowed by the VPC firewall rules, so add a rule.

fish
$ gcloud compute firewall-rules create kubernetes-by-kubeadm-allow-nginx-service \
  --allow=tcp:{$NODE_PORT} \
  --network kubernetes-by-kubeadm
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-nginx-service].
Creating firewall...done.
NAME                                       NETWORK                DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
kubernetes-by-kubeadm-allow-nginx-service  kubernetes-by-kubeadm  INGRESS    1000      tcp:31120        False

With the rule added and port 31120 reachable from outside, confirm that curl works.

fish
$ set EXTERNAL_IP (gcloud compute instances describe worker-0 --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
$ curl -I http://{$EXTERNAL_IP}:{$NODE_PORT}
HTTP/1.1 200 OK
Server: nginx/1.19.7
Date: Sun, 21 Feb 2021 13:54:45 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 16 Feb 2021 15:57:18 GMT
Connection: keep-alive
ETag: "602beb5e-264"
Accept-Ranges: bytes

This confirms that the Deployment can be exposed externally via a NodePort Service.

Cleaning up

And with that, we have built a Kubernetes cluster on GCE.
To finish, delete the GCP resources.

Deleting the GCE instances

fish
$ gcloud -q compute instances delete \
  controller worker-0 worker-1 worker-2 \
  --zone (gcloud config get-value compute/zone)
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/instances/controller].
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/instances/worker-0].
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/instances/worker-1].
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/zones/asia-northeast1-a/instances/worker-2].

Deleting the network resources

Deleting the external LB

fish
$ gcloud -q compute forwarding-rules list
NAME                        REGION           IP_ADDRESS   IP_PROTOCOL  TARGET
kubernetes-forwarding-rule  asia-northeast1  34.85.15.20  TCP          asia-northeast1/targetPools/kubernetes-target-pool
$ gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
  --region (gcloud config get-value compute/region)
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/forwardingRules/kubernetes-forwarding-rule].
$ gcloud -q compute forwarding-rules list
Listed 0 items.

$ gcloud -q compute target-pools list
NAME                    REGION           SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
kubernetes-target-pool  asia-northeast1  NONE                      kubernetes
$ gcloud -q compute target-pools delete kubernetes-target-pool
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/targetPools/kubernetes-target-pool].
$ gcloud -q compute target-pools list
Listed 0 items.

$ gcloud -q compute http-health-checks list
NAME        HOST                                  PORT  REQUEST_PATH
kubernetes  kubernetes.default.svc.cluster.local  80    /healthz
$ gcloud -q compute http-health-checks delete kubernetes
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/httpHealthChecks/kubernetes].
$ gcloud -q compute http-health-checks list
Listed 0 items.

$ gcloud -q compute addresses list
NAME                   ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION           SUBNET  STATUS
kubernetes-by-kubeadm  34.85.15.20    EXTERNAL                    asia-northeast1          RESERVED
$ gcloud -q compute addresses delete kubernetes-by-kubeadm
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/addresses/kubernetes-by-kubeadm].
$ gcloud -q compute addresses list
Listed 0 items.

Deleting the firewall rules

fish
$ gcloud -q compute firewall-rules list
NAME                                       NETWORK                DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
default-allow-icmp                         default                INGRESS    65534     icmp                                False
default-allow-internal                     default                INGRESS    65534     tcp:0-65535,udp:0-65535,icmp        False
default-allow-rdp                          default                INGRESS    65534     tcp:3389                            False
default-allow-ssh                          default                INGRESS    65534     tcp:22                              False
kubernetes-by-kubeadm-allow-external       kubernetes-by-kubeadm  INGRESS    1000      tcp:22,tcp:6443,icmp                False
kubernetes-by-kubeadm-allow-health-check   kubernetes-by-kubeadm  INGRESS    1000      tcp                                 False
kubernetes-by-kubeadm-allow-internal       kubernetes-by-kubeadm  INGRESS    1000      tcp,udp,icmp                        False
kubernetes-by-kubeadm-allow-nginx-service  kubernetes-by-kubeadm  INGRESS    1000      tcp:31120                           False

To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.

$ gcloud -q compute firewall-rules delete \
  kubernetes-by-kubeadm-allow-external \
  kubernetes-by-kubeadm-allow-internal \
  kubernetes-by-kubeadm-allow-health-check \
  kubernetes-by-kubeadm-allow-nginx-service
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-external].
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-internal].
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-health-check].
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/firewalls/kubernetes-by-kubeadm-allow-nginx-service].
$ gcloud -q compute firewall-rules list
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
default-allow-icmp      default  INGRESS    65534     icmp                                False
default-allow-internal  default  INGRESS    65534     tcp:0-65535,udp:0-65535,icmp        False
default-allow-rdp       default  INGRESS    65534     tcp:3389                            False
default-allow-ssh       default  INGRESS    65534     tcp:22                              False

To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.

Deleting the VPC subnet

fish
$ gcloud -q compute networks subnets list --filter="network:kubernetes-by-kubeadm"
NAME        REGION           NETWORK                RANGE
kubernetes  asia-northeast1  kubernetes-by-kubeadm  10.240.0.0/24
$ gcloud -q compute networks subnets delete kubernetes
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/regions/asia-northeast1/subnetworks/kubernetes].
$ gcloud -q compute networks subnets list --filter="network:kubernetes-by-kubeadm"
Listed 0 items.

Deleting the VPC network

fish
$ gcloud -q compute networks list
NAME                   SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
default                AUTO         REGIONAL
kubernetes-by-kubeadm  CUSTOM       REGIONAL
$ gcloud -q compute networks delete kubernetes-by-kubeadm
Deleted [https://www.googleapis.com/compute/v1/projects/sandbox-project/global/networks/kubernetes-by-kubeadm].
$ gcloud -q compute networks list
NAME     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
default  AUTO         REGIONAL
  1. This merely follows the Hard Way; the Pod CIDR does not have to be specified.
