Overview
This is a record of building k8s on-premises with the configuration shown in the table below.
This article describes the procedure for building a k8s master with kubeadm.
Essentially, it just performs by hand the steps that are automated with Ansible at https://github.com/takara9/vagrant-kubernetes (with minor changes to the manifests).
No | Role | Node name | Type | Public IP | Internal IP | OS | Notes |
---|---|---|---|---|---|---|---|
1 | k8s master | master01 | VM | 192.168.1.91 | 172.24.20.11 | Ubuntu 18.04 | |
2 | k8s node | node01 | VM | 192.168.1.92 | 172.24.20.12 | Ubuntu 18.04 | |
Reference sites and books
URL / Title | Notes |
---|---|
実践 Vagrant | |
15Stepで習得 Dockerから入るKubernetes | Covers not only K8s but also Ansible, Vagrant, GlusterFS, and more. |
https://github.com/takara9/vagrant-k8s | GitHub repository published by the author of 『15Stepで習得 Dockerから入るKubernetes』; provides the Vagrant and Ansible code. |
https://github.com/takara9/vagrant-kubernetes | Same as above. |
https://github.com/takara9/codes_for_lessons | Same as above. |
https://nextpublishing.jp/book/12197.html | Built the 1-master / 1-node cluster with reference to 『解体kubeadm フェーズから読み解くKubernetesクラスタ構築ツールの全貌』. |
Environment
OS of the physical PC and the VMs (contents of /etc/lsb-release):
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"
Physical PC
The following software must already be installed:
- Vagrant
- VirtualBox
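A quick way to confirm both are installed (version numbers here are illustrative):
$ vagrant --version       # e.g. Vagrant 2.2.x
$ VBoxManage --version    # e.g. 6.1.x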
Procedure
1. SSH into the VM "master01"
$ vagrant ssh master01
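If the connection fails, it may help to check the VM state first. A minimal check, assuming the Vagrantfile defines a machine named master01:
$ vagrant status master01   # should report "running (virtualbox)"
$ vagrant up master01       # boot the VM if it is not running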
2. Register the apt repository
vagrant@master01:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
vagrant@master01:~$ cat << EOL | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOL
vagrant@master01:~$ sudo apt update -y
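To confirm that the repository is now visible to apt, the candidate versions can be listed (output varies over time):
vagrant@master01:~$ apt-cache policy kubeadm          # the candidate should come from apt.kubernetes.io
vagrant@master01:~$ apt-cache madison kubeadm | head  # lists pinnable versions such as 1.21.1-00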
3. Install kubelet, kubeadm, and kubectl
vagrant@master01:~$ sudo apt install -y kubelet=1.21.1-00 kubeadm=1.21.1-00 kubectl=1.21.1-00
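The installed versions can then be confirmed (example commands; output abbreviated):
vagrant@master01:~$ kubeadm version -o short            # v1.21.1
vagrant@master01:~$ kubectl version --client --short    # Client Version: v1.21.1
vagrant@master01:~$ kubelet --version                   # Kubernetes v1.21.1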
4. Pin the package versions
vagrant@master01:~$ sudo apt-mark hold kubelet kubeadm kubectl docker.io
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
docker.io set on hold.
The pinned versions are as follows:
vagrant@master01:~$ sudo apt list |grep -e kubelet -e kubeadm -e kubectl -e docker.io
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
docker.io/bionic-updates 20.10.2-0ubuntu1~18.04.2 amd64
kubeadm/kubernetes-xenial,now 1.21.1-00 amd64 [installed]
kubectl/kubernetes-xenial,now 1.21.1-00 amd64 [installed]
kubelet/kubernetes-xenial,now 1.21.1-00 amd64 [installed]
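If the holds ever need to be inspected or released (for example, for a deliberate upgrade), apt-mark provides the inverse operations:
vagrant@master01:~$ apt-mark showhold                                         # list held packages
vagrant@master01:~$ sudo apt-mark unhold kubelet kubeadm kubectl docker.io    # release the holds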
5. Modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in preparation for flannel
Append " --node-ip=172.24.20.11 --cluster-dns=10.32.0.10" to the end of the ExecStart line:
$ sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=172.24.20.11 --cluster-dns=10.32.0.10
Reload systemd and restart kubelet. (Note that kubelet will not actually start at this point.)
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet #! even after the restart, kubelet remains merely "loaded" at this point
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES #! no containers yet, either (or only hello-world)
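This failure is expected: kubelet is configured to read /var/lib/kubelet/config.yaml, which does not exist until kubeadm init generates it, so the service exits immediately. The cause can be confirmed in the logs (an illustrative check):
$ sudo systemctl status kubelet                  # shows "activating (auto-restart)" or "failed"
$ sudo journalctl -u kubelet --no-pager | tail   # look for a message like "failed to load Kubelet config file /var/lib/kubelet/config.yaml"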
The Kubernetes-related apt packages visible at this point are as follows ([installed] marks the ones actually present):
vagrant@master01:~$ sudo apt list |grep -i kube
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
cri-tools/kubernetes-xenial,now 1.13.0-01 amd64 [installed,automatic]
docker-engine/kubernetes-xenial 1.11.2-0~xenial amd64
golang-github-kubernetes-gengo-dev/bionic 0.0~git20170531.0.c79c13d-1 all
kubeadm/kubernetes-xenial,now 1.21.1-00 amd64 [installed]
kubectl/kubernetes-xenial,now 1.21.1-00 amd64 [installed]
kubelet/kubernetes-xenial,now 1.21.1-00 amd64 [installed]
kubernetes-cni/kubernetes-xenial,now 0.8.7-00 amd64 [installed,automatic]
python-kubernetes/bionic 2.0.0-2ubuntu1 all
python3-kubernetes/bionic 2.0.0-2ubuntu1 all
ruby-kubeclient/bionic 3.0.0-1 all
salt-formula-kubernetes/bionic 2016.12.1-1ubuntu1 all
6. Change kernel parameters
Create the parameter file:
vagrant@master01:~$ cat <<EOL | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOL
Apply it:
vagrant@master01:~$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-lxd-inotify.conf ...
fs.inotify.max_user_instances = 1024
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
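Note that the net.bridge.* parameters only exist while the br_netfilter kernel module is loaded. If the k8s.conf lines had not appeared in the output above, loading the module and re-applying would be the fix (not needed in this run, where the module was evidently already loaded):
$ lsmod | grep br_netfilter                                    # check whether the module is loaded
$ sudo modprobe br_netfilter                                   # load it if missing
$ echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf    # load it on every boot
$ sudo sysctl --system                                         # re-apply the parameters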
7. Turn off swap
vagrant@master01:~$ sudo swapoff -a
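swapoff -a only lasts until the next reboot. To make the change permanent, the swap entry in /etc/fstab would also need to be commented out (a common follow-up, not part of the original run):
vagrant@master01:~$ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap lines
vagrant@master01:~$ swapon --show                                # no output means swap is off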
8. Initialize kubeadm (build the K8s master)
[a] Using a manifest
Write the options out to a manifest as follows:
vagrant@master01:~$ cat <<EOL | tee kubeadm-config.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.1
networking:
  podSubnet: "10.244.0.0/16"
EOL
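Since image pulls dominate the init time on a slow link, they can optionally be done in advance (the init log below also mentions this):
vagrant@master01:~$ sudo kubeadm config images pull --config kubeadm-config.yml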
Apply the manifest. Make a note of the "kubeadm join ..." command at the end of the log below. (The "unknown field podSubnet" warning at the top of the log indicates that, in the actual run, podSubnet was not indented under networking; with the indentation shown above, the warning should not appear.)
vagrant@master01:~$ sudo kubeadm init --config kubeadm-config.yml
W0522 15:05:55.535577 12596 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "podSubnet"
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 66.503628 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qu104b.66ff4kkrma3chyqd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.15:6443 --token qu104b.66ff4kkrma3chyqd \
--discovery-token-ca-cert-hash sha256:3be8f04fd648a22f247203b5c7a87fb0e242b84f6a066f6c393f13cde56b9f52
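After copying admin.conf as instructed in the log above, the master can be verified. The node will report NotReady until a pod network such as flannel is deployed, and a lost join command can be regenerated. (These are standard kubeadm/kubectl commands, not output from the original run.)
vagrant@master01:~$ mkdir -p $HOME/.kube
vagrant@master01:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@master01:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@master01:~$ kubectl get nodes                           # master01 is NotReady until a CNI plugin is installed
vagrant@master01:~$ kubeadm token create --print-join-command   # reprints a valid join command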
That is all.