About this article
This article shows how to build a simple Kubernetes environment using kubeadm.
I originally wrote it for internal use at work, so security is not addressed; please use it only for testing and verification.
For the most part, you can simply copy and paste the commands as you go.
Environment
Master × 1 (CentOS 7, 2 vCPU, 2 GB or more of RAM)
kubemaster1.mydom.local
Worker × 2 (CentOS 7, 2 vCPU, 2 GB or more of RAM)
kubeworker1.mydom.local
kubeworker2.mydom.local
Prerequisites
Modify the hosts file and create hosts.list
# vi /etc/hosts
192.168.1.20 kubemaster1.mydom.local kubemaster
192.168.1.21 kubeworker1.mydom.local kubeworker1
192.168.1.22 kubeworker2.mydom.local kubeworker2
# cat /etc/hosts | grep "192.168" | awk -F' ' '{print $2}' > /root/hosts.list
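As a sanity check, the grep/awk pipeline can be exercised against a sample hosts file first. This is a dry-run sketch only; the /tmp paths are stand-ins for /etc/hosts and /root/hosts.list:

```shell
# Build a sample hosts file mirroring the entries above.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
192.168.1.20 kubemaster1.mydom.local kubemaster
192.168.1.21 kubeworker1.mydom.local kubeworker1
192.168.1.22 kubeworker2.mydom.local kubeworker2
EOF

# Same pipeline as above: keep the 192.168 lines, print the FQDN column.
grep "192.168" /tmp/hosts.sample | awk '{print $2}' > /tmp/hosts.list.sample
cat /tmp/hosts.list.sample
```

The first line of the resulting list is the master; the loops below rely on that ordering (`sed 1d` skips it to target only the workers).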
Disable SELinux
# for i in `cat hosts.list`; do ssh $i "sed -i 's/=enforcing/=disabled/' /etc/selinux/config"; done
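Note that the sed expression only edits /etc/selinux/config, which takes effect after a reboot; `setenforce 0` switches the running system to permissive mode immediately if you cannot reboot. Here is a dry run of the same substitution against a sample config file (the /tmp path is a stand-in):

```shell
# Sample /etc/selinux/config content (stand-in path for illustration).
cat > /tmp/selinux.config <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# The same substitution the loop above sends to every node.
sed -i 's/=enforcing/=disabled/' /tmp/selinux.config
grep '^SELINUX=' /tmp/selinux.config
```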
Disable the firewall
# for i in `cat hosts.list`; do ssh $i "systemctl stop firewalld; systemctl disable firewalld"; done
Disable IPv6
# for i in `cat hosts.list`; do ssh $i 'sed -i "s/GRUB_CMDLINE_LINUX=\"/GRUB_CMDLINE_LINUX=\"ipv6.disable=1 transparent_hugepage=never /" /etc/default/grub; grub2-mkconfig -o /boot/grub2/grub.cfg'; done
(The new kernel parameters take effect after each node is rebooted.)
vm.swappiness setting
# for i in `cat hosts.list`; do ssh $i 'echo "vm.swappiness=1" >> /etc/sysctl.conf'; done
(Applied on the next reboot, or immediately with sysctl -p.)
SSH setting (note: the ssh loops above require passwordless SSH, so in practice do this step first)
# ssh-keygen
# cd ./.ssh/
# cat id_rsa.pub >> authorized_keys
# chmod 600 ~/.ssh/authorized_keys
# scp -rp .ssh root@kubeworker1.mydom.local:~/
# scp -rp .ssh root@kubeworker2.mydom.local:~/
Copy the hosts file to the workers
# scp -p /etc/hosts root@kubeworker1.mydom.local:/etc/hosts
# scp -p /etc/hosts root@kubeworker2.mydom.local:/etc/hosts
Others
- localrepo (if required)
- http (if you set up a local repo and want to distribute it from one node)
- ntp
Install Docker and containerd (run from the master node)
Preparation
# yum install -y yum-utils
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y yum-utils'; done
# yum install -y device-mapper-persistent-data
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y device-mapper-persistent-data'; done
# yum install -y lvm2
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y lvm2'; done
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'; done
# for i in $(cat /root/hosts.list); do
ssh $i 'yum clean all'
ssh $i 'yum repolist'
done
# for i in $(cat hosts.list); do ssh $i 'yum update -y'; done
# yum install -y container-selinux
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y container-selinux'; done
# yum install -y docker-ce-18.09.6-3.el7.x86_64 docker-ce-cli-18.09.6-3.el7.x86_64 containerd.io
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y docker-ce-18.09.6-3.el7.x86_64 docker-ce-cli-18.09.6-3.el7.x86_64 containerd.io'; done
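The repeated pairs of "run locally, then loop over the workers" above can be wrapped in two small helpers. This is a sketch only: `run_all` and `run_workers` are names invented here, and `SSH_CMD` is set to `echo ssh` so the loop is a dry run that just prints what it would execute; set it to plain `ssh` on a real cluster.

```shell
HOSTS_LIST=/tmp/hosts.list          # stand-in for /root/hosts.list
SSH_CMD="echo ssh"                  # dry run; use "ssh" on a real cluster

# Stand-in host list: master first, then the workers.
printf '%s\n' kubemaster1.mydom.local kubeworker1.mydom.local kubeworker2.mydom.local > "$HOSTS_LIST"

# Run a command on every node in hosts.list.
run_all() {
  for h in $(cat "$HOSTS_LIST"); do $SSH_CMD "$h" "$1"; done
}

# Run a command on the workers only (sed 1d skips the master on line 1).
run_workers() {
  for h in $(sed 1d "$HOSTS_LIST"); do $SSH_CMD "$h" "$1"; done
}

run_workers 'yum install -y lvm2'
```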
# mkdir /etc/docker
# for i in $(cat hosts.list | sed 1d); do ssh $i 'mkdir /etc/docker'; done
# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
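The daemon.json above is only created on the master, but the workers presumably need the same file before Docker starts, since kubelet expects the systemd cgroup driver everywhere. A sketch that writes the same content to a temporary stand-in path and checks the key settings (the scp loop at the end is commented out because it needs the real cluster):

```shell
DOCKER_DIR=/tmp/docker-etc          # stand-in for /etc/docker
mkdir -p "$DOCKER_DIR"

cat > "$DOCKER_DIR/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF

# Sanity check: the cgroup driver must say systemd.
grep -q 'native.cgroupdriver=systemd' "$DOCKER_DIR/daemon.json" && echo OK

# On the real cluster, copy it to the workers, e.g.:
# for i in $(sed 1d /root/hosts.list); do scp -p /etc/docker/daemon.json root@$i:/etc/docker/; done
```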
Start Docker
# mkdir -p /etc/systemd/system/docker.service.d
# for i in $(cat hosts.list | sed 1d); do ssh $i 'mkdir -p /etc/systemd/system/docker.service.d'; done
# systemctl daemon-reload
# for i in $(cat hosts.list | sed 1d); do ssh $i 'systemctl daemon-reload'; done
# systemctl enable docker; systemctl start docker
# for i in $(cat hosts.list | sed 1d); do ssh $i 'systemctl enable docker; systemctl start docker'; done
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since 日 2019-07-14 01:50:22 JST; 1min 13s ago
Docs: https://docs.docker.com
Main PID: 19164 (dockerd)
Tasks: 11
Memory: 29.0M
CGroup: /system.slice/docker.service
└─19164 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.317228884+09:00" level=info msg="pickfirstBalancer: HandleSubConnStateCh...ule=grpc
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.318563845+09:00" level=warning msg="Using pre-4.0.0 kernel for overlay2,...overlay2
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.337250856+09:00" level=info msg="Graph migration to content-addressabili...seconds"
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.338023813+09:00" level=info msg="Loading containers: start."
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.553369544+09:00" level=info msg="Default bridge (docker0) is assigned wi...address"
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.597964337+09:00" level=info msg="Loading containers: done."
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.621984713+09:00" level=info msg="Docker daemon" commit=481bc77 graphdriv...=18.09.6
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.622089589+09:00" level=info msg="Daemon has completed initialization"
7月 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.630930072+09:00" level=info msg="API listen on /var/run/docker.sock"
7月 14 01:50:22 kubemaster1.mydom.local systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
Installing Kubernetes
Preparation (all nodes)
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
# yum clean all
# yum repolist
※ If the kubernetes repo shows 0 packages, add sslverify=0 to kubernetes.repo, then run yum repolist again and answer the prompt below.
Loading mirror speeds from cached hostfile
* base: mirrors.vcea.wsu.edu
* extras: centos.mirror.ndchost.com
* updates: mirror.team-cymru.com
kubernetes/signature | 454 B 00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature | 1.4 kB 00:00:07 !!!
kubernetes/primary | 52 kB 00:00:02
kubernetes 373/373
Install Kubernetes
Run from the master node
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes'; done
# for i in $(cat hosts.list); do ssh $i 'docker info | grep Cgroup'; done
# for i in $(cat hosts.list); do ssh $i 'mkdir -p /var/lib/kubelet'; done
Run on every node
# cat <<EOF > /var/lib/kubelet/config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF
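The cgroupDriver here must match what `docker info | grep Cgroup` reported earlier. A quick local sketch, writing the fragment to a temporary stand-in for /var/lib/kubelet and checking the value:

```shell
KUBELET_DIR=/tmp/kubelet            # stand-in for /var/lib/kubelet
mkdir -p "$KUBELET_DIR"

cat > "$KUBELET_DIR/config.yaml" <<'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF

# Must agree with Docker's "Cgroup Driver: systemd".
grep '^cgroupDriver:' "$KUBELET_DIR/config.yaml"
```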
Run from the master node
# for i in $(cat hosts.list); do ssh $i 'mkdir -p /etc/systemd/system/kubelet.service.d'; done
Run on every node
# cat <<EOF > /etc/systemd/system/kubelet.service.d/20-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
EOF
Run from the master node
# for i in $(cat hosts.list); do ssh $i 'systemctl enable kubelet'; done
# for i in $(cat hosts.list); do ssh $i 'systemctl daemon-reload'; done
Configure the master node (master node)
Initialize the Kubernetes master node
Be sure to note the kubeadm join command printed at the end of the output.
Note: 192.168.0.0/16 is Calico's default pod CIDR; it overlaps the 192.168.1.0/24 node network used in this example, so choose a non-overlapping range in a real environment.
# kubeadm init --pod-network-cidr=192.168.0.0/16
(output omitted)
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.20:6443 --token owmrsk.d41wmnxetwhr2hsd \
--discovery-token-ca-cert-hash sha256:35e654eeeb9c125eaee57b12202aa0139729a8cd6209ef77a719c8977dc12905
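The bootstrap token in this command expires (24 hours by default); if it is lost, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. For reference, the command always has the same shape; a tiny sketch assembling it from its parts (`build_join_cmd` and the token/hash values here are placeholders invented for illustration):

```shell
# Assemble a kubeadm join command from its three pieces:
# $1 = apiserver endpoint, $2 = bootstrap token, $3 = CA cert hash (hex).
build_join_cmd() {
  printf 'kubeadm join %s --token %s --discovery-token-ca-cert-hash sha256:%s\n' "$1" "$2" "$3"
}

# Placeholder values only; use the real ones from kubeadm init output.
build_join_cmd 192.168.1.20:6443 abcdef.0123456789abcdef 0000deadbeef
```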
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl cluster-info
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster1.mydom.local NotReady master 16m v1.15.0
Install Calico
# sysctl -n net.bridge.bridge-nf-call-iptables
1
# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster1.mydom.local Ready master 21m v1.15.0
Configure Worker node
Join the node to the cluster (run on each worker node)
# sysctl -n net.bridge.bridge-nf-call-iptables
0
If it returns 0, run the following:
# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
# sysctl -n net.bridge.bridge-nf-call-iptables
1
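Setting the value with `sysctl` alone does not survive a reboot. To persist it, drop a fragment into /etc/sysctl.d and load it with `sysctl --system`. A sketch writing the fragment to a temporary stand-in directory (the file name k8s.conf is an arbitrary choice):

```shell
SYSCTL_DIR=/tmp/sysctl.d            # stand-in for /etc/sysctl.d
mkdir -p "$SYSCTL_DIR"

# Persist the bridge netfilter settings kubeadm's preflight checks expect.
cat > "$SYSCTL_DIR/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# On a real node, apply it now with: sysctl --system
cat "$SYSCTL_DIR/k8s.conf"
```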
Finally, run the kubeadm join command you noted earlier.
# kubeadm join 192.168.1.20:6443 --token owmrsk.d41wmnxetwhr2hsd --discovery-token-ca-cert-hash sha256:35e654eeeb9c125eaee57b12202aa0139729a8cd6209ef77a719c8977dc12905
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Confirm the nodes (on the master node)
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster1.mydom.local Ready master 163m v1.15.0
kubeworker1.mydom.local Ready <none> 29s v1.15.0
kubeworker2.mydom.local Ready <none> 127m v1.15.0