Creating a Single Control Plane Kubernetes Cluster

Posted at 2021-04-07

Overview

Using kubeadm, we build a single control plane Kubernetes cluster on AWS. The configuration is one master node and one worker node, with an AWS VPC as the network. From my Mac, I log in to the master node with ssh over the internet and build the Kubernetes cluster there.
Reference procedure:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

Environment

  • AWS VPC
  • AWS EC2 (master node: 1, worker node: 1)

    • Debian 10 (HVM), SSD Volume Type - ami-0ac97798ccf296e02 (64-bit x86)
    • t2.medium
    • 2 vCPUs
    • 4 GiB memory
    • 8 GiB EBS SSD (gp2)
  • Squid proxy (master node only)
    The master node has a public IP and can reach the internet directly.
    The worker node has no public IP and reaches the outside world through the master node.

  • Kubernetes version: v1.20.5

  • Flannel Pod network add-on (applied from the master node)

  • Docker 20.10.5

    • docker-ce
    • docker-ce-cli
    • containerd.io
  • kubelet, kubeadm, kubectl

Procedure

0. Creating the AWS EC2 instances

  • master node
    • Auto-assign Public IP: Enable
      A public IP is required because I log in over ssh from my Mac.
    • Security group launch-wizard-1
      Inbound

      Protocol | Port range  | Source          | Purpose                  | Used by
      TCP      | 6443        | 0.0.0.0/0       | Kubernetes API server    | All
      TCP      | 2379-2380   | launch-wizard-1 | etcd server client API   | kube-apiserver, etcd
      TCP      | 22          | 0.0.0.0/0       | ssh                      | my Mac
      TCP      | 3128        | launch-wizard-1 | Squid proxy              | master node
      TCP      | 3128        | launch-wizard-2 | Squid proxy              | worker node
      TCP      | 10250       | launch-wizard-1 | kubelet API              | Self, control plane
      TCP      | 10251       | launch-wizard-1 | kube-scheduler           | Self
      TCP      | 10252       | launch-wizard-1 | kube-controller-manager  | Self
  • worker node
    • Auto-assign Public IP: Disable
      No public IP is needed because the node reaches the internet through the Squid proxy on the master node.
    • Security group launch-wizard-2
      Inbound

      Protocol | Port range  | Source          | Purpose           | Used by
      TCP      | 30000-32767 | launch-wizard-1 | NodePort Services | All
      TCP      | 30000-32767 | launch-wizard-2 | NodePort Services | All
      TCP      | 22          | launch-wizard-1 | ssh               | master node
      TCP      | 22          | launch-wizard-2 | ssh               | worker node
      TCP      | 10250       | launch-wizard-1 | kubelet API       | Self, control plane
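If you prefer scripting over the launch wizard, the same rules can be added with the AWS CLI. A minimal sketch for two of the master node rules above, assuming hypothetical group IDs sg-0123456789abcdef0 (launch-wizard-1) and sg-0fedcba9876543210 (launch-wizard-2):

# Open the Kubernetes API server port to the world (first row of the master table above).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 6443 \
    --cidr 0.0.0.0/0
# Rules whose source is another security group use --source-group instead of --cidr,
# e.g. allowing the worker node's security group to reach the Squid port.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3128 \
    --source-group sg-0fedcba9876543210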

1. Common setup on all nodes (master and worker)

  • Verify that the MAC address and product_uuid are unique on every node
admin@ip-172-31-45-55:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 06:46:ee:09:eb:ad brd ff:ff:ff:ff:ff:ff
admin@ip-172-31-45-55:~$ 
admin@ip-172-31-45-55:~$ sudo cat /sys/class/dmi/id/product_uuid
ec248d4d-9654-26c6-4a3e-c83cedf1af89
admin@ip-172-31-45-55:~$
  • Load the br_netfilter kernel module and set the kernel parameters
admin@ip-172-31-45-55:~$ lsmod | grep br_netfilter
admin@ip-172-31-45-55:~$ sudo modprobe br_netfilter
admin@ip-172-31-45-55:~$ lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                188416  1 br_netfilter
admin@ip-172-31-45-55:~$
admin@ip-172-31-45-55:~$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> br_netfilter
> EOF
br_netfilter
admin@ip-172-31-45-55:~$ 
admin@ip-172-31-45-55:~$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
admin@ip-172-31-45-55:~$ 
admin@ip-172-31-45-55:~$ sudo sysctl --system
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.d/protect-links.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
admin@ip-172-31-45-55:~$ 
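To confirm that the new kernel parameters are actually in effect, you can query them directly; both should print 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables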
  • Install Docker (container runtime)
$ sudo apt-get remove docker docker-engine docker.io containerd runc
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
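The commands above pull in the latest docker-ce. This article's environment uses Docker 20.10.5; if you want to pin that exact release, the sketch below assumes the usual Debian buster package naming (confirm the exact string with apt-cache madison first):

apt-cache madison docker-ce
sudo apt-get install -y docker-ce=5:20.10.5~3-0~debian-buster \
    docker-ce-cli=5:20.10.5~3-0~debian-buster containerd.io
sudo apt-mark hold docker-ce docker-ce-cli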
  • Configure the Docker daemon
    In particular, set the cgroup driver to systemd.
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
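After the restart, it is worth confirming that Docker really switched to the systemd cgroup driver; kubeadm warns during preflight if kubelet and Docker disagree on it:

sudo docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd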
  • Install kubeadm, kubelet, and kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
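The unpinned install above takes the newest packages from the repository. To match the v1.20.5 used in this article, you can instead install explicit versions (the -00 suffix is the package revision used by apt.kubernetes.io) and then verify:

sudo apt-get install -y kubelet=1.20.5-00 kubeadm=1.20.5-00 kubectl=1.20.5-00
sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version -o short   # should print v1.20.5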

2. Building the master node

  • Install the Squid proxy
sudo apt install squid
sudo vi /etc/squid/squid.conf
#http_access deny all
http_access allow all
:wq
sudo systemctl restart squid
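Note that http_access allow all lets anyone who can reach port 3128 use the proxy, so only the security group limits access here. A slightly tighter squid.conf, assuming the VPC CIDR is 172.31.0.0/16, plus a quick check that Squid is listening:

# /etc/squid/squid.conf - alternative to "http_access allow all"
acl vpcnet src 172.31.0.0/16
http_access allow vpcnet
http_access deny all

sudo ss -lntp | grep 3128   # squid should be listening on port 3128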
  • Initialize the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
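At this point the control plane should be up, although the node will stay NotReady until the Pod network add-on is installed in the next step. A quick sanity check:

kubectl cluster-info
kubectl get nodes
kubectl get pods -n kube-system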
  • Install the Pod network add-on (Flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
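Flannel's default Pod network is 10.244.0.0/16, which is why kubeadm init was run with --pod-network-cidr=10.244.0.0/16 above. In this manifest the DaemonSet is named kube-flannel-ds and its pods carry the app=flannel label, so you can watch it come up and the node turn Ready:

kubectl -n kube-system get ds kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes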

3. Building each worker node

  • Configure the HTTP proxy (Linux OS)
vi .bashrc
export HTTP_PROXY="http://172.31.45.55:3128"
export HTTPS_PROXY="http://172.31.45.55:3128"
export NO_PROXY="172.31.45.55,localhost,127.0.0.1,::1"
:wq
source .bashrc
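The shell variables above cover interactive commands, but apt run through sudo does not always inherit them. A small sketch that gives apt its own proxy configuration on the worker (the file name 95proxy.conf is arbitrary):

cat <<EOF | sudo tee /etc/apt/apt.conf.d/95proxy.conf
Acquire::http::Proxy "http://172.31.45.55:3128/";
Acquire::https::Proxy "http://172.31.45.55:3128/";
EOF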
  • Configure the HTTP proxy (Docker service)
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://172.31.45.55:3128"
Environment="HTTPS_PROXY=http://172.31.45.55:3128"
Environment="NO_PROXY=172.31.45.55,localhost,127.0.0.1,::1"
:wq
sudo systemctl daemon-reload
sudo systemctl restart docker
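You can confirm that dockerd picked up the proxy settings and that image pulls actually go through Squid on the master node:

sudo docker info | grep -i proxy     # should show the HTTP Proxy / HTTPS Proxy values
sudo docker pull hello-world         # test pull through the proxy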
  • Join the worker node to the cluster
sudo kubeadm join 172.31.45.55:6443 --token vsabag.sxrdi9sm0cc7g4vo \
    --discovery-token-ca-cert-hash sha256:1b4cb9607905756a8a2619aed6c3b12399667c596fbfab318b11c6813430678c 
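The token and CA cert hash come from the output of kubeadm init on the master node. Bootstrap tokens expire after 24 hours by default, so if the original one is no longer valid, a fresh join command can be generated on the master:

sudo kubeadm token create --print-join-command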

4. Final verification

admin@ip-172-31-45-55:~$ kubectl get nodes
NAME              STATUS   ROLES                  AGE    VERSION
ip-172-31-42-62   Ready    <none>                 138m   v1.20.5
ip-172-31-45-55   Ready    control-plane,master   22h    v1.20.5
admin@ip-172-31-45-55:~$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-22lpr                   1/1     Running   1          22h
coredns-74ff55c5b-cdkj8                   1/1     Running   1          22h
etcd-ip-172-31-45-55                      1/1     Running   1          22h
kube-apiserver-ip-172-31-45-55            1/1     Running   1          22h
kube-controller-manager-ip-172-31-45-55   1/1     Running   1          22h
kube-flannel-ds-h6xdc                     1/1     Running   11         15h
kube-flannel-ds-m55hw                     1/1     Running   36         138m
kube-proxy-rscnl                          1/1     Running   1          22h
kube-proxy-x5kt4                          1/1     Running   2          138m
kube-scheduler-ip-172-31-45-55            1/1     Running   1          22h
admin@ip-172-31-45-55:~$ 
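As an optional smoke test, you can deploy something and expose it through a NodePort, which the worker security group already allows (30000-32767). A minimal sketch, run from the master node:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                      # note the assigned NodePort
curl http://172.31.42.62:<nodeport>        # worker node's private IP plus the NodePort above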