
Installing Kubernetes 1.12 on Ubuntu 16.04

Posted at 2018-09-28

Kubernetes 1.12 is out, so I figured I'd play with it this weekend. Notes below.
No release notes yet? Is this the CHANGELOG?
https://github.com/kubernetes/kubernetes/blob/release-1.12/CHANGELOG-1.12.md
Apparently the newest validated Docker version is still 17.03.x. It would probably run fine on 18.x, but Docker 17.03 can't be installed on Ubuntu 18.04, so I'll go with Ubuntu 16.04 this time.

Up to installing kubeadm

The specs required for installation are as follows:
https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker

To leave a bit of headroom, the master environment is: OS: Ubuntu 16.04.5, CPU: 2 cores, memory: 4 GB, HDD: 20 GB, with the SSH server enabled, internet access, a static IP address, and swap disabled.
Ubuntu 18.04, which lets you set a static IP address during OS installation, is great; 16.04 makes me google the static IP configuration every single time, which is a pain.

/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens32
iface ens32 inet static
address 192.168.0.131
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.1
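
To pick up the new settings without a full reboot, bouncing the interface should also work (ens32 matches the interface name above; adjust for your environment), though I reboot below anyway:

ifdown ens32 && ifup ens32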

Disabling swap is something I forget all the time too, so while I'm at it:

/etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=e993a74a-33e5-4d0e-bd7f-7a097a08d0f7 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
#UUID=4cb6a305-c674-41b8-adfc-71c0d60d05a0 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
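
If you'd rather script this than edit fstab by hand: swapoff -a turns swap off immediately, and a sed like the following comments out the fstab entries (it assumes they contain a whitespace-delimited "swap" field, and is idempotent):

# turn swap off right now
swapoff -a
# comment out swap entries so it stays off after reboot
sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab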

I also tend to forget to clear the DHCP-assigned address, so reboot the OS once at this point.

The Docker installation commands in the Kubernetes 1.12 documentation are a dead link at the moment, so refer to the 1.11 documentation and install Docker-CE 17.03.
https://v1-11.docs.kubernetes.io/docs/tasks/tools/install-kubeadm/#installing-docker

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
# pin docker-ce to the newest 17.03.x build available in the repository
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
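
You can check that the pinned version actually got installed:

docker version --format '{{.Server.Version}}'
(17.03.x should be printed)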

Install kubeadm, kubelet, and kubectl (kube-proxy gets deployed later by kubeadm itself).

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl   # keep apt from upgrading these unexpectedly
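
A quick sanity check of the installed versions (each should report v1.12.x at this point):

kubeadm version -o short
kubelet --version
kubectl version --client --short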

Installing a single master

Continue with kubeadm to set up the Kubernetes master.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
A single master, of course. I'll use Calico as the network driver, but since the VMs' IP addresses are in 192.168.x.x (which would collide with Calico's default pool), --pod-network-cidr is set to 10.0.0.0/16. Registering the node under its hostname might get in the way of metrics-server, so I pass the master's IP address to --node-name.
Installation completes in two or three minutes.

kubeadm init --pod-network-cidr=10.0.0.0/16 --node-name=192.168.0.131
(output)
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [192.168.0.131 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [192.168.0.131 localhost] and IPs [192.168.0.131 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [192.168.0.131 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.131]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 19.508242 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node 192.168.0.131 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node 192.168.0.131 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "192.168.0.131" as an annotation
[bootstraptoken] using token: iswf5k.yey45038we7wlruy
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.131:6443 --token nple4o.5pwyc8dblynre86i --discovery-token-ca-cert-hash sha256:f70bde8459037c34efb1b4599d2db40f6cc813a94e6b321fd52006026334ee4d

root@m16:~#

Check the node with kubectl.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
(output)
NAME            STATUS     ROLES    AGE     VERSION
192.168.0.131   NotReady   master   5m14s   v1.12.0

At this point no network driver has been installed, so coredns doesn't start.

# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-dtxwp                0/1     Pending   0          6m14s
coredns-576cbf47c7-kmt2m                0/1     Pending   0          6m14s
etcd-192.168.0.131                      1/1     Running   0          5m22s
kube-apiserver-192.168.0.131            1/1     Running   0          5m11s
kube-controller-manager-192.168.0.131   1/1     Running   0          5m38s
kube-proxy-bqvwd                        1/1     Running   0          6m15s
kube-scheduler-192.168.0.131            1/1     Running   0          5m24s
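
If you want to see why they are Pending, kubectl describe should show scheduler events saying no nodes are available (the lone master stays NotReady until a CNI plugin is in place). Selecting by label saves looking up the generated pod names:

kubectl describe pod -n kube-system -l k8s-app=kube-dns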

Install Calico. CALICO_IPV4POOL_CIDR is changed to 10.0.0.0/16.

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml>calico.yaml
vi calico.yaml
---
■ Edit CALICO_IPV4POOL_CIDR
(before)
230             - name: CALICO_IPV4POOL_CIDR
231               value: "192.168.0.0/16"
(after)
230             - name: CALICO_IPV4POOL_CIDR
231               value: "10.0.0.0/16"
---
kubectl apply -f calico.yaml
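
Incidentally, if you'd rather skip the vi step, a sed one-liner makes the same change before the apply (assuming the default value in calico.yaml is exactly "192.168.0.0/16"):

sed -i 's|192.168.0.0/16|10.0.0.0/16|' calico.yaml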

After two or three minutes, calico-node and coredns come up.

# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-d2mmz                       2/2     Running   0          40s
coredns-576cbf47c7-dtxwp                1/1     Running   0          9m4s
coredns-576cbf47c7-kmt2m                1/1     Running   0          9m4s
etcd-192.168.0.131                      1/1     Running   0          8m12s
kube-apiserver-192.168.0.131            1/1     Running   0          8m1s
kube-controller-manager-192.168.0.131   1/1     Running   0          8m28s
kube-proxy-bqvwd                        1/1     Running   0          9m5s
kube-scheduler-192.168.0.131            1/1     Running   0          8m14s

Disk usage

The df output immediately after installing the OS:

# df
Filesystem     1K-blocks    Used Available Use% Mounted on
...
/dev/sda1       19525500 1597236  16913380   9% /

and the df output after setting up the Kubernetes master:

# df
Filesystem     1K-blocks    Used Available Use% Mounted on
...
/dev/sda1       19525500 3440816  15069800  19% /
...

So installing a minimal Kubernetes master costs about 2 GB of disk (3440816 − 1597236 ≈ 1.8 million 1K-blocks, i.e. roughly 1.8 GB). About what you'd expect.

Adding nodes

Add two nodes, each with CPU: 2 cores, memory: 2 GB, HDD: 20 GB.
As with the master, install everything up through the kubeadm command, then run the following on each node.
As before, I want the nodes registered by IP address, so pass each node's IP to the --node-name option.

kubeadm join 192.168.0.131:6443 --node-name <node IP address> --token nple4o.5pwyc8dblynre86i --discovery-token-ca-cert-hash sha256:f70bde8459037c34efb1b4599d2db40f6cc813a94e6b321fd52006026334ee4d
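
Note that the bootstrap token printed by kubeadm init expires after 24 hours by default; if it has lapsed by the time you add a node, generate a fresh join command on the master:

kubeadm token create --print-join-command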

Running get node after adding them shows:

# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.131   Ready    master   50m   v1.12.0
192.168.0.132   Ready    <none>   16m   v1.12.0
192.168.0.133   Ready    <none>   92s   v1.12.0
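
As a quick smoke test of pod networking, start a throwaway deployment and check that its pod comes up on one of the workers with an address from 10.0.0.0/16 (nginx is just an arbitrary test image):

kubectl create deployment nginx --image=nginx
kubectl get pod -o wide
kubectl delete deployment nginx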

Looks good. That's it for today.
