Installing Kubernetes 1.9 on CentOS 7

Posted at 2018-04-28

Introduction

This article installs Kubernetes 1.9 on CentOS 7, building a cluster from the following three servers:

sugi-kubernetes19-master01 : Master
sugi-kubernetes19-node01 : Node
sugi-kubernetes19-node02 : Node

Preliminary setup (run on all servers)

The install fails when kubeadm is run with swap enabled, so disable swap:

swapoff -a

Then edit /etc/fstab and comment out the swap entry so swap stays disabled after a reboot, as in the example below:

vim /etc/fstab
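
For reference, the line to comment out looks like the following; the device path is only an example from a default CentOS 7 install and may differ per environment:

# /dev/mapper/centos-swap swap                    swap    defaults        0 0

Afterwards, free -h should report 0 for Swap.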

Installing kubeadm (run on all servers)

Install Docker:

yum install -y docker
systemctl enable docker && systemctl start docker

Dependency notes:

============================================================================================================================================================
Package                                    Arch                      Version                                               Repository                 Size
============================================================================================================================================================
Installing:
docker                                     x86_64                    2:1.13.1-53.git774336d.el7.centos                     extras                     16 M
Installing for dependencies:
audit-libs-python                          x86_64                    2.7.6-3.el7                                           base                       73 k
checkpolicy                                x86_64                    2.5-4.el7                                             base                      290 k
container-selinux                          noarch                    2:2.42-1.gitad8f0f7.el7                               extras                     32 k
container-storage-setup                    noarch                    0.8.0-3.git1d27ecf.el7                                extras                     33 k
docker-client                              x86_64                    2:1.13.1-53.git774336d.el7.centos                     extras                    3.7 M
docker-common                              x86_64                    2:1.13.1-53.git774336d.el7.centos                     extras                     86 k
libcgroup                                  x86_64                    0.41-13.el7                                           base                       65 k
libsemanage-python                         x86_64                    2.5-8.el7                                             base                      104 k
oci-register-machine                       x86_64                    1:0-6.git2b44233.el7                                  extras                    1.1 M
oci-systemd-hook                           x86_64                    1:0.1.15-2.gitc04483d.el7                             extras                     33 k
oci-umount                                 x86_64                    2:2.3.3-3.gite3c9055.el7                              extras                     32 k
policycoreutils-python                     x86_64                    2.5-17.1.el7                                          base                      446 k
python-IPy                                 noarch                    0.75-6.el7                                            base                       32 k
setools-libs                               x86_64                    3.3.8-1.1.el7                                         base                      612 k
skopeo-containers                          x86_64                    1:0.1.28-1.git0270e56.el7                             extras                     13 k
yajl                                       x86_64                    2.0.4-4.el7                                           base                       39 k

Transaction Summary
============================================================================================================================================================
Install  1 Package (+16 Dependent packages)

Set up the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
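
As an optional sanity check, confirm that yum can see the new repository:

# Should list the "kubernetes" repo configured above
yum repolist enabled | grep -i kubernetes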

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Dependency notes:

============================================================================================================================================================
 Package                                 Arch                            Version                                  Repository                           Size
============================================================================================================================================================
Installing:
 kubeadm                                 x86_64                          1.9.4-0                                  kubernetes                           17 M
 kubectl                                 x86_64                          1.9.4-0                                  kubernetes                          8.9 M
 kubelet                                 x86_64                          1.9.4-0                                  kubernetes                           17 M
Installing for dependencies:
 kubernetes-cni                          x86_64                          0.6.0-0                                  kubernetes                          8.6 M
 socat                                   x86_64                          1.7.3.2-2.el7                            base                                290 k

Transaction Summary
============================================================================================================================================================
Install  3 Packages (+2 Dependent packages)

Several issues have reportedly been seen where traffic is routed incorrectly because iptables is bypassed. Work around this with the following sysctl settings:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Example output:

[root@sugi-kubernetes19-node02 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
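
Note that on CentOS 7 the net.bridge.bridge-nf-call-* keys only exist while the br_netfilter kernel module is loaded. If sysctl --system cannot find them, load the module and make it persistent; a minimal sketch:

# Load the bridge netfilter module now, and on every boot via systemd
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf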

Verify that the cgroup driver Docker is using matches the cgroup driver the kubelet is configured with.

Docker is using systemd:

[root@sugi-kubernetes19-master01 ~]# docker info | grep -i cgroup
  WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd

The kubelet's driver setting is also systemd, so there is no problem:

[root@sugi-kubernetes19-master01 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep cgroup
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
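
If the two drivers had not matched, the usual fix at the time was to rewrite the kubelet drop-in to match what docker info reports and then restart the kubelet. A sketch, where the replacement value cgroupfs is only illustrative:

# Point the kubelet at the same cgroup driver Docker uses (value is an example)
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet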

Setting up the master with kubeadm

  • --pod-network-cidr
    Specifies the CIDR used by the overlay network; the value depends on the network plugin used in the cluster (Flannel and other overlay networks). When installing Flannel on a kubeadm cluster, it is fixed at 10.244.0.0/16.

The command takes about two minutes to run:

kubeadm init --pod-network-cidr '10.244.0.0/16'

Example output:

[root@sugi-kubernetes19-master01 ~]# kubeadm init --pod-network-cidr '10.244.0.0/16'
[init] Using Kubernetes version: v1.9.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "sugi-kubernetes19-master01.localdomain" could not be reached
        [WARNING Hostname]: hostname "sugi-kubernetes19-master01.localdomain" lookup sugi-kubernetes19-master01.localdomain on 8.8.8.8:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sugi-kubernetes19-master01.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.220]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 114.503352 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node sugi-kubernetes19-master01.localdomain as master by adding a label and a taint
[markmaster] Master sugi-kubernetes19-master01.localdomain tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 99bcf2.e89be75362d8794b
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 99bcf2.e89be75362d8794b 192.168.120.220:6443 --discovery-token-ca-cert-hash sha256:c6094ae953604710d9f10fd4f248e35d7f8c4f0829eff777d677c9462fb83ce1

To run kubectl as the root user, the following environment variable needs to be set:

export KUBECONFIG=/etc/kubernetes/admin.conf

Verify that kubectl works:

[root@sugi-kubernetes19-master01 kubernetes]# kubectl get nodes
NAME                                     STATUS     ROLES     AGE       VERSION
sugi-kubernetes19-master01.localdomain   NotReady   master    4m        v1.9.4

The node reports NotReady because no pod network has been deployed yet; that is addressed further below. Set the environment variable in .bash_profile so it persists:

cat <<'EOF' > /root/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# add for kubernetes
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
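
Apply it to the current shell without logging back in:

source /root/.bash_profile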

Check the kubeadm token used to join the master. Its expiry is limited to 24 hours, so a new token will likely need to be created the next time the cluster is scaled out (see the sketch after the output below).

[root@sugi-kubernetes19-master01 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
99bcf2.e89be75362d8794b   23h       2018-03-18T22:00:39+09:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
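
If the token has expired, a new one can be issued on the master. The snippet below follows the kubeadm documentation of this era; the openssl pipeline recomputes the value passed to --discovery-token-ca-cert-hash:

# Issue a new bootstrap token (valid for 24 hours by default)
kubeadm token create

# Recompute the sha256 hash of the cluster CA public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'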

Installing the pod network

Install Flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Example output:

[root@sugi-kubernetes19-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

To verify that the install succeeded, list the pods in all namespaces. Confirm that several pods have been created and that kube-dns is Running, as below:

[root@sugi-kubernetes19-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-sugi-kubernetes19-master01.localdomain                      1/1       Running   0          6m
kube-system   kube-apiserver-sugi-kubernetes19-master01.localdomain            1/1       Running   0          7m
kube-system   kube-controller-manager-sugi-kubernetes19-master01.localdomain   1/1       Running   0          7m
kube-system   kube-dns-6f4fd4bdf-5tvpn                                         3/3       Running   0          7m
kube-system   kube-flannel-ds-z8btx                                            1/1       Running   0          2m
kube-system   kube-proxy-24mm5                                                 1/1       Running   0          7m
kube-system   kube-scheduler-sugi-kubernetes19-master01.localdomain            1/1       Running   0          6m

A flannel.1 interface has been created:

[root@sugi-kubernetes19-master01 ~]# ip -d a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:98:17:ee brd ff:ff:ff:ff:ff:ff promiscuity 0
    inet 192.168.120.220/24 brd 192.168.120.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::98a0:413d:6b71:8fbd/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:f6:06:77:86 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:f6:6:77:86 designated_root 8000.2:42:f6:6:77:86 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   89.73 vlan_default_pvid 1 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 8a:35:24:38:60:de brd ff:ff:ff:ff:ff:ff promiscuity 0
    vxlan id 1 local 192.168.120.220 dev ens192 srcport 0 0 dstport 8472 nolearning ageing 300
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::8835:24ff:fe38:60de/64 scope link
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
    link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.a:58:a:f4:0:1 designated_root 8000.a:58:a:f4:0:1 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  255.62 vlan_default_pvid 1 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac47:5ff:fe51:b4b2/64 scope link
       valid_lft forever preferred_lft forever
6: vethe30d042d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
    link/ether de:8f:ad:8b:9a:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.a:58:a:f4:0:1 designated_root 8000.a:58:a:f4:0:1 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on
    inet6 fe80::dc8f:adff:fe8b:9abd/64 scope link
       valid_lft forever preferred_lft forever

Run on the node servers

Run the kubeadm join command so each node joins the master. The command is taken from the last lines printed by kubeadm init on the master:

kubeadm join --token 99bcf2.e89be75362d8794b 192.168.120.220:6443 --discovery-token-ca-cert-hash sha256:c6094ae953604710d9f10fd4f248e35d7f8c4f0829eff777d677c9462fb83ce1

Example output; the join finishes in about two seconds:

[root@sugi-kubernetes19-node01 ~]# kubeadm join --token 99bcf2.e89be75362d8794b 192.168.120.220:6443 --discovery-token-ca-cert-hash sha256:c6094ae953604710d9f10fd4f248e35d7f8c4f0829eff777d677c9462fb83ce1
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "sugi-kubernetes19-node01.localdomain" could not be reached
        [WARNING Hostname]: hostname "sugi-kubernetes19-node01.localdomain" lookup sugi-kubernetes19-node01.localdomain on 8.8.8.8:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.120.220:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.220:6443"
[discovery] Requesting info from "https://192.168.120.220:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.120.220:6443"
[discovery] Successfully established connection with API Server "192.168.120.220:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Containers are now running on the node servers as well:

[root@sugi-kubernetes19-node01 ~]# docker ps
CONTAINER ID        IMAGE                                                                                                               COMMAND                  CREATED              STATUS              PORTS               NAMES
d47fa6aa99b3        quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891                      "/opt/bin/flanneld..."   46 seconds ago       Up 45 seconds                           k8s_kube-flannel_kube-flannel-ds-w9dz9_kube-system_2b9cd429-29e5-11e8-9843-0050569817ee_1
789d0beaac8e        gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20   "/usr/local/bin/ku..."   50 seconds ago       Up 49 seconds                           k8s_kube-proxy_kube-proxy-ddtfk_kube-system_2b9cbb8a-29e5-11e8-9843-0050569817ee_0
33cc95e86646        gcr.io/google_containers/pause-amd64:3.0                                                                            "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-ddtfk_kube-system_2b9cbb8a-29e5-11e8-9843-0050569817ee_0
513e60c0149b        gcr.io/google_containers/pause-amd64:3.0                                                                            "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-flannel-ds-w9dz9_kube-system_2b9cd429-29e5-11e8-9843-0050569817ee_0

Check the node status:

[root@sugi-kubernetes19-master01 ~]# kubectl get node
NAME                                     STATUS    ROLES     AGE       VERSION
sugi-kubernetes19-master01.localdomain   Ready     master    18m       v1.9.4
sugi-kubernetes19-node01.localdomain     Ready     <none>    4m        v1.9.4
sugi-kubernetes19-node02.localdomain     Ready     <none>    3m        v1.9.4

Check the pods:

[root@sugi-kubernetes19-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-sugi-kubernetes19-master01.localdomain                      1/1       Running   0          28m
kube-system   kube-apiserver-sugi-kubernetes19-master01.localdomain            1/1       Running   0          29m
kube-system   kube-controller-manager-sugi-kubernetes19-master01.localdomain   1/1       Running   0          29m
kube-system   kube-dns-6f4fd4bdf-5tvpn                                         3/3       Running   0          29m
kube-system   kube-flannel-ds-tvvbj                                            1/1       Running   1          14m
kube-system   kube-flannel-ds-w9dz9                                            1/1       Running   1          15m
kube-system   kube-flannel-ds-z8btx                                            1/1       Running   0          24m
kube-system   kube-proxy-24mm5                                                 1/1       Running   0          29m
kube-system   kube-proxy-ddtfk                                                 1/1       Running   0          15m
kube-system   kube-proxy-gnnvw                                                 1/1       Running   0          14m
kube-system   kube-scheduler-sugi-kubernetes19-master01.localdomain            1/1       Running   0          28m

This completes the cluster build.
