Unable to attach to Worker Node containers in Kubernetes on Vagrant

When you build a Kubernetes cluster on Vagrant with kubeadm, you run into a problem where you cannot attach to containers running on the Worker Node.
This article summarizes the cause and how to fix it.

The Vagrantfile used to build the environment is shown below.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_version ">= 2.0.0"

CONFIG = 'vagrant/config.rb'

SUPPORTED_OS = {
  "ubuntu1604"          => {box: "generic/ubuntu1604",         user: "vagrant"},
  "ubuntu1804"          => {box: "generic/ubuntu1804",         user: "vagrant"},
  "ubuntu2004"          => {box: "geerlingguy/ubuntu2004",     user: "vagrant"},
}

# Defaults for config options defined in CONFIG
$num_instances ||= 2
$num_k8s_master ||= 1
$instance_name_prefix ||= "kube"
$vm_gui ||= false
$vm_memory ||= 2048
$vm_cpus ||= 2
$shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "192.168.128"
$os ||= "ubuntu1804"

$box = SUPPORTED_OS[$os][:box]

Vagrant.configure("2") do |config|

  config.vm.box = $box
  config.ssh.username = SUPPORTED_OS[$os][:user]

  (1..$num_instances).each do |i|
    config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|

      node.vm.hostname = vm_name

      node.vm.provider :virtualbox do |vb|
        vb.memory = $vm_memory
        vb.cpus = $vm_cpus
        vb.gui = $vm_gui
        vb.linked_clone = true
        vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
      end

      node.vm.synced_folder ".", "/vagrant", disabled: false, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'] , rsync__exclude: ['.git','venv']
      $shared_folders.each do |src, dst|
        node.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
      end

      ip = "#{$subnet}.#{i+100}"
      node.vm.network :private_network, ip: ip

      #host_port = 10022+i
      #config.vm.network "forwarded_port", guest: 22, host: $host_port, host_ip: "127.0.0.1"

      node.vm.provision "shell", run: "always", inline: "swapoff -a"

      node.vm.provision "shell", inline: "apt-get update"
      node.vm.provision "shell", inline: "apt-get install -y docker.io apt-transport-https curl"
      node.vm.provision "shell", inline: "systemctl start docker"
      node.vm.provision "shell", inline: "systemctl enable docker"
      node.vm.provision "shell", inline: "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -"
      node.vm.provision "shell", inline: "apt-add-repository 'deb http://apt.kubernetes.io/ kubernetes-xenial main'"
      node.vm.provision "shell", inline: "apt-get update"
      node.vm.provision "shell", inline: "apt-get install -y kubelet kubeadm kubectl"

      # On the master node: initialize the control plane and install the flannel CNI
      if i == $num_k8s_master
        node.vm.network "forwarded_port", guest: 6443, host: 16443, host_ip: "127.0.0.1", auto_correct: true

        node.vm.provision "shell", inline: "kubeadm init --apiserver-advertise-address=#{$subnet}.#{i+100} --pod-network-cidr=10.244.10.0/16 --apiserver-cert-extra-sans=127.0.0.1"
        node.vm.provision "shell", inline: "mkdir /home/vagrant/.kube"
        node.vm.provision "shell", inline: "cp /etc/kubernetes/admin.conf /home/vagrant/.kube/config"
        node.vm.provision "shell", inline: "chown vagrant:vagrant /home/vagrant/.kube/config"
        node.vm.provision "shell", inline: "kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml"
#        node.vm.provision "shell", inline: "kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes --all node-role.kubernetes.io/master-"
      end

    end
  end
end

Note: the cluster consists of one Master Node and one Worker Node.
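
The Vagrantfile only runs kubeadm init on the master (kube-1); the worker (kube-2) is joined by hand afterwards. A rough sketch of that step (the actual token and CA hash come from your own cluster):

# On the master: print the full join command for this cluster
vagrant ssh kube-1 -c "sudo kubeadm token create --print-join-command"
# Run the printed "kubeadm join ..." command as root on kube-2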

Cause

Attempting to attach to a container on the Worker Node with kubectl exec fails with an error saying the pod does not exist.

sho@Desktop $ kubectl get po -o wide                                                                              [~/workspace/vm/kubeadm]
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-5bf87f5f59-g5w8b   1/1     Running   1          68m   10.244.1.5   kube-2   <none>           <none>
nginx-deployment-5bf87f5f59-t6j7s   1/1     Running   1          68m   10.244.1.4   kube-2   <none>           <none>
sho@Desktop $ kubectl exec -it nginx-deployment-5bf87f5f59-g5w8b sh                                              [~/workspace/vm/kubeadm]
error: unable to upgrade connection: pod does not exist
sho@Desktop $                                                                                       [~/workspace/vm/kubeadm]
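
The pods above come from a plain nginx Deployment. To reproduce the issue, something along these lines is enough (hypothetical commands, not the exact manifest used here):

kubectl create deployment nginx-deployment --image=nginx
kubectl scale deployment nginx-deployment --replicas=2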

The cause is that the VMs created by Vagrant all have the same IP address on eth0.
As a result, every Node ends up with the same INTERNAL-IP, and the api server cannot reach the containers on the Worker Node.

vagrant@kube-1:~$ ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:69:8a:cd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 83836sec preferred_lft 83836sec
    inet6 fe80::a00:27ff:fe69:8acd/64 scope link 
       valid_lft forever preferred_lft forever
vagrant@kube-1:~$ 
vagrant@kube-2:~$ ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:69:8a:cd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 83873sec preferred_lft 83873sec
    inet6 fe80::a00:27ff:fe69:8acd/64 scope link 
       valid_lft forever preferred_lft forever
vagrant@kube-2:~$ 
sho@Desktop $ kubectl get no -o wide                                                                             [~/workspace/vm/kubeadm]
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-1   Ready    master   36m   v1.18.2   10.0.2.15     <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   docker://19.3.6
kube-2   Ready    <none>   29m   v1.18.2   10.0.2.15     <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   docker://19.3.6
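Roughly speaking, when --node-ip is not set, kubelet falls back to the address of the default network interface, which on a Vagrant/VirtualBox box is the NAT interface eth0 with the same 10.0.2.15 on every VM. You can confirm which interface carries the default route on each node:

# The default route goes out eth0, the Vagrant NAT interface
ip route show default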

Solution

Each VM has a distinct IP on eth1, so the problem can be fixed by changing the kubelet configuration to use the eth1 address as the INTERNAL-IP.

vagrant@kube-1:~$ ip a show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:06:7e:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.101/24 brd 192.168.128.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe06:7ea7/64 scope link 
       valid_lft forever preferred_lft forever
vagrant@kube-1:~$ 
vagrant@kube-2:~$ ip a show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:b8:ec:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.102/24 brd 192.168.128.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb8:ec3a/64 scope link 
       valid_lft forever preferred_lft forever
vagrant@kube-2:~$ 

Change the configuration on both the Master Node and the Worker Node.
(The change is identical on both, so only one node is shown here.)

vagrant@kube-1:~$ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.128.101"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
vagrant@kube-1:~$ 

The line below is the one I added (on kube-2 the address is 192.168.128.102):

Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.128.101"

kubelet runs as a systemd service, so after changing the configuration, reload and restart the service the systemd way.

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
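
As the comments in 10-kubeadm.conf suggest, the same flag can instead be placed in /etc/default/kubelet, which the drop-in already sources via EnvironmentFile. A sketch for kube-1 (kube-2 would use 192.168.128.102); in this variant only a restart is needed because the unit file itself is unchanged:

echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.128.101' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet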

The INTERNAL-IPs now differ, and attaching to containers on the Worker Node works.

sho@Desktop $ kubectl get po -o wide                                                                       [~/workspace/vm/kubeadm]
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-5bf87f5f59-g5w8b   1/1     Running   1          71m   10.244.1.5   kube-2   <none>           <none>
nginx-deployment-5bf87f5f59-t6j7s   1/1     Running   1          71m   10.244.1.4   kube-2   <none>           <none>
sho@Desktop $ kubectl exec -it nginx-deployment-5bf87f5f59-g5w8b sh                                        [~/workspace/vm/kubeadm]
# 
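
As a quick sanity check, the InternalIP registered by each kubelet can also be pulled with a jsonpath query (a sketch; it should print the eth1 addresses set above):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'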