Changing the Pod-network-cidr with kubeadm

Posted at 2018-06-09

When installing a Kubernetes cluster with kubeadm and using Flannel, the documentation states that pod-network-cidr must be set to a fixed value: you are told to specify --pod-network-cidr=10.244.0.0/16, a relatively large /16 segment.

Concretely, it looks like the following:

001.png

As the callout in the figure notes, a /16 range is quite wide and may overlap with an existing network, so I investigated how to change it.
I was able to change the cidr successfully, so the procedure is recorded below.

Versions are as follows:

  • Kubernetes 1.10
  • kubeadm 1.10
  • CentOS 7.5

Preliminary setup (run on all servers)

If swap is enabled when you run kubeadm, the install fails, so disable swap:

swapoff -a

Also edit /etc/fstab so that swap stays disabled after a reboot:

vim /etc/fstab
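
The swap entry to comment out typically looks like the following (the device path is illustrative and differs per machine):

#/dev/mapper/centos-swap swap                    swap    defaults        0 0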

Installing kubeadm (run on all servers)

Install Docker:

yum install -y docker
systemctl enable docker && systemctl start docker

Dependencies (for reference):

============================================================================================================================================
 Package                                Arch                  Version                                           Repository             Size
============================================================================================================================================
Installing:
 docker                                 x86_64                2:1.13.1-63.git94f4240.el7.centos                 extras                 16 M
Installing for dependencies:
 audit-libs-python                      x86_64                2.8.1-3.el7                                       base                   75 k
 checkpolicy                            x86_64                2.5-6.el7                                         base                  294 k
 container-selinux                      noarch                2:2.55-1.el7                                      extras                 34 k
 container-storage-setup                noarch                0.9.0-1.rhel75.gite0997c3.el7                     extras                 33 k
 docker-client                          x86_64                2:1.13.1-63.git94f4240.el7.centos                 extras                3.8 M
 docker-common                          x86_64                2:1.13.1-63.git94f4240.el7.centos                 extras                 88 k
 libcgroup                              x86_64                0.41-15.el7                                       base                   65 k
 libsemanage-python                     x86_64                2.5-11.el7                                        base                  112 k
 oci-register-machine                   x86_64                1:0-6.git2b44233.el7                              extras                1.1 M
 oci-systemd-hook                       x86_64                1:0.1.15-2.gitc04483d.el7                         extras                 33 k
 oci-umount                             x86_64                2:2.3.3-3.gite3c9055.el7                          extras                 32 k
 policycoreutils-python                 x86_64                2.5-22.el7                                        base                  454 k
 python-IPy                             noarch                0.75-6.el7                                        base                   32 k
 setools-libs                           x86_64                3.3.8-2.el7                                       base                  619 k
 skopeo-containers                      x86_64                1:0.1.29-3.dev.git7add6fc.el7.0                   extras                 15 k
 yajl                                   x86_64                2.0.4-4.el7                                       base                   39 k

Transaction Summary
============================================================================================================================================

Configure the repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Dependencies (for reference):

============================================================================================================================================
 Package                             Arch                        Version                              Repository                       Size
============================================================================================================================================
Installing:
 kubeadm                             x86_64                      1.10.4-0                             kubernetes                       17 M
 kubectl                             x86_64                      1.10.4-0                             kubernetes                      7.6 M
 kubelet                             x86_64                      1.10.4-0                             kubernetes                       17 M
Installing for dependencies:
 kubernetes-cni                      x86_64                      0.6.0-0                              kubernetes                      8.6 M
 socat                               x86_64                      1.7.3.2-2.el7                        base                            290 k

Transaction Summary
============================================================================================================================================
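
At this point kubelet keeps restarting until kubeadm init (or kubeadm join) supplies its configuration; this is expected. You can observe it with:

systemctl status kubelet
journalctl -u kubelet -f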

There have reportedly been several issues where traffic is routed incorrectly because it bypasses iptables.
Avoid this with the following sysctl settings:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Example output:

[root@sugi-kubernetes19-node02 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

Verify that the cgroup driver Docker uses matches the cgroup driver kubelet is configured with.
Docker is using systemd:

[root@sugi-kubernetes19-master01 ~]# docker info | grep -i cgroup
  WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd

The kubelet's driver setting is also systemd, so there is no problem:

[root@sugi-kubernetes19-master01 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep cgroup
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Change the kubelet arguments

The kubelet running on every server is configured with the kube-dns address (10.96.0.10).

Change this to 10.1.0.10:

cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.1.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
EOF
systemctl daemon-reload
systemctl restart kubelet.service
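
To confirm that the running kubelet picked up the new DNS address, one quick check (not part of the original procedure) is to grep the process arguments:

ps -ef | grep '[k]ubelet' | grep -o -- '--cluster-dns=[0-9.]*'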

(Optional) Manually install etcd on the Master

When you use kubeadm, etcd is created as a Pod on the Master, but it communicates over HTTPS and I could not quite figure out how to run etcdctl against it, so I set etcd up so that it can be run manually.

Install etcd on the Master:

yum install -y etcd

Dependencies:

============================================================================================================================================
 Package                       Arch                            Version                                Repository                       Size
============================================================================================================================================
Installing:
 etcd                          x86_64                          3.2.18-1.el7                           extras                          9.3 M

Transaction Summary
============================================================================================================================================

Back up the etcd configuration file:

cp -p /etc/etcd/etcd.conf{,.org}

Edit the etcd configuration file:

cat <<'EOF' > /etc/etcd/etcd.conf
# [Member]
# ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# ETCD_WAL_DIR=""
# ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
# ETCD_MAX_SNAPSHOTS="5"
# ETCD_MAX_WALS="5"
ETCD_NAME="default"
# ETCD_SNAPSHOT_COUNT="100000"
# ETCD_HEARTBEAT_INTERVAL="100"
# ETCD_ELECTION_TIMEOUT="1000"
# ETCD_QUOTA_BACKEND_BYTES="0"
# ETCD_MAX_REQUEST_BYTES="1572864"
# ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
# ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
# ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
# [Clustering]
# ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
# ETCD_DISCOVERY=""
# ETCD_DISCOVERY_FALLBACK="proxy"
# ETCD_DISCOVERY_PROXY=""
# ETCD_DISCOVERY_SRV=""
# ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
# ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# ETCD_INITIAL_CLUSTER_STATE="new"
# ETCD_STRICT_RECONFIG_CHECK="true"
# ETCD_ENABLE_V2="true"
#
# [Proxy]
# ETCD_PROXY="off"
# ETCD_PROXY_FAILURE_WAIT="5000"
# ETCD_PROXY_REFRESH_INTERVAL="30000"
# ETCD_PROXY_DIAL_TIMEOUT="1000"
# ETCD_PROXY_WRITE_TIMEOUT="5000"
# ETCD_PROXY_READ_TIMEOUT="0"
#
# [Security]
# ETCD_CERT_FILE=""
# ETCD_KEY_FILE=""
# ETCD_CLIENT_CERT_AUTH="false"
# ETCD_TRUSTED_CA_FILE=""
# ETCD_AUTO_TLS="false"
# ETCD_PEER_CERT_FILE=""
# ETCD_PEER_KEY_FILE=""
# ETCD_PEER_CLIENT_CERT_AUTH="false"
# ETCD_PEER_TRUSTED_CA_FILE=""
# ETCD_PEER_AUTO_TLS="false"
#
# [Logging]
# ETCD_DEBUG="false"
# ETCD_LOG_PACKAGE_LEVELS=""
# ETCD_LOG_OUTPUT="default"
#
# [Unsafe]
# ETCD_FORCE_NEW_CLUSTER="false"
#
# [Version]
# ETCD_VERSION="false"
# ETCD_AUTO_COMPACTION_RETENTION="0"
#
# [Profiling]
# ETCD_ENABLE_PPROF="false"
# ETCD_METRICS="basic"
#
# [Auth]
# ETCD_AUTH_TOKEN="simple"
EOF

Check the changes:

[root@sugi-kubeadm-master01 etcd]# diff -u /etc/etcd/etcd.conf /etc/etcd/etcd.conf.org
--- /etc/etcd/etcd.conf 2018-06-09 02:01:33.962445583 +0900
+++ /etc/etcd/etcd.conf.org     2018-05-19 00:55:57.000000000 +0900
@@ -3,7 +3,7 @@
 ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
 #ETCD_WAL_DIR=""
 #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
-ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
+ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
 #ETCD_MAX_SNAPSHOTS="5"
 #ETCD_MAX_WALS="5"
 ETCD_NAME="default"
@@ -18,7 +18,7 @@
 #
 #[Clustering]
 #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
-ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
+ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
 #ETCD_DISCOVERY=""
 #ETCD_DISCOVERY_FALLBACK="proxy"
 #ETCD_DISCOVERY_PROXY=""

Start etcd:

systemctl start etcd
systemctl status etcd
systemctl enable etcd

Check the etcd member list with the etcdctl command:

[root@sugi-kubeadm-master01 etcd]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
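
Once the Master has been initialized in the next section, you can also peek at the keys Kubernetes writes. Kubernetes 1.10 stores its data via the etcd v3 API, while etcdctl in etcd 3.2 defaults to the v2 API, so set ETCDCTL_API=3 (an illustrative check, not part of the original procedure):

ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 get / --prefix --keys-only | head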

Configure the Master with kubeadm

Pass a config file to the kubeadm command as an argument.
networking.podSubnet specifies the pod-network-cidr.
Each machine in the Kubernetes cluster uses a /24 subnet, so specifying /22 (10.1.4.0–10.1.7.255, i.e. four /24s) caps the cluster at four servers. Note that serviceSubnet 10.1.0.0/22 contains 10.1.0.10, the kube-dns address configured for the kubelet earlier.

Create the config file as follows:

mkdir /root/kubeadm/
cat <<'EOF' > /root/kubeadm/config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - 'http://127.0.0.1:2379'
networking:
  serviceSubnet: '10.1.0.0/22'
  podSubnet: '10.1.4.0/22'
tokenTTL: '0'
EOF

Run kubeadm:

kubeadm init --config /root/kubeadm/config.yaml

It takes roughly two minutes to run.

Example output:

[root@sugi-kubeadm-master01 kubeadm]# kubeadm init --config /root/kubeadm/config.yaml
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "sugi-kubeadm-master01.localdomain" could not be reached
        [WARNING Hostname]: hostname "sugi-kubeadm-master01.localdomain" lookup sugi-kubeadm-master01.localdomain on 8.8.8.8:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sugi-kubeadm-master01.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.120.225]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 79.012574 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node sugi-kubeadm-master01.localdomain as master by adding a label and a taint
[markmaster] Master sugi-kubeadm-master01.localdomain tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: jsw3w2.ce3h3symthg4n8cb
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.120.225:6443 --token jsw3w2.ce3h3symthg4n8cb --discovery-token-ca-cert-hash sha256:38977016e9273b8140c50e0f40a06f70ff85c430ebe4c40bfb18d60ac3509aae

Other settings that are convenient to have

Enable bash_completion

To enable tab completion of kubectl subcommands, install the following package:

[root@sugi-kubernetes110-master01 ~]# yum install -y bash-completion
Loaded plugins: fastestmirror
base                                                                                                                                 | 3.6 kB  00:00:00     
extras                                                                                                                               | 3.4 kB  00:00:00     
kubernetes/signature                                                                                                                 |  454 B  00:00:00     
kubernetes/signature                                                                                                                 | 1.4 kB  00:00:00 !!! 
updates                                                                                                                              | 3.4 kB  00:00:00     
Loading mirror speeds from cached hostfile
 * base: ftp.riken.jp
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp
Resolving Dependencies
--> Running transaction check
---> Package bash-completion.noarch 1:2.1-6.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================
 Package                                    Arch                              Version                                 Repository                       Size
============================================================================================================================================================
Installing:
 bash-completion                            noarch                            1:2.1-6.el7                             base                             85 k

Transaction Summary
============================================================================================================================================================
Install  1 Package

Total download size: 85 k
Installed size: 259 k
Is this ok [y/d/N]: 

Append the following line to .bashrc:

echo "source <(kubectl completion bash)" >> ~/.bashrc

Exit the terminal and log in again, and kubectl completion will be active.

Install kubectx and kubens:

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
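
kubectx switches between kubeconfig contexts and kubens switches the current namespace. For example:

kubectx                  # list / switch contexts
kubens kube-system       # make kube-system the current namespace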

Install kube-prompt-bash:

cd ~
git clone https://github.com/Sugi275/kube-prompt-bash.git
echo "source ~/kube-prompt-bash/kube-prompt-bash.sh" >> ~/.bashrc
echo 'export PS1='\''[\u@\h \W($(kube_prompt))]\$ '\' >> ~/.bashrc

Create the kubectl config in the home directory

Copy the config that kubeadm generated automatically to the home directory:

mkdir ~/.kube
cp -p /etc/kubernetes/admin.conf ~/.kube/config

The KUBECONFIG environment variable is set to /etc/kubernetes/admin.conf; change it:

export KUBECONFIG=$HOME/.kube/config

The environment variable above is defined in .bash_profile; change it there as well:

vim ~/.bash_profile
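
A sketch of the edited line in ~/.bash_profile:

# before: export KUBECONFIG=/etc/kubernetes/admin.conf
export KUBECONFIG=$HOME/.kube/config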

Add a namespace setting to the copied config:

vim ~/.kube/config
- context:
    cluster: kubernetes
    namespace: default  <------------- add this line
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
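
Equivalently, instead of editing the file by hand, kubectl can set the namespace on the context:

kubectl config set-context kubernetes-admin@kubernetes --namespace=default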

Confirm that NAMESPACE shows default in kubectl config:

[root@sugi-kubernetes110-master01 ~]# kubectl config get-contexts 
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   default

Memo: DNS is Pending at this stage

At this point, before Flannel has been installed, the dns pod is Pending and looks as if it has failed, but this is normal.

Once Flannel is installed, it becomes Running as expected.

[root@sugi-kubeadm-master01 ~(kubernetes kube-system kubernetes-admin)]# kubectl get pods -o wide
NAME                                                        READY     STATUS    RESTARTS   AGE       IP                NODE
kube-apiserver-sugi-kubeadm-master01.localdomain            1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain
kube-controller-manager-sugi-kubeadm-master01.localdomain   1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain
kube-dns-86f4d74b45-kx99q                                   0/3       Pending   0          23m       <none>            <none>
kube-proxy-tw2x4                                            1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain
kube-scheduler-sugi-kubeadm-master01.localdomain            1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain

Install Flannel

Download the Flannel manifest file published on GitHub with wget:

cd /root/kubeadm
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

Edit the file as follows:

cp -p kube-flannel.yml{,.org}
vim kube-flannel.yml
snip

  net-conf.json: |
    {
      "Network": "10.1.4.0/22",
      "Backend": {
        "Type": "vxlan"
      }
    }

snip

Check the diff:

[root@sugi-kubeadm-master01 kubeadm(kubernetes default kubernetes-admin)]# diff -u kube-flannel.yml.org kube-flannel.yml
--- kube-flannel.yml.org        2018-06-09 15:09:22.294674317 +0900
+++ kube-flannel.yml    2018-06-09 15:10:18.013393294 +0900
@@ -73,7 +73,7 @@
     }
   net-conf.json: |
     {
-      "Network": "10.244.0.0/16",
+      "Network": "10.1.4.0/22",
       "Backend": {
         "Type": "vxlan"
       }

Apply the manifest file:

kubectl apply -f /root/kubeadm/kube-flannel.yml
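
To confirm Flannel has rolled out, you can list its DaemonSet pods (the downloaded manifest labels them app=flannel):

kubectl get pods -n kube-system -l app=flannel -o wide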

Run on the node servers

Run the kubeadm command so that the node joins the Master.
This is taken from the final lines output when kubeadm was run on the Master:

kubeadm join 192.168.120.225:6443 --token jsw3w2.ce3h3symthg4n8cb --discovery-token-ca-cert-hash sha256:38977016e9273b8140c50e0f40a06f70ff85c430ebe4c40bfb18d60ac3509aae

Example output. It finishes in about two seconds:

[root@sugi-kubeadm-node01 ~]# kubeadm join 192.168.120.225:6443 --token jsw3w2.ce3h3symthg4n8cb --discovery-token-ca-cert-hash sha256:38977016e9273b8140c50e0f40a06f70ff85c430ebe4c40bfb18d60ac3509aae
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "sugi-kubeadm-node01.localdomain" could not be reached
        [WARNING Hostname]: hostname "sugi-kubeadm-node01.localdomain" lookup sugi-kubeadm-node01.localdomain on 8.8.8.8:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.120.225:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.225:6443"
[discovery] Requesting info from "https://192.168.120.225:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.120.225:6443"
[discovery] Successfully established connection with API Server "192.168.120.225:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Containers are running on the node servers as well:

[root@sugi-kubernetes110-node01 ~]# docker ps
CONTAINER ID        IMAGE                                                                                                 COMMAND                  CREATED              STATUS              PORTS               NAMES
6799376b5fa1        2b736d06ca4c                                                                                          "/opt/bin/flanneld..."   17 seconds ago       Up 17 seconds                           k8s_kube-flannel_kube-flannel-ds-d92qj_kube-system_436dce14-4ae3-11e8-bbe9-0050569817ee_0
35385775d1dd        k8s.gcr.io/kube-proxy-amd64@sha256:c7036a8796fd20c16cb3b1cef803a8e980598bff499084c29f3c759bdb429cd2   "/usr/local/bin/ku..."   About a minute ago   Up About a minute                       k8s_kube-proxy_kube-proxy-khhwf_kube-system_436d9ed6-4ae3-11e8-bbe9-0050569817ee_0
3f9179965ccf        k8s.gcr.io/pause-amd64:3.1                                                                            "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-flannel-ds-d92qj_kube-system_436dce14-4ae3-11e8-bbe9-0050569817ee_0
efde8e22d079        k8s.gcr.io/pause-amd64:3.1                                                                            "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-khhwf_kube-system_436d9ed6-4ae3-11e8-bbe9-0050569817ee_0

Check the status:

[root@sugi-kubernetes110-master01 ~]# kubectl get nodes -o wide
NAME                                      STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
sugi-kubernetes110-master01.localdomain   Ready     master    6m        v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
sugi-kubernetes110-node01.localdomain     Ready     <none>    1m        v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
sugi-kubernetes110-node02.localdomain     Ready     <none>    1m        v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1

Check the pods:

[root@sugi-kubernetes110-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-sugi-kubernetes110-master01.localdomain                      1/1       Running   0          5m
kube-system   kube-apiserver-sugi-kubernetes110-master01.localdomain            1/1       Running   0          6m
kube-system   kube-controller-manager-sugi-kubernetes110-master01.localdomain   1/1       Running   0          6m
kube-system   kube-dns-86f4d74b45-bvps2                                         3/3       Running   0          6m
kube-system   kube-flannel-ds-5tgh7                                             1/1       Running   0          1m
kube-system   kube-flannel-ds-d92qj                                             1/1       Running   0          2m
kube-system   kube-flannel-ds-rb6ll                                             1/1       Running   0          4m
kube-system   kube-proxy-khhwf                                                  1/1       Running   0          2m
kube-system   kube-proxy-l8pbk                                                  1/1       Running   0          1m
kube-system   kube-proxy-zblxq                                                  1/1       Running   0          6m
kube-system   kube-scheduler-sugi-kubernetes110-master01.localdomain            1/1       Running   0          5m
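
Finally, you can verify that each node was actually assigned a /24 out of 10.1.4.0/22 (a quick check, not part of the original procedure):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'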

That completes the setup.
