Introduction
Kubernetes offers a variety of ways for Pods to communicate with each other, and the configuration differs greatly depending on which CNI plugin you choose.
This article walks through the steps to set up Calico, one of the better-known CNI plugins, on premises.
Calico has two modes:
- IPIP tunneling mode
- BGP mode
IPIP tunneling mode encapsulates IP packets inside IP packets, so the communication overhead is a concern. For performance reasons, this article looks at the BGP mode configuration.
Despite the name "BGP mode", no external BGP router is needed as long as you do not advertise the Pod network directly to the outside via BGP and all Kubernetes nodes can reach each other at L2.
Combined with a Pod-exposure mechanism such as a NodePort Service, Calico can be configured entirely on the server side.
For L2 connectivity across the whole cluster, see:
https://www.slideshare.net/techblogyahoo/yahoo-japan-meetup-8-71847314
An external BGP router is likely needed only in the following cases:
- Not all Kubernetes nodes can communicate at L2, and you want to avoid IPIP tunneling for performance reasons
- You have the unusual requirement of accessing Pods directly from an external network (normally you would go through a Service)
In short, as long as the Kubernetes cluster nodes are connected at L2, an external BGP router does not appear to be necessary.
Configuration
- 1 master
- 2 nodes
Preliminary setup
kubeadm fails to install while swap is enabled, so disable swap:
swapoff -a
Also remove or comment out the swap entry in /etc/fstab (if one exists):
vim /etc/fstab
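To make the same edit non-interactively, something like the following sed one-liner works (an assumption about the fstab layout, so verify the result afterwards; a backup is kept in /etc/fstab.bak):
# Comment out any line containing a swap entry in /etc/fstab
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab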
Install kubeadm (run on all servers)
Install Docker:
yum install -y docker
systemctl enable docker && systemctl start docker
Dependency memo
===============================================================================================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================================================================================
Installing:
docker x86_64 2:1.13.1-63.git94f4240.el7.centos extras 16 M
Installing for dependencies:
container-selinux noarch 2:2.55-1.el7 extras 34 k
container-storage-setup noarch 0.9.0-1.rhel75.gite0997c3.el7 extras 33 k
device-mapper-event x86_64 7:1.02.146-4.el7 base 185 k
device-mapper-event-libs x86_64 7:1.02.146-4.el7 base 184 k
device-mapper-persistent-data x86_64 0.7.3-3.el7 base 405 k
docker-client x86_64 2:1.13.1-63.git94f4240.el7.centos extras 3.8 M
docker-common x86_64 2:1.13.1-63.git94f4240.el7.centos extras 88 k
lvm2 x86_64 7:2.02.177-4.el7 base 1.3 M
lvm2-libs x86_64 7:2.02.177-4.el7 base 1.0 M
oci-register-machine x86_64 1:0-6.git2b44233.el7 extras 1.1 M
oci-systemd-hook x86_64 1:0.1.15-2.gitc04483d.el7 extras 33 k
oci-umount x86_64 2:2.3.3-3.gite3c9055.el7 extras 32 k
skopeo-containers x86_64 1:0.1.29-3.dev.git7add6fc.el7.0 extras 15 k
yajl x86_64 2.0.4-4.el7 base 39 k
Transaction Summary
===============================================================================================================================================================================================================
Repository setup
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl:
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Dependency memo
===============================================================================================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================================================================================
Installing:
kubeadm x86_64 1.11.0-0 kubernetes 7.5 M
kubectl x86_64 1.11.0-0 kubernetes 7.5 M
kubelet x86_64 1.11.0-0 kubernetes 18 M
Installing for dependencies:
cri-tools x86_64 1.11.0-0 kubernetes 4.2 M
ebtables x86_64 2.0.10-16.el7 base 123 k
kubernetes-cni x86_64 0.6.0-0 kubernetes 8.6 M
socat x86_64 1.7.3.2-2.el7 base 290 k
Transaction Summary
===============================================================================================================================================================================================================
There are reportedly several issues where traffic is routed incorrectly because it bypasses iptables.
Work around them with the following sysctl settings:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Example output
[root@ntw-k8s-master01 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
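As a quick sanity check (my own addition, not part of the original procedure), the two keys that matter here can also be queried directly:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables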
Verify that the cgroup driver Docker uses matches the cgroup driver the kubelet expects.
Docker is using systemd:
[root@ntw-k8s-master01 ~]# docker info | grep -i cgroup
WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd
Note that the kubelet is not running yet at this point; that is fine, as it starts after kubeadm init.
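For reference, once kubeadm init has written the kubelet configuration, the kubelet side can be checked as well (a sketch assuming the kubeadm 1.11 default file layout):
# The kubelet config written by kubeadm should contain a cgroupDriver field
grep -i cgroupdriver /var/lib/kubelet/config.yaml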
Take snapshots
Just in case, take snapshots with KVM or similar:
virsh snapshot-create-as calico-k8s-master01 001_before_kubeadm
virsh snapshot-create-as calico-k8s-node01 001_before_kubeadm
virsh snapshot-create-as calico-k8s-node02 001_before_kubeadm
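If a later step goes wrong, a snapshot can be rolled back, for example:
virsh snapshot-revert calico-k8s-master01 001_before_kubeadm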
Configure kubeadm on the master
mkdir /root/kubeadm/
cat <<'EOF' > /root/kubeadm/config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  serviceSubnet: '10.1.0.0/22'
  podSubnet: '10.1.4.0/22'
tokenTTL: '0'
EOF
It takes roughly two minutes to run:
kubeadm init --config /root/kubeadm/config.yaml
Example output
[root@calico-k8s-master01 ~]# kubeadm init --config /root/kubeadm/config.yaml
I0714 11:43:56.011986 1529 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0714 11:43:56.040648 1529 kernel_validator.go:81] Validating kernel version
I0714 11:43:56.041097 1529 kernel_validator.go:96] Validating kernel config
[WARNING Hostname]: hostname "calico-k8s-master01.maas" could not be reached
[WARNING Hostname]: hostname "calico-k8s-master01.maas" lookup calico-k8s-master01.maas on 8.8.8.8:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [calico-k8s-master01.maas kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 10.44.194.67]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [calico-k8s-master01.maas localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [calico-k8s-master01.maas localhost] and IPs [10.44.194.67 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.001936 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node calico-k8s-master01.maas as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node calico-k8s-master01.maas as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "calico-k8s-master01.maas" as an annotation
[bootstraptoken] using token: g548mi.nt8u59mgdg3ro7ve
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.44.194.67:6443 --token g7ahjt.l7lcgd9i1pfxcbmk --discovery-token-ca-cert-hash sha256:73fda0cd56b925617ec18f0bd3933594afee2398e2b6811fdfe60b3dc5192826
[root@calico-k8s-master01 ~]#
Setting up the kubectl environment
Environment variable
Set the following environment variable so kubectl can be run as root:
export KUBECONFIG=/etc/kubernetes/admin.conf
Check the node status of the Kubernetes cluster:
[root@calico-k8s-master01 kubeadm(default kubernetes-admin)]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
calico-k8s-master01.maas NotReady master 36m v1.11.0 192.168.204.100 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://1.13.1
Check the kubeadm token used to join nodes to the master
By default tokens expire after 24 hours, but here the TTL is set to unlimited:
[root@calico-k8s-master01 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
g7ahjt.l7lcgd9i1pfxcbmk <forever> <never> authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
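If a new join token is ever needed, the master can generate one; --ttl 0 makes it non-expiring and --print-join-command prints the full kubeadm join command line:
kubeadm token create --ttl 0 --print-join-command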
Enable bash-completion
Install the following package to enable tab completion for kubectl subcommands:
yum install -y bash-completion
Dependency memo
===============================================================================================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================================================================================
Installing:
bash-completion noarch 1:2.1-6.el7 base 85 k
Transaction Summary
===============================================================================================================================================================================================================
Append the following line to .bashrc:
echo "source <(kubectl completion bash)" >> ~/.bashrc
Exit the terminal and log back in, and kubectl completion takes effect.
Install kubectx and kubens
Install two tools that make managing Kubernetes clusters easier (see the usage example below):
- kubectx: makes switching the cluster that kubectl talks to easy
- kubens: makes switching the namespace that kubectl operates in easy
sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
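Typical usage:
kubectx                # list contexts; "kubectx <name>" switches
kubens kube-system     # switch the active namespace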
Install kube-prompt-bash
Install a tool that shows the Kubernetes cluster and namespace in the bash prompt:
cd ~
git clone https://github.com/Sugi275/kube-prompt-bash.git
echo "source ~/kube-prompt-bash/kube-prompt-bash.sh" >> ~/.bashrc
echo 'export PS1='\''[\u@\h \W($(kube_prompt))]\$ '\' >> ~/.bashrc
Create the kubectl config in the home directory
Copy the config generated by kubeadm to the home directory:
mkdir ~/.kube
cp -p /etc/kubernetes/admin.conf ~/.kube/config
The KUBECONFIG environment variable currently points at /etc/kubernetes/admin.conf; change it:
export KUBECONFIG=$HOME/.kube/config
Set the environment variable in .bash_profile:
cat <<'EOF' > /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# add for kubernetes
export KUBECONFIG=$HOME/.kube/config
EOF
Add a namespace setting to the copied config:
vim ~/.kube/config
- context:
    cluster: kubernetes
    namespace: default     <------------- added
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
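The same edit can also be made without an editor, using kubectl itself:
kubectl config set-context kubernetes-admin@kubernetes --namespace=default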
Confirm that kubectl config now shows default in the NAMESPACE column:
[root@calico-k8s-master01 kubeadm(default kubernetes-admin)]# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin default
Note: at this stage the DNS Pods are Pending, which is expected:
[root@calico-k8s-master01 kubeadm(default kubernetes-admin)]# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system coredns-78fcdf6894-6r6h6 0/1 Pending 0 35m <none> <none>
kube-system coredns-78fcdf6894-fvlzv 0/1 Pending 0 35m <none> <none>
kube-system etcd-calico-k8s-master01.maas 1/1 Running 0 34m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-apiserver-calico-k8s-master01.maas 1/1 Running 0 34m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-controller-manager-calico-k8s-master01.maas 1/1 Running 0 34m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-proxy-jn546 1/1 Running 0 35m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-scheduler-calico-k8s-master01.maas 1/1 Running 0 34m 192.168.204.100 calico-k8s-master01.maas
Install Calico
Download the Calico manifest published in the project documentation with wget:
cd /root/kubeadm
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
Edit the file as follows:
cp -p calico.yaml{,.org}
vim calico.yaml
Before editing
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster. This uses the Service clusterIP defined below.
  etcd_endpoints: "http://10.96.232.136:6666"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "log_level": "info",
          "mtu": 1500,
          "ipam": {
            "type": "calico-ipam"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
------ snip ------
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
------ snip ------
# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  clusterIP: 10.96.232.136
  ports:
    - port: 6666
After editing
- Change the Pod network (using a deliberately small range here for testing; in production it should be a larger range such as a /16)
- Change the ClusterIP of the etcd endpoint Calico needs, to match the change above
- Turn IPIP mode off
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster. This uses the Service clusterIP defined below.
  etcd_endpoints: "http://10.1.3.250:6666"    <---------------- here

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "log_level": "info",
          "mtu": 1500,
          "ipam": {
            "type": "calico-ipam"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
------ snip ------
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.1.4.0/22"    <---------------- here
            - name: CALICO_IPV4POOL_IPIP
              value: "off"    <---------------- here
------ snip ------
# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  clusterIP: 10.1.3.250    <---------------- here
  ports:
    - port: 6666
Check the diff:
[root@calico-k8s-master01 kubeadm(default kubernetes-admin)]# diff -u calico.yaml.org calico.yaml
--- calico.yaml.org	2018-07-12 09:14:13.000000000 +0900
+++ calico.yaml	2018-07-14 12:28:20.865564746 +0900
@@ -13,7 +13,7 @@
   namespace: kube-system
 data:
   # The location of your etcd cluster. This uses the Service clusterIP defined below.
-  etcd_endpoints: "http://10.96.232.136:6666"
+  etcd_endpoints: "http://10.1.3.250:6666"
 
   # Configure the Calico backend to use.
   calico_backend: "bird"
@@ -129,7 +129,7 @@
     k8s-app: calico-etcd
   # This ClusterIP needs to be known in advance, since we cannot rely
   # on DNS to get access to etcd.
-  clusterIP: 10.96.232.136
+  clusterIP: 10.1.3.250
   ports:
     - port: 6666
@@ -214,9 +214,9 @@
             # chosen from this range. Changing this value after installation will have
             # no effect. This should fall within `--cluster-cidr`.
             - name: CALICO_IPV4POOL_CIDR
-              value: "192.168.0.0/16"
+              value: "10.1.4.0/22"
             - name: CALICO_IPV4POOL_IPIP
-              value: "Always"
+              value: "off"
             # Disable IPv6 on Kubernetes.
             - name: FELIX_IPV6SUPPORT
               value: "false"
Apply the manifest:
kubectl apply -f /root/kubeadm/calico.yaml
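To watch the Calico Pods come up, this can be left running (optional; stop with Ctrl-C):
kubectl get pods -n kube-system -w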
A Service for Calico's etcd has been created alongside kube-dns:
[root@calico-k8s-master01 kubeadm(default kubernetes-admin)]# kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 44m
kube-system calico-etcd ClusterIP 10.1.3.250 <none> 6666/TCP 11s
kube-system kube-dns ClusterIP 10.1.0.10 <none> 53/UDP,53/TCP 44m
The Pods were also created successfully:
[root@calico-k8s-master01 kubeadm]# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-jc5st 1/1 Running 0 1m 192.168.204.100 calico-k8s-master01.maas
kube-system calico-kube-controllers-84fd4db7cd-pxndj 1/1 Running 0 1m 192.168.204.100 calico-k8s-master01.maas
kube-system calico-node-cqcvb 2/2 Running 0 1m 192.168.204.100 calico-k8s-master01.maas
kube-system coredns-78fcdf6894-5m2qn 1/1 Running 0 5m 10.1.6.64 calico-k8s-master01.maas
kube-system coredns-78fcdf6894-k5tf4 1/1 Running 0 5m 10.1.6.65 calico-k8s-master01.maas
kube-system etcd-calico-k8s-master01.maas 1/1 Running 0 4m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-apiserver-calico-k8s-master01.maas 1/1 Running 0 4m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-controller-manager-calico-k8s-master01.maas 1/1 Running 0 4m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-proxy-pvnk8 1/1 Running 0 5m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-scheduler-calico-k8s-master01.maas 1/1 Running 0 4m 192.168.204.100 calico-k8s-master01.maas
Configure kubeadm on the nodes
Run the kubeadm join command so each node joins the master.
This is the command printed at the end of kubeadm init on the master:
kubeadm join 10.44.194.67:6443 --token g7ahjt.l7lcgd9i1pfxcbmk --discovery-token-ca-cert-hash sha256:73fda0cd56b925617ec18f0bd3933594afee2398e2b6811fdfe60b3dc5192826
Example output
It finishes in about two seconds:
[root@calico-k8s-node01 ~]# kubeadm join 10.44.194.67:6443 --token g7ahjt.l7lcgd9i1pfxcbmk --discovery-token-ca-cert-hash sha256:73fda0cd56b925617ec18f0bd3933594afee2398e2b6811fdfe60b3dc5192826
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0714 12:45:06.349270 5362 kernel_validator.go:81] Validating kernel version
I0714 12:45:06.349574 5362 kernel_validator.go:96] Validating kernel config
[WARNING Hostname]: hostname "calico-k8s-node01.maas" could not be reached
[WARNING Hostname]: hostname "calico-k8s-node01.maas" lookup calico-k8s-node01.maas on 8.8.8.8:53: no such host
[discovery] Trying to connect to API Server "10.44.194.67:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.44.194.67:6443"
[discovery] Requesting info from "https://10.44.194.67:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.44.194.67:6443"
[discovery] Successfully established connection with API Server "10.44.194.67:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "calico-k8s-node01.maas" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Containers are now running on the node as well:
[root@calico-k8s-node01 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4eb932f7b5a quay.io/calico/cni@sha256:ed172c28bc193bb09bce6be6ed7dc6bfc85118d55e61d263cee8bbb0fd464a9d "/install-cni.sh" About a minute ago Up About a minute k8s_install-cni_calico-node-4wbfq_kube-system_4d516c63-8718-11e8-ae94-525400139e30_0
0d87c5f45c36 quay.io/calico/node@sha256:a35541153f7695b38afada46843c64a2c546548cd8c171f402621736c6cf3f0b "start_runit" About a minute ago Up About a minute k8s_calico-node_calico-node-4wbfq_kube-system_4d516c63-8718-11e8-ae94-525400139e30_0
7ccd9d22a4ff k8s.gcr.io/kube-proxy-amd64@sha256:3c908257f494b60c0913eae6db3d35fa99825d487b2bcf89eed0a7d8e34c1539 "/usr/local/bin/ku..." 2 minutes ago Up 2 minutes k8s_kube-proxy_kube-proxy-rznnx_kube-system_4d533942-8718-11e8-ae94-525400139e30_0
36544a240048 k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-proxy-rznnx_kube-system_4d533942-8718-11e8-ae94-525400139e30_0
4da65b1c380c k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_calico-node-4wbfq_kube-system_4d516c63-8718-11e8-ae94-525400139e30_0
Status check
[root@calico-k8s-master01 kubeadm]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
calico-k8s-master01.maas Ready master 11m v1.11.0 192.168.204.100 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://1.13.1
calico-k8s-node01.maas Ready <none> 2m v1.11.0 192.168.204.101 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://1.13.1
Check the Pods:
[root@calico-k8s-master01 kubeadm]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-jc5st 1/1 Running 0 7m 192.168.204.100 calico-k8s-master01.maas
kube-system calico-kube-controllers-84fd4db7cd-pxndj 1/1 Running 0 7m 192.168.204.100 calico-k8s-master01.maas
kube-system calico-node-4wbfq 2/2 Running 0 1m 192.168.204.101 calico-k8s-node01.maas
kube-system calico-node-cqcvb 2/2 Running 0 7m 192.168.204.100 calico-k8s-master01.maas
kube-system coredns-78fcdf6894-5m2qn 1/1 Running 0 10m 10.1.6.64 calico-k8s-master01.maas
kube-system coredns-78fcdf6894-k5tf4 1/1 Running 0 10m 10.1.6.65 calico-k8s-master01.maas
kube-system etcd-calico-k8s-master01.maas 1/1 Running 0 10m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-apiserver-calico-k8s-master01.maas 1/1 Running 0 10m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-controller-manager-calico-k8s-master01.maas 1/1 Running 0 10m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-proxy-pvnk8 1/1 Running 0 10m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-proxy-rznnx 1/1 Running 0 1m 192.168.204.101 calico-k8s-node01.maas
kube-system kube-scheduler-calico-k8s-master01.maas 1/1 Running 0 10m 192.168.204.100 calico-k8s-master01.maas
Add any remaining nodes in the same way.
My environment has two nodes, so the result looks like this:
[root@calico-k8s-master01 kubeadm]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
calico-k8s-master01.maas Ready master 14m v1.11.0 192.168.204.100 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://1.13.1
calico-k8s-node01.maas Ready <none> 5m v1.11.0 192.168.204.101 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://1.13.1
calico-k8s-node02.maas Ready <none> 1m v1.11.0 192.168.204.102 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://1.13.1
[root@calico-k8s-master01 kubeadm]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-jc5st 1/1 Running 0 11m 192.168.204.100 calico-k8s-master01.maas
kube-system calico-kube-controllers-84fd4db7cd-pxndj 1/1 Running 0 11m 192.168.204.100 calico-k8s-master01.maas
kube-system calico-node-4wbfq 2/2 Running 0 6m 192.168.204.101 calico-k8s-node01.maas
kube-system calico-node-8zwjd 2/2 Running 0 2m 192.168.204.102 calico-k8s-node02.maas
kube-system calico-node-cqcvb 2/2 Running 0 11m 192.168.204.100 calico-k8s-master01.maas
kube-system coredns-78fcdf6894-5m2qn 1/1 Running 0 15m 10.1.6.64 calico-k8s-master01.maas
kube-system coredns-78fcdf6894-k5tf4 1/1 Running 0 15m 10.1.6.65 calico-k8s-master01.maas
kube-system etcd-calico-k8s-master01.maas 1/1 Running 0 14m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-apiserver-calico-k8s-master01.maas 1/1 Running 0 14m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-controller-manager-calico-k8s-master01.maas 1/1 Running 0 14m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-proxy-mrj59 1/1 Running 0 2m 192.168.204.102 calico-k8s-node02.maas
kube-system kube-proxy-pvnk8 1/1 Running 0 15m 192.168.204.100 calico-k8s-master01.maas
kube-system kube-proxy-rznnx 1/1 Running 0 6m 192.168.204.101 calico-k8s-node01.maas
kube-system kube-scheduler-calico-k8s-master01.maas 1/1 Running 0 14m 192.168.204.100 calico-k8s-master01.maas
[root@calico-k8s-master01 kubeadm]# kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 15m <none>
kube-system calico-etcd ClusterIP 10.1.3.250 <none> 6666/TCP 11m k8s-app=calico-etcd
kube-system kube-dns ClusterIP 10.1.0.10 <none> 53/UDP,53/TCP 15m k8s-app=kube-dns
Install calicoctl
Deploy it as a Kubernetes Pod:
mkdir /root/calicoctl
cd /root/calicoctl
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calicoctl.yaml
kubectl apply -f calicoctl.yaml
Start a shell in the created Pod:
kubectl exec -it -n kube-system calicoctl /bin/sh
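Single commands can also be run without keeping a shell open (the /calicoctl path inside the container is an assumption based on the v3.1 image; adjust if it differs):
kubectl exec -it -n kube-system calicoctl -- /calicoctl get nodes -o wide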
Run various checks with calicoctl:
~ # calicoctl get nodes -o wide
NAME ASN IPV4 IPV6
calico-k8s-master01.maas (unknown) 192.168.217.1/24
calico-k8s-node01.maas (unknown) 192.168.217.2/24
calico-k8s-node02.maas (unknown) 192.168.217.3/24
~ # calicoctl get bgpPeer -o wide
NAME PEERIP NODE ASN
~ # calicoctl get ippool -o wide
NAME CIDR NAT IPIPMODE DISABLED
default-ipv4-ippool 10.1.4.0/22 true Never false
~ # calicoctl get bgpConfiguration -o wide
NAME LOGSEVERITY MESHENABLED ASNUMBER
Verifying the setup
Create nginx and access it via a NodePort:
mkdir /root/manifests/
cat <<'EOF' > /root/manifests/nginx_test.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx-test
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: master
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
      restartPolicy: Always
EOF
kubectl apply -f /root/manifests/nginx_test.yaml
While the Pods are being created:
[root@calico-k8s-master01 ~(default kubernetes-admin)]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-test-58586b9f9c-xnk24 0/1 ContainerCreating 0 11s <none> calico-k8s-node01.maas
nginx-test-58586b9f9c-xsv9k 0/1 ContainerCreating 0 11s <none> calico-k8s-node02.maas
Creation complete:
[root@calico-k8s-master01 ~(default kubernetes-admin)]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-test-58586b9f9c-xnk24 1/1 Running 0 2m 10.1.7.0 calico-k8s-node01.maas
nginx-test-58586b9f9c-xsv9k 1/1 Running 0 2m 10.1.5.0 calico-k8s-node02.maas
Create a NodePort Service:
cat <<'EOF' > /root/manifests/nginx_nodeport.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: nginx-test
  name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 32003
  selector:
    app: nginx-test
EOF
kubectl apply -f /root/manifests/nginx_nodeport.yaml
Access test
Accessing the NodePort returns the nginx welcome page ("Welcome to nginx!").
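For example, with curl against one of the node IPs (IP taken from this environment; port 32003 comes from the manifest above):
curl http://192.168.204.101:32003/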
Create CentOS Pods and test communication across nodes and namespaces
Create a demo1 namespace:
kubectl create namespace demo1
Switch to it:
[root@calico-k8s-master01 ~(default kubernetes-admin)]# kubens demo1
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "demo1".
[root@calico-k8s-master01 ~(demo1 kubernetes-admin)]#
mkdir -p /root/manifests/
cat <<'EOF' > /root/manifests/centos_test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-deployment
spec:
  selector:
    matchLabels:
      app: centos
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: centos
    spec:
      containers:
      - name: centos
        image: centos:latest
        command: [ "sleep", "360000000" ]
EOF
kubectl apply -f /root/manifests/centos_test.yaml
Creation complete:
[root@calico-k8s-master01 ~(demo1 kubernetes-admin)]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
centos-deployment-7d7d7bcb56-td2c5 1/1 Running 0 6s 10.1.5.2 calico-k8s-node02.maas
centos-deployment-7d7d7bcb56-w6pc8 1/1 Running 0 6s 10.1.7.2 calico-k8s-node01.maas
Start bash in one of the CentOS Pods:
# kubectl exec -it centos-deployment-7d7d7bcb56-td2c5 bash
[root@centos-deployment-7d7d7bcb56-td2c5 /]#
Install the following packages for troubleshooting:
yum install -y iproute vim tcpdump
Pings reach a container running on a different node:
[root@centos-deployment-7d7d7bcb56-td2c5 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 6e:21:21:da:49:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.5.2/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::6c21:21ff:feda:495a/64 scope link
valid_lft forever preferred_lft forever
[root@centos-deployment-7d7d7bcb56-td2c5 /]#
[root@centos-deployment-7d7d7bcb56-td2c5 /]# ping 10.1.7.2
PING 10.1.7.2 (10.1.7.2) 56(84) bytes of data.
64 bytes from 10.1.7.2: icmp_seq=1 ttl=62 time=0.298 ms
64 bytes from 10.1.7.2: icmp_seq=2 ttl=62 time=0.393 ms
Also, since no NetworkPolicy is configured by default, Pods running in a different namespace are reachable (see the sample policy after the output below):
[root@centos-deployment-7d7d7bcb56-td2c5 /]# curl http://10.1.7.0/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
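Cross-namespace access like this can be restricted with a NetworkPolicy, which Calico enforces. The following is a minimal sketch (the policy name is my own choice): applied to the default namespace where nginx-test runs, it denies all ingress to those Pods and makes the curl above fail:
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  # An empty podSelector matches every Pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
EOF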