title: Adding a k8s Router Node
tags: k8s kubernetes nginx-ingress
author: murata-tomohide
slide: false
Last time, I built a k8s cluster on on-premises servers. This time, I'll put a label on one node of that cluster and consolidate the network-related pods onto it.
# TL;DR
Attach a `router` label and a Taint to a single node so that nginx-ingress gets provisioned onto that specific node.

There seem to be several ways to place pods on specific nodes: Node Selector, Node Affinity, and Taints/Tolerations, each serving a different purpose. (A taint, literally, is a stain.)
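As a rough sketch, the three mechanisms look like this in a pod spec. (The label key matches what we use below, but this pod itself is just illustrative.)

```yaml
# Hypothetical pod spec showing the three placement mechanisms side by side.
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo
spec:
  # 1. nodeSelector: simple exact match on node labels ("attract")
  nodeSelector:
    node-role.kubernetes.io/router: ""
  # 2. nodeAffinity: a more expressive selector (operators, preferences)
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/router
            operator: Exists
  # 3. tolerations: allow (but do not force) scheduling onto tainted nodes
  tolerations:
  - key: node-role.kubernetes.io/router
    operator: Exists
    effect: NoSchedule
  containers:
  - name: demo
    image: nginx:alpine
```

The key distinction: selectors and affinity attract pods to matching nodes, while taints repel every pod that lacks a matching toleration.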
# Label the node
Let's give only k8s-node02.deroris.local the router role.
```
[root@k8s-node01 ~]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node01.deroris.local Ready master 81m v1.15.3 172.31.3.101 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.1
k8s-node02.deroris.local Ready <none> 78m v1.15.3 172.31.3.102 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.1
k8s-node05.deroris.local Ready <none> 70m v1.15.3 172.31.3.105 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.2
node03.k8s.deroris.local Ready <none> 73m v1.15.3 172.31.3.103 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.2
node04.k8s.deroris.local Ready <none> 70m v1.15.3 172.31.3.104 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.2
```
```
# kubectl label node k8s-node02.deroris.local node-role.kubernetes.io/router=
# kubectl label node node03.k8s.deroris.local node-role.kubernetes.io/worker=
# kubectl label node node04.k8s.deroris.local node-role.kubernetes.io/worker=
# kubectl label node k8s-node05.deroris.local node-role.kubernetes.io/worker=
```
The labels are on. They're just labels, so by themselves they don't mean anything.
```
[root@k8s-node01 ~]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node01.deroris.local Ready master 81m v1.15.3 172.31.3.101 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.1
k8s-node02.deroris.local Ready router 78m v1.15.3 172.31.3.102 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.1
k8s-node05.deroris.local Ready worker 70m v1.15.3 172.31.3.105 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.2
node03.k8s.deroris.local Ready worker 73m v1.15.3 172.31.3.103 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.2
node04.k8s.deroris.local Ready worker 70m v1.15.3 172.31.3.104 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://19.3.2
```
# Set a Taint
Put a Taint on the router node so that ordinary pods are kept off it and only the network components end up there (i.e., consolidate the network stack on the router node).

A Taint has the format `<key>=<value>:<effect>`. It probably would be better to put something in the value, but this time I'll leave it empty.
```
[root@k8s-node01 ~]# kubectl describe node k8s-node01.deroris.local | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-node01 ~]# kubectl describe node k8s-node02.deroris.local | grep Taints
Taints: <none>
[root@k8s-node01 ~]# kubectl taint nodes k8s-node02.deroris.local node-role.kubernetes.io/router=:NoSchedule
node/k8s-node02.deroris.local tainted
[root@k8s-node01 ~]# kubectl describe node k8s-node02.deroris.local | grep Taints
Taints: node-role.kubernetes.io/router:NoSchedule
```
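For reference, a taint added this way can be removed again with a trailing `-` on the same expression, and taints can be listed without grep via jsonpath:

```shell
# Remove the taint added above (note the trailing "-")
kubectl taint nodes k8s-node02.deroris.local node-role.kubernetes.io/router:NoSchedule-

# List the node's taints directly, without grep
kubectl get node k8s-node02.deroris.local -o jsonpath='{.spec.taints}'
```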
# Install MetalLB
Edit the default yaml so that MetalLB is provisioned only onto the node carrying the Taint above.
https://raw.githubusercontent.com/google/metallb/master/manifests/metallb.yaml
```diff
[root@k8s-node01 ~]# diff metallb.yaml metallb-onlyrouternode.yaml
302c302,303
< key: node-role.kubernetes.io/master
---
> key: node-role.kubernetes.io/router
> operator: Equal
353a355,358
> tolerations:
> - key: "node-role.kubernetes.io/router"
> operator: Equal
> effect: NoSchedule
\ No newline at end of file
```
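Applied to the manifest, the resulting pod-spec fragment on the MetalLB pods would look roughly like this (a sketch of the edited section; the surrounding fields are omitted):

```yaml
# Pod spec excerpt after the edit: tolerate the router taint so the
# MetalLB pods are allowed onto the tainted node.
tolerations:
- key: node-role.kubernetes.io/router
  operator: Equal
  effect: NoSchedule
```

Note that a toleration only *allows* a pod onto the tainted node; it does not pin the pod there.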
Use MetalLB in L2 mode and give it the range of IP addresses it may hand out. Pods behind `type: LoadBalancer` services, such as the Ingress, will use addresses from this range, so set addresses that are routable from outside. In this setup, I used a range within the nodes' own subnet.
```shell
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 172.31.3.111-172.31.3.120 # range of external IP addresses to hand out
EOF
```
# Install nginx-ingress-controller

The ingress-controller should likewise run only on the node carrying the router Taint.
```diff
[root@k8s-node01 ~]# diff mandatory.yaml mandatory-onlyrouternode.yaml
275c275,278
<
---
> tolerations:
> - key: "node-role.kubernetes.io/router"
> operator: "Equal"
> effect: "NoSchedule"
```
Feeding in the manifest above gives something like this:
```
[root@k8s-node01 ~]# kubectl get all -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-controller-5659d54c8d-66ssh 1/1 Running 0 3d17h 10.13.2.3 node03.k8s.deroris.local <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx LoadBalancer 10.108.113.11 172.31.3.111 80:31403/TCP,443:31350/TCP 3d17h app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-ingress-controller 1/1 1 1 3d17h nginx-ingress-controller quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-ingress-controller-5659d54c8d 1 1 1 3d17h nginx-ingress-controller quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,pod-template-hash=5659d54c8d
```
# Verify

As a test, feed in a manifest like the following. `/apache` goes to apache, `/nginx` goes to nginx.
```shell
cat <<EOF | kubectl create -f -
# namespace
apiVersion: v1
kind: Namespace
metadata:
  name: ns-test
---
# ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: lb
  namespace: ns-test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: ns-test.172.31.3.111.nip.io # this IP was already assigned; if you want to request a specific one, where does it go? an annotation, maybe?
    http:
      paths:
      - path: /apache
        backend:
          serviceName: apache-svc
          servicePort: 80
      - path: /nginx
        backend:
          serviceName: nginx-svc
          servicePort: 80
      - path: /
        backend:
          serviceName: blackhole
          servicePort: 80
---
# apache
apiVersion: v1
kind: Service
metadata:
  name: apache-svc
  namespace: ns-test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: httpd
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  namespace: ns-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - image: httpd:alpine
        name: httpd
        ports:
        - containerPort: 80
---
# nginx
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: ns-test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: ns-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
EOF
```
It works.
```
[root@k8s-node01 ~]# kubectl get all,ingress -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx pod/nginx-ingress-controller-5659d54c8d-66ssh 1/1 Running 0 3d17h 10.13.2.3 node03.k8s.deroris.local <none> <none>
kube-system pod/canal-6zxqp 2/2 Running 0 3d18h 172.31.3.104 node04.k8s.deroris.local <none> <none>
kube-system pod/canal-7pqsp 2/2 Running 0 3d18h 172.31.3.105 k8s-node05.deroris.local <none> <none>
kube-system pod/canal-grk7c 2/2 Running 0 3d18h 172.31.3.102 k8s-node02.deroris.local <none> <none>
kube-system pod/canal-lw5nh 2/2 Running 0 3d18h 172.31.3.101 k8s-node01.deroris.local <none> <none>
kube-system pod/canal-rbsxv 2/2 Running 0 3d18h 172.31.3.103 node03.k8s.deroris.local <none> <none>
kube-system pod/coredns-5c98db65d4-5cmnn 1/1 Running 0 3d19h 10.13.1.2 k8s-node02.deroris.local <none> <none>
kube-system pod/coredns-5c98db65d4-zprpl 1/1 Running 0 3d19h 10.13.0.2 k8s-node01.deroris.local <none> <none>
kube-system pod/etcd-k8s-node01.deroris.local 1/1 Running 0 3d19h 172.31.3.101 k8s-node01.deroris.local <none> <none>
kube-system pod/kube-apiserver-k8s-node01.deroris.local 1/1 Running 0 3d19h 172.31.3.101 k8s-node01.deroris.local <none> <none>
kube-system pod/kube-controller-manager-k8s-node01.deroris.local 1/1 Running 0 3d19h 172.31.3.101 k8s-node01.deroris.local <none> <none>
kube-system pod/kube-proxy-525d9 1/1 Running 0 3d19h 172.31.3.104 node04.k8s.deroris.local <none> <none>
kube-system pod/kube-proxy-62svl 1/1 Running 0 3d19h 172.31.3.101 k8s-node01.deroris.local <none> <none>
kube-system pod/kube-proxy-6j95z 1/1 Running 0 3d19h 172.31.3.103 node03.k8s.deroris.local <none> <none>
kube-system pod/kube-proxy-s7llh 1/1 Running 0 3d19h 172.31.3.105 k8s-node05.deroris.local <none> <none>
kube-system pod/kube-proxy-zd8qt 1/1 Running 0 3d19h 172.31.3.102 k8s-node02.deroris.local <none> <none>
kube-system pod/kube-scheduler-k8s-node01.deroris.local 1/1 Running 0 3d19h 172.31.3.101 k8s-node01.deroris.local <none> <none>
metallb-system pod/controller-7d9775b7bd-9mt2s 1/1 Running 0 3d18h 10.13.2.2 node03.k8s.deroris.local <none> <none>
metallb-system pod/speaker-b5kj9 1/1 Running 0 3d18h 172.31.3.105 k8s-node05.deroris.local <none> <none>
metallb-system pod/speaker-cjkdx 1/1 Running 0 3d18h 172.31.3.104 node04.k8s.deroris.local <none> <none>
metallb-system pod/speaker-q969g 1/1 Running 0 3d18h 172.31.3.103 node03.k8s.deroris.local <none> <none>
metallb-system pod/speaker-shmq4 1/1 Running 0 3d18h 172.31.3.102 k8s-node02.deroris.local <none> <none>
ns-test pod/httpd-77655b8cf7-sdknm 1/1 Running 0 44s 10.13.2.4 node03.k8s.deroris.local <none> <none>
ns-test pod/nginx-5c69f5ccbf-zx8dv 1/1 Running 0 44s 10.13.2.5 node03.k8s.deroris.local <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h <none>
ingress-nginx service/ingress-nginx LoadBalancer 10.108.113.11 172.31.3.111 80:31403/TCP,443:31350/TCP 3d17h app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3d19h k8s-app=kube-dns
ns-test service/apache-svc NodePort 10.97.207.142 <none> 80:31192/TCP 44s app=httpd
ns-test service/nginx-svc NodePort 10.101.222.70 <none> 80:30933/TCP 44s app=nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/canal 5 5 5 5 5 beta.kubernetes.io/os=linux 3d18h calico-node,kube-flannel calico/node:v3.9.2,quay.io/coreos/flannel:v0.11.0 k8s-app=canal
kube-system daemonset.apps/kube-proxy 5 5 5 5 5 beta.kubernetes.io/os=linux 3d19h kube-proxy k8s.gcr.io/kube-proxy:v1.15.5 k8s-app=kube-proxy
metallb-system daemonset.apps/speaker 4 4 4 4 4 beta.kubernetes.io/os=linux 3d18h speaker metallb/speaker:master app=metallb,component=speaker
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
ingress-nginx deployment.apps/nginx-ingress-controller 1/1 1 1 3d17h nginx-ingress-controller quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
kube-system deployment.apps/coredns 2/2 2 2 3d19h coredns k8s.gcr.io/coredns:1.3.1 k8s-app=kube-dns
metallb-system deployment.apps/controller 1/1 1 1 3d18h controller metallb/controller:master app=metallb,component=controller
ns-test deployment.apps/httpd 1/1 1 1 44s httpd httpd:alpine app=httpd
ns-test deployment.apps/nginx 1/1 1 1 44s nginx nginx:alpine app=nginx
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
ingress-nginx replicaset.apps/nginx-ingress-controller-5659d54c8d 1 1 1 3d17h nginx-ingress-controller quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,pod-template-hash=5659d54c8d
kube-system replicaset.apps/coredns-5c98db65d4 2 2 2 3d19h coredns k8s.gcr.io/coredns:1.3.1 k8s-app=kube-dns,pod-template-hash=5c98db65d4
metallb-system replicaset.apps/controller-7d9775b7bd 1 1 1 3d18h controller metallb/controller:master app=metallb,component=controller,pod-template-hash=7d9775b7bd
ns-test replicaset.apps/httpd-77655b8cf7 1 1 1 44s httpd httpd:alpine app=httpd,pod-template-hash=77655b8cf7
ns-test replicaset.apps/nginx-5c69f5ccbf 1 1 1 44s nginx nginx:alpine app=nginx,pod-template-hash=5c69f5ccbf
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
ns-test ingress.extensions/lb ns-test.172.31.3.111.nip.io 80 44s
```
```
murata:~ $ curl http://ns-test.172.31.3.111.nip.io/nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
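The `/apache` path can be checked the same way (assuming the nip.io hostname resolves and the LoadBalancer IP is reachable from your machine):

```shell
# Expect the Apache httpd default page via the /apache path
curl -s http://ns-test.172.31.3.111.nip.io/apache
```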
Some node should now be holding the address 172.31.3.111.

Hmm. I made node02 the router, yet the Ingress is sitting on node03...
```
[root@k8s-node01 ~]# kubectl describe service/ingress-nginx -n ingress-nginx
Name:                     ingress-nginx
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type:                     LoadBalancer
IP:                       10.108.113.11
LoadBalancer Ingress:     172.31.3.111
Port:                     http 80/TCP
TargetPort:               http/TCP
NodePort:                 http 31403/TCP
Endpoints:                10.13.2.3:80
Port:                     https 443/TCP
TargetPort:               https/TCP
NodePort:                 https 31350/TCP
Endpoints:                10.13.2.3:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31060
Events:
  Type    Reason        Age                    From             Message
  ----    ------        ----                   ----             -------
  Normal  nodeAssigned  14m (x343 over 3d18h)  metallb-speaker  announcing from node "node03.k8s.deroris.local"
```
After deleting deployment.apps/nginx-ingress-controller once and feeding the manifest in again, it was deployed to node02. (Maybe I applied things in the wrong order.)

In the end, what matters is that MetalLB's pod/controller-xxx and the ingress-nginx pod/nginx-ingress-controller-xxx are running on the intended node.
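The likely reason the pod first landed on node03 is that a toleration only *permits* scheduling onto the tainted node; it does not require it. And with `External Traffic Policy: Local`, MetalLB announces the address from whichever node actually runs the endpoint pod. To pin the controller to the router node deterministically rather than rely on redeploy luck, one could add a nodeSelector alongside the toleration, roughly like this (a sketch against the label set earlier):

```yaml
# Pod spec fragment: the toleration grants permission to land on the
# tainted node, the nodeSelector makes landing there a requirement.
nodeSelector:
  node-role.kubernetes.io/router: ""
tolerations:
- key: node-role.kubernetes.io/router
  operator: Equal
  effect: NoSchedule
```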
```
[root@k8s-node01 ~]# kubectl get all,ing -A -o wide | grep node02
ingress-nginx pod/nginx-ingress-controller-5659d54c8d-x4svj 1/1 Running 0 59m 10.13.1.4 k8s-node02.deroris.local <none> <none>
kube-system pod/canal-grk7c 2/2 Running 0 3d21h 172.31.3.102 k8s-node02.deroris.local <none> <none>
kube-system pod/coredns-5c98db65d4-5cmnn 1/1 Running 0 3d22h 10.13.1.2 k8s-node02.deroris.local <none> <none>
kube-system pod/kube-proxy-zd8qt 1/1 Running 0 3d22h 172.31.3.102 k8s-node02.deroris.local <none> <none>
metallb-system pod/controller-7d9775b7bd-v25q5 1/1 Running 0 78m 10.13.1.3 k8s-node02.deroris.local <none> <none>
metallb-system pod/speaker-shmq4 1/1 Running 0 3d21h 172.31.3.102 k8s-node02.deroris.local <none> <none>
```
# Summary

If you can fix which node acts as the router, you also fix which server faces the outside, which should make node placement easier.

Some setups apparently operate with Taints like dev and prod, but I feel that just separating things physically is simpler. Maybe it works out better on managed offerings like EKS or GKE.