
Where does the HTTP GET of a Kubernetes liveness probe come from, network-wise?

Posted at 2018-07-28

Motivation

When I was asked this question after a recent talk, I realized I didn't actually know the answer,
and since I couldn't find any article that answers it, I decided to find out myself.

Goal

In Kubernetes the kernel's routing table and iptables rules are tangled together, so the question in the title can be answered in several ways.
Still, some IP address (some source interface) has to be the starting point of the probe, so that is what we will go hunting for. This time I run tcpdump from both the pod and the node, work from the observed behavior, and then reason about what is going on.

Oh, and the conclusion is already written as a heading near the end.
Read the rest only if you have time to kill.

What the documentation says...

The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

So it says the kubelet does it 😃
But is that the whole story?

Verification

Seeing is believing. Let's just do it.

Setup

[Figure: cluster layout]
The cluster also exposes services externally, but I only drew the parts that matter here.
It probably goes without saying, but k8s-node-001 is the master node.

One important detail: in this setup the kubelet runs under systemd directly on k8s-node-[001-003].
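This matters for the conclusion later: a kubelet running under systemd lives in the host's own network namespace rather than inside a container. A quick way to confirm it (a sketch, assuming the usual kubelet.service unit name):

k8s-node-002
# Confirm the kubelet is a systemd service on the host itself,
# i.e. it shares the host's network namespace.
$ systemctl status kubelet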

| Component | Name | Version |
|---|---|---|
| Host VM | Linux Distribution | CentOS 7.5.1804 (Core) |
| Host VM | Linux Kernel | 3.10.0-862.3.2.el7.x86_64 |
| Container Orchestrator | Kubernetes | 1.11.1 |
| Container Runtime | Docker | 1.13.1 |
| CNI | Weave net | 2.3.0 |

Test pod configuration

#### Dockerfile

Dockerfile
FROM ubuntu:14.04

#Install nginx
RUN apt-get update && apt-get -y install nginx curl tcpdump
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

#Install Supervisor and config
RUN apt-get install -y supervisor
RUN touch /etc/supervisord.conf
RUN echo '[supervisord]'  >> /etc/supervisord.conf
RUN echo 'nodaemon=true'  >> /etc/supervisord.conf
RUN echo '[program:nginx]' >> /etc/supervisord.conf
RUN echo 'command=nginx'   >> /etc/supervisord.conf
RUN echo 'stdout_logfile=/dev/fd/1'   >> /etc/supervisord.conf
RUN echo 'stdout_logfile_maxbytes=0'  >> /etc/supervisord.conf
RUN echo '[program:tcpdump]' >> /etc/supervisord.conf
RUN echo 'command=tcpdump -nn'   >> /etc/supervisord.conf
RUN echo 'stdout_logfile=/dev/fd/1'   >> /etc/supervisord.conf
RUN echo 'stdout_logfile_maxbytes=0'  >> /etc/supervisord.conf

EXPOSE 80
CMD /usr/bin/supervisord -c /etc/supervisord.conf

For debugging, tcpdump and nginx both run in the foreground under supervisord.
Dealing with log files is a hassle, so everything goes to stdout;
that way the output can be checked with kubectl logs.

#### Service and Deployment

tcpdump-svc.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: tcpdump-svc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tcpdump-svc
  template:
    metadata:
      labels:
        app: tcpdump-svc
    spec:
      containers:
      - name: tcpdump-svc
        image: nnao45/tcpdump
        livenessProbe:
          httpGet: # make an HTTP request
            port: 80 # port to use
            path: / # endpoint to hit
            scheme: HTTP # or HTTPS
          initialDelaySeconds: 5 # how long to wait before checking
          periodSeconds: 5 # how long to wait between checks
          successThreshold: 1 # how many successes to hit before accepting
          failureThreshold: 3 # how many failures to accept before failing
          timeoutSeconds: 10 # how long to wait for a response
        ports:
        - name: http
          containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: tcpdump-svc
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: tcpdump-svc
  type: LoadBalancer

Nothing fancy: a Service that exposes port 80, and a Deployment with 2 replicas whose
liveness probe hits port 80 every 5 seconds.
The Service serves almost no purpose here... sorry, I just reused something from a previous experiment.
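If you want to double-check what probe the kubelet will actually run, the settings show up in the object description once it has been applied (a sketch; the grep is just to keep the output short):

k8s-node-001
# Show the liveness probe as the cluster sees it; probe failures would also
# show up later as Events in `kubectl describe pod`.
$ kubectl describe deployment tcpdump-svc | grep -i liveness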

Let's try it.

First, a plain deploy

k8s-node-001
$ kubectl apply -f tcpdump-svc.yaml

Deployed.

k8s-node-001
$ kubectl get pods --all-namespaces -o wide
NAMESPACE        NAME                                   READY     STATUS    RESTARTS   AGE       IP                NODE
default          tcpdump-svc-975c494f-4b7vs             1/1       Running   0          8m        10.44.0.2         k8s-node-003
default          tcpdump-svc-975c494f-gw6gh             1/1       Running   0          8m        10.36.0.1         k8s-node-002

Now let's look at the logs.

Logs of tcpdump-svc-975c494f-gw6gh

k8s-node-001
$ kubectl logs tcpdump-svc-975c494f-gw6gh
07:16:31.452312 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [S], seq 679272549, win 26720, options [mss 1336,sackOK,TS val 232796391 ecr 0,nop,wscale 7], length 0
07:16:31.452357 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [S.], seq 3261305993, ack 679272550, win 26480, options [mss 1336,sackOK,TS val 232796391 ecr 232796391,nop,wscale 7], length 0
07:16:31.452381 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [.], ack 1, win 209, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452508 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [P.], seq 1:110, ack 1, win 209, options [nop,nop,TS val 232796391 ecr 232796391], length 109: HTTP: GET / HTTP/1.1
07:16:31.452514 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [.], ack 110, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452700 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [P.], seq 1:625, ack 110, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 624: HTTP: HTTP/1.1 200 OK
07:16:31.452728 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [.], ack 625, win 219, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452744 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [F.], seq 625, ack 110, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452879 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [F.], seq 110, ack 626, win 219, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452895 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [.], ack 111, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 0

Logs of tcpdump-svc-975c494f-4b7vs

k8s-node-001
$ kubectl logs tcpdump-svc-975c494f-4b7vs
07:16:30.072701 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [S], seq 34790703, win 26720, options [mss 1336,sackOK,TS val 232767982 ecr 0,nop,wscale 7], length 0
07:16:30.072745 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [S.], seq 1984534437, ack 34790704, win 26480, options [mss 1336,sackOK,TS val 232767982 ecr 232767982,nop,wscale 7], length 0
07:16:30.072773 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [.], ack 1, win 209, options [nop,nop,TS val 232767982 ecr 232767982], length 0
07:16:30.072937 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [P.], seq 1:110, ack 1, win 209, options [nop,nop,TS val 232767982 ecr 232767982], length 109: HTTP: GET / HTTP/1.1
07:16:30.072944 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [.], ack 110, win 207, options [nop,nop,TS val 232767982 ecr 232767982], length 0
07:16:30.073119 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [P.], seq 1:625, ack 110, win 207, options [nop,nop,TS val 232767983 ecr 232767982], length 624: HTTP: HTTP/1.1 200 OK
07:16:30.073147 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [.], ack 625, win 219, options [nop,nop,TS val 232767983 ecr 232767983], length 0
07:16:30.073162 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [F.], seq 625, ack 110, win 207, options [nop,nop,TS val 232767983 ecr 232767983], length 0
07:16:30.073304 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [F.], seq 110, ack 626, win 219, options [nop,nop,TS val 232767983 ecr 232767983], length 0
07:16:30.073313 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [.], ack 111, win 207, options [nop,nop,TS val 232767983 ecr 232767983], length 0

Next, run tcpdump on the nodes themselves, specifying the weave interface.

k8s-node-002
$ tcpdump -i weave port 80
07:16:31.452297 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [S], seq 679272549, win 26720, options [mss 1336,sackOK,TS val 232796391 ecr 0,nop,wscale 7], length 0
07:16:31.452358 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [S.], seq 3261305993, ack 679272550, win 26480, options [mss 1336,sackOK,TS val 232796391 ecr 232796391,nop,wscale 7], length 0
07:16:31.452380 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [.], ack 1, win 209, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452505 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [P.], seq 1:110, ack 1, win 209, options [nop,nop,TS val 232796391 ecr 232796391], length 109: HTTP: GET / HTTP/1.1
07:16:31.452515 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [.], ack 110, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452703 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [P.], seq 1:625, ack 110, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 624: HTTP: HTTP/1.1 200 OK
07:16:31.452726 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [.], ack 625, win 219, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452744 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [F.], seq 625, ack 110, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452874 IP 10.36.0.0.47218 > 10.36.0.1.80: Flags [F.], seq 110, ack 626, win 219, options [nop,nop,TS val 232796391 ecr 232796391], length 0
07:16:31.452895 IP 10.36.0.1.80 > 10.36.0.0.47218: Flags [.], ack 111, win 207, options [nop,nop,TS val 232796391 ecr 232796391], length 0
k8s-node-003
$ tcpdump -i weave port 80
07:16:30.072687 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [S], seq 34790703, win 26720, options [mss 1336,sackOK,TS val 232767982 ecr 0,nop,wscale 7], length 0
07:16:30.072745 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [S.], seq 1984534437, ack 34790704, win 26480, options [mss 1336,sackOK,TS val 232767982 ecr 232767982,nop,wscale 7], length 0
07:16:30.072768 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [.], ack 1, win 209, options [nop,nop,TS val 232767982 ecr 232767982], length 0
07:16:30.072930 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [P.], seq 1:110, ack 1, win 209, options [nop,nop,TS val 232767982 ecr 232767982], length 109: HTTP: GET / HTTP/1.1
07:16:30.072945 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [.], ack 110, win 207, options [nop,nop,TS val 232767982 ecr 232767982], length 0
07:16:30.073121 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [P.], seq 1:625, ack 110, win 207, options [nop,nop,TS val 232767983 ecr 232767982], length 624: HTTP: HTTP/1.1 200 OK
07:16:30.073145 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [.], ack 625, win 219, options [nop,nop,TS val 232767983 ecr 232767983], length 0
07:16:30.073162 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [F.], seq 625, ack 110, win 207, options [nop,nop,TS val 232767983 ecr 232767983], length 0
07:16:30.073299 IP 10.44.0.0.47348 > 10.44.0.2.80: Flags [F.], seq 110, ack 626, win 219, options [nop,nop,TS val 232767983 ecr 232767983], length 0
07:16:30.073313 IP 10.44.0.2.80 > 10.44.0.0.47348: Flags [.], ack 111, win 207, options [nop,nop,TS val 232767983 ecr 232767983], length 0

Result

!?

I haven't sent any traffic to this nginx myself, and yet
the requests are apparently coming from the worker's weave interface.

k8s-node-002
$ ip a show dev weave
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 7a:3d:05:61:92:b0 brd ff:ff:ff:ff:ff:ff
    inet 10.36.0.0/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::783d:5ff:fe61:92b0/64 scope link
       valid_lft forever preferred_lft forever
k8s-node-003
$ ip a show dev weave
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 42:a0:a1:10:94:63 brd ff:ff:ff:ff:ff:ff
    inet 10.44.0.0/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::40a0:a1ff:fe10:9463/64 scope link
       valid_lft forever preferred_lft forever

Wait... does this mean the probes are being sent from the weave interface...???

Analysis: which interface is the probe coming from?

Now that the behavior is confirmed, let's revisit how Weave Net is put together.
https://www.weave.works/docs/net/latest/concepts/ip-addresses/
Weave Net is structured as follows.

[Figure: Weave Net network layout, from the Weave Net docs]

In other words, when traffic comes in from an external network,
it reaches a pod by way of the weave router.

So... in this case the probe does indeed seem to be exchanged over the weave interface, which means the spot we captured with tcpdump should be right here:

[Figure: the same Weave Net layout, with the tcpdump capture point marked]

Hmm, nothing seems wrong so far... but that alone does not prove the probe originates at the weave interface. Troubling. Somewhat out of desperation, let's just point tcpdump at every interface in sight (a sweep loop is also sketched right after the interface listing below).
The worker k8s-node-002 has the following interfaces:

k8s-node-002
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 56:00:01:99:98:3f brd ff:ff:ff:ff:ff:ff
    inet XX.XX.XX.XX/23 brd 45.77.15.255 scope global dynamic eth0
       valid_lft 64369sec preferred_lft 64369sec
    inet6 fe80::5400:1ff:fe99:983f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 5a:00:01:99:98:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/16 brd 192.168.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5800:1ff:fe99:983f/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:20:c4:40:98 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
5: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7a:e0:c8:26:7d:11 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::78e0:c8ff:fe26:7d11/64 scope link
       valid_lft forever preferred_lft forever
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 7a:3d:05:61:92:b0 brd ff:ff:ff:ff:ff:ff
    inet 10.36.0.0/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::783d:5ff:fe61:92b0/64 scope link
       valid_lft forever preferred_lft forever
8: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1e:a4:88:59:8c:f4 brd ff:ff:ff:ff:ff:ff
10: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether 5a:b9:f1:d8:4b:66 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::58b9:f1ff:fed8:4b66/64 scope link
       valid_lft forever preferred_lft forever
11: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether ea:f0:53:cb:d8:75 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e8f0:53ff:fecb:d875/64 scope link
       valid_lft forever preferred_lft forever
12: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether f2:eb:15:d2:52:8e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f0eb:15ff:fed2:528e/64 scope link
       valid_lft forever preferred_lft forever
68: vethwepl7a02de0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether e6:4e:f7:ee:8c:ec brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::e44e:f7ff:feee:8cec/64 scope link
       valid_lft forever preferred_lft forever
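Rather than running tcpdump by hand on each interface as in the sections below, a sweep like this would cover them all in one pass (my own shortcut, not what was actually run; each capture stops after 5 packets or 15 seconds):

k8s-node-002
# Capture up to 5 port-80 packets on every interface, 15 seconds max each.
$ for i in $(ls /sys/class/net); do
    echo "== $i =="
    timeout 15 tcpdump -nn -i "$i" -c 5 port 80
  done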

eth1

k8s-node-002
$ tcpdump -i eth1 port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Nothing 🙅‍♀️

docker0

k8s-node-002
$ tcpdump -i docker0 port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Nothing 🙅‍♀️

datapath

k8s-node-002
$ tcpdump -i datapath port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on datapath, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Nothing 🙅‍♀️

vethwe-datapath@vethwe-bridge

k8s-node-002
$ tcpdump -i vethwe-datapath@vethwe-bridge port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vethwe-datapath@vethwe-bridge, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Nothing 🙅‍♀️

vxlan-6784

k8s-node-002
$ tcpdump -i vxlan-6784 port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vxlan-6784, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Nothing 🙅‍♀️

Wait, what...?!

vethwepl7a02de0@if67

k8s-node-002
$ tcpdump -i vethwepl7a02de0@if67 port 80 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vethwepl7a02de0@if67, link-type EN10MB (Ethernet), capture size 262144 bytes
08:04:31.452347 IP 10.36.0.0.48946 > 10.36.0.1.80: Flags [S], seq 3786421879, win 26720, options [mss 1336,sackOK,TS val 235676391 ecr 0,nop,wscale 7], length 0
08:04:31.452413 IP 10.36.0.1.80 > 10.36.0.0.48946: Flags [S.], seq 252615539, ack 3786421880, win 26480, options [mss 1336,sackOK,TS val 235676391 ecr 235676391,nop,wscale 7], length 0
08:04:31.452438 IP 10.36.0.0.48946 > 10.36.0.1.80: Flags [.], ack 1, win 209, options [nop,nop,TS val 235676391 ecr 235676391], length 0
08:04:31.452639 IP 10.36.0.0.48946 > 10.36.0.1.80: Flags [P.], seq 1:110, ack 1, win 209, options [nop,nop,TS val 235676391 ecr 235676391], length 109: HTTP: GET / HTTP/1.1
08:04:31.452667 IP 10.36.0.1.80 > 10.36.0.0.48946: Flags [.], ack 110, win 207, options [nop,nop,TS val 235676391 ecr 235676391], length 0
08:04:31.452944 IP 10.36.0.1.80 > 10.36.0.0.48946: Flags [P.], seq 1:625, ack 110, win 207, options [nop,nop,TS val 235676392 ecr 235676391], length 624: HTTP: HTTP/1.1 200 OK
08:04:31.452968 IP 10.36.0.0.48946 > 10.36.0.1.80: Flags [.], ack 625, win 219, options [nop,nop,TS val 235676392 ecr 235676392], length 0
08:04:31.452988 IP 10.36.0.1.80 > 10.36.0.0.48946: Flags [F.], seq 625, ack 110, win 207, options [nop,nop,TS val 235676392 ecr 235676392], length 0
08:04:31.453087 IP 10.36.0.0.48946 > 10.36.0.1.80: Flags [F.], seq 110, ack 626, win 219, options [nop,nop,TS val 235676392 ecr 235676392], length 0
08:04:31.453105 IP 10.36.0.1.80 > 10.36.0.0.48946: Flags [.], ack 111, win 207, options [nop,nop,TS val 235676392 ecr 235676392], length 0
08:04:36.452327 IP 10.36.0.0.48950 > 10.36.0.1.80: Flags [S], seq 302422440, win 26720, options [mss 1336,sackOK,TS val 235681391 ecr 0,nop,wscale 7], length 0
08:04:36.452393 IP 10.36.0.1.80 > 10.36.0.0.48950: Flags [S.], seq 2089306582, ack 302422441, win 26480, options [mss 1336,sackOK,TS val 235681391 ecr 235681391,nop,wscale 7], length 0
08:04:36.452419 IP 10.36.0.0.48950 > 10.36.0.1.80: Flags [.], ack 1, win 209, options [nop,nop,TS val 235681391 ecr 235681391], length 0
08:04:36.452560 IP 10.36.0.0.48950 > 10.36.0.1.80: Flags [P.], seq 1:110, ack 1, win 209, options [nop,nop,TS val 235681391 ecr 235681391], length 109: HTTP: GET / HTTP/1.1
08:04:36.452571 IP 10.36.0.1.80 > 10.36.0.0.48950: Flags [.], ack 110, win 207, options [nop,nop,TS val 235681391 ecr 235681391], length 0
08:04:36.452773 IP 10.36.0.1.80 > 10.36.0.0.48950: Flags [P.], seq 1:625, ack 110, win 207, options [nop,nop,TS val 235681391 ecr 235681391], length 624: HTTP: HTTP/1.1 200 OK
08:04:36.452789 IP 10.36.0.0.48950 > 10.36.0.1.80: Flags [.], ack 625, win 219, options [nop,nop,TS val 235681391 ecr 235681391], length 0
08:04:36.452804 IP 10.36.0.1.80 > 10.36.0.0.48950: Flags [F.], seq 625, ack 110, win 207, options [nop,nop,TS val 235681391 ecr 235681391], length 0
08:04:36.452908 IP 10.36.0.0.48950 > 10.36.0.1.80: Flags [F.], seq 110, ack 626, win 219, options [nop,nop,TS val 235681391 ecr 235681391], length 0
08:04:36.452920 IP 10.36.0.1.80 > 10.36.0.0.48950: Flags [.], ack 111, win 207, options [nop,nop,TS val 235681392 ecr 235681391], length 0

There it is 🙆‍♀️

Oh, this one finally hit, at the very last interface.
So the interfaces on which port 80 traffic can be observed are weave and vethwepl7a02de0@if67.

...But what is this thing?

Analysis: Kubernetes networking deep dive

From here on, some serious debugging.
Let's dig in.

## Pod

To start with, the pod's own interfaces look like this:

k8s-node-001
$ kubectl exec tcpdump-svc-975c494f-4b7vs 'ifconfig'
eth0      Link encap:Ethernet  HWaddr 5e:fc:b9:cb:e1:71
          inet addr:10.44.0.2  Bcast:10.47.255.255  Mask:255.240.0.0
          inet6 addr: fe80::5cfc:b9ff:fecb:e171/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:3502 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3486 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:312660 (312.6 KB)  TX bytes:669936 (669.9 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Right, just a single eth0.

Inside the namespace

Seeing how this eth0 is wired up takes a little work.
A pod's network namespace is tied to a process ID,
so we go through the steps below (following the guide linked here).
https://thenewstack.io/hackers-guide-kubernetes-networking/

k8s-node-001
$ kubectl get pod tcpdump-svc-975c494f-gw6gh  -o  jsonpath='{.status.containerStatuses[0].containerID}'  | cut -c 10-21
eef5943fa597
k8s-node-002
$ docker inspect --format '{{ .State.Pid }}' eef5943fa597
5427
$ nsenter -t 5427 -n ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
67: eth0@if68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default
    link/ether f6:87:b0:f4:76:38 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.36.0.1/12 brd 10.47.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f487:b0ff:fef4:7638/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

Putting this together with what we get on the node:

k8s-node-002
$ nsenter -t 5427 -n ethtool -S eth0
NIC statistics:
     peer_ifindex: 68
$ ip -d link|grep ^68
68: vethwepl7a02de0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default

Interface indexes 67 and 68 are peers: the pod's eth0 and vethwepl7a02de0 on the host are the two ends of the same veth pair.

bridge

My guess is that Weave Net also uses a Linux bridge, so let's check.

k8s-node-002
$ brctl show weave
bridge name     bridge id               STP enabled     interfaces
weave           8000.7a3d056192b0       no              vethwe-bridge
                                                        vethwepl7a02de0
$ ip a show dev vethwe-bridge
11: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether ea:f0:53:cb:d8:75 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e8f0:53ff:fecb:d875/64 scope link
       valid_lft forever preferred_lft forever
$ ip a show dev vethwepl7a02de0
68: vethwepl7a02de0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether e6:4e:f7:ee:8c:ec brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::e44e:f7ff:feee:8cec/64 scope link
       valid_lft forever preferred_lft forever

...And what is this vethwe-datapath? Take a look at the following.

k8s-node-002
$ ip -d link
5: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 7a:e0:c8:26:7d:11 brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
10: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP mode DEFAULT group default
    link/ether 5a:b9:f1:d8:4b:66 brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth
    openvswitch_slave addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
11: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
    link/ether ea:f0:53:cb:d8:75 brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.7a:3d:5:61:92:b0 designated_root 8000.7a:3d:5:61:92:b0 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
12: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc noqueue master datapath state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether f2:eb:15:d2:52:8e brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 0 srcport 0 0 dstport 6784 nolearning ageing 300 udpcsum noudp6zerocsumtx udp6zerocsumrx external
    openvswitch_slave addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

Phew... in short: there is an Open vSwitch datapath named datapath; hanging off it are vethwe-datapath and vxlan-6784; and vethwe-datapath and vethwe-bridge are in turn joined as a veth pair...

Anyway, after also reading the official sources and docs, the node's network ends up looking like this chaotic picture:

[Figure: Weave Net interface topology on a node — datapath, vethwe-datapath/vethwe-bridge, vxlan-6784, the weave bridge, and the per-pod veths]

Combine that with the earlier finding,

So the interfaces on which port 80 traffic can be observed are weave and vethwepl7a02de0@if67.

and we see that port 80 traffic really is observed only between the weave bridge and the test pod.

Hmm... so the probe is probably being sent out from the weave bridge...?
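One way to test that hypothesis is to ask the kernel which source address it would pick for a host-originated packet to the pod IP (a sketch; 10.36.0.1 is the pod on this node). If the answer is along the lines of "dev weave src 10.36.0.0", it matches the source address seen in every capture above:

k8s-node-002
# Which interface and source IP would a host-originated packet to the pod use?
$ ip route get 10.36.0.1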

iptables

From the official slides:

[Figure: Weave Net and iptables, from the FOSDEM 2017 slides linked below]
https://archive.fosdem.org/2017/schedule/event/weave_net_npc/attachments/slides/1528/export/events/attachments/weave_net_npc/slides/1528/Weave_npc_Fosdem_Presentation.pdf

Naturally, iptables is also an important part of how traffic is steered here.
Let's take a look; the conntrack command shows the netfilter connection-tracking entries that iptables relies on.

k8s-node-002
$ conntrack -L -d 10.36.0.1
tcp      6 102 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33666 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33666 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 87 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33658 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33658 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 72 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33632 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33632 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 37 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33612 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33612 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 82 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33654 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33654 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33660 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33660 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 52 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33620 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33620 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 57 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33624 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33624 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 47 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33618 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33618 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 107 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33670 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33670 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 27 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33606 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33606 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 12 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33596 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33596 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 67 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33630 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33630 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 117 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33676 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33676 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 32 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33608 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33608 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 112 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33672 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33672 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 7 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33594 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33594 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 62 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33626 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33626 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 42 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33614 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33614 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 17 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33600 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33600 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 2 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33590 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33590 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 77 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33638 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33638 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 22 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33602 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33602 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 97 TIME_WAIT src=10.36.0.0 dst=10.36.0.1 sport=33664 dport=80 src=10.36.0.1 dst=10.36.0.0 sport=80 dport=33664 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.4 (conntrack-tools): 24 flow entries have been shown.

These are the only entries, so the probe really does seem to come from the weave bridge...?

Analysis: come to think of it...

Even if some process were doing the probing, the health-check connection is over in an instant, so catching the sending process with ss or lsof is next to impossible. Hmm, now what.
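For completeness, this is the kind of polling one could try; since the probe's connection is torn down within about a millisecond, it will almost never catch anything (a sketch, run as root on the node hosting the pod):

k8s-node-002
# Busy-poll for any host socket talking to the pod on port 80.
$ while true; do
    out=$(ss -tnp dst 10.36.0.1)
    echo "$out" | grep -q ':80' && { echo "$out"; break; }
  done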

Analysis: still not sure, so I ended up reading the source code

Now for the main event!! (finally)
To get to the heart of whether the probe really originates from the weave bridge, let's read the source code (should have done that from the start).

// Probe returns a ProbeRunner capable of running an http check.
func (pr httpProber) Probe(url *url.URL, headers http.Header, timeout time.Duration) (probe.Result, string, error) {
    return DoHTTPProbe(url, headers, &http.Client{Timeout: timeout, Transport: pr.transport})
}

type HTTPGetInterface interface {
    Do(req *http.Request) (*http.Response, error)
}

// DoHTTPProbe checks if a GET request to the url succeeds.
// If the HTTP response code is successful (i.e. 400 > code >= 200), it returns Success.
// If the HTTP response code is unsuccessful or HTTP communication fails, it returns Failure.
// This is exported because some other packages may want to do direct HTTP probes.
func DoHTTPProbe(url *url.URL, headers http.Header, client HTTPGetInterface) (probe.Result, string, error) {
    req, err := http.NewRequest("GET", url.String(), nil)
    if err != nil {
        // Convert errors into failures to catch timeouts.
        return probe.Failure, err.Error(), nil
    }
    if _, ok := headers["User-Agent"]; !ok {
        if headers == nil {
            headers = http.Header{}
        }
        // explicitly set User-Agent so it's not set to default Go value
        v := version.Get()
        headers.Set("User-Agent", fmt.Sprintf("kube-probe/%s.%s", v.Major, v.Minor))
    }
    req.Header = headers
    if headers.Get("Host") != "" {
        req.Host = headers.Get("Host")
    }
    res, err := client.Do(req)
    if err != nil {
        // Convert errors into failures to catch timeouts.
        return probe.Failure, err.Error(), nil
    }
    defer res.Body.Close()
    b, err := ioutil.ReadAll(res.Body)
    if err != nil {
        return probe.Failure, "", err
    }
    body := string(b)
    if res.StatusCode >= http.StatusOK && res.StatusCode < http.StatusBadRequest {
        glog.V(4).Infof("Probe succeeded for %s, Response: %v", url.String(), *res)
        return probe.Success, body, nil
    }
    glog.V(4).Infof("Probe failed for %s with request headers %v, response body: %v", url.String(), headers, body)
    return probe.Failure, fmt.Sprintf("HTTP probe failed with statuscode: %d", res.StatusCode), nil
}

client.Do(req) is called without pinning any particular source, so whichever host runs this function is the one that ends up probing, with the kernel choosing the source address.

As for the "who", the answer is the kubelet.
The code above is called from libraries compiled into the kubelet binary, so in that sense it's self-evident.

When the kubelet initializes, it creates a statusManager and a probe manager as shown below.
The statusManager holds status information and is responsible for periodically updating pod status; rather than watching for pod status changes itself, it provides an interface for other components, such as the probe manager, to call.

The probe manager periodically checks the status of the containers in a pod. When a status change is detected, it calls the statusManager, which then updates the pod's status.

klet.livenessManager = proberesults.NewManager()
klet.statusManager = status.NewManager(klet.kubeClient, klet.podManager, klet)

The probe manager eventually goes through functions like the one below and ends up calling DoHTTPProbe.

func (m *manager) AddPod(pod *v1.Pod) {
    m.workerLock.Lock()
    defer m.workerLock.Unlock()

    key := probeKey{podUID: pod.UID}
    for _, c := range pod.Spec.Containers {
        key.containerName = c.Name

        if c.ReadinessProbe != nil {
            key.probeType = readiness
            if _, ok := m.workers[key]; ok {
                glog.Errorf("Readiness probe already exists! %v - %v",
                    format.Pod(pod), c.Name)
                return
            }
            w := newWorker(m, readiness, pod, c)
            m.workers[key] = w
            go w.run()
        }

        if c.LivenessProbe != nil {
            key.probeType = liveness
            if _, ok := m.workers[key]; ok {
                glog.Errorf("Liveness probe already exists! %v - %v",
                    format.Pod(pod), c.Name)
                return
            }
            w := newWorker(m, liveness, pod, c)
            m.workers[key] = w
            go w.run()
        }
    }
}

http://www.voidcn.com/article/p-wupznmyz-bmq.html
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet.go
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/prober/prober_manager.go#L141

...So the conclusion is that the kubelet service, running from the kubelet binary, is what performs the probe. And since in this setup the host itself issues the request, the packet leaves, routing-wise, through the interface that shares the pods' network range: the weave interface.

Which is consistent with everything we saw in tcpdump.
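As a final sanity check (my addition, not part of the original captures): issuing the same HTTP GET by hand from the node should show up in the pod's tcpdump as a flow from 10.36.0.0, exactly like the kubelet's probe, because it follows the same route:

k8s-node-002
# Hand-rolled "probe" from the node; the pod's tcpdump should show the
# request arriving from 10.36.0.0, the weave bridge IP.
$ curl -s -o /dev/null -w '%{http_code}\n' http://10.36.0.1:80/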

Conclusion: the probe is sent by the host on which the kubelet (and its probe manager) is running.

And that's that (ง˘ω˘)ว
Readiness probes and tcpSocket probes presumably work the same way;
exec probes are maybe the same mechanism as kubectl exec? No idea, honestly.

If I got anything wrong, please throw your corrections my way~
