IBM Cloud: Why Red Hat OpenShift on IBM Cloud (ROKS) can reach icr.io without a Public Gateway

Posted at 2021-09-14

1. Introduction

Red Hat OpenShift on IBM Cloud (ROKS) can be built as a private-only environment on a VPC. In fact, the subnets where the ROKS worker nodes are placed do not even need to be attached to a Public Gateway, which makes it possible to build a closed environment with no Internet access at all.
Some containers pull their images from a registry called icr.io, and yet icr.io is somehow reachable even without a Public Gateway, as described above. This is despite the fact that a plain name lookup of icr.io returns global IP addresses. In this article I looked into why that is.

$ dig +short icr.io
169.60.98.86
169.63.104.236
169.62.37.246

2. The Worker Node's /etc/hosts

It turns out that icr.io and the related registry hostnames are explicitly defined in the worker node's /etc/hosts. (Note that dig queries the DNS server directly and bypasses /etc/hosts, which is why the lookup above still returned global IPs.)

$ oc debug node/10.0.0.25 -- /bin/bash -c "cat /host/etc/hosts"
Creating debug namespace/openshift-debug-node-6vzkx ...
Starting pod/100025-debug ...
To use host binaries, run `chroot /host`
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.redhat.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 kube-c3j463dt0o165mnm7be0-privonlysya-default-000009a0 kube-c3j463dt0o165mnm7be0-privonlysya-default-000009a0
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4

# The following lines are desirable for IPv6 capable hosts
::1 kube-c3j463dt0o165mnm7be0-privonlysya-default-000009a0 kube-c3j463dt0o165mnm7be0-privonlysya-default-000009a0
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

172.20.0.2 registry.ng.bluemix.net us.icr.io
172.20.0.3 jp.icr.io
172.20.0.4 de.icr.io registry.eu-de.bluemix.net
172.20.0.5 uk.icr.io registry.eu-gb.bluemix.net
172.20.0.6 registry.au-syd.bluemix.net au.icr.io
172.20.0.7 registry.bluemix.net icr.io cp.icr.io
172.20.0.9 jp2.icr.io
172.20.0.10 ca.icr.io
172.20.0.11 br.icr.io
172.21.75.245 image-registry.openshift-image-registry.svc image-registry.openshift-image-registry.svc.cluster.local # openshift-generated-node-resolver

Removing debug pod ...
Removing debug namespace/openshift-debug-node-6vzkx ...
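As a quick sanity check (a hypothetical command of my own, assuming getent is available in the host image), resolving icr.io through the host's name service switch should return the /etc/hosts entry rather than the global IPs:

$ oc debug node/10.0.0.25 -- chroot /host getent hosts icr.io
# expected to print the /etc/hosts entry shown above, i.e. 172.20.0.7 with the icr.io aliases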

So where, then, does a destination such as 172.20.0.7 actually live?

3. The Worker Node's loopback interface

All of the icr.io-related addresses turned out to be bound to the host's loopback interface. In other words, the destination is the worker node itself.

$ oc debug node/10.0.0.25 -- /bin/bash -c "ip a show dev lo"
Creating debug namespace/openshift-debug-node-z676g ...
Starting pod/100025-debug ...
To use host binaries, run `chroot /host`
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.1/32 brd 172.20.0.1 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.2/32 brd 172.20.0.2 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.3/32 brd 172.20.0.3 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.4/32 brd 172.20.0.4 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.5/32 brd 172.20.0.5 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.6/32 brd 172.20.0.6 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.7/32 brd 172.20.0.7 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.9/32 brd 172.20.0.9 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.10/32 brd 172.20.0.10 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.20.0.11/32 brd 172.20.0.11 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

Removing debug pod ...
Removing debug namespace/openshift-debug-node-z676g ...
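The routing table confirms this as well (a hypothetical check of my own): ip route get should report the address as a local destination reached via lo, meaning the packet never leaves the node.

$ oc debug node/10.0.0.25 -- chroot /host ip route get 172.20.0.7
# expected: "local 172.20.0.7 dev lo ...", i.e. a locally delivered destination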

So how are requests destined for the host itself actually handled?

4. Inside the Proxy Pod

According to https://cloud.ibm.com/docs/openshift?topic=openshift-service-arch&locale=en#service-architecture_vpc ,
the IBM-specific system components live in the kube-system namespace.
It turns out that each host runs a proxy pod named ibm-master-proxy-static-xx.xx.xx.xx, which is built on HAProxy. It is configured as shown below, which is why the registry can be reached over the private network.

$ oc get pods -A |grep ibm
kube-system                                        ibm-keepalived-watcher-82mqp                              1/1     Running            4          13h
kube-system                                        ibm-keepalived-watcher-xzt42                              1/1     Running            4          13h
kube-system                                        ibm-master-proxy-static-10.0.0.25                         2/2     Running            8          13h
kube-system                                        ibm-master-proxy-static-10.1.0.17                         2/2     Running            8          13h
kube-system                                        ibm-vpc-block-csi-controller-0                            4/4     Running            16         14h
kube-system                                        ibm-vpc-block-csi-node-29vz4                              3/3     Running            12         13h
kube-system                                        ibm-vpc-block-csi-node-98jm4                              3/3     Running            12         13h
openshift-marketplace                              ibm-router-operators-59kj6                                1/1     Running            1          10h
$ oc describe pod ibm-master-proxy-static-10.0.0.25 -n kube-system
(output truncated)
Containers:
  ibm-master-proxy-static:
    Container ID:  cri-o://d4e6277bdce3245628e3c44f6755b4b80f16c33b45c24fcd4191782c3922db64
    Image:         registry.au-syd.bluemix.net/armada-master/haproxy:9c98dc5651022fe1e20e569aa2d89f32a51ccec3
    Image ID:      registry.au-syd.bluemix.net/armada-master/haproxy@sha256:7e49b0aec881f07d2ecb77a996b568e22cbd9b314417a9f8723da77ea4664eac
    Port:          2040/TCP
    Host Port:     2040/TCP
    Command:
      /docker-entrypoint.sh
      -f
      /usr/local/etc/haproxy/haproxy-static-pod.cfg
      -V
      -dR
(output truncated)

$ oc exec ibm-master-proxy-static-10.0.0.25 -n kube-system -- cat /usr/local/etc/haproxy/haproxy-static-pod.cfg
(output truncated)
frontend internationalregistryfrontend
    bind 172.20.0.7:443
    mode tcp
    log global
    option tcplog
    default_backend internationalregistrybackend

backend internationalregistrybackend
    mode tcp
    balance roundrobin
    log global
    option tcp-check
    option log-health-checks
    default-server inter 60s  fall 3 rise 3
    server z1-1.private.icr.io  z1-1.private.icr.io:443 check on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions
    server z2-1.private.icr.io  z2-1.private.icr.io:443 check on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions
    server z3-1.private.icr.io  z3-1.private.icr.io:443 check on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions

frontend internationalregistrysigningfrontend
    bind 172.20.0.7:4443
    mode tcp
    log global
    option tcplog
    default_backend internationalregistrysigningbackend

backend internationalregistrysigningbackend
    mode tcp
    balance roundrobin
    log global
    option tcp-check
    option log-health-checks
    default-server inter 60s  fall 3 rise 3
    server z1-1.private.icr.io  z1-1.private.icr.io:4443 check on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions
    server z2-1.private.icr.io  z2-1.private.icr.io:4443 check on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions
    server z3-1.private.icr.io  z3-1.private.icr.io:4443 check on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions

(output truncated)

In short, a TCP connection to icr.io (172.20.0.7:443) lands on HAProxy running on the node itself, which forwards it to the z*-1.private.icr.io backends that are reachable over the VPC's private network. This is why image pulls from icr.io work without a Public Gateway.
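As a hypothetical verification step (assuming ss is available in the host image and that the proxy pod uses the host network, which its binding to the host's loopback addresses suggests), one could confirm that HAProxy is listening on these addresses:

$ oc debug node/10.0.0.25 -- chroot /host ss -tlnp | grep 172.20.0.7
# should show listening sockets on 172.20.0.7:443 and 172.20.0.7:4443,
# matching the frontend definitions above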