Connecting from IBM Cloud to an On-Premises Environment Using Satellite Connector

Posted at 2023-10-18

1. Introduction

On IBM Cloud, a mechanism called Satellite Connector makes it possible to connect to an on-premises network not only from IaaS environments such as IBM Cloud VPC and Classic Infrastructure, but also from PaaS/SaaS environments.

https://cloud.ibm.com/docs/satellite?topic=satellite-run-agent-locally&interface=ui&locale=en
https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui&locale=en

This article walks through a hands-on verification of this mechanism.

2. How Satellite Connector Works

The following is an overview.

  • IBM Cloud has a network range called the Service network, which is reachable from any IBM Cloud IaaS environment such as VPC or Classic Infrastructure. Since IBM Cloud PaaS/SaaS offerings are also built on VPC or Classic Infrastructure, they too can generally reach this Service network.
  • When you use Satellite Connector, an endpoint is created for you on the Service network. A service on IBM Cloud accesses that endpoint, and the request is carried over a secure tunnel to the target service associated with the endpoint in advance, reachable from the Docker host environment that terminates the tunnel.
  • The client that terminates the secure tunnel is a Docker container.
    • Currently only x86 environments are supported.
    • Because the secure tunnel connection is initiated from the client side, a stateful firewall on the client side only needs to allow outbound traffic. The network requirements for the outbound connection are documented here.
    • Multiple containers may be deployed for availability (resulting in multiple secure tunnels). Requests from the service side are automatically load-balanced across these clients, and during a failure requests are routed only to the containers that are still running.
  • An ACL feature can restrict the connection source for access from IBM Cloud IaaS/PaaS to the endpoint.

(Architecture diagram)

In this verification, a web server (172.16.0.4) was prepared as the target server, and access to it from the IBM Cloud side was tested.
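In other words, a workload on IBM Cloud does not reach 172.16.0.4 directly; it accesses the endpoint address assigned in section 4-2, and the request is forwarded over the secure tunnel to the target. A minimal sketch using the endpoint address that appears later in this article:

# Not reachable directly from IBM Cloud (on-premises address):
#   curl http://172.16.0.4
# Reachable from IBM Cloud via the Satellite Connector endpoint:
curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786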

3. Target Server Setup

This time a web server was built as the target server (the build steps are omitted).
Connectivity from the environment hosting the Docker container to the target had already been confirmed in advance, as shown below.

[root@syasuda-sc1 ~]# curl 172.16.0.4
Hello World!
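For reference, since the build steps are omitted above, a minimal sketch of a test web server that returns "Hello World!" could look like the following (an illustration only; it assumes a CentOS/RHEL-family host at 172.16.0.4 and is not necessarily how the server in this article was built):

# Minimal test web server (sketch only)
dnf install -y httpd
echo 'Hello World!' > /var/www/html/index.html
systemctl enable --now httpd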

4. Satellite Connector Setup (IBM Cloud Console)

4-1. Creating the Satellite Connector

(Screenshots: creating the Satellite Connector from the IBM Cloud console)

4-2. Creating the Endpoint

As noted above, the web server to be reached through the secure tunnel is 172.16.0.4.
(Screenshots: endpoint creation wizard)

As shown below, the target server reached via the secure tunnel is 172.16.0.4, and an endpoint address (c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786) was assigned. Accessing this endpoint address should make it possible to connect to the target.
(Screenshot: endpoint details)
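As a quick optional sanity check (not part of the original procedure), you can confirm from an IBM Cloud VSI that the endpoint hostname resolves; it should resolve to an address on the IBM Cloud side rather than to the on-premises target:

# Resolve the endpoint hostname from an IBM Cloud VSI (sketch)
nslookup c-01.private.jp-tok.link.satellite.cloud.ibm.com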

5. Satellite Connector Setup (Docker Host)

CentOS Stream release 9 is used here.

Environment check
[root@syasuda-sc1 ~]# cat /etc/redhat-release
CentOS Stream release 9
[root@syasuda-sc1 ~]# uname -a
Linux syasuda-sc1 5.14.0-361.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Aug 24 13:40:45 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Installing Docker
[root@syasuda-sc1 ~]# curl -fsSL https://get.docker.com -o get-docker.sh
[root@syasuda-sc1 ~]# sudo sh get-docker.sh
[root@syasuda-sc1 ~]# systemctl enable docker
[root@syasuda-sc1 ~]# systemctl start docker
[root@syasuda-sc1 ~]# docker version
Client: Docker Engine - Community
 Version:           24.0.6
 API version:       1.43
 Go version:        go1.20.7
 Git commit:        ed223bc
 Built:             Mon Sep  4 12:33:18 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep  4 12:31:49 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.24
  GitCommit:        61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
 runc:
  Version:          1.1.9
  GitCommit:        v1.1.9-0-gccaecfc
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Set the environment file as follows. The docs guide a procedure in which the API key is placed in a file, the file system containing it is mounted when starting Docker, and the value is then set into an environment variable from there. However, if an environment-variable file is going to be specified anyway, the configuration below is simpler since no file-system mount is needed, so that is the procedure adopted here.

[root@syasuda-sc1 ~]# mkdir -p ~/agent/env-files
~/agent/env-files/env.txt (set an appropriate API key)
SATELLITE_CONNECTOR_ID=U2F0ZWxsaXRlQ29ubmVjdG9yOiJja25pdGtidDFmcW5qdjUwOXBlZyI
SATELLITE_CONNECTOR_IAM_APIKEY=xxxxxxxxxxx
SATELLITE_CONNECTOR_REGION=jp-tok
SATELLITE_CONNECTOR_TAGS=sample tag
[root@syasuda-sc1 ~]# ls -l ~/agent/env-files/
total 4
-rw-------. 1 root root 225 Oct 18 01:08 env.txt
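For reference, the IAM API key set in SATELLITE_CONNECTOR_IAM_APIKEY can be created with the ibmcloud CLI; a sketch (the key name and description are arbitrary placeholders):

# Create an IAM API key and save it to a file (sketch)
ibmcloud iam api-key-create satellite-connector-key -d "Satellite Connector agent" --file apikey.json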
Logging in to icr.io. Use the API key value as the password.
[root@syasuda-sc1 ~]# docker login -u iamapikey icr.io
Password:
Login Succeeded

[root@syasuda-sc1 ~]# docker pull icr.io/ibm/satellite-connector/satellite-connector-agent:latest

[root@syasuda-sc1 ~]# docker image ls
REPOSITORY                                                 TAG       IMAGE ID       CREATED      SIZE
icr.io/ibm/satellite-connector/satellite-connector-agent   latest    66b5c7f42805   7 days ago   415MB
Note: this Satellite Connector container appears to run on Node.js.
[root@syasuda-sc1 ~]# docker inspect icr.io/ibm/satellite-connector/satellite-connector-agent
(excerpt)
        "DockerVersion": "20.10.25",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "node",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NODE_VERSION=18.18.0",
                "YARN_VERSION=1.22.19",
                "NODE_ENV=production"
            ],
            "Cmd": null,
            "Image": "sha256:48b9b56e061b239428c6c95084882d6de72093a137375722cb300860573ebd0c",
            "Volumes": null,
            "WorkingDir": "/connector_agent",
            "Entrypoint": [
                "/usr/local/bin/node",
                "/connector_agent/master.js",
                "/connector_agent/connector_agent.js"
            ],
(excerpt)
Note: listing the available image versions
[root@syasuda-sc1 ~]# ibmcloud cr images --include-ibm|grep connector
icr.io/ibm/connector-for-splunk                                                                       latest                                                  acf330e850a4   ibm        2 years ago     259 MB   -
icr.io/ibm/satellite-connector/satellite-connector-agent                                              latest                                                  0caddb11b1c1   ibm        6 days ago      125 MB   -
icr.io/ibm/satellite-connector/satellite-connector-agent                                              v1.1.0                                                  5f4e42c8d53e   ibm        3 months ago    124 MB   -
icr.io/ibm/satellite-connector/satellite-connector-agent                                              v1.1.1                                                  0caddb11b1c1   ibm        6 days ago      125 MB   -
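If you prefer to pin a specific version rather than pulling latest, you can pull one of the tags listed above, for example:

docker pull icr.io/ibm/satellite-connector/satellite-connector-agent:v1.1.1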

6. Connecting the Secure Tunnel

On the Docker host set up above, all that is needed is to start the container.

[root@syasuda-sc1 ~]# docker run -d --env-file ~/agent/env-files/env.txt icr.io/ibm/satellite-connector/satellite-connector-agent:latest
64bc568b3e5a3413aea38bc971cfcfbeca6545d79042404e15cfdbec538db7e2

[root@syasuda-sc1 ~]# docker ps
CONTAINER ID   IMAGE                                                             COMMAND                  CREATED          STATUS          PORTS     NAMES
64bc568b3e5a   icr.io/ibm/satellite-connector/satellite-connector-agent:latest   "/usr/local/bin/node…"   21 seconds ago   Up 20 seconds             stoic_chandrasekhar

[root@syasuda-sc1 ~]# docker logs 64bc568b3e5a
{"level":30,"time":"2023-10-18T01:14:54.287Z","pid":13,"hostname":"64bc568b3e5a","name":"agentOps","msgid":"A02","msg":"Load SATELLITE_CONNECTOR_ID value from SATELLITE_CONNECTOR_ID environment variable."}
{"level":30,"time":"2023-10-18T01:14:54.288Z","pid":13,"hostname":"64bc568b3e5a","name":"agentOps","msgid":"A02","msg":"Load SATELLITE_CONNECTOR_IAM_APIKEY value from SATELLITE_CONNECTOR_IAM_APIKEY environment variable."}
{"level":30,"time":"2023-10-18T01:14:54.288Z","pid":13,"hostname":"64bc568b3e5a","name":"agentOps","msgid":"A02","msg":"Load SATELLITE_CONNECTOR_TAGS value from SATELLITE_CONNECTOR_TAGS environment variable."}
{"level":30,"time":"2023-10-18T01:14:54.288Z","pid":13,"hostname":"64bc568b3e5a","name":"agentOps","msgid":"A02","msg":"Load SATELLITE_CONNECTOR_REGION value from SATELLITE_CONNECTOR_REGION environment variable."}
{"level":30,"time":"2023-10-18T01:14:54.288Z","pid":13,"hostname":"64bc568b3e5a","name":"connector-agent","msgid":"LA2","msg":"Connector id: U2F0ZWxsaXRlQ29ubmVjdG9yOiJja25pdGtidDFmcW5qdjUwOXBlZyI, region: jp-tok, release info: 20231010-7355f4583da5a6f197fee496016c351711fe380c_A."}
{"level":30,"time":"2023-10-18T01:14:54.346Z","pid":13,"hostname":"64bc568b3e5a","name":"tunneldns","msgid":"D04","msg":"DoTunnelDNSLookup DNS resolve iam.cloud.ibm.com to 23.46.229.72"}
{"level":30,"time":"2023-10-18T01:14:55.624Z","pid":13,"hostname":"64bc568b3e5a","name":"agent_tunnel","msg":"MakeLinkAPICall GET /v1/connectors/U2F0ZWxsaXRlQ29ubmVjdG9yOiJja25pdGtidDFmcW5qdjUwOXBlZyI status code 200"}
{"level":30,"time":"2023-10-18T01:14:55.624Z","pid":13,"hostname":"64bc568b3e5a","name":"agent_tunnel","msgid":"LAT03","msg":"Got configuration"}
{"level":30,"time":"2023-10-18T01:14:55.626Z","pid":13,"hostname":"64bc568b3e5a","name":"agent_tunnel","msgid":"LAT12","msg":"Saved region \"jp-tok\" to /connector_agent/x_link_region"}
{"level":30,"time":"2023-10-18T01:14:55.626Z","pid":13,"hostname":"64bc568b3e5a","name":"agent_tunnel","msgid":"LAT04-wss://c-01-ws.jp-tok.link.satellite.cloud.ibm.com/ws","msg":"Connecting to wss://c-01-ws.jp-tok.link.satellite.cloud.ibm.com/ws"}
{"level":30,"time":"2023-10-18T01:14:55.631Z","pid":13,"hostname":"64bc568b3e5a","name":"tunneldns","msgid":"D04","msg":"DoTunnelDNSLookup DNS resolve c-01-ws.jp-tok.link.satellite.cloud.ibm.com to 128.168.89.146"}
{"level":30,"time":"2023-10-18T01:14:55.877Z","pid":13,"hostname":"64bc568b3e5a","name":"TunnelCore","msgid":"TC24","msg":"Tunnel open","connector_id":"U2F0ZWxsaXRlQ29ubmVjdG9yOiJja25pdGtidDFmcW5qdjUwOXBlZyI"}
{"level":30,"time":"2023-10-18T01:14:55.878Z","pid":13,"hostname":"64bc568b3e5a","name":"connector_tunnel_base","msgid":"CTB26-U2F0ZWxsaXRlQ29ubmVjdG9yOiJja25pdGtidDFmcW5qdjUwOXBlZyI","msg":"Send connector information to tunnel server"}
{"level":30,"time":"2023-10-18T01:14:55.879Z","pid":13,"hostname":"64bc568b3e5a","name":"connector_tunnel_base","msgid":"CTB27","msg":"Tunnel connected","connector_id":"U2F0ZWxsaXRlQ29ubmVjdG9yOiJja25pdGtidDFmcW5qdjUwOXBlZyI","cipher":{"name":"TLS_AES_256_GCM_SHA384","standardName":"TLS_AES_256_GCM_SHA384","version":"TLSv1.3"}}

As a result, the container ID above is now shown as the Agent name under Active agents in the IBM Cloud portal.
(Screenshot: Active agents)
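For a longer-lived setup, you would likely want the agent container to come back automatically after a host reboot or a crash; a sketch (the container name and restart policy are additions for illustration, not part of this verification):

docker run -d \
  --name satellite-connector-agent \
  --restart unless-stopped \
  --env-file ~/agent/env-files/env.txt \
  icr.io/ibm/satellite-connector/satellite-connector-agent:latest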

7. Connectivity Test (Without ACLs)

Let's test connectivity with no ACL configured. This time, the endpoint is accessed from a server on the VPC (10.0.0.4).

The connection source is a VSI on the VPC
[root@vpc1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 02:00:01:02:70:58 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global dynamic eth0
       valid_lft 244sec preferred_lft 244sec
    inet6 fe80::1ff:fe02:7058/64 scope link
       valid_lft forever preferred_lft forever
Access succeeded.
[root@vpc1 ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
Hello World!
tcpdump captured on the Docker host
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
01:28:01.684237 vethc1783d7 P   IP 172.17.0.2.37890 > 172.16.0.4.80: Flags [S], seq 627242790, win 64240, options [mss 1460,sackOK,TS val 2793548404 ecr 0,nop,wscale 7], length 0
01:28:01.684250 docker0 In  IP 172.17.0.2.37890 > 172.16.0.4.80: Flags [S], seq 627242790, win 64240, options [mss 1460,sackOK,TS val 2793548404 ecr 0,nop,wscale 7], length 0
01:28:01.684263 eth0  Out IP 10.0.0.17.37890 > 172.16.0.4.80: Flags [S], seq 627242790, win 64240, options [mss 1460,sackOK,TS val 2793548404 ecr 0,nop,wscale 7], length 0
01:28:01.684818 eth0  In  IP 172.16.0.4.80 > 10.0.0.17.37890: Flags [S.], seq 1734867414, ack 627242791, win 65160, options [mss 1460,sackOK,TS val 2698726300 ecr 2793548404,nop,wscale 7], length 0
01:28:01.684826 docker0 Out IP 172.16.0.4.80 > 172.17.0.2.37890: Flags [S.], seq 1734867414, ack 627242791, win 65160, options [mss 1460,sackOK,TS val 2698726300 ecr 2793548404,nop,wscale 7], length 0
01:28:01.684829 vethc1783d7 Out IP 172.16.0.4.80 > 172.17.0.2.37890: Flags [S.], seq 1734867414, ack 627242791, win 65160, options [mss 1460,sackOK,TS val 2698726300 ecr 2793548404,nop,wscale 7], length 0
01:28:01.684844 vethc1783d7 P   IP 172.17.0.2.37890 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2793548404 ecr 2698726300], length 0
01:28:01.684846 docker0 In  IP 172.17.0.2.37890 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2793548404 ecr 2698726300], length 0
01:28:01.684849 eth0  Out IP 10.0.0.17.37890 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2793548404 ecr 2698726300], length 0
(remaining output omitted)
  • 172.17.0.2 is the IP address assigned to the container. In other words, the IBM Cloud-side source information (in this case the IP address of the VSI on the VPC) can no longer be identified on the Docker host.
  • veth is the interface attached to the container; docker0 is the bridge to which the veth connects. Both are interfaces visible from the host OS, and packets flow container -> veth -> docker0.
    • src: 172.17.0.2
    • dst: 172.16.0.4
  • Once a packet reaches the host, its source is rewritten by MASQUERADE and it leaves through the appropriate interface (eth0) according to the routing table (see the check just below this list, and the network details in section 10).
    • src: 10.0.0.17
    • dst: 172.16.0.4
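The MASQUERADE rule can be confirmed directly on the Docker host (the full NAT table is shown in section 10):

# Show the NAT POSTROUTING chain; Docker's MASQUERADE rule rewrites the source of
# traffic from 172.17.0.0/16 that leaves via any interface other than docker0
iptables -t nat -L POSTROUTING -v -n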

8. Connectivity Test (With ACLs)

With Satellite Connector, an ACL can be configured to restrict the source IPs allowed to access an endpoint.
This time, access was tested from a server on the VPC (10.0.0.4) and a server on Classic Infrastructure (10.212.15.57).

The summary first:

  • Classic Infrastructure is designed so that IP addresses never overlap in the first place, so the access source itself is the source IP.
  • In a VPC, each user can design the address space freely, so traffic is NATed internally before reaching the Service network and is translated to the VPC's Cloud Service Endpoint Source address. Therefore, this is the address to specify in the ACL. https://cloud.ibm.com/docs/vpc?topic=vpc-vpc-behind-the-curtain
    (Screenshot: Cloud Service Endpoint Source addresses of the VPC)
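For reference, the Cloud Service Endpoint source IP addresses of a VPC can also be looked up from the CLI; a sketch (it assumes the IBM Cloud VPC CLI plugin is installed, and "my-vpc" is a placeholder VPC name):

# Show VPC details (the Cloud Service Endpoint source IP addresses should appear in the output)
ibmcloud is vpc my-vpc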
VSI on the VPC
[root@vpc1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 02:00:01:02:70:58 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global dynamic eth0
       valid_lft 275sec preferred_lft 275sec
    inet6 fe80::1ff:fe02:7058/64 scope link
       valid_lft forever preferred_lft forever
VSI on Classic Infrastructure
[root@classic ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 06:b5:7b:71:d5:c1 brd ff:ff:ff:ff:ff:ff
    inet 10.212.15.57/26 brd 10.212.15.63 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4b5:7bff:fe71:d5c1/64 scope link
       valid_lft forever preferred_lft forever

8-1. Case 1: The ACL specifies the IP address of the VSI on the VPC as-is

Neither the VSI on the VPC nor the VSI on Classic Infrastructure can access the endpoint.
(Screenshot: ACL configuration)

Access from the VSI on the VPC (failure)

[root@vpc1 ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
curl: (52) Empty reply from server
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
(no output)

Access from the VSI on Classic Infrastructure (failure)

[root@classic ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
curl: (52) Empty reply from server
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
(no output)

8-2. Case 2: The ACL specifies the IP address of the VSI on Classic Infrastructure as-is

(Screenshot: ACL configuration)

Access from the VSI on the VPC (failure)

[root@vpc1 ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
curl: (52) Empty reply from server
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
(no output)

Access from the VSI on Classic Infrastructure (success)

[root@classic ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
Hello World!
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
01:40:40.717067 vethc1783d7 P   IP 172.17.0.2.40606 > 172.16.0.4.80: Flags [S], seq 3852267794, win 64240, options [mss 1460,sackOK,TS val 2794307436 ecr 0,nop,wscale 7], length 0
01:40:40.717081 docker0 In  IP 172.17.0.2.40606 > 172.16.0.4.80: Flags [S], seq 3852267794, win 64240, options [mss 1460,sackOK,TS val 2794307436 ecr 0,nop,wscale 7], length 0
01:40:40.717097 eth0  Out IP 10.0.0.17.40606 > 172.16.0.4.80: Flags [S], seq 3852267794, win 64240, options [mss 1460,sackOK,TS val 2794307436 ecr 0,nop,wscale 7], length 0
01:40:40.721395 eth0  In  IP 172.16.0.4.80 > 10.0.0.17.40606: Flags [S.], seq 2311332279, ack 3852267795, win 65160, options [mss 1460,sackOK,TS val 2699485333 ecr 2794307436,nop,wscale 7], length 0
01:40:40.721407 docker0 Out IP 172.16.0.4.80 > 172.17.0.2.40606: Flags [S.], seq 2311332279, ack 3852267795, win 65160, options [mss 1460,sackOK,TS val 2699485333 ecr 2794307436,nop,wscale 7], length 0
01:40:40.721409 vethc1783d7 Out IP 172.16.0.4.80 > 172.17.0.2.40606: Flags [S.], seq 2311332279, ack 3852267795, win 65160, options [mss 1460,sackOK,TS val 2699485333 ecr 2794307436,nop,wscale 7], length 0
01:40:40.721425 vethc1783d7 P   IP 172.17.0.2.40606 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2794307441 ecr 2699485333], length 0
01:40:40.721427 docker0 In  IP 172.17.0.2.40606 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2794307441 ecr 2699485333], length 0
01:40:40.721431 eth0  Out IP 10.0.0.17.40606 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2794307441 ecr 2699485333], length 0
(remaining output omitted)

8-3. Case 3: The ACL specifies the VPC's Cloud Service Endpoint Source address

(Screenshot: ACL configuration)

Access from the VSI on the VPC (success)

[root@vpc1 ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
Hello World!
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
01:44:02.373711 vethc1783d7 P   IP 172.17.0.2.47416 > 172.16.0.4.80: Flags [S], seq 2929078308, win 64240, options [mss 1460,sackOK,TS val 2794509093 ecr 0,nop,wscale 7], length 0
01:44:02.373723 docker0 In  IP 172.17.0.2.47416 > 172.16.0.4.80: Flags [S], seq 2929078308, win 64240, options [mss 1460,sackOK,TS val 2794509093 ecr 0,nop,wscale 7], length 0
01:44:02.373735 eth0  Out IP 10.0.0.17.47416 > 172.16.0.4.80: Flags [S], seq 2929078308, win 64240, options [mss 1460,sackOK,TS val 2794509093 ecr 0,nop,wscale 7], length 0
01:44:02.375370 eth0  In  IP 172.16.0.4.80 > 10.0.0.17.47416: Flags [S.], seq 3931512258, ack 2929078309, win 65160, options [mss 1460,sackOK,TS val 2699686986 ecr 2794509093,nop,wscale 7], length 0
01:44:02.375384 docker0 Out IP 172.16.0.4.80 > 172.17.0.2.47416: Flags [S.], seq 3931512258, ack 2929078309, win 65160, options [mss 1460,sackOK,TS val 2699686986 ecr 2794509093,nop,wscale 7], length 0
01:44:02.375387 vethc1783d7 Out IP 172.16.0.4.80 > 172.17.0.2.47416: Flags [S.], seq 3931512258, ack 2929078309, win 65160, options [mss 1460,sackOK,TS val 2699686986 ecr 2794509093,nop,wscale 7], length 0
01:44:02.375407 vethc1783d7 P   IP 172.17.0.2.47416 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2794509095 ecr 2699686986], length 0
01:44:02.375409 docker0 In  IP 172.17.0.2.47416 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2794509095 ecr 2699686986], length 0
01:44:02.375413 eth0  Out IP 10.0.0.17.47416 > 172.16.0.4.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 2794509095 ecr 2699686986], length 0
(remaining output omitted)

Access from the VSI on Classic Infrastructure (failure)

[root@classic ~]# curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786
curl: (52) Empty reply from server
[root@syasuda-sc1 ~]# tcpdump -i any port 80 -nn
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
(no output)

9. High Availability Verification

When multiple Docker host environments are configured and two secure tunnels are connected, multiple Active agents are registered as shown below.

(Screenshot: Active agents)
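For reference, the second agent was started on another Docker host in the same way as in section 6; a sketch (it assumes the same env.txt has been placed on syasuda-sc2, the second host in this environment):

# On the second Docker host, start another agent for the same connector
docker login -u iamapikey icr.io
docker pull icr.io/ibm/satellite-connector/satellite-connector-agent:latest
docker run -d --env-file ~/agent/env-files/env.txt \
  icr.io/ibm/satellite-connector/satellite-connector-agent:latest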

9-1. Normal Operation

Repeatedly accessing the endpoint shows that requests are load-balanced, as seen below.
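A simple loop such as the one used later in 9-2 can drive the repeated requests from the VPC VSI:

while true; do curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786 ; sleep 1; done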

Access log on the target web server
[root@syasuda-web1 ~]# tail -f /var/log/httpd/access_log
10.0.0.15 - - [18/Oct/2023:01:51:56 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:51:56 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:51:57 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:51:57 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:28 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:28 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:28 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:28 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:52:29 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:52:29 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:36 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:36 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:52:36 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:52:36 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:37 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:52:37 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"

9-2. Failure Test

Forcibly kill the container.

[root@syasuda-sc2 ~]# docker ps
CONTAINER ID   IMAGE                                                             COMMAND                  CREATED         STATUS         PORTS                                             NAMES
e0ee6618a3af   icr.io/ibm/satellite-connector/satellite-connector-agent:latest   "/usr/local/bin/node…"   5 minutes ago   Up 5 minutes                                                     vibrant_williams

[root@syasuda-sc2 ~]# docker kill e0ee6618a3af

However, not a single error occurred during this period.

[root@vpc1 ~]# while true; do curl http://c-01.private.jp-tok.link.satellite.cloud.ibm.com:32786 ; sleep 1; done
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
From partway through, the access log shows requests coming from only one of the two agents
[root@syasuda-web1 ~]# tail -f /var/log/httpd/access_log
10.0.0.17 - - [18/Oct/2023:01:55:46 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:46 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:55:47 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:55:47 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:55:48 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:55:48 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:55:49 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.15 - - [18/Oct/2023:01:55:49 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:50 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:50 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:51 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:51 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:52 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:52 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:53 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:53 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:54 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:54 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:55 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:55 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:56 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:56 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:57 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:57 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:58 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:58 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:59 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:55:59 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:00 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:00 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:01 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:01 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:02 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:02 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:03 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:03 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:04 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"
10.0.0.17 - - [18/Oct/2023:01:56:04 +0000] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0"

10. (Reference) Network Deep Dive

[root@syasuda-sc1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:00:0e:02:70:58 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    altname ens3
    inet 10.0.0.17/24 brd 10.0.0.255 scope global dynamic noprefixroute eth0
       valid_lft 341sec preferred_lft 341sec
    inet6 fe80::eff:fe02:7058/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:5e:c4:1c:e4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5eff:fec4:1ce4/64 scope link
       valid_lft forever preferred_lft forever
7: vethc1783d7@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether de:1a:59:92:e3:bc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::dc1a:59ff:fe92:e3bc/64 scope link
       valid_lft forever preferred_lft forever
[root@syasuda-sc1 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0fe097749989   bridge    bridge    local
bdb2d78ef979   host      host      local
526e4ae5cf3a   none      null      local
[root@syasuda-sc1 ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "0fe097749989c19922b849dfd662cff38daf137e1cd3d50b1598447dc988dfd5",
        "Created": "2023-10-18T00:58:14.499816314Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "64bc568b3e5a3413aea38bc971cfcfbeca6545d79042404e15cfdbec538db7e2": {
                "Name": "stoic_chandrasekhar",
                "EndpointID": "4fd1ca91a01e9b4b58a2e92963ca173465186c352a4d0210270da0bbd184a9de",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@syasuda-sc1 ~]# iptables -L -v -n -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    1    60 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
   20  1400 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
[root@syasuda-sc1 ~]# docker exec -it 64bc568b3e5a netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         172.17.0.1      0.0.0.0         UG        0 0          0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 eth0

[root@syasuda-sc1 ~]# docker exec -it 64bc568b3e5a ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0