Trying to set up a GRE tunnel between Linux servers
What I want to solve
I have two Linux servers, and I want to set up a point-to-point GRE tunnel between them.
(Eventually I want to run OSPF over it, but GRE first.)
Both servers run on OpenStack (conoha VPS).
On both, eth0 faces the Internet and eth1 faces the private network.
(There is also an eth2 facing a database network, but it is unused, so I'll leave it out.)
The two eth0 interfaces belong to different networks.
It is not shown in the diagram below, but there is a router between the Internet and the servers.
I configured everything following the Red Hat documentation page [1], but not even ping gets through.
After creating the GRE interface, running ip a (ip address show) showed the suffix @NONE appended to the interface name.
If anyone knows a solution, I would be grateful for your advice.
--------+----------[Internet]----------+--------
| |
| | eth0:
| | node1: 203.0.113.2/27 (↓ different networks)
| | node2: 203.0.113.34/27 (↑ different networks)
| | * addresses masked
|(eth0) |(eth0) eth1:
+-------+ (gre1) (gre1) +-------+ node1: 172.18.1.1/27 (↓ same network)
| node1 |---+ +---| node2 | node2: 172.18.1.2/27 (↑ same network)
+-------+ | | +-------+ * actual values
|(eth1) | | |(eth1) gre1: (GRE)
| | | | node1: 172.21.84.37/30
| +--------------+ | node2: 172.21.84.38/30
| | * GRE runs over the LAN
--------+-------[PrivateNetwork]-------+--------
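As a cross-check of the addressing above, the networks can be recomputed (a quick sketch that only uses python3's ipaddress module as a calculator; the addresses are the ones from the diagram):

```shell
python3 - <<'EOF'
import ipaddress
# The two eth0 /27s resolve to different networks
a = ipaddress.ip_interface('203.0.113.2/27').network
b = ipaddress.ip_interface('203.0.113.34/27').network
print(a, b, a != b)            # 203.0.113.0/27 203.0.113.32/27 True
# The gre1 /30: hosts .37/.38, broadcast .39 (matches the brd shown by ip a)
g = ipaddress.ip_interface('172.21.84.37/30').network
print(g, [str(h) for h in g.hosts()], g.broadcast_address)
EOF
```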
cat /etc/os-release
@node1 ~ $ cat /etc/os-release
NAME="AlmaLinux"
VERSION="9.5 (Teal Serval)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="AlmaLinux 9.5 (Teal Serval)"
ANSI_COLOR="0;34"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:almalinux:almalinux:9::baseos"
HOME_URL="https://almalinux.org/"
DOCUMENTATION_URL="https://wiki.almalinux.org/"
BUG_REPORT_URL="https://bugs.almalinux.org/"
ALMALINUX_MANTISBT_PROJECT="AlmaLinux-9"
ALMALINUX_MANTISBT_PROJECT_VERSION="9.5"
REDHAT_SUPPORT_PRODUCT="AlmaLinux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"
SUPPORT_END=2032-06-01
@node1 ~ $
@node2 ~ $ cat /etc/os-release
NAME="AlmaLinux"
VERSION="9.5 (Teal Serval)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="AlmaLinux 9.5 (Teal Serval)"
ANSI_COLOR="0;34"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:almalinux:almalinux:9::baseos"
HOME_URL="https://almalinux.org/"
DOCUMENTATION_URL="https://wiki.almalinux.org/"
BUG_REPORT_URL="https://bugs.almalinux.org/"
ALMALINUX_MANTISBT_PROJECT="AlmaLinux-9"
ALMALINUX_MANTISBT_PROJECT_VERSION="9.5"
REDHAT_SUPPORT_PRODUCT="AlmaLinux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"
SUPPORT_END=2032-06-01
@node2 ~ $
The problem / error
ip a show dev gre1
@node1 ~ $ ip a show dev gre1
25: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN group default qlen 1000
link/gre 172.18.1.1 peer 172.18.1.2
inet 172.21.84.37/30 brd 172.21.84.39 scope global noprefixroute gre1
valid_lft forever preferred_lft forever
inet6 fe80::4904:b92:6aa:7c30/64 scope link noprefixroute
valid_lft forever preferred_lft forever
@node1 ~ $
@node2 ~ $ ip a show dev gre1
34: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN group default qlen 1000
link/gre 172.18.1.2 peer 172.18.1.1
inet 172.21.84.38/30 brd 172.21.84.39 scope global noprefixroute gre1
valid_lft forever preferred_lft forever
inet6 fe80::b3e5:13e9:c83f:c38f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
@node2 ~ $
@node1 ~ $ ping -M do -s 1420 -c 3 -W 1 172.21.84.37
PING 172.21.84.37 (172.21.84.37) 1420(1448) bytes of data.
1428 bytes from 172.21.84.37: icmp_seq=1 ttl=64 time=0.051 ms
1428 bytes from 172.21.84.37: icmp_seq=2 ttl=64 time=0.079 ms
1428 bytes from 172.21.84.37: icmp_seq=3 ttl=64 time=0.090 ms
--- 172.21.84.37 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.051/0.073/0.090/0.016 ms
@node1 ~ $
@node1 ~ $ ping -M do -s 1420 -c 3 -W 1 172.21.84.38
PING 172.21.84.38 (172.21.84.38) 1420(1448) bytes of data.
--- 172.21.84.38 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2038ms
@node1 ~ $
@node2 ~ $ ping -M do -s 1420 -c 3 -W 1 172.21.84.38
PING 172.21.84.38 (172.21.84.38) 1420(1448) bytes of data.
1428 bytes from 172.21.84.38: icmp_seq=1 ttl=64 time=0.062 ms
1428 bytes from 172.21.84.38: icmp_seq=2 ttl=64 time=0.058 ms
1428 bytes from 172.21.84.38: icmp_seq=3 ttl=64 time=0.088 ms
--- 172.21.84.38 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2074ms
rtt min/avg/max/mdev = 0.058/0.069/0.088/0.013 ms
@node2 ~ $
@node2 ~ $ ping -M do -s 1420 -c 3 -W 1 172.21.84.37
PING 172.21.84.37 (172.21.84.37) 1420(1448) bytes of data.
--- 172.21.84.37 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2038ms
@node2 ~ $
ping options:
- -M do : enable don't-fragment
- -s 1420 : payload size
- -c 3 : number of probes
- -W 1 : timeout (seconds)
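For reference, the byte counts in these pings work out as follows (a quick sketch; the 20-byte IPv4, 8-byte ICMP, and 4-byte base GRE header sizes are the standard ones, not something measured here):

```shell
PAYLOAD=1420
INNER=$((PAYLOAD + 8 + 20))   # ICMP echo inside IPv4: the "1420(1448)" that ping prints
OUTER=$((INNER + 20 + 4))     # outer IPv4 + GRE header once encapsulated
echo "inner=$INNER outer=$OUTER gre1_mtu=$((1500 - 24))"
# inner=1448 outer=1472 gre1_mtu=1476
```

So a 1448-byte inner packet is 1472 bytes on the wire after encapsulation, i.e. these probe sizes alone should not exceed eth1's 1500-byte MTU.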
What I've tried
- Allowed GRE in the firewall (kept in place from this point on)
sudo firewall-cmd --permanent --zone=trusted --add-service=gre
sudo firewall-cmd --reload
@node1 ~ $ sudo firewall-cmd --permanent --zone=trusted --list-all
[sudo] password:
trusted (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: eth1 eth2 gre1 gre2 wg0 wg1
sources:
services: gre
ports:
protocols:
forward: yes
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
@node1 ~ $
@node2 ~ $ sudo firewall-cmd --permanent --zone=trusted --list-all
[sudo] password:
trusted (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: eth1 eth2 gre1 gre2 wg0 wg1
sources:
services: gre
ports:
protocols:
forward: yes
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
@node2 ~ $
- Enabled firewall deny logging [2]
sudo firewall-cmd --get-log-denied
sudo firewall-cmd --set-log-denied=all
sudo firewall-cmd --get-log-denied
@node2 ~ $ sudo firewall-cmd --get-log-denied
[sudo] password:
off
@node2 ~ $ sudo firewall-cmd --set-log-denied=all
success
@node2 ~ $ sudo firewall-cmd --get-log-denied
all
@node2 ~ $
- Configured with nmtui
- Configured with ip

node1:
sudo ip tunnel del gre1
sudo ip tunnel add gre1 mode gre local 172.18.1.1 remote 172.18.1.2
sudo ip addr add 172.21.84.37/30 dev gre1
sudo ip link set gre1 up
sudo firewall-cmd --permanent --zone=trusted --change-interface=gre1

node2:
sudo ip tunnel del gre1
sudo ip tunnel add gre1 mode gre local 172.18.1.2 remote 172.18.1.1
sudo ip addr add 172.21.84.38/30 dev gre1
sudo ip link set gre1 up
sudo firewall-cmd --permanent --zone=trusted --change-interface=gre1

- Configured with nmcli

node1:
sudo nmcli connection delete gre1
sudo nmcli connection add type ip-tunnel ip-tunnel.mode gre con-name gre1 ifname gre1 local 172.18.1.1 remote 172.18.1.2
sudo nmcli connection modify gre1 ipv4.addresses '172.21.84.37/30'
sudo nmcli connection modify gre1 ipv4.method manual
sudo nmcli connection modify gre1 connection.zone trusted
sudo nmcli connection modify gre1 connection.autoconnect yes
sudo nmcli connection up gre1

node2:
sudo nmcli connection delete gre1
sudo nmcli connection add type ip-tunnel ip-tunnel.mode gre con-name gre1 ifname gre1 local 172.18.1.2 remote 172.18.1.1
sudo nmcli connection modify gre1 ipv4.addresses '172.21.84.38/30'
sudo nmcli connection modify gre1 ipv4.method manual
sudo nmcli connection modify gre1 connection.zone trusted
sudo nmcli connection modify gre1 connection.autoconnect yes
sudo nmcli connection up gre1
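With either method, one way to confirm what the kernel actually built is ip -d link, which for a GRE interface prints the local/remote endpoints, TTL, and any key/checksum flags (these must match on both ends). A generic sketch; on the nodes you would run it with DEV=gre1, and DEV defaults to lo here only so the snippet runs anywhere:

```shell
# Dump the kernel's detailed view of an interface (use DEV=gre1 on the nodes)
dev="${DEV:-lo}"
ip -d link show "$dev"
```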
- Checked statistics with ip -s l
  - TX bytes and TX packets are counting up, but RX bytes and RX packets stay at 0
  - errors and dropped are 0 for both TX and RX
  - A route to the peer I'm trying to tunnel to exists, and the link is up
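The counters that ip -s l prints can also be read straight from sysfs, which makes it easy to script a before/after comparison around a ping burst (a sketch; run with DEV=gre1 on the nodes, the lo default is only so it runs anywhere):

```shell
# Dump the per-interface counters that ip -s l reports, from sysfs
dev="${DEV:-lo}"
for f in tx_packets rx_packets tx_errors rx_errors tx_dropped rx_dropped; do
  printf '%s=%s\n' "$f" "$(cat "/sys/class/net/$dev/statistics/$f")"
done
```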
- Put an OSPF config into frr and ran
tcpdump -i eth1 -vv ip proto 47
Update 1
Following a comment pointing this out, I re-ran the connectivity checks.
ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 02:01:76:00:00:00 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether fa:16:3e:0f:8a:ac brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 02:01:76:00:00:00 brd ff:ff:ff:ff:ff:ff
altname enp0s5
altname ens5
5: wg1: <POINTOPOINT,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/none
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:ef:00:00:00 brd ff:ff:ff:ff:ff:ff
7: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/gre 0.0.0.0 brd 0.0.0.0
8: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
25: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/gre 172.18.1.1 peer 172.18.1.2
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 02:01:a0:00:00:00 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether fa:16:3e:4f:00:72 brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether fa:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
altname enp0s5
altname ens5
5: wg1: <POINTOPOINT,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/none
6: br-2950348a0094: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
7: br-2a05bcaf98e7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:69:00:00:00 brd ff:ff:ff:ff:ff:ff
8: br-39a5fbf13fae: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:8b:00:00:00 brd ff:ff:ff:ff:ff:ff
9: br-7c4cbaef463b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:cf:00:00:00 brd ff:ff:ff:ff:ff:ff
10: br-933f67c1bb09: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:3a:00:00:00 brd ff:ff:ff:ff:ff:ff
11: br-d0fec5379b74: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:8d:00:00:00 brd ff:ff:ff:ff:ff:ff
12: br-ea9c2a028332: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:77:00:00:00 brd ff:ff:ff:ff:ff:ff
13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:2d:00:00:00 brd ff:ff:ff:ff:ff:ff
15: vethc519abb@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-2a05bcaf98e7 state UP mode DEFAULT group default
link/ether f2:d9:e9:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: vethd4285e6@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 32:47:2e:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 1
19: veth4350432@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-933f67c1bb09 state UP mode DEFAULT group default
link/ether d6:75:02:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 3
21: vethd62ad8e@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7c4cbaef463b state UP mode DEFAULT group default
link/ether 86:51:aa:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 2
23: veth2a369b4@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ea9c2a028332 state UP mode DEFAULT group default
link/ether da:bb:f3:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
25: vethb9ffbd0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-39a5fbf13fae state UP mode DEFAULT group default
link/ether f2:4e:22:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 5
26: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/gre 0.0.0.0 brd 0.0.0.0
27: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
28: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
34: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/gre 172.18.1.2 peer 172.18.1.1
$ ip a show dev gre1
25: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN group default qlen 1000
link/gre 172.18.1.1 peer 172.18.1.2
inet 172.21.84.37/30 brd 172.21.84.39 scope global noprefixroute gre1
valid_lft forever preferred_lft forever
inet6 fe80::4904:b92:6aa:7c30/64 scope link noprefixroute
valid_lft forever preferred_lft forever
$ ip a show dev gre1
34: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN group default qlen 1000
link/gre 172.18.1.2 peer 172.18.1.1
inet 172.21.84.38/30 brd 172.21.84.39 scope global noprefixroute gre1
valid_lft forever preferred_lft forever
inet6 fe80::b3e5:13e9:c83f:c38f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
ping -M do -s 1420 -c 3 -W 1 172.18.1.1
ping -M do -s 1420 -c 3 -W 1 172.18.1.2
ping -c 3 -W 1 -I gre1 172.21.84.38
ping -c 3 -W 1 -I gre1 172.21.84.37
node1 ping to 172.18.1.0
@node1 ~ $ ping -M do -s 1420 -c 3 -W 1 172.18.1.1
PING 172.18.1.1 (172.18.1.1) 1420(1448) bytes of data.
1428 bytes from 172.18.1.1: icmp_seq=1 ttl=64 time=0.068 ms
1428 bytes from 172.18.1.1: icmp_seq=2 ttl=64 time=0.077 ms
1428 bytes from 172.18.1.1: icmp_seq=3 ttl=64 time=0.060 ms
--- 172.18.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2044ms
rtt min/avg/max/mdev = 0.060/0.068/0.077/0.007 ms
@node1 ~ $ ping -M do -s 1420 -c 3 -W 1 172.18.1.2
PING 172.18.1.2 (172.18.1.2) 1420(1448) bytes of data.
1428 bytes from 172.18.1.2: icmp_seq=1 ttl=64 time=0.586 ms
1428 bytes from 172.18.1.2: icmp_seq=2 ttl=64 time=0.462 ms
1428 bytes from 172.18.1.2: icmp_seq=3 ttl=64 time=0.445 ms
--- 172.18.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2088ms
rtt min/avg/max/mdev = 0.445/0.497/0.586/0.062 ms
node2 ping to 172.18.1.0
@node2 ~ $ ping -M do -s 1420 -c 3 -W 1 172.18.1.2
PING 172.18.1.2 (172.18.1.2) 1420(1448) bytes of data.
1428 bytes from 172.18.1.2: icmp_seq=1 ttl=64 time=0.076 ms
1428 bytes from 172.18.1.2: icmp_seq=2 ttl=64 time=0.058 ms
1428 bytes from 172.18.1.2: icmp_seq=3 ttl=64 time=0.051 ms
--- 172.18.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2046ms
rtt min/avg/max/mdev = 0.051/0.061/0.076/0.010 ms
@node2 ~ $ ping -M do -s 1420 -c 3 -W 1 172.18.1.1
PING 172.18.1.1 (172.18.1.1) 1420(1448) bytes of data.
1428 bytes from 172.18.1.1: icmp_seq=1 ttl=64 time=0.687 ms
1428 bytes from 172.18.1.1: icmp_seq=2 ttl=64 time=0.729 ms
1428 bytes from 172.18.1.1: icmp_seq=3 ttl=64 time=0.635 ms
--- 172.18.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.635/0.683/0.729/0.038 ms
node1 ping to 172.21.84.36
@node1 ~ $ ping -c 3 -W 1 -I gre1 172.21.84.37
PING 172.21.84.37 (172.21.84.37) from 172.21.84.37 gre1: 56(84) bytes of data.
64 bytes from 172.21.84.37: icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from 172.21.84.37: icmp_seq=2 ttl=64 time=0.078 ms
64 bytes from 172.21.84.37: icmp_seq=3 ttl=64 time=0.085 ms
--- 172.21.84.37 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2056ms
rtt min/avg/max/mdev = 0.078/0.091/0.111/0.014 ms
@node1 ~ $ ping -c 3 -W 1 -I gre1 172.21.84.38
PING 172.21.84.38 (172.21.84.38) from 172.21.84.37 gre1: 56(84) bytes of data.
--- 172.21.84.38 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2064ms
In case this was MTU-related, I dropped the -M do and -s 1420 options for these runs.
node2 ping to 172.21.84.36
@node2 ~ $ ping -c 3 -W 1 -I gre1 172.21.84.38
PING 172.21.84.38 (172.21.84.38) from 172.21.84.38 gre1: 56(84) bytes of data.
64 bytes from 172.21.84.38: icmp_seq=1 ttl=64 time=0.101 ms
64 bytes from 172.21.84.38: icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from 172.21.84.38: icmp_seq=3 ttl=64 time=0.070 ms
--- 172.21.84.38 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2053ms
rtt min/avg/max/mdev = 0.066/0.079/0.101/0.015 ms
@node2 ~ $ ping -c 3 -W 1 -I gre1 172.21.84.37
PING 172.21.84.37 (172.21.84.37) from 172.21.84.38 gre1: 56(84) bytes of data.
--- 172.21.84.37 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2044ms
In case this was MTU-related, I dropped the -M do and -s 1420 options for these runs.
- Since ping doesn't get through, captured with tcpdump (with the OSPF config still in place)
ip link set gre1 down; sleep 1; ip link set gre1 up
sudo tcpdump -i eth1 -w /tmp/tcpdump_$(hostname)_$(date +%s).pcap -c 50 ip proto 47
fe80::4904:b92:6aa:7c30 is the IPv6 link-local address of gre1 on node1
fe80::b3e5:13e9:c83f:c38f is the IPv6 link-local address of gre1 on node2
I've uploaded the raw tcpdump files (no expiry) to Google Drive and Axfc (the Google Drive copy may be deleted when I tidy up):
https://drive.google.com/open?id=17zi_tXJH2gdXf_4l9QguSkFIpb9Nl3kl&usp=drive_fs
https://www.axfc.net/u/4088415
axfc download pass: xnfxvjQGPPMGD8nD
zip unar pass: njLa39iH!
I'd like to try reproducing this in a local environment first, but I'm wary of the side effects of installing VM software on my PC (Docker breaking, or the sandbox breaking after installing VirtualBox or VMware Workstation), so I haven't been able to try...
[1] 8.2. Configuring a GRE tunnel to encapsulate layer-3 traffic in IPv4 packets
https://docs.redhat.com/ja/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-a-gre-tunnel-using-nmcli-to-encapsulate-layer-3-traffic-in-ipv4-packets_configuring-ip-tunnels#configuring-a-gre-tunnel-using-nmcli-to-encapsulate-layer-3-traffic-in-ipv4-packets_configuring-ip-tunnels ↩
[2] 5.17. Configuring Logging for Denied Packets | Red Hat Product Documentation
https://docs.redhat.com/ja/documentation/red_hat_enterprise_linux/7/html/security_guide/configuring_logging_for_denied_packets#Configuring_Logging_for_Denied_Packets ↩