
Configuring vPC with the Nexus 9000v on GNS3


Overview

Configure vPC with the Nexus 9000v on GNS3.
Reference: Understand and Configure Nexus 9000 vPC with Best Practices (Cisco)

Topology
image.png

Environment

  • ThinkCentre M75q Gen2 (8 CPUs, 64 GB RAM)
  • Ubuntu 20.04.1 LTS
  • GNS3 2.2.42

Preparation

Prepare GNS3 to run the Nexus9000v.
https://www.gns3.com/marketplace/featured/cisco-nx-osv-9000

  1. Appliance file (cisco-nxosv9k.gns3a)
  2. OVMF-edk2-stable202305.fd
  3. nexus9300v.10.1.1.qcow2 → can be downloaded from the Cisco site linked above

Then import everything into GNS3 following the official procedure: GNS3 Documentation | Import GNS3 appliance

If the switch drops to the loader > prompt at boot, run dir to find the NX-OS image file in bootflash, then start it with the boot command:

loader > dir                                                                   

bootflash::  

  .rpmstore
  nxos.10.1.1.bin
  bootflash_sync_list
  evt_log_snapshot
  .swtam

loader > boot bootflash:/nxos.10.1.1.bin  
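
If the switch lands at loader > on every boot, the boot variable is likely unset. A minimal fix, assuming the image name above, is to set it once the system is up and save the configuration:

//Set the boot image and save (assumes nxos.10.1.1.bin as above)
switch# configure terminal
switch(config)# boot nxos bootflash:/nxos.10.1.1.bin
switch(config)# exit
switch# copy running-config startup-config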

Configuration

The following is the configuration for LEAF-01; apply the same configuration to LEAF-02. To make LEAF-01 the primary switch, set role priority 8192 on LEAF-01 and role priority 16384 on LEAF-02.
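
For reference, the lines that differ on LEAF-02 would look like the sketch below (addresses inferred from the peer-keepalive settings that follow):

//LEAF-02: only the lines that differ from LEAF-01
interface mgmt0
  ip address 192.168.122.102/24
!
vpc domain 1
  role priority 16384
  peer-keepalive destination 192.168.122.101 source 192.168.122.102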

//VLANs used in this lab
vlan 1,80-99

//Enable the required features
feature lacp
feature vpc

//Use mgmt0 for the vPC peer-keepalive link
interface mgmt0
  description tpos
  vrf member management
  no ip redirects
  ip address 192.168.122.101/24

//Use Port-Channel1 (Eth1/8, Eth1/9) as the vPC peer-link
interface Ethernet1/8
  switchport mode trunk
  channel-group 1 mode active
!
interface Ethernet1/9
  switchport mode trunk
  channel-group 1 mode active
!
interface port-channel1
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link

//vPC domain configuration
vpc domain 1
  peer-switch
  role priority 8192
  system-priority 8192
  peer-keepalive destination 192.168.122.102 source 192.168.122.101
  delay restore 210
  peer-gateway
  auto-recovery
  ip arp synchronize

//Interfaces used for the vPC toward the host
interface Ethernet1/1
  switchport mode trunk
  channel-group 4 mode active
!
interface port-channel4
  switchport mode trunk
  vpc 4
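
Before moving on, it is worth checking that the port-channels bundled and that the consistency parameters match on both peers, for example:

LEAF-01# show port-channel summary
LEAF-01# show vpc consistency-parameters global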

Configure the host on the other end (Rocky Linux) to speak LACP as well. For the tests later on, also create VLAN interfaces; the commands below create bond0.80 through bond0.99 to match the VLANs trunked above.

//Create the bonding interface
nmcli con add type bond ifname bond0 con-name bond0 ipv4.method "disabled" ipv6.method "disabled" +bond.options "mode=802.3ad","lacp_rate=fast"

//Attach the physical NICs to the bond
nmcli con add type ethernet ifname eth0 master bond0
nmcli con add type ethernet ifname eth1 master bond0

//Activate the created connections
nmcli con up bond0
nmcli con up bond-slave-eth0
nmcli con up bond-slave-eth1
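
The kernel's bonding state shows whether LACP actually negotiated; seeing "802.3ad" as the mode and a partner MAC other than all zeros means the switches answered:

//Check LACP negotiation on the host side
cat /proc/net/bonding/bond0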

//Create the VLAN interfaces (80-99)
nmcli con add type vlan con-name bond0.80 ifname bond0.80 dev bond0 ipv4.method "auto"  ipv6.method "disabled" id 80
nmcli con add type vlan con-name bond0.81 ifname bond0.81 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 81
nmcli con add type vlan con-name bond0.82 ifname bond0.82 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 82
nmcli con add type vlan con-name bond0.83 ifname bond0.83 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 83
nmcli con add type vlan con-name bond0.84 ifname bond0.84 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 84
nmcli con add type vlan con-name bond0.85 ifname bond0.85 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 85
nmcli con add type vlan con-name bond0.86 ifname bond0.86 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 86
nmcli con add type vlan con-name bond0.87 ifname bond0.87 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 87
nmcli con add type vlan con-name bond0.88 ifname bond0.88 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 88
nmcli con add type vlan con-name bond0.89 ifname bond0.89 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 89
nmcli con add type vlan con-name bond0.90 ifname bond0.90 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 90
nmcli con add type vlan con-name bond0.91 ifname bond0.91 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 91
nmcli con add type vlan con-name bond0.92 ifname bond0.92 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 92
nmcli con add type vlan con-name bond0.93 ifname bond0.93 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 93
nmcli con add type vlan con-name bond0.94 ifname bond0.94 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 94
nmcli con add type vlan con-name bond0.95 ifname bond0.95 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 95
nmcli con add type vlan con-name bond0.96 ifname bond0.96 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 96
nmcli con add type vlan con-name bond0.97 ifname bond0.97 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 97
nmcli con add type vlan con-name bond0.98 ifname bond0.98 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 98
nmcli con add type vlan con-name bond0.99 ifname bond0.99 dev bond0 ipv4.method "auto" ipv6.method "disabled" id 99

//Assign IPv4 addresses to bond0.80-99
nmcli con modify bond0.80 ipv4.method manual ipv4.address 172.16.80.1/24
nmcli con modify bond0.81 ipv4.method manual ipv4.address 172.16.81.1/24
nmcli con modify bond0.82 ipv4.method manual ipv4.address 172.16.82.1/24
nmcli con modify bond0.83 ipv4.method manual ipv4.address 172.16.83.1/24
nmcli con modify bond0.84 ipv4.method manual ipv4.address 172.16.84.1/24
nmcli con modify bond0.85 ipv4.method manual ipv4.address 172.16.85.1/24
nmcli con modify bond0.86 ipv4.method manual ipv4.address 172.16.86.1/24
nmcli con modify bond0.87 ipv4.method manual ipv4.address 172.16.87.1/24
nmcli con modify bond0.88 ipv4.method manual ipv4.address 172.16.88.1/24
nmcli con modify bond0.89 ipv4.method manual ipv4.address 172.16.89.1/24
nmcli con modify bond0.90 ipv4.method manual ipv4.address 172.16.90.1/24
nmcli con modify bond0.91 ipv4.method manual ipv4.address 172.16.91.1/24
nmcli con modify bond0.92 ipv4.method manual ipv4.address 172.16.92.1/24
nmcli con modify bond0.93 ipv4.method manual ipv4.address 172.16.93.1/24
nmcli con modify bond0.94 ipv4.method manual ipv4.address 172.16.94.1/24
nmcli con modify bond0.95 ipv4.method manual ipv4.address 172.16.95.1/24
nmcli con modify bond0.96 ipv4.method manual ipv4.address 172.16.96.1/24
nmcli con modify bond0.97 ipv4.method manual ipv4.address 172.16.97.1/24
nmcli con modify bond0.98 ipv4.method manual ipv4.address 172.16.98.1/24
nmcli con modify bond0.99 ipv4.method manual ipv4.address 172.16.99.1/24

//Bring up the VLAN interfaces (80-99)
nmcli con up bond0.80
nmcli con up bond0.81
nmcli con up bond0.82
nmcli con up bond0.83
nmcli con up bond0.84
nmcli con up bond0.85
nmcli con up bond0.86
nmcli con up bond0.87
nmcli con up bond0.88
nmcli con up bond0.89
nmcli con up bond0.90
nmcli con up bond0.91
nmcli con up bond0.92
nmcli con up bond0.93
nmcli con up bond0.94
nmcli con up bond0.95
nmcli con up bond0.96
nmcli con up bond0.97
nmcli con up bond0.98
nmcli con up bond0.99
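
A quick way to confirm that every VLAN interface is up with its address:

//List the bond and VLAN interfaces with their addresses
ip -br addr | grep bond0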

Checking the configuration

Everything came up without problems this time, but if vPC troubleshooting is ever needed, see the official documentation below. It contains a checklist, the commands to check, and likely causes for each symptom.
Cisco Nexus 9000 Series NX-OS Troubleshooting Guide, Release 10.1(x) | Chapter: Troubleshooting vPCs
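
Two commands from that guide that are especially useful for this setup (the vPC ID matches the configuration above):

LEAF-01# show vpc peer-keepalive
LEAF-01# show vpc consistency-parameters vpc 4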

LEAF-01# sh vpc brief
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1   
Peer status                       : peer adjacency formed ok      
vPC keep-alive status             : peer is alive                 
Configuration consistency status  : success 
Per-vlan consistency status       : success                       
Type-2 consistency status         : success 
vPC role                          : primary                       
Number of vPCs configured         : 1   
Peer Gateway                      : Enabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Enabled, timer is off.(timeout = 240s)
Delay-restore status              : Timer is off.(timeout = 210s)
Delay-restore SVI status          : Timer is off.(timeout = 10s)
Operational Layer3 Peer-router    : Disabled
Virtual-peerlink mode             : Disabled

vPC Peer-link status
---------------------------------------------------------------------
id    Port   Status Active vlans    
--    ----   ------ -------------------------------------------------
1     Po1    up     1,80-99                                                 
         

vPC status
----------------------------------------------------------------------------
Id    Port          Status Consistency Reason                Active vlans
--    ------------  ------ ----------- ------                ---------------
4     Po4           up     success     success               1,80-99

Failure testing

① Peer-link down

  • Bring the peer-link down on the primary switch (shut the peer-link (Po1) interface on LEAF-01)
  • The peer-keepalive link stays active
    → The secondary switch shuts down all of its vPC member ports
//Enable the debug command on the LEAF-02 side beforehand
LEAF-02# debug vpc peer-link 

//Shut Po1 on LEAF-01
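//(assumed commands; the original capture shows only the LEAF-02 side)
LEAF-01# configure terminal
LEAF-01(config)# interface port-channel1
LEAF-01(config-if)# shutdown
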
LEAF-02# 
2023 Aug 16 11:02:06.721219 vpc: Failed to receive peer mct state resp 
2023 Aug 16 11:02:16.722717 vpc: Failed to receive peer mct state resp 
2023 Aug 16 11:02:26.723838 vpc: Failed to receive peer mct state resp 
2023 Aug 16 11:02:26.737546 vpc: Stop reload timer successful 
2023 Aug 16 11:02:26.737596 vpc: Local role is Slave and OOB configured.Listening to OOB Timers 
2023 Aug 16 11:02:29.739047 vpc: Starting listening to OOB 
2023 Aug 16 11:02:29.742105 vpc: Vlans in svi down: vlan [1][80][81][82][83][84][85][86][87][88][89][90][91][92][93][94][95][96][97][98][99] 
2023 Aug 16 11:02:29.742970 vpc: 0 orphan-ports configured
2023 Aug 16 11:02:29.743017 vpc: No orphan ports to suspend on peer link down 
2023 Aug 16 11:02:29.743043 vpc: Found the pending Pre-MCT wrap 
2023 Aug 16 11:02:29.743056 vpc: Sending response to pre-MCT down,peer_in_upgrade:0, mcecm_global_info.role:2 
2023 Aug 16 11:02:29.743232 vpc: Saving the MCT [1] context in PSS
2023 Aug 16 11:02:29.777669 vpc:  [22] vlans went down on MCT, vlans - [1,80-99] 
2023 Aug 16 11:02:29.777782 vpc: num-up-vlans [0], total up vlans in MCT [] 
2023 Aug 16 11:02:29.777790 vpc: Sending response to Log_down with status [SUCCESS]
2023 Aug 16 11:02:29.779317 vpc: Sending response to BUNDLE_LAST_DWN with status [SUCCESS]
2023 Aug 16 11:02:29.779861 vpc: Saving the MCT [1] context in PSS
2023 Aug 16 11:02:29.829675 vpc:  [22] vlans went down on MCT, vlans - [1,80-99] 
2023 Aug 16 11:02:29.867629 vpc:  [22] vlans went down on MCT, vlans - [1,80-99] 

//Check the vPC status on LEAF-02
LEAF-02# sh vpc           
/output omitted/
vPC Peer-link status
---------------------------------------------------------------------
id    Port   Status Active vlans    
--    ----   ------ -------------------------------------------------
1     Po1    down   -                                                           
         
vPC status
----------------------------------------------------------------------------
Id    Port          Status Consistency Reason                Active vlans
--    ------------  ------ ----------- ------                ---------------
4     Po4           down   failed      Peer-link is down     - 

LEAF-02# sh int status
/output omitted/
Eth1/1        --                 suspndByV trunk     auto    auto    10g 
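
To recover, un-shut Po1 on LEAF-01. Note that with delay restore 210 configured above, the vPCs should come back roughly 210 seconds after the peer-link is restored:

//Restore the peer-link on LEAF-01
LEAF-01(config)# interface port-channel1
LEAF-01(config-if)# no shutdown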

② Peer-keepalive link down

  • Bring the peer-keepalive link down on the primary switch (shut the keepalive link (mgmt0) interface on LEAF-01)
  • The peer-link stays active
    → This failure has no traffic impact; the peers recognize each other via the peer-link instead of the keepalive link.
//Enable the debug command on the LEAF-02 side beforehand
LEAF-02# debug vpc peer-keepalive 

//Shut mgmt0 on LEAF-01
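//(assumed commands; the original capture shows only the LEAF-02 side)
LEAF-01# configure terminal
LEAF-01(config)# interface mgmt0
LEAF-01(config-if)# shutdown
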
LEAF-02# 
2023 Aug 16 11:43:04 LEAF-02 %$ VDC-1 %$ %VPC-2-PEER_KEEP_ALIVE_RECV_FAIL: In domain 1, VPC peer keep-alive receive has failed (message repeated 1 time)

//Keepalive connectivity is lost, but the peer is still recognized via the peer-link
LEAF-02# sh vpc
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1   
Peer status                       : peer adjacency formed ok      
vPC keep-alive status             : peer is not reachable through peer-keepalive
/output omitted/

The above is exactly the behavior described in the official documentation. While looking into why it was designed this way, I found the answer in a fairly old community thread that made it click, so I'm leaving the link here as a note.
Cisco Community | vPC reaction to peer-link or peer-keepalive link failure
