
Making the OpenNebula Front-end Also Function as a Compute Node (Updating the OneDeploy Configuration)


Overview

In the previous articles, nebula-f1 was configured as a dedicated front-end (management node). In this article, we update the OneDeploy configuration so that nebula-f1 can also serve as a compute node.

Prerequisites

  • OpenNebula 7.0 deployed with OneDeploy
  • Access to the OneDeploy execution environment (Ansible control node)
  • The example.yml inventory file is available
  • nebula-f1, nebula-n1, and nebula-n2 are up and running

Environment

Hostname    IP Address       Current Role  New Role
nebula-f1   192.168.11.110   Frontend      Frontend + Compute
nebula-n1   192.168.11.111   Compute       Compute
nebula-n2   192.168.11.112   Compute       Compute

Procedure

1. Access the OneDeploy execution environment

# Log in to the OneDeploy execution environment (Ansible control node)

cd ~/one-deploy/my-one

# Confirm that example.yml exists
ls -l example.yml
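
Note: the ansible commands in the later steps are run inside the one-deploy Python environment. In this setup it is managed with hatch, as the captured output in step 6 shows; a minimal activation sequence looks like this:

# Enter the hatch-managed one-deploy environment, then return to the inventory directory
cd ~/one-deploy
hatch shell
cd my-one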

2. Review the current example.yml

cat example.yml

Current configuration:

flathill@nebula-f1:~/one-deploy/my-one$ cat example.yml
---
all:
  vars:
    ansible_user: flathill
    ansible_become: true
    ansible_become_method: sudo
    one_version: '7.0'
    one_pass: opennebulapass
    ensure_hosts: true
    ensure_keys: true
    features:
      evpn: false
      evpn_greedy: false
    ds:
      mode: ssh
    vn:
      admin_net:
        managed: true
        template:
          VN_MAD: bridge
          PHYDEV: enp6s18
          BRIDGE: br0
          AR:
            TYPE: IP4
            IP: 192.168.11.128
            SIZE: 48
          NETWORK_ADDRESS: 192.168.11.0
          NETWORK_MASK: 255.255.255.0
          GATEWAY: 192.168.11.1
          DNS: 1.1.1.1

frontend:
  hosts:
    nebula-f1: { ansible_host: 192.168.11.110 }

node:
  hosts:
    nebula-n1: { ansible_host: 192.168.11.111 }
    nebula-n2: { ansible_host: 192.168.11.112 }

3. Back up example.yml

# Back up the original configuration
cp -p example.yml example.yml.backup.$(date +%Y%m%d_%H%M%S)

# Confirm the backup
ls -l example.yml*
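
If a rollback is ever needed, the backup can simply be copied back over the edited file (the timestamp below is a placeholder; use the one from your backup):

# Roll back to the saved configuration
cp -p example.yml.backup.YYYYMMDD_HHMMSS example.yml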

4. Edit example.yml

vi example.yml

Updated contents:

---
all:
  vars:
    ansible_user: flathill
    ansible_become: true
    ansible_become_method: sudo
    one_version: '7.0'
    one_pass: opennebulapass
    ensure_hosts: true
    ensure_keys: true
    features:
      evpn: false
      evpn_greedy: false
    ds:
      mode: ssh
    vn:
      admin_net:
        managed: true
        template:
          VN_MAD: bridge
          PHYDEV: enp6s18
          BRIDGE: br0
          AR:
            TYPE: IP4
            IP: 192.168.11.128
            SIZE: 48
          NETWORK_ADDRESS: 192.168.11.0
          NETWORK_MASK: 255.255.255.0
          GATEWAY: 192.168.11.1
          DNS: 1.1.1.1

frontend:
  hosts:
    nebula-f1: { ansible_host: 192.168.11.110 }

node:
  hosts:
    nebula-f1: { ansible_host: 192.168.11.110 }
    nebula-n1: { ansible_host: 192.168.11.111 }
    nebula-n2: { ansible_host: 192.168.11.112 }

Change: nebula-f1 (192.168.11.110) has been added to the node section.

Save and exit (:wq in vi).

5. Verify the configuration change

# Check the diff against the backup
diff -ruN example.yml.backup.* example.yml

Expected output:

flathill@nebula-f1:~/one-deploy/my-one$ diff -ruN example.yml.backup.* example.yml
--- example.yml.backup.20251209_083015  2025-12-04 01:36:32.705593676 +0000
+++ example.yml 2025-12-09 08:30:43.703887958 +0000
@@ -35,5 +35,6 @@

 node:
   hosts:
+    nebula-f1: { ansible_host: 192.168.11.110 }
     nebula-n1: { ansible_host: 192.168.11.111 }
     nebula-n2: { ansible_host: 192.168.11.112 }
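
As an optional extra check, the edited inventory can be parsed with ansible-inventory (run inside the one-deploy environment) to confirm that nebula-f1 now appears under both the frontend and node groups:

# Print the inventory group tree; nebula-f1 should show up under both frontend and node
ansible-inventory -i example.yml --graph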

6. Ansible connectivity test

# Confirm connectivity to nebula-f1
ansible -i example.yml nebula-f1 -m ping

Expected output:

flathill@nebula-f1:~/one-deploy/my-one$ cd ..
flathill@nebula-f1:~/one-deploy$ hatch shell
flathill@nebula-f1:~/one-deploy$ source "/home/flathill/.local/share/hatch/env/virtual/one-deploy/_iO46lt7/one-deploy/bin/activate"
(one-deploy) flathill@nebula-f1:~/one-deploy/my-one$ ansible -i example.yml nebula-f1 -m ping
nebula-f1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
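
Optionally, the same module can be run against all hosts to confirm that connectivity to the existing compute nodes is unaffected:

# Ping every host in the inventory
ansible -i example.yml all -m ping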

7. Re-apply OneDeploy (nebula-f1 only)

# Apply the KVM node configuration to nebula-f1 only
ansible-playbook -i example.yml opennebula.deploy.main --limit nebula-f1
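
Optionally, a dry run can be performed first with ansible-playbook's --check flag to preview the changes; some tasks in the collection may not fully support check mode, so treat this only as a best-effort preview:

# Optional dry run (no changes applied)
ansible-playbook -i example.yml opennebula.deploy.main --limit nebula-f1 --check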

Example output during the run:

(one-deploy) flathill@nebula-f1:~/one-deploy/my-one$ ansible-playbook -i example.yml opennebula.deploy.main --limit nebula
-f1
[WARNING]: Collection community.crypto does not support Ansible version 2.16.14
[WARNING]: Could not match supplied host pattern, ignoring: bastion

PLAY [bastion] ***********************************************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: router
[WARNING]: Could not match supplied host pattern, ignoring: grafana
[WARNING]: Could not match supplied host pattern, ignoring: mons
[WARNING]: Could not match supplied host pattern, ignoring: mgrs
[WARNING]: Could not match supplied host pattern, ignoring: osds

PLAY [router,frontend,node,grafana,mons,mgrs,osds] ***********************************************************************

TASK [opennebula.deploy.helper/python3 : Bootstrap python3 intepreter] ***************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/python3 : Install required python3 packages] **********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/facts : Collect facts] ********************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.helper/cache : Query raw status of unattended-upgrades.service] **********************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/cache : Purge unattended-upgrades.service] ************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/cache : Update package cache] *************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/python3 : Bootstrap python3 intepreter] ***************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/python3 : Install required python3 packages] **********************************************
ok: [nebula-f1]

PLAY [frontend,node] *****************************************************************************************************

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/hosts : Set hostname] *********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/hosts : Slurp /etc/hosts] *****************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.helper/hosts : Populate /etc/hosts] **************************************************************
skipping: [nebula-f1] => (item=nebula-f1)
skipping: [nebula-f1] => (item=nebula-n1)
skipping: [nebula-f1] => (item=nebula-n2)
skipping: [nebula-f1]

PLAY [frontend,node,mons,mgrs,osds] **************************************************************************************

TASK [opennebula.deploy.helper/facts : Collect facts] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : ansible.builtin.include_tasks] ********************************************************
included: /home/flathill/one-deploy/ansible_collections/opennebula/deploy/roles/precheck/tasks/ceph.yml for nebula-f1

TASK [opennebula.deploy.precheck : Check if specified Ceph release is supported] *****************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if Ceph feature is supported for current distro family] *************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : ansible.builtin.include_tasks] ********************************************************
included: /home/flathill/one-deploy/ansible_collections/opennebula/deploy/roles/precheck/tasks/site.yml for nebula-f1

TASK [opennebula.deploy.precheck : Check ansible version] ****************************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Ensure correct type for critical vars] ************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Validate passwords strength] **********************************************************
skipping: [nebula-f1] => (item=one_pass)
skipping: [nebula-f1] => (item=context.PASSWORD)
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Check if parallel federation deployment has been requested] ***************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Ensure zone_name is always defined in federated clusters] *****************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if one_vip/force_ha settings are valid] *****************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if all vip related settings are provided] ***************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if Prometheus can be enabled (one_token)] ***************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if Prometheus can be enabled (one_version)] *************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if Prometheus can be enabled (federation)] **************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if legacy OneGate Proxy has been requested] *************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if distro family is supported] **************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if distro is supported] *********************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if node_hv is supported] ********************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if DB backend is supported for the HA setup] ************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if TProxy is supported] *********************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Ensure input TProxy config is correctly structured] ***********************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Check if Apache Passenger has been requested] *****************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check if using a supported web server] ************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check Front-end hostname is valid] ****************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.precheck : Check Front-end hostnames are resolvable] *********************************************
ok: [nebula-f1] => (item=nebula-f1)

TASK [opennebula.deploy.precheck : ansible.builtin.include_tasks] ********************************************************
[WARNING]: Collection community.general does not support Ansible version 2.16.14
included: /home/flathill/one-deploy/ansible_collections/opennebula/deploy/roles/precheck/tasks/pci_passthrough.yml for nebula-f1

TASK [opennebula.deploy.precheck : Gather additional hardware facts for CPU vendor and flags] ****************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Assert CPU architecture is x86_64] ****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Assert CPU supports Hardware virtualization (VT-x/AMD-V)] *****************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Check for IOMMU ACPI table] ***********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Warn if IOMMU ACPI table not found] ***************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Check for IOMMU groups directory] *****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Assert IOMMU is enabled in the kernel] ************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Check if IOMMU groups are populated] **************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Assert IOMMU groups are populated] ****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.precheck : Try to load vfio-pci module if not present] *******************************************
skipping: [nebula-f1]

PLAY [frontend,node] *****************************************************************************************************

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/keys : Create id_rsa/.pub keypairs] *******************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/keys : Add authorized keys] ***************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/keys : Slurp public keys from the master Front-end] ***************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/keys : Add authorized keys (from the master Front-end)] ***********************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/fstab : Install required OS packages] *****************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.helper/fstab : Add to fstab and mount filesystems] ***********************************************
skipping: [nebula-f1]

PLAY [all] ***************************************************************************************************************

TASK [opennebula.deploy.helper/facts : Collect facts] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.repository : Install dependencies (TRY)] *********************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.repository : ansible.builtin.include_tasks] ******************************************************
skipping: [nebula-f1] => (item=ceph)
skipping: [nebula-f1] => (item=frr)
skipping: [nebula-f1] => (item=grafana)
included: /home/flathill/one-deploy/ansible_collections/opennebula/deploy/roles/repository/tasks/opennebula.yml for nebula-f1 => (item=opennebula)

TASK [opennebula.deploy.repository : Compute facts (DNF)] ****************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.repository : Enable required DNF extra repos] ****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.repository : Check if OpenNebula GPG key is installed] *******************************************
ok: [nebula-f1]

TASK [opennebula.deploy.repository : Download OpenNebula GPG key (once)] *************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.repository : Install OpenNebula GPG key] *********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.repository : Install OpenNebula package source] **************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.repository : Update package manager cache] *******************************************************
ok: [nebula-f1]

PLAY [frontend] **********************************************************************************************************

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.database : Check if DB backend is supported] *****************************************************
ok: [nebula-f1] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [opennebula.deploy.database : Install DB packages] ******************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.database : Enable / Start DB service (NOW)] ******************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.database : Create DB instance for OpenNebula] ****************************************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Install OpenNebula base packages] ********************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Decide if Front-end should be HA / Sort-out Federation membership] ***********
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Set oneadmin's password if provided] *****************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Configure oned (DB + ONEGATE_ENDPOINT)] **************************************
ok: [nebula-f1] => (item={'actions': [{'put': {'path': ['DB', 'BACKEND'], 'value': '"mysql"'}}, {'put': {'path': ['DB', 'SERVER'], 'value': '"localhost"'}}, {'put': {'path': ['DB', 'PORT'], 'value': 0}}, {'put': {'path': ['DB', 'USER'], 'value': '"oneadmin"'}}, {'put': {'path': ['DB', 'DB_NAME'], 'value': '"opennebula"'}}, {'put': {'path': ['ONEGATE_ENDPOINT'], 'value': '"http://192.168.11.110:5030"'}}]})
ok: [nebula-f1] => (item=None)
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Workaround potential Libvirt's NFS detection issues] *************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Handle the keep_empty_bridge VNM setting] ************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Configure oned (RAFT)] *******************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Configure monitord] **********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Check if drs configuration exists] *******************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Configure drs scheduler] *****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Check if rank configuration exists] ******************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Configure rank scheduler] ****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : ansible.builtin.include_tasks] ***********************************************
included: /home/flathill/one-deploy/ansible_collections/opennebula/deploy/roles/opennebula/server/tasks/standalone.yml for nebula-f1

TASK [opennebula.deploy.opennebula/server : ansible.builtin.include_tasks] ***********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : ansible.builtin.include_tasks] ***********************************************
included: /home/flathill/one-deploy/ansible_collections/opennebula/deploy/roles/opennebula/server/tasks/solo.yml for nebula-f1

TASK [opennebula.deploy.opennebula/server : Enable / Start OpenNebula (NOW)] *********************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Make sure OpenNebula is not restarted twice] *********************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************

TASK [opennebula.deploy.opennebula/leader : Guess the Leader] ************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Decrement the retry counter] *************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Ping the Leader] *************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Dynamically add ungrouped inventory host to represent the Leader] ************
skipping: [nebula-f1] => (item=nebula-f1)
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Get Zone] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Detect if the Leader is there] ***********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Slurp oneadmin's public key from the Leader] *********************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Get oneadmin's user template] ************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/server : Update oneadmin's user template] *********************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gate : Install OneGate Server and dependencies] **************************************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gate : Configure OneGate Server (:host)] *********************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gate : Configure oned (ONEGATE_ENDPOINT)] ********************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gate : Configure TProxy] *************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gate : Enable OneGate Server (NOW)] **************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gate : Start OneGate Server (NOW)] ***************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gate : Make sure OneGate Server is not restarted twice] ******************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Guess the Leader] ************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Decrement the retry counter] *************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Ping the Leader] *************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Dynamically add ungrouped inventory host to represent the Leader] ************
skipping: [nebula-f1] => (item=nebula-f1)
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Get Zone] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Detect if the Leader is there] ***********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.helper/flush : Flush] ****************************************************************************

TASK [opennebula.deploy.opennebula/leader : Guess the Leader] ************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Decrement the retry counter] *************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Ping the Leader] *************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Dynamically add ungrouped inventory host to represent the Leader] ************
skipping: [nebula-f1] => (item=nebula-f1)
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Get Zone] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.opennebula/leader : Detect if the Leader is there] ***********************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.flow : Install OneFlow Server and dependencies] **************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.flow : Configure OneFlow Server (:host)] *********************************************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.flow : Enable OneFlow Server (NOW)] **************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.flow : Start OneFlow Server (NOW)] ***************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.flow : Make sure OneFlow Server is not restarted twice] ******************************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.provision : Install OneProvision] ****************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.provision : Get Ansible version (oneadmin)] ******************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.provision : Install python3-pip] *****************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.provision : Install Ansible (oneadmin)] **********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.provision : Download and install Terraform] ******************************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gui : Install FireEdge and dependencies] *********************************************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gui : Compute facts (GUI)] ***********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gui : Configure Sunstone Server (token_remote_support)] ******************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gui : Enable FireEdge Server (NOW)] **************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gui : Start FireEdge Server (NOW)] ***************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.gui : Ensure FireEdge and Sunstone are not restarted twice] **************************************
ok: [nebula-f1]

TASK [Generate SSL certificates] *****************************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.gui : Install Web Server] ************************************************************************
skipping: [nebula-f1]

PLAY [router] ************************************************************************************************************
skipping: no hosts matched

PLAY [node] **************************************************************************************************************

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Install OpenNebula KVM packages] ***********************************************************
changed: [nebula-f1]

TASK [opennebula.deploy.kvm : Ensure each libvirtd uses distinct UUID] ***************************************************
[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues
when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
changed: [nebula-f1]

TASK [opennebula.deploy.kvm : Restart libvirtd (NOW)] ********************************************************************
changed: [nebula-f1]

TASK [opennebula.deploy.kvm : Disable Libvirt's default network (optional)] **********************************************
ok: [nebula-f1]

TASK [opennebula.deploy.kvm : Query raw status of virtqemud.service] *****************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.kvm : Mask virtqemud* units and libvirtd* sockets] ***********************************************
skipping: [nebula-f1] => (item=virtqemud.service)
skipping: [nebula-f1] => (item=virtqemud.socket)
skipping: [nebula-f1] => (item=virtqemud-admin.socket)
skipping: [nebula-f1] => (item=virtqemud-ro.socket)
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Create /etc/systemd/system/libvirtd.service.d/] ********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Override libvirtd.service] *****************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Restart libvirtd (NOW)] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Check if virsh errors out] *****************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Delete stale virtqemud* sockets (fix)] *****************************************************
skipping: [nebula-f1] => (item=/var/run/libvirt/virtqemud-admin-sock)
skipping: [nebula-f1] => (item=/var/run/libvirt/virtqemud-sock)
skipping: [nebula-f1] => (item=/var/run/libvirt/virtqemud-sock-ro)
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Check if AppArmor configuration exists] ****************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.kvm : Add permissions to AppArmor] ***************************************************************
changed: [nebula-f1] => (item=  /srv/** rwk,)
changed: [nebula-f1] => (item=  /var/lib/one/datastores/** rwk,)

TASK [opennebula.deploy.kvm : Reload apparmor] ***************************************************************************
changed: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.kvm : Slurp oneadmin's pubkey] *******************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.kvm : Add oneadmin's pubkey to authorized keys] **************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.kvm : Get Nodes] *********************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.kvm : Add this KVM Node to OpenNebula] ***********************************************************
changed: [nebula-f1]

TASK [ansible.builtin.include_role : ceph/repository] ********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Install extra Ceph dependencies] *****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Create /var/run/ceph/ with correct permissions] **************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Ensure pool exists] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Ensure user exists] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Slurp UUID] **************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Detect UUID] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Generate new UUID] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Store UUID] **************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Set ceph.uuid fact] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Get keyring] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Get key] *****************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Store keyring] ***********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Create /etc/ceph/ceph.conf] **********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Create Libvirt secret XML] ***********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/node : Create Ceph secret in Libvirt] *******************************************************
skipping: [nebula-f1]

TASK [Provision Datastores (node)] ***************************************************************************************

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Get Datastores] ***************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Parse Datastores] *************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Ensure /var/lib/one/datastores/ exists] ***************************************
ok: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Setup datastore symlinks (image)] *********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Setup datastore symlinks (system)] ********************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/common : Install Free Range Routing (FRR)] ***************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/common : Remove frr.conf (cleanup)] **********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/common : Configure FRR daemons (cleanup)] ****************************************************
skipping: [nebula-f1] => (item={'regexp': '^staticd_options="-A 127.0.0.1"', 'replace': 'staticd_options="-A 127.0.0.1 -P 2620"'})
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/common : (Re)Start FRR service (NOW)] ********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/evpn : Store evpn_if in hostvars] ************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/evpn : Configure BGP] ************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/evpn : Enable BGP] ***************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.frr/evpn : (Re)Start FRR service (NOW)] **********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.network/common : Compute helper facts] ***********************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.network/node : ansible.builtin.include_tasks] ****************************************************
skipping: [nebula-f1] => (item=['admin_net', 'netplan'])
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/node : Create vfio udev rule] ****************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/node : Reload udev rules] ********************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/node : Trigger udev events] ******************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/node : Install driverctl with retries] *******************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/node : Manage individual PCI devices] ********************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/exporter : Install Prometheus and dependencies] ***************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/exporter : Enable / Start Prometheus exporters (NOW)] *********************************
skipping: [nebula-f1] => (item=opennebula-libvirt-exporter)
skipping: [nebula-f1] => (item=opennebula-node-exporter)
skipping: [nebula-f1]

PLAY [frontend] **********************************************************************************************************

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/frontend : Slurp UUID] **********************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.ceph/frontend : Detect UUID] *********************************************************************
skipping: [nebula-f1]

TASK [Provision Datastores (frontend)] ***********************************************************************************

TASK [opennebula.deploy.common : Compute facts] **************************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.common : Dynamically add ungrouped inventory host to represent the Master] ***********************
skipping: [nebula-f1]

TASK [opennebula.deploy.common : Compute facts (Ceph)] *******************************************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Get Datastores] ***************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Parse Datastores] *************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Ensure /var/lib/one/datastores/ exists] ***************************************
ok: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Setup datastore symlinks (image)] *********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Setup datastore symlinks (file)] **********************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.datastore/simple : Update Datastores] ************************************************************
skipping: [nebula-f1] => (item={'ID': '1', 'UID': '0', 'GID': '0', 'UNAME': 'oneadmin', 'GNAME': 'oneadmin', 'NAME': 'default', 'PERMISSIONS': {'OWNER_U': '1', 'OWNER_M': '1', 'OWNER_A': '0', 'GROUP_U': '1', 'GROUP_M': '0', 'GROUP_A': '0', 'OTHER_U': '0', 'OTHER_M': '0', 'OTHER_A': '0'}, 'DS_MAD': 'fs', 'TM_MAD': 'ssh', 'BASE_PATH': '/var/lib/one//datastores/1', 'TYPE': '0', 'DISK_TYPE': '0', 'STATE': '0', 'CLUSTERS': {'ID': '0'}, 'TOTAL_MB': '98682', 'FREE_MB': '77647', 'USED_MB': '15977', 'IMAGES': {'ID': ['3', '4', '5']}, 'TEMPLATE': {'ALLOW_ORPHANS': 'YES', 'CLONE_TARGET': 'SYSTEM', 'DISK_TYPE': 'FILE', 'DS_MAD': 'fs', 'LN_TARGET': 'SYSTEM', 'PERSISTENT_SNAPSHOTS': 'YES', 'RESTRICTED_DIRS': '/', 'SAFE_DIRS': '/var/tmp', 'TM_MAD': 'ssh', 'TYPE': 'IMAGE_DS'}})
skipping: [nebula-f1] => (item={'ID': '0', 'UID': '0', 'GID': '0', 'UNAME': 'oneadmin', 'GNAME': 'oneadmin', 'NAME': 'system', 'PERMISSIONS': {'OWNER_U': '1', 'OWNER_M': '1', 'OWNER_A': '0', 'GROUP_U': '1', 'GROUP_M': '0', 'GROUP_A': '0', 'OTHER_U': '0', 'OTHER_M': '0', 'OTHER_A': '0'}, 'DS_MAD': '-', 'TM_MAD': 'ssh', 'BASE_PATH': '/var/lib/one//datastores/0', 'TYPE': '1', 'DISK_TYPE': '0', 'STATE': '0', 'CLUSTERS': {'ID': '0'}, 'TOTAL_MB': '0', 'FREE_MB': '0', 'USED_MB': '0', 'IMAGES': {}, 'TEMPLATE': {'ALLOW_ORPHANS': 'YES', 'DISK_TYPE': 'FILE', 'DS_MIGRATE': 'YES', 'PERSISTENT_SNAPSHOTS': 'YES', 'RESTRICTED_DIRS': '/', 'SAFE_DIRS': '/var/tmp', 'SHARED': 'NO', 'TM_MAD': 'ssh', 'TYPE': 'SYSTEM_DS'}})
skipping: [nebula-f1] => (item={'ID': '2', 'UID': '0', 'GID': '0', 'UNAME': 'oneadmin', 'GNAME': 'oneadmin', 'NAME': 'files', 'PERMISSIONS': {'OWNER_U': '1', 'OWNER_M': '1', 'OWNER_A': '0', 'GROUP_U': '1', 'GROUP_M': '0', 'GROUP_A': '0', 'OTHER_U': '0', 'OTHER_M': '0', 'OTHER_A': '0'}, 'DS_MAD': 'fs', 'TM_MAD': 'ssh', 'BASE_PATH': '/var/lib/one//datastores/2', 'TYPE': '2', 'DISK_TYPE': '0', 'STATE': '0', 'CLUSTERS': {'ID': '0'}, 'TOTAL_MB': '98682', 'FREE_MB': '77647', 'USED_MB': '15977', 'IMAGES': {}, 'TEMPLATE': {'ALLOW_ORPHANS': 'YES', 'CLONE_TARGET': 'SYSTEM', 'DS_MAD': 'fs', 'LN_TARGET': 'SYSTEM', 'PERSISTENT_SNAPSHOTS': 'YES', 'RESTRICTED_DIRS': '/', 'SAFE_DIRS': '/var/tmp', 'TM_MAD': 'ssh', 'TYPE': 'FILE_DS'}})
skipping: [nebula-f1]

TASK [opennebula.deploy.network/common : Compute helper facts] ***********************************************************
ok: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.network/frontend : Get VNETs] ********************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.network/frontend : Parse VNETs] ******************************************************************
ok: [nebula-f1]

TASK [opennebula.deploy.network/frontend : Update VNETs] *****************************************************************
skipping: [nebula-f1] => (item=admin_net)
skipping: [nebula-f1]

TASK [opennebula.deploy.network/frontend : Create VNETs] *****************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.network/frontend : Update ARs] *******************************************************************
skipping: [nebula-f1] => (item=['admin_net', 0])
skipping: [nebula-f1]

TASK [opennebula.deploy.network/frontend : Create ARs] *******************************************************************
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : opennebula/leader] ******************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/frontend : Enable monitoring of all PCI devices] *********************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/frontend : Get OpenNebula hosts information] *************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/frontend : Set OpenNebula hosts list fact] ***************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.pci_passthrough/frontend : Apply PCI devices filtering per-node] *********************************
skipping: [nebula-f1] => (item=nebula-f1)
skipping: [nebula-f1] => (item=nebula-n1)
skipping: [nebula-f1] => (item=nebula-n2)
skipping: [nebula-f1]

TASK [ansible.builtin.include_role : repository] *************************************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Install Prometheus and dependencies] *****************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Patch Prometheus datasources] ************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Enable / Start / Restart Prometheus service and exporters (NOW)] *************
skipping: [nebula-f1] => (item=opennebula-exporter)
skipping: [nebula-f1] => (item=opennebula-node-exporter)
skipping: [nebula-f1] => (item=opennebula-prometheus)
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Configure Alertmanager] ******************************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Override Alertmanager's systemd unit (mkdir)] ********************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Override Alertmanager's systemd unit] ****************************************
skipping: [nebula-f1]

TASK [opennebula.deploy.prometheus/server : Enable / Start / Restart Alertmanager service (NOW)] *************************
skipping: [nebula-f1]

PLAY [grafana] ***********************************************************************************************************
skipping: no hosts matched

PLAY RECAP ***************************************************************************************************************
nebula-f1                  : ok=103  changed=6    unreachable=0    failed=0    skipped=151  rescued=0    ignored=0

Time required: approximately 5 to 10 minutes.

8. Verify the results (nebula-f1)

8-1. Confirm the KVM package installation

# Log in to nebula-f1
ssh flathill@192.168.11.110

# Check the installed KVM/libvirt packages
dpkg -l | grep -E 'qemu-kvm|libvirt'

Expected output:

ii  libvirt-clients                      10.0.0-2ubuntu8.9                       amd64        Programs for the libvirt library
ii  libvirt-daemon                       10.0.0-2ubuntu8.9                       amd64        Virtualization daemon
ii  libvirt-daemon-config-network        10.0.0-2ubuntu8.9                       all          Libvirt daemon configuration files (default network)
ii  libvirt-daemon-config-nwfilter       10.0.0-2ubuntu8.9                       all          Libvirt daemon configuration files (default network filters)
ii  libvirt-daemon-driver-qemu           10.0.0-2ubuntu8.9                       amd64        Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system                10.0.0-2ubuntu8.9                       amd64        Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd        10.0.0-2ubuntu8.9                       all          Libvirt daemon configuration files (systemd)
ii  libvirt-l10n                         10.0.0-2ubuntu8.9                       all          localization for the libvirt library
ii  libvirt0:amd64                       10.0.0-2ubuntu8.9                       amd64        library for interfacing with different virtualization systems
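
In addition to the libvirt packages, the OpenNebula node package installed by the kvm role can be checked as well (a quick optional check; on Ubuntu the package is expected to be opennebula-node-kvm):

# Confirm the OpenNebula KVM node package
dpkg -l | grep opennebula-node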

8-2. Confirm the libvirtd service

sudo systemctl status libvirtd

Expected output:

flathill@nebula-f1:~/one-deploy$ sudo systemctl status libvirtd
● libvirtd.service - libvirt legacy monolithic daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-12-09 08:35:35 UTC; 2min 56s ago
TriggeredBy: ● libvirtd-ro.socket
             ● libvirtd.socket
             ● libvirtd-admin.socket
       Docs: man:libvirtd(8)
             https://libvirt.org/
   Main PID: 179171 (libvirtd)
      Tasks: 20 (limit: 32768)
     Memory: 14.4M (peak: 46.7M)
        CPU: 844ms
     CGroup: /system.slice/libvirtd.service
             └─179171 /usr/sbin/libvirtd --timeout 120

Dec 09 08:35:35 nebula-f1 systemd[1]: Starting libvirtd.service - libvirt legacy monolithic daemon...
Dec 09 08:35:35 nebula-f1 systemd[1]: Started libvirtd.service - libvirt legacy monolithic daemon.
Dec 09 08:35:35 nebula-f1 dnsmasq[173703]: read /etc/hosts - 11 names
Dec 09 08:35:35 nebula-f1 dnsmasq[173703]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 names
Dec 09 08:35:35 nebula-f1 dnsmasq-dhcp[173703]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Dec 09 08:35:35 nebula-f1 dnsmasq[173703]: exiting on receipt of SIGTERM
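
As a further optional check, virsh can be queried directly; with the monolithic libvirtd shown above, the local hypervisor should normally be reachable via the qemu:///system URI:

# Query the local hypervisor through libvirt
sudo virsh -c qemu:///system nodeinfo
sudo virsh -c qemu:///system list --all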

8-3. Confirm the oneadmin user's SSH setup

# Confirm SSH access to localhost as the oneadmin user
sudo -u oneadmin ssh localhost hostname

Expected output:

flathill@nebula-f1:~/one-deploy$ sudo -u oneadmin ssh localhost hostname
nebula-f1
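
The front-end also needs passwordless SSH as oneadmin to the other KVM nodes; assuming the hostnames resolve (ensure_hosts: true populates /etc/hosts), this can be checked the same way:

# Confirm passwordless SSH from the front-end to the existing compute nodes
sudo -u oneadmin ssh nebula-n1 hostname
sudo -u oneadmin ssh nebula-n2 hostname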

9. Check the OpenNebula hosts

# Switch to the oneadmin user
sudo -i -u oneadmin

# List the registered hosts
onehost list

Expected output:

oneadmin@nebula-f1:~$ onehost list

  ID NAME                                                        CLUSTER    TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   2 192.168.11.110                                              default      0       0 / 400 (0%)     0K / 7.7G (0%) on
   1 192.168.11.111                                              default      2    300 / 400 (75%)  4.1G / 7.7G (53%) on
   0 192.168.11.112                                              default      1    100 / 400 (25%)   128M / 7.7G (1%) on
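
For more detail on the newly registered host, onehost show can be used with the ID from the listing above (2 in this environment):

# Show details (state, capacity, hypervisor) for the new host
onehost show 2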

10. Verify in the Sunstone WebUI

  1. Open the Sunstone WebUI (http://192.168.11.110:9869)
  2. Go to Infrastructure > Hosts
  3. Confirm that the newly added host (nebula-f1, registered as 192.168.11.110) is listed with state on

Summary

In this article, we updated the OneDeploy configuration so that nebula-f1, previously a dedicated front-end, also functions as a compute node.

Completed work:

  • Added nebula-f1 to the node group in example.yml
  • Re-applied the Ansible playbook, which installed the KVM packages and configured libvirtd on nebula-f1
  • Registered nebula-f1 (192.168.11.110) as an OpenNebula host

