Performance Impact Assessment of Backing Chain LVM Snapshots in Proxmox VE 9 Series Environments

Introduction

We have conducted multiple rounds of verification regarding the LVM snapshot feature in Proxmox VE environments.
Previously, LVM snapshots configured on iSCSI LUNs were not officially supported by Proxmox VE, though manual execution via LVM was possible. However, manual testing revealed severe performance degradation—up to 46.4 times slower—as the number of snapshots increased, rendering it impractical for real-world use.

In contrast, the Backing Chain LVM feature (Technology Preview) introduced in Proxmox VE 9.0 has demonstrated snapshot creation and rollback on a 100GB disk completing in just 2.7 seconds.

This article quantitatively evaluates the impact of increased snapshot counts on performance.

Key Points of This Article

  • Measuring performance impact based on snapshot count using the Backing Chain LVM method
  • Performance comparison with conventional manual LVM snapshots
  • Clarification of limitations and considerations as a Technology Preview feature

Background: Challenges with Conventional Manual LVM Snapshots

Issues with the Conventional Manual LVM Snapshot Method

Performance problems identified in manual LVM snapshots through previous verification:

| Number of Snapshots | File Creation Speed | Performance Degradation Rate |
| --- | --- | --- |
| 0 snapshots | 9,236.5 files/sec | - (baseline) |
| 1 snapshot | 313.5 files/sec | 29.5x degradation |
| 2 snapshots | 199.2 files/sec | 46.4x degradation |

Key Issues:

  • Significant overhead due to Copy-on-Write (CoW) mechanism
  • Dramatic increase in I/O latency (22.50ms -> 309.85ms)
  • Substantial decrease in Write IOPS (3,675.78 -> 1,323.55)
  • Processing time impractical for real-world use

Important Background:

  • LVM snapshots configured on iSCSI LUNs were traditionally not supported by Proxmox VE
  • Manual execution via LVM commands was possible, but performance raised doubts about practicality
  • Consequently, snapshot functionality was often avoided in many environments

Introduction of the Backing Chain Method

Verification results for Proxmox VE 9.0's Backing Chain LVM feature (Technology Preview):

Key Findings:

  • Snapshot creation (100GB): 2.516 seconds
  • Rollback (100GB): 2.697 seconds
  • Disabling zero-fill achieved 98% reduction in processing time
  • Data integrity verified on the actual filesystem

Technical Features:

  • Efficient differential management via the Chain method
  • Optimized zero-fill processing (saferemove 0 setting)
  • Nearly constant processing time even with large volumes

Verification Purpose and Hypothesis

Verification Purpose

  1. Measure performance impact of snapshot count using the Backing Chain method

    • Performance comparison with 0, 1, and 2 snapshots
    • Comparison with degradation patterns observed in conventional manual LVM snapshots
  2. Evaluate practical operational feasibility

    • Examine appropriate snapshot management strategies
    • Establish capacity design guidelines

Verification Hypotheses

Hypothesis 1: The Backing Chain method improves performance degradation caused by increased snapshot counts
Hypothesis 2: The 29.5-46.4x degradation observed with the conventional manual method is mitigated
Hypothesis 3: Performance at a practical level can potentially be maintained

Verification Environment

System Configuration

Proxmox VE: 9.0.5
Node Name: tx1320m1
iSCSI Target: TrueNAS (Virtual Machine Environment)
Test VM: RHEL9-Production-Test (VM ID: 105)
Virtual Disk: 100GB (Actual Production Scale)
OS: Red Hat Enterprise Linux 9

Hardware Specifications

Proxmox VE Host (tx1320m1)

  • CPU: Intel Xeon E3-1240 v3 @ 3.40GHz (8 cores)
  • Memory: 31.30 GiB
  • Kernel: Linux 6.14.8-2-pve
  • Proxmox VE: pve-manager/9.0.5

TrueNAS Host (iSCSI Target) - Virtual Machine Environment

  • Virtualization Platform: Running on Proxmox VE 8.3.0
  • Physical Host: AMD Ryzen 5 5600G with Radeon Graphics (12 cores)
  • Memory: 62.18 GiB

Storage Configuration

Optimized storage.cfg settings:

lvm: iSCSI-LVM
        vgname iSCSI-LVM-VG
        base TrueNAS:0.0.0.scsi-36589cfc0000007974c3b8445544ea1fc
        content images,rootdir
        saferemove 0  # ← Optimized setting: disable zero-fill
        shared 1
        snapshot-as-volume-chain 1  # ← Enable Backing Chain feature

4-Tier Virtualized Storage Architecture

RHEL9 Application (Measurement Script)
    ↓ System Calls (write/read)
RHEL9 Filesystem (ext4/xfs)
    ↓ Block I/O
Proxmox VE 9.0.5 LVM Logical Volume (vm-105-disk-0)
    ↓ Device Mapping
iSCSI-LVM-VG (Volume Group) - Backing Chain Support
    ↓ Physical Volume
/dev/sdb (iSCSI Device)
    ↓ iSCSI Protocol (Network)
TrueNAS Virtual Machine (Proxmox VE 8.3 Platform)
    ↓ Virtualization Layer
Physical Storage (AMD Ryzen 5 5600G Environment)

Measurement Method

Script Used (Same as Previous Verification)

This verification uses the exact same script as the previous manual LVM snapshot performance evaluation:

1. File Creation Script

  • Function: Creates a large number of files (104,857) within a specified directory structure (1,048 folders)
  • File Size: Fixed at 100KB
  • Total Data Volume: Approximately 10GB
  • Measurement Items: File creation time, creation speed (files/second)
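
As a rough illustration, the measurement logic can be sketched in bash. The directory layout, function name, and output format below are assumptions for illustration, not the published script:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the file-creation benchmark: creates num_dirs folders,
# files_per_dir fixed-size files in each, and reports files/sec.
create_files() {
    local base_dir=$1 num_dirs=$2 files_per_dir=$3 size_kb=$4
    local start end total
    start=$(date +%s.%N)
    for ((d = 0; d < num_dirs; d++)); do
        mkdir -p "${base_dir}/dir_${d}"
        for ((f = 0; f < files_per_dir; f++)); do
            # Fixed-size file (100KB in the actual measurement)
            dd if=/dev/zero of="${base_dir}/dir_${d}/file_${f}.dat" \
               bs=1K count="${size_kb}" status=none
        done
    done
    end=$(date +%s.%N)
    total=$((num_dirs * files_per_dir))
    awk -v t="$total" -v s="$start" -v e="$end" \
        'BEGIN { printf "%d files in %.2f sec (%.1f files/sec)\n", t, e-s, t/(e-s) }'
}
```

A run approximating the article's scale would be `create_files /mnt/test 1048 100 100` (about 104,800 files of 100KB each, roughly 10GB).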

2. File Update Script

  • Function: Performs incremental updates on existing files

  • Update Patterns:

    • Initial File Creation: 0x00 pattern (all zeros)
    • 1st File Update: 0xff pattern (all ones)
    • 2nd File Update: 0x55 pattern (alternating pattern 1)
    • 3rd File Update: 0xaa pattern (alternating pattern 2)
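
The update passes can be sketched with a hypothetical helper (`fill_pattern` is illustrative, not the original script; byte values are given in octal because `tr` takes octal escapes):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Overwrites every file under a directory with a single repeated byte.
# Octal values for the passes above: 0xff -> 377, 0x55 -> 125, 0xaa -> 252
fill_pattern() {
    local dir=$1 octal_byte=$2 size_kb=$3
    find "$dir" -type f | while read -r f; do
        # Generate NUL bytes, then translate them to the target byte
        head -c $((size_kb * 1024)) /dev/zero | tr '\0' "\\${octal_byte}" > "$f"
    done
}
```

The three update passes would then be `fill_pattern /mnt/test 377 100`, `fill_pattern /mnt/test 125 100`, and `fill_pattern /mnt/test 252 100`.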

3. Real-Time Monitoring Script

  • Function: Continuous monitoring of system resource usage
  • Monitored Items: CPU usage, memory usage, disk I/O statistics
  • Interval: 1-second intervals
  • Output Format: Log output in CSV format
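
A minimal sampler in the same spirit might look like this; the fields are a subset chosen for illustration (the actual script also logs disk I/O statistics):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Writes one CSV row per second: epoch timestamp, 1-minute load
# average, and memory usage computed from /proc/meminfo.
monitor() {
    local out=$1 samples=$2
    echo "timestamp,load1,mem_used_pct" > "$out"
    for ((i = 0; i < samples; i++)); do
        local load mem
        load=$(cut -d' ' -f1 /proc/loadavg)
        mem=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} \
                   END { printf "%.1f", (t-a)/t*100 }' /proc/meminfo)
        echo "$(date +%s),${load},${mem}" >> "$out"
        sleep 1
    done
}
```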

Snapshot Creation Method

Previous verification used manual LVM commands, but this time we implement unified command-line operations via Proxmox VE:

# Create Snapshot
qm snapshot <vmid> <snapshot_name> --description "<description>"

# List Snapshots
qm listsnapshot <vmid>

# Delete Snapshot
qm delsnapshot <vmid> <snapshot_name>
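
For completeness, rollback (exercised in the earlier verification in this series) follows the same pattern:

```
# Roll back to a snapshot
qm rollback <vmid> <snapshot_name>
```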

Measurement Results

Snapshot Creation Time

Snapshot creation time on Proxmox VE 9.0 Backing Chain LVM:

# Creating the first snapshot
root@tx1320m1:~# time qm snapshot 105 performance-test-1 --description "Performance test with 1 snapshot"
snapshotting 'drive-scsi0' (iSCSI-LVM:vm-105-disk-0.qcow2)
external qemu snapshot
Creating a new current volume with performance-test-1 as backing snap
  Renamed "vm-105-disk-0.qcow2" to "snap_vm-105-disk-0_performance-test-1.qcow2" in volume group "iSCSI-LVM-VG"
  Rounding up size to full physical extent <100.02 GiB
  Logical volume "vm-105-disk-0.qcow2" created.
Formatting '/dev/iSCSI-LVM-VG/vm-105-disk-0.qcow2', fmt=qcow2 cluster_size=131072 extended_l2=on preallocation=metadata compression_type=zlib size=107374182400 backing_file=snap_vm-105-disk-0_performance-test-1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16

real    0m2.864s
user    0m0.861s
sys     0m0.331s

# Create second snapshot
root@tx1320m1:~# time qm snapshot 105 performance-test-2 --description "Performance test with 2 snapshots"

real    0m2.828s
user    0m0.843s
sys     0m0.319s

Result: Snapshot creation completed in roughly 2.8 seconds even on a 100GB disk, on par with the 2.5 seconds confirmed in the previous verification.

File Creation Performance Measurement Results

| Number of Snapshots | Creation Time | File Creation Speed | vs. Baseline |
| --- | --- | --- | --- |
| 0 snapshots | 9.29 sec | 11,288.2 files/sec | Baseline |
| 1 snapshot | 8.79 sec | 11,928.8 files/sec | 1.06x |
| 2 snapshots | 9.59 sec | 10,928.6 files/sec | 0.97x |

Results: Performance degradation due to increased snapshots is negligible. A slight performance improvement is observed with 1 snapshot.

System Resource Usage Comparison

| Item | 0 Snapshots | 1 Snapshot | 2 Snapshots |
| --- | --- | --- | --- |
| CPU Usage (Average) | 2.56% | 5.15% | 5.19% |
| CPU Usage (Max) | 37.5% | 38.9% | 38.2% |
| Load Average (Average) | 0.60 | 0.94 | 1.09 |
| Load Average (Max) | 1.48 | 1.46 | 1.37 |
| Memory Usage (%) | 19.1% | 21.5% | 22.8% |
| Memory Usage (GB) | 1.11 GB | 1.30 GB | 1.40 GB |
| Write IOPS (Average) | 1,457 | 3,617 | 3,538 |
| I/O Latency (Average) | 11.3 ms | 23.7 ms | 24.6 ms |
| Disk Write (Average) | 108 MB/s | 268 MB/s | 261 MB/s |


Comparison with Conventional Manual LVM Snapshots

Differences in Performance Degradation Patterns

| Number of Snapshots | Manual LVM Degradation Rate | Backing Chain Degradation Rate | Improvement Effect |
| --- | --- | --- | --- |
| 0 snapshots | 1.0x | 1.00x | Baseline |
| 1 snapshot | 29.5x degradation | 0.95x (improvement) | Significant improvement |
| 2 snapshots | 46.4x degradation | 1.03x degradation | Significant improvement |

File Creation Speed Comparison

| Number of Snapshots | Manual LVM Method | Backing Chain Method | Performance Ratio |
| --- | --- | --- | --- |
| 0 snapshots | 9,236 files/sec | 11,288 files/sec | 122.2% |
| 1 snapshot | 314 files/sec | 11,929 files/sec | 3,805.0% |
| 2 snapshots | 199 files/sec | 10,929 files/sec | 5,486.2% |

I/O Latency Improvement Effect

| Number of Snapshots | Manual LVM Method | Backing Chain Method | Improvement Rate |
| --- | --- | --- | --- |
| 0 snapshots | 22.5 ms | 11.3 ms | 2.0x improvement |
| 1 snapshot | 209.2 ms | 23.7 ms | 8.8x improvement |
| 2 snapshots | 309.9 ms | 24.6 ms | 12.6x improvement |


Performance Degradation Analysis

1. Significant Improvement Over Traditional Manual LVM Snapshots

Challenges of Manual LVM Method:

  • 29.5x performance degradation with 1 snapshot
  • 46.4x performance degradation with 2 snapshots
  • Severe I/O bottleneck due to Copy-on-Write processing
  • Processing delays making practical use difficult

Improvements with the Backing Chain Method:

  • Performance degradation due to snapshot count is nearly eliminated
  • Slight performance improvement (0.95x) with one snapshot
  • Significant improvement in I/O latency (8.8-12.6x)
  • Achieves practical-level performance

2. Technical Improvements

Efficiency of the Chain Approach:

Conventional CoW Approach: Redundant processing with every write
Backing Chain Approach: Lightweight processing via efficient differential management

Effects of Zero-Clear Optimization:

  • Skipping unnecessary processing via saferemove 0 setting
  • Optimized metadata management
  • Reduced physical I/O load

3. Impact on Production Operations

Operational Improvements:

  • Enables practical utilization of snapshot functionality
  • Expanded backup strategy options
  • Enhanced safety during system modification tasks
  • Reduced Disaster Recovery Time

Changes in Technical Background:

  • Previously: LVM snapshots on iSCSI LUNs were not supported by Proxmox VE; manual execution also raised practicality concerns
  • Currently: With the Backing Chain method, official support is planned and practical-level performance has been achieved

Considerations for Production Use

Snapshot Operational Strategy

Capacity Design Guideline (Critical Constraint):

Recommended VG Capacity = Disk Size × (Planned Snapshots + 1) × 1.5
Example: 100GB disk, assuming 2 snapshots
    -> Ensure VG capacity of 450GB or more
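
The guideline can be expressed as a small helper (a sketch; the function name is hypothetical, and integer arithmetic approximates the 1.5 safety factor as x3/2):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Recommended VG capacity = disk size x (planned snapshots + 1) x 1.5
recommended_vg_gb() {
    local disk_gb=$1 snapshots=$2
    echo $(( disk_gb * (snapshots + 1) * 3 / 2 ))
}

recommended_vg_gb 100 2   # -> 450, matching the example above
```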

Operational Flow:

  1. Pre-operation Capacity Verification: Confirm sufficient free space using the vgs command
  2. Snapshot Creation: Create using qm snapshot
  3. Work Execution: Perform system modification tasks
  4. Result Verification: Continue operation if no issues; rollback if problems occur
  5. Periodic Cleanup: Delete unnecessary snapshots

Monitoring and Alert Configuration

Critical Monitoring Items:

  • Continuous monitoring of VG free space
  • Tracking snapshot utilization rate
  • Detection of abnormal I/O latency
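
As one way to automate the first item, the threshold logic can be kept as a pure function (hypothetical; in production the two capacity inputs would come from something like `vgs --noheadings --units g -o vg_free,vg_size iSCSI-LVM-VG`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Warns when VG free space falls below a percentage threshold.
check_vg_free() {
    local free_gb=$1 size_gb=$2 threshold_pct=$3
    local pct=$(( free_gb * 100 / size_gb ))
    if (( pct < threshold_pct )); then
        echo "WARN: VG free ${pct}% is below ${threshold_pct}%"
        return 1
    fi
    echo "OK: VG free ${pct}%"
}
```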

Limitations and Precautions

Limitations as a Technology Preview Feature

  1. Feature in Development

    • Technology Preview feature in Proxmox VE 9.0
    • Requires a sufficient testing period before production deployment
    • Potential for unexpected bugs
  2. Capacity Constraints (Critical)

    • Even with the Backing Chain method, creating a snapshot requires free space equal to the original disk size
    • Insufficient space causes creation errors, making prior capacity planning essential
  3. Security Considerations

    • Deleted data remains physically present due to the saferemove 0 setting
    • Not an issue in single-tenant environments, but requires consideration in multi-tenant environments

Specifics of the Test Environment

  • Measurement results in a 4-layer virtualization environment
  • Storage accessed via iSCSI
  • Storage provided by a TrueNAS virtual machine

Results may differ in typical physical environments or with directly attached storage.

Summary

Summary of Verification Findings

This verification confirmed that Proxmox VE 9.0's Backing Chain LVM feature is a promising technology that improves upon previous challenges.

Key Findings:

  1. Significant Improvement in Snapshot Performance Issues

    • Conventional 29.5-46.4x degradation -> Virtually no degradation
    • Substantial performance improvement in file creation speed
    • 8.8-12.6x improvement in I/O latency
  2. Confirmation of Practical Usability

    • Enables practical utilization of snapshot functionality
    • Enhances safety during system modification tasks
    • Reduces disaster recovery time
  3. Expanded Operational Options

    • Enables consideration of operations utilizing snapshots
    • Expands backup strategy options
    • Freedom from previous constraints

Technical Significance

Improvements via Backing Chain LVM:

  • Previously: LVM snapshots on iSCSI LUNs were not supported by Proxmox VE; manual execution also raised practicality concerns
  • Now: Supported via the Backing Chain method as a Technology Preview (with official support planned), achieving practical-level performance

Future Outlook and Considerations

Positioned as a Technology Preview feature:

  • Currently a developmental feature; unexpected bugs may exist
  • Requires sufficient testing and careful evaluation before production use
  • Further improvements expected in future official releases

Considerations for Production Use:

  • Importance of proper capacity planning
  • Need for continuous monitoring and maintenance
  • Conduct testing considering environment-specific characteristics

Final Assessment

Proxmox VE 9.0's Backing Chain LVM feature is a promising technology that improves upon previous challenges. However, we recommend considering its implementation only after careful verification, fully understanding its Technology Preview status.

With proper capacity design and sufficient verification, this technology has been confirmed to significantly improve upon previous limitations, offering enhanced operational efficiency and security.


We hope these verification results serve as a reference for storage operations in Proxmox VE environments.

Related Articles & Reference Information

Articles in This Series

  • "Proxmox VE 9.0 Backing Chain LVM - Configuration Guide"
  • "Proxmox VE 9.0 Backing Chain LVM - Operational Verification (Failure Edition)"
  • "Proxmox VE 9.0 Backing Chain LVM - Operational Verification (Success Edition)"
  • "Proxmox VE 9.0 Backing Chain LVM - Zero-Clear Optimization Verification"
  • "Proxmox VE 9.0 Backing Chain LVM - Practical Testing with RHEL9"

Comparative Articles

  • "Performance Impact Assessment of LVM Snapshots in Proxmox VE 8 Series Environments"