
perfpmr - the performance diagnostic tool bundled as standard from AIX 7.3 TL4

Introduction

Starting with AIX 7.3 TL4, the performance diagnostic tool perfpmr, which previously had to be downloaded manually, is now officially included as a standard part of AIX.

At first glance this looks like a minor update, but for AIX administrators and automation scripts it is a significant change.
This article summarizes the background, what changed, and how to use the tool in practice.


Video

I have made a video based on this article. I hope it helps your understanding.


What is perfpmr?

perfpmr (Performance PMR) is a utility for diagnosing and troubleshooting performance problems on AIX systems.

It collects a broad set of system statistics in one pass, covering CPU starvation, memory paging, disk I/O latency, network performance, and more, and efficiently gathers the data needed when escalating to IBM support.

Main data collected:

| Category | Commands / tools |
|---|---|
| System statistics | vmstat, iostat, netstat, sar, mpstat |
| Traces | trace, lock trace, filemon, tprof |
| Configuration | lscfg, lslpp, snap, etc. |
| Network | iptrace, tcpdump |

Reading the README

The tool is installed under /opt/IBM/perfpmr/, and the bundled README describes the procedures in detail.

# cat /opt/IBM/perfpmr/README

The key points of the README are summarized below.

| Item | Details |
|---|---|
| Supported environments | AIX 7.3 TL04 or later / VIOS 4.1.2.0 or later |
| Installation path | /opt/IBM/perfpmr/ |
| Required privileges | root |
| Recommended free space | at least 45 MB × number of logical CPUs |
| Filesystem restriction | do not run on a remotely mounted FS (iptrace may hang) |
| Data submission | IBM eCuRep (FTP / HTTPS) |
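As a quick rule of thumb, the recommended free space scales with the logical CPU count. Here is a minimal sketch of that calculation; the CPU count below is a placeholder (on AIX it could be obtained from, e.g., `bindprocessor -q` or `lparstat -i`):

```shell
# Recommended free space for perfpmr, per the README: 45 MB per logical CPU.
space_needed_mb() {
    ncpus=$1                 # number of logical CPUs
    echo $(( ncpus * 45 ))
}

NCPUS=8                      # placeholder; query the real count on AIX
echo "Recommended free space: $(space_needed_mb "$NCPUS") MB"
```

With 8 logical CPUs this reports 360 MB, which matches the README's "45MB × #of_logicalcpus" guidance.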

Key cautions:

  • Data collection itself adds a small amount of overhead
  • Trace output can reach hundreds of MB
  • In HACMP environments, do not use the same disk as the Dead Man Switch, or extend the monitoring timeout beforehand
  • iptrace can affect network throughput in environments with high packet rates; if it is not needed, it can be skipped with the -T option
  • After collection, fill in PROBLEM.INFO with the background of the problem and submit the data to IBM
Original README text:
---------------------------------------------------------
This perfpmr package contains a number of performance tools and some instructions.  Some of these products are available with AIX. These tools are now officially part of AIX, and we are providing support for them.

AIX PERFORMANCE DATA COLLECTION PROCESS
---------------------------------------------------------

  Note:   The act of collecting performance data may add a small bit of extra overhead. There may be a large volume of trace data (depending on how many logical cpus are busy) that is written to the filesystem; this may impact the HACMP Dead Man Switch monitor if it's using the same disk as where the data is being collected. If that's the case, then either use a different filesystem/disk for the data collection or extend the Dead Man Switch timeout prior to collecting perfpmr data to avoid accidental failovers.


  TABLE OF CONTENTS
  -----------------
       I.   INTRODUCTION
      II.   HOW TO OBTAIN TOOLS
     III.   HOW TO COLLECT DATA FOR AN AIX PERFORMANCE PROBLEM
      IV.   HOW TO SEND DATA TO IBM



 I.   INTRODUCTION

      This package contains a set of tools and instructions for collecting the data needed to analyze an AIX performance problem.  This tool set runs on AIX 7.3 TL04 and VIOS level 4.1.2.0 onwards.


 II.  HOW TO OBTAIN TOOLS

       The perfpmr tools are available inside the perfpmr directory on both AIX and VIOS.

       Below is the path of the perfpmr tool:
       =====================================
       /opt/IBM/perfpmr/

       This version should run fine on AIX 7.3 TL04 and VIOS level 4.1.2.0 onwards.

       Login as root or use the 'su' command to obtain root authority.


 III. HOW TO COLLECT DATA FOR AN AIX PERFORMANCE PROBLEM

      A. Purpose:

           1. This section describes the set of steps that should be followed to collect performance data.

           2. The goal is to collect a good base of information that can be used by AIX technical support specialists or development lab programmers to get started in analyzing and solving the performance problem. This process may need to be repeated after analysis of the initial set of data is completed.


      B. Collection of the Performance Data on Your System

           1. Detailed System Performance Data:

              Detailed performance data is required to analyze and solve a performance problem. Follow these steps to invoke the supplied shell scripts:

              NOTE:  You must have root user authority when executing these shell scripts.

                a. Create a data collection directory and 'cd' into this directory.
                   Allow at least 45MB*#of_logicalcpus of unused space in whatever file system is used.

                   *IMPORTANT* - DO NOT COLLECT DATA IN A REMOTELY MOUNTED FILESYSTEM SINCE IPTRACE MAY HANG

                   For example using /tmp filesystem:
                       # mkdir /tmp/perfdata
                       # cd /tmp/perfdata

                   If the disks for /tmp aren't fast, then it's preferable to use non-rootvg disks.

                b. HACMP users:
                     If HACMP's deadman switch monitor is checking response time of an I/O to a disk, then keep in mind that perfpmr will dump out possibly hundreds of megabytes of data near the beginning of perfpmr when the trace script is run. This may impact the HACMP monitor thread if it is using the same disk as where the perfpmr data is collected. It is generally recommended to lengthen the HACMP deadman switch interval while performance data is being collected if that is the case.

                c. Collect the 'standard' PERFPMR data for 600 seconds (this is the default)
                    Start the data collection while the problem is already occurring with the command:

                     /directory_where_perfpmrscripts_are_installed/perfpmr.sh

                   The perfpmr.sh shell provided will:
                   - immediately collect a 5 second trace (trace.sh 5) and then a 5-second lock trace
                   - there will be a 60-second trace on just one specific event ID after that
                   - collect 600 seconds of general system performance data (monitor.sh 600).
                   - collect hardware and software configuration information (config.sh).

                   In addition, if it finds the following programs available
                   in the current execution path, it will:
                   - collect 10 seconds of iptrace information (iptrace.sh 10)
                   - collect 10 seconds of filemon information (filemon.sh 10)
                   - collect two 60 seconds of tprof information (tprof.sh 60)
                   - Note that iptrace can have a noticeable impact on network throughput
                     when the packet rate is high. So if a 10-second period of impact to network throughput is unacceptable, then use the -T option of perfpmr.sh to omit iptrace collection.

                   NOTE:
                   Since performance problems may mask other problems, it is not uncommon to fix one issue and then collect more data to work on another issue.

                d. Answer the questions in the text file called 'PROBLEM.INFO' in the data collection directory created above.  This background information about your problem helps us better understand what is going wrong.


 IV. HOW TO SEND THE DATA TO IBM.

      A. Combine all the collected data into a single binary 'tar' file and compress it:

           Put the completed PROBLEM.INFO in the same directory where the data was collected (i.e. /tmp/perfdata in the following example).  Change to the parent directory, and use the tar command as follows:

           Package the data in one of the following ways:
           1)
              # cd /tmp/perfdata
              # cd ..
              # pax -xpax -vw perfdata | gzip -c > salesforceticket#.pax.gz
             or
           2)  cd /tmp; perfpmr.sh -o perfdata -z salesforceticket#.pax.gz


      B. Submission of testcase to IBM:

        Blue Diamond clients should upload data only to the Blue Diamond servers. For all other clients, please use the instructions at this link to upload data to IBM:
             https://www.ibm.com/support/pages/enhanced-customer-data-repository-ecurep-send-data-ftp
         Click on the link for the upload method of your choice (ftp, https, Java)

             Data placed on the server listed below cannot be accessed by unauthorized personnel.

             All servers enforce authentication. For Web access, you need to authenticate with your IBMid. For FTPS and SFTP servers, you need to create an IBM Support File Transfer ID. The IBM Support File Transfer ID is valid until revoked.
                https://www.ibm.com/support/pages/enhanced-customer-data-repository-ecurep-send-data-ftp
                Userid:  your support id
                password:  your password
               'cd toibm/aix'
               'bin'
               'put  SF#.BRANCH#.COUNTRY#.pax.gz'
                  ex. 'TS01234567.pax.gz'
                  or  'TS01234567.perfpmr.hostname1.pax.gz'
               'quit'

            If the transfer fails with an error, it's possible that a file already exists by the same name on the ftp server. In this case, add something to the name of the file to differentiate it from the file already on the ftp site (ex. TS01234567.july18.pax.gz).

            Notify your IBM customer representative you have submitted the data.


What changed in AIX 7.3 TL4

Before TL4: a manual download was required every time

Before TL4, using perfpmr meant downloading and installing it first:

# Check the OS version, then download the matching package, e.g.:
# https://public.dhe.ibm.com/aix/tools/perftools/perfpmr/perf73/perf73.tar.Z
uncompress perf73.tar.Z
tar -xvf perf73.tar
./perfpmr.sh

Embedding this in automation scripts likewise required convoluted logic: determine the OS version → download → extract → run.
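That pre-TL4 flow can be sketched roughly as follows. This is a minimal illustration, not the exact logic anyone shipped: only `perf73` appears in the article, so the package names for other OS levels are illustrative assumptions.

```shell
# Pre-TL4 flow (sketch): map the OS level to a download package name,
# then fetch, unpack, and run. Names other than perf73 are illustrative.
pick_pkg() {
    case "$1" in
        7.3*) echo perf73 ;;
        7.2*) echo perf72 ;;   # assumed naming for older levels
        *)    echo unknown ;;
    esac
}

OSLEVEL="7.3"                  # on AIX: OSLEVEL=$(oslevel)
PKG=$(pick_pkg "$OSLEVEL")
echo "download ${PKG}.tar.Z, then:"
echo "  uncompress ${PKG}.tar.Z && tar -xvf ${PKG}.tar && ./perfpmr.sh"
```

Every step here (version detection, download, extraction) is a potential failure point that the TL4 integration removes.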

From TL4 onward: integrated at a fixed path

perfpmr is now part of the bos.perf.tools fileset and is set up automatically when the OS is installed.

# Run perfpmr
/opt/IBM/perfpmr/perfpmr.sh

Summary of changes

| Item | Before TL4 | From TL4 |
|---|---|---|
| How to obtain | Manual download from IBM FTP | Pre-installed with the OS |
| Installation location | Any directory | Fixed at /opt/IBM/perfpmr/ |
| Version management | Managed individually by the user | Delivered automatically via AIX updates (TL/SP) |
| Automation scripts | Complex download → extract → run logic | Just call the fixed path |
| Support status | Unofficial (provided AS-IS) | Officially supported at the OS level |
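From TL4 on, an automation script only needs to test for the fixed path and call it. A minimal sketch (on a non-TL4 or non-AIX system it simply reports that the tool is absent):

```shell
# TL4+ flow (sketch): call perfpmr at its fixed install path.
PERFPMR=/opt/IBM/perfpmr/perfpmr.sh

if [ -x "$PERFPMR" ]; then
    "$PERFPMR"             # default: 600-second collection
else
    echo "perfpmr not found at $PERFPMR (requires AIX 7.3 TL4 or later)" >&2
fi
```

No version mapping, no download, no extraction: the absence of the path doubles as the "OS too old" check.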

How to use it

Prerequisites

  • OS version: AIX 7.3 TL4 or later
  • Privileges: root required
  • Free space: at least 45 MB × number of logical CPUs
  • Do not run on a remotely mounted FS (iptrace may hang)

Step 1: create a data collection directory

# mkdir /tmp/perfdata
# cd /tmp/perfdata

Step 2: run perfpmr while the problem is occurring

Run it while the performance degradation is actually happening. The default sampling period is 600 seconds (10 minutes).

# /opt/IBM/perfpmr/perfpmr.sh

To change the duration:

# /opt/IBM/perfpmr/perfpmr.sh 300   # 5 minutes

To avoid the network impact of iptrace, use the -T option:

# /opt/IBM/perfpmr/perfpmr.sh -T

Step 3: compress the data and submit it to IBM

# cd /tmp
# perfpmr.sh -o perfdata -z TS<ticket number>.pax.gz
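Before uploading, the archive can be sanity-checked by listing its members back through pax. A sketch: the TS number is a placeholder, and `pax` must be present on the system running the check.

```shell
# List the members of the perfpmr archive without extracting it.
ARCHIVE=TS01234567.pax.gz            # placeholder ticket number
if [ -f "$ARCHIVE" ]; then
    gunzip -c "$ARCHIVE" | pax -v | head
else
    echo "archive $ARCHIVE not found" >&2
fi
```

Seeing PROBLEM.INFO and the expected data files in the listing confirms the archive was packaged from the right directory.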

Trying it out

I ran it with no options on an AIX 7.3 TL4-00-2546 system.

# oslevel -s
7300-04-00-2546

The run completed in about 19 minutes. During execution the following phases run in order:

| Phase | What it does | Approx. time |
|---|---|---|
| 1 | trace.sh (5-second trace + lock trace) | ~15 s |
| 2 | trace_419.sh (60-second trace of one event ID) | ~60 s |
| 3 | monitor.sh (600 s of system-wide statistics) | ~633 s |
| 4 | iptrace.sh / tcpdump.sh (10 s each) | ~36 s |
| 5 | filemon.sh (10 s) | ~12 s |
| 6 | tprof.sh (60 s × 2 runs) | ~124 s |
| 7 | config.sh (HW/SW configuration + snap) | ~252 s |
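Summing the approximate per-phase durations in the table reproduces the observed total of roughly 19 minutes:

```shell
# Sum of the approximate phase durations (seconds) from the table above.
total=0
for t in 15 60 633 36 12 124 252; do
    total=$(( total + t ))
done
echo "total: ${total}s (~$(( (total + 30) / 60 )) min)"   # prints: total: 1132s (~19 min)
```

This lines up with both the `execution_time: 1134.113 seconds` reported by perfpmr.sh itself and the `real 18m54.20s` from `time` in the log below.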

Full execution log

Log of time /opt/IBM/perfpmr/perfpmr.sh:
# time /opt/IBM/perfpmr/perfpmr.sh
PERFPMR: Memory required: 260000000 bytes (247 MB)   noncomputational_mem_available=2029080576 bytes (1935) MB

(C) COPYRIGHT International Business Machines Corp., 2000,2001,2002,2003,2004-2011

21:10:06-04/24/26 :     perfpmr.sh begin
    PERFPMR: hostname: testaix7304
    PERFPMR: systemid: IBM,06785CA21
    PERFPMR: perfpmr.sh Version Universal 2025/09/24  oslevel: 7.3
    PERFPMR: current directory: /tmp/perfdata
    PERFPMR: perfpmr tool directory: /opt/IBM/perfpmr
    PERFPMR: Parameters passed to perfpmr.sh:
    PERFPMR: Data collection started in foreground (renice -n -20)
    PERFPMR: Current timezone: CST6CDT
21:10:06-04/24/26 :     PERFPMR: executing perfpmr_perfpmr_premonitor -p 1 -t 1 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/perfpmr_premonitor.sh -p 1 -t 1
Starting perfstat_trigger at Fri Apr 24 21:10:06 CDT 2026
Starting tcpstat at Fri Apr 24 21:10:06 CDT 2026
21:10:07-04/24/26 :     PERFPMR: executing perfpmr_allocate_tracebuffers stanza
21:10:07-04/24/26 :     PERFPMR: executing perfpmr_trace -k 10e,254,116,117 -L 20000000 -T 20000000 -I -c -E7 5 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/trace.sh.sh -p  -k 10e,254,116,117 -L 20000000 -T 20000000 -I -c -E7 5
     TRACE.SH: Stopping any previously running traces
/dev/systrctl: No such device or address
TRACE.SH: trcstop -s not supported for non-circ files. Retrying without -s option
/dev/systrctl: No such device or address
     TRACE.SH: Starting trace for 5 seconds
/usr/bin/trace    -p -r PURR   -k 10e,254,116,117  -f -n -C all -d -L 20000000 -T 20000000 -ao trace.raw
21:10:07-04/24/26 :     initializing trace buffers  :  execution_time: 0.153 seconds
21:10:07-04/24/26 :     trcon completed
     TRACE.SH: Data collection started
     TRACE.SH: trace data collection is being stopped
21:10:12-04/24/26 :     trcoff completed
21:10:12-04/24/26 :          TRACE.SH: Suspending component trace
21:10:12-04/24/26 :          TRACE.SH: component trace suspended
21:10:12-04/24/26 :     trcstop  :  execution_time: 0.0210000000000008 seconds
     TRACE.SH: Trace stopped
21:10:12-04/24/26 :     trcstop completed

     TRACE.SH: Enabling locktrace
lock tracing enabled for all classes
     TRACE.SH: Changing lock trace level from <3> to <7> temporarily
     TRACE.SH: Starting trace for 5 seconds
/usr/bin/trace    -p -r PURR  -j 234,101,104,106,10C,10E,112,113,134,139,465,46D,46E,5D8,606,607,608,609,1044,4AF,419,11F,107,200,102,103,102F,4B0 -f -n -C all -d -L 20000000 -T 20000000 -ao trace.raw.lock
21:10:12-04/24/26 :     initializing lock trace buffers  :  execution_time: 1.115 seconds
21:10:13-04/24/26 :     trcon completed
     TRACE.SH: Data collection started
     TRACE.SH: trace data collection is being stopped
21:10:18-04/24/26 :     trcoff completed
21:10:18-04/24/26 :     trcstop  :  execution_time: 0.0800000000000001 seconds
     TRACE.SH: Trace stopped
21:10:18-04/24/26 :     trcstop completed
     TRACE.SH: Disabling locktrace
lock tracing disabled for all classes
     TRACE.SH: Setting locktrclevel of <7> back to previous level of <3>
     TRACE.SH: Trcnm data is in file trace.nm
     TRACE.SH: /etc/trcfmt saved in file trace.fmt
     TRACE.SH: Binary trace data files are in trace.raw.lock*
     TRACE.SH: Trcnm data is in file trace.nm
     TRACE.SH: /etc/trcfmt saved in file trace.fmt
TRACE.SH: skipping collection of gennames/gensyms/inode data until config.sh
     TRACE.SH: Resuming component trace from suspended state
21:10:22-04/24/26 :     trace.sh completed: execution_time: 15.262 seconds
21:10:22-04/24/26 :     PERFPMR: executing perfpmr_comptrace -L 20000000 -T 20000000 -d -s 5 stanza
                        PERFPMR: skipping comptrace collection
21:10:22-04/24/26 :     PERFPMR: executing perfpmr_trace_419 -L 10000000 -T 5000000 60 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/trace_419.sh -L 10000000 -T 5000000 60
21:10:22-04/24/26 :     trace_419.sh
/usr/bin/trace -C all -j 419    -L 10000000 -T 5000000 -ano trc_419.raw
21:11:22-04/24/26 :     trace_419 has been stopped
21:11:22-04/24/26 :     trace_419.sh completed: execution_time: 60.194 seconds
21:11:22-04/24/26 :     PERFPMR: executing perfpmr_monitor -I 0 -N 0 -S 0 600 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/monitor.sh   -I 0 -N 0 -S 0 600
21:11:22-04/24/26 :     monitor.sh -I

     MONITOR: Capturing initial lsps, svmon, and vmstat data
21:11:23-04/24/26 :          MONITOR: getting vnic stats into vnicstats.before
21:11:23-04/24/26 :          MONITOR: getting pile stats into pile.before
21:11:27-04/24/26 :     svmon -G   :  execution_time: 0.004 seconds
21:11:27-04/24/26 :     svmon -P -O mapping=on,maxbufsize=32MB   :  execution_time: 0.024 seconds
21:11:27-04/24/26 :     svmon -S -O pidlist=on,maxbufsize=32MB svmon internal error: 0x0001 : svm_segment failed due to select_vsid :occurence = 1 errno = 9
  :  execution_time: 0.018 seconds
21:11:27-04/24/26 :     svmon.sh completed  :  execution_time: 0.062 seconds
21:11:27-04/24/26 :     fcstat before on each fc adapter
21:11:28-04/24/26 :     fcstats completed
21:11:28-04/24/26 :     lsmpio before
21:11:28-04/24/26 :     IPSEC before
     MONITOR: Starting perf_xtra programs: initsleep=0 count=0 sleep=0
21:11:28-04/24/26 :          MONITOR: Starting system monitors for 600 seconds.
              Starting <mpstat> monitoring
21:11:28-04/24/26 :     perfxtra.sh completed: execution_time: 0.002 seconds
              Starting <ps> monitoring
              Starting <nfs> monitoring
              Starting <netstat> monitoring
              Starting <24x7count> monitoring
              Starting <kdb> monitoring
              Starting <emstat> monitoring
              Starting <tstat> monitoring
              Starting <sar> monitoring
              Starting <iostat> monitoring
              Starting <aiostat> monitoring
              Starting <ventstat> monitoring
              Starting <fcstat2> monitoring
              Starting <iomon> monitoring
              Starting <foldstat> monitoring
              Starting <powerstat> monitoring
              Starting <lparstat> monitoring
              Starting <vmstat> monitoring
iostat: No tapes found in the system.
iostat: 0551-157 Asynchronous I/O not configured on the system.
lparstat: 0551-016 Command not supported on this Platform.
              Starting <pprof> monitoring
21:11:35-04/24/26 :          MONITOR: Waiting for measurement period to end....
21:21:50-04/24/26 :          MONITOR: Capturing final lsps, svmon, and vmstat data
21:21:50-04/24/26 :          MONITOR: getting vnic stats into vnicstats.after
21:21:50-04/24/26 :          MONITOR: getting pile stats into pile.after
21:21:53-04/24/26 :     svmon -G   :  execution_time: 0.004 seconds
21:21:54-04/24/26 :     svmon -P -O mapping=on,maxbufsize=32MB   :  execution_time: 0.023 seconds
21:21:54-04/24/26 :     svmon -S -O pidlist=on,maxbufsize=32MB svmon internal error: 0x0001 : svm_segment failed due to select_vsid :occurence = 1 errno = 9
  :  execution_time: 0.018 seconds
21:21:54-04/24/26 :     svmon.sh completed  :  execution_time: 0.06 seconds
21:21:54-04/24/26 :     fcstat after on each fc adapter
21:21:55-04/24/26 :     fcstats completed
21:21:55-04/24/26 :     lsmpio after
21:21:55-04/24/26 :     IPSEC after
     MONITOR: Generating reports....
     MONITOR: Network reports are in netstat.int and nfsstat.int
     MONITOR: Monitor reports are in monitor.int and monitor.sum.
21:21:55-04/24/26 :     monitor.sh completed: execution_time: 632.923 seconds
21:21:55-04/24/26 :     PERFPMR: executing perfpmr_iptrace -L 0 -m500000 10 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/iptrace.sh -L 0 -m500000 10
Checking current packet rate for 5 seconds
Average packet rate of <12> is less than maxrate threshold of <500000>
iptrace collection will proceed

     IPTRACE: Starting iptrace for 10 seconds....
0513-059 The iptrace Subsystem has been started. Subsystem PID is 10223938.
0513-044 The iptrace Subsystem was requested to stop.
     IPTRACE: iptrace collected.
     IPTRACE: Binary iptrace data is in file iptrace.raw   :  execution_time: 10.022 seconds
21:22:12-04/24/26 :     PERFPMR: executing perfpmr_tcpdump -m500000 10 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/tcpdump.sh -m500000 10

     TCPDUMP: Starting tcpdump for 10 seconds....
Checking current packet rate for 5 seconds

     TCPDUMP: tcpdump collected.
     TCPDUMP: Binary tcpdump data is in file tcpdump.raw.en0
TCPDUMP COLLECTION :  execution_time: 10.015 seconds
21:22:28-04/24/26 :     PERFPMR: executing perfpmr_filemon -T 10000000 -O detailed,all -D 10 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/filemon.sh -T 10000000 -O detailed,all -D 10

     FILEMON: Starting filesystem monitor for 10 seconds....
21:22:28-04/24/26 :     Starting trace -C all -j 139,134,107,10B,15B,163,19C,1C9,221,222,228,232,2A1,2A2,45B,5D8,AB2 -L 10000000 -T 10000000 -andfo filemon_trace.raw  :  execution_time: 0.14 seconds
21:22:29-04/24/26 :     trcon initiated  :  execution_time: 0.00299999999999989 seconds
     FILEMON: tracing started
21:22:39-04/24/26 :     trcstop initiated  :  execution_time: 0.00900000000000034 seconds
21:22:39-04/24/26 :     FILEMON: trcstop completed
     FILEMON: Generating report....
21:22:39-04/24/26 :     filemon report generating .. 21:22:39-04/24/26 :        running gensyms  :  execution_time: 1.123 seconds
21:22:40-04/24/26 :     FILEMON: filemon.sh post processing completed
21:22:40-04/24/26 :     FILEMON: filemon.sh removing trace files
21:22:40-04/24/26 :     FILEMON: filemon.sh completed  : execution_time: 12.307 seconds
21:22:40-04/24/26 :     PERFPMR: executing perfpmr_tprof -f10 60 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/tprof.sh -f10 60

     TPROF: Starting tprof for 60 seconds....
Creating /tmp/perfdata/tprof_data/tprofdata_purrbased
21:22:41-04/24/26 :     Running purr-based tprof -v      -R -T 20000000 -l -r tprof -F  -A all -x /usr/bin/sleep 60   :  execution_time: 61.417 seconds
/tmp/perfdata/tprof_data
Creating /tmp/perfdata/tprof_data/tprofdata_tickbased

     TPROF: Starting tprof with no PURR for 60 seconds....
21:23:42-04/24/26 :     Running tick-based tprof -v     -T 20000000 -l -r tprof_nopurr -F  -A all -x /usr/bin/sleep 60   :  execution_time: 61.404 seconds
/tmp/perfdata/tprof_data
     TPROF: Sample data collected....
21:24:44-04/24/26 :     tprof    -R   -l -r tprof -skej  :  execution_time: 0.63600000000001 seconds
/tmp/perfdata/tprof_data
21:24:44-04/24/26 :     tprof      -l -r tprof_nopurr -zskej  :  execution_time: 0.640999999999991 seconds
/tmp/perfdata/tprof_data
     TPROF: tprof trc and syms files are in tprof_data
21:24:45-04/24/26 :     tprof collection completed
     TPROF: Tprof report is in tprof.sum and tprof_nopurr.prof
21:24:45-04/24/26 :     tprof.sh completed: execution_time: 124.434 seconds
21:24:45-04/24/26 :     PERFPMR: executing perfpmr_hpmcount -H 5 stanza
                        PERFPMR: skipping hpmcount collection
21:24:45-04/24/26 :     PERFPMR: executing perfpmr_pmucount -g all -o pmucount_kuyng.txt -e pmucount.stderr -P 59 1 stanza
                        PERFPMR: skipping pmucount collection
21:24:45-04/24/26 :     PERFPMR: executing perfpmr_probevue 5 stanza
                        PERFPMR: skipping probevue collection
21:24:45-04/24/26 :     PERFPMR: executing perfpmr_config -u 0 stanza
                        PERFPMR: Executing /opt/IBM/perfpmr/config.sh     -u 0
21:24:45-04/24/26 :     config.sh begin

     CONFIG.SH: Generating SW/HW configuration
21:24:45-04/24/26 :     copying ODM files  :  execution_time: 0.083 seconds
21:24:46-04/24/26 :     do_kdb_vmm  :  execution_time: 2.647 seconds
21:24:48-04/24/26 :     ipcs -Smqsa  :  execution_time: 0.00300000000000011 seconds
21:24:48-04/24/26 :     lsmcode -A  :  execution_time: 0.365 seconds
21:24:49-04/24/26 :     lspv -u   :  execution_time: 0.0209999999999999 seconds
21:24:49-04/24/26 :     lspv   :  execution_time: 0.0220000000000002 seconds
21:24:49-04/24/26 :     lsvg   :  execution_time: 0.0339999999999998 seconds
21:24:49-04/24/26 :     lsvg -l  :  execution_time: 0.0180000000000002 seconds
21:24:49-04/24/26 :     lslv lv  :  execution_time: 0.451 seconds
21:24:49-04/24/26 :     lsattr -E -l dev  :  execution_time: 0.179 seconds
21:24:49-04/24/26 :     getting disk queue_depth info  :  execution_time: 0.675999999999999 seconds
21:24:50-04/24/26 :     disk queue_depth info completed
21:24:50-04/24/26 :     df  :  execution_time: 0.00600000000000023 seconds
21:24:51-04/24/26 :     /usr/bin/netstat -in -rn -D -an -c -Cn  :  execution_time: 0.0190000000000001 seconds
RPC: 1832-011 Program unavailable: nfso: 1831-730 cannot contact local statd.
RPC: 1832-011 Program unavailable: nfso: 1831-730 cannot contact local statd.
RPC: 1832-011 Program unavailable: nfso: 1831-730 cannot contact local statd.
RPC: 1832-011 Program unavailable: nfso: 1831-730 cannot contact local statd.
RPC: 1832-011 Program unavailable: nfso: 1831-730 cannot contact local statd.
RPC: 1832-011 Program unavailable: nfso: 1831-730 cannot contact local statd.
21:24:56-04/24/26 :     getmempool.sh  :  execution_time: 1.822 seconds
21:24:58-04/24/26 :     getvmpool.sh  :  execution_time: 0.678000000000001 seconds
21:24:58-04/24/26 :     getj2mem.sh  :  execution_time: 0.289999999999999 seconds
21:24:59-04/24/26 :     genkld  :  execution_time: 0.00400000000000134 seconds
21:24:59-04/24/26 :     genkex  :  execution_time: 0.00299999999999834 seconds
21:24:59-04/24/26 :     getevars  :  execution_time: 0.00499999999999901 seconds
21:24:59-04/24/26 :     ls /proc/pid/cwd  :  execution_time: 0.00600000000000023 seconds
21:24:59-04/24/26 :     errpt  :  execution_time: 0.0229999999999997 seconds
21:24:59-04/24/26 :     emgr -lv3 > emgr.out There is no efix data on this system.
  :  execution_time: 0.277000000000001 seconds
21:24:59-04/24/26 :     lslpp -ch  :  execution_time: 0.324999999999999 seconds
21:24:59-04/24/26 :     instfix -ic  :  execution_time: 1.215 seconds
21:25:01-04/24/26 :     lscfg -vp  :  execution_time: 0.327 seconds
lparstat: 0551-016 Command not supported on this Platform.
21:25:01-04/24/26 :     lvol kdb cmd  :  execution_time: 0.459 seconds
21:25:01-04/24/26 :     do_kdb_devsw  :  execution_time: 2.484 seconds
21:25:04-04/24/26 :     echo vnode|kdb  :  execution_time: 3.642 seconds
21:25:08-04/24/26 :     echo vfs|kdb  :  execution_time: 0.280999999999999 seconds
21:25:08-04/24/26 :     collect dscrctl and NX settings  :  execution_time: 0.373999999999999 seconds
21:25:08-04/24/26 :     echo dmpdt_chrp -i  :  execution_time: 0.297999999999998 seconds
21:25:08-04/24/26 :     sysdumpdev -l, -e  :  execution_time: 0.00600000000000023 seconds
21:25:09-04/24/26 :     gennames  :  execution_time: 0.715 seconds
cp: /etc/resolv.conf: No such file or directory
21:25:09-04/24/26 :     VIO client cfg data  :  execution_time: 0.762 seconds
/var/perf/daily//*.nmon not found
./*.nmon not found
21:25:10-04/24/26 :     getting alog information
21:25:10-04/24/26 :     powermt data being collected
21:25:10-04/24/26 :     db2 tids being collected  :  execution_time: 0.0159999999999982 seconds
21:25:11-04/24/26 :     proctree -Tt  :  execution_time: 0.00900000000000034 seconds
21:25:11-04/24/26 :     memdetails.sh  -u 0   :  execution_time: 17.495 seconds
21:25:28-04/24/26 :     memdetails.sh  completed
21:25:28-04/24/26 :     application symbol tables being generated  :  execution_time: 0.0559999999999974 seconds
21:25:28-04/24/26 :     collect logfiles  :  execution_time: 0.0399999999999991 seconds
21:25:28-04/24/26 :     ipsec  :  execution_time: 0.0150000000000006 seconds
21:25:28-04/24/26 :     smtctl  :  execution_time: 0.00300000000000011 seconds
21:25:28-04/24/26 :     lruobj  :  execution_time: 1.087 seconds
/tmp/perfdata
21:25:29-04/24/26 :     AHAFS:  AHAFS is being used .. collecting ahafs config
  :  execution_time: 0.00800000000000267 seconds
21:25:30-04/24/26 :     th  |kdb  :  execution_time: 0.702999999999996 seconds
21:25:31-04/24/26 :     running lsconf  :  execution_time: 1.571 seconds
21:25:32-04/24/26 :     do_kdb_scb  :  execution_time: 5.268 seconds
21:25:38-04/24/26 :     vnicstat.sh  :  execution_time: 0.0619999999999976 seconds
21:25:38-04/24/26 :     radinfo  :  execution_time: 0.0110000000000028 seconds
21:25:38-04/24/26 :     nvme_collect  :  execution_time: 2.20399999999999 seconds
21:25:40-04/24/26 :     pmctl/pmcycles  :  execution_time: 0.00600000000000023 seconds
21:25:40-04/24/26 :     vxfs  :  execution_time: 0.00800000000000267 seconds
21:25:40-04/24/26 :     lcpu2bid  :  snap execution_time: 0.00300000000000011 seconds
21:25:40-04/24/26 :     ipf  :  execution_time: 0 seconds
21:25:40-04/24/26 :     efsenable  :  execution_time: 0.155999999999999 seconds
21:25:40-04/24/26 :     snap -gGfFLktRSn -d /tmp/perfdata/snap_data
(snap output omitted here as it is long)
  :  snap execution_time: 193.193 seconds
21:28:53-04/24/26 :     locale
21:28:53-04/24/26 :     ipclimits
21:28:53-04/24/26 :     lsuser -a capabilities ALL
21:28:53-04/24/26 :     pmlist
Output in pmlist.allgroups.txt
21:28:54-04/24/26 :     pmlist completed  :  execution_time: 0.04 seconds
21:28:54-04/24/26 :     collecting inode data from kdb  :  execution_time: 3.26000000000002 seconds
21:28:57-04/24/26 :     running gensyms  :  execution_time: 0.671999999999997 seconds
21:28:58-04/24/26 :     gensyms completed
21:28:58-04/24/26 :     proc/version
     CONFIG.SH: Report is in file config.sum
21:28:58-04/24/26 :     config.sh completed  :  execution_time: 252.431 seconds

    PERFPMR: Data collection complete.

Stopping perfpmr_premonitor.sh
    root 13435328        1   0 21:10:06  pts/0  0:00 /bin/ksh93 /opt/IBM/perfpmr/perfpmr_premonitor.sh -p 1 -t 1
In perfpmr_premonitor cleanup

PERFPMR: Data files can be archived and gzipped using:
   perfpmr.sh -z filename [-o "dirs"]
      filename  is the name of the archive file.
         An example of a typical archive filename:
            /tmp/salesforceticket#.perfpmr.pax.gz
         Or /tmp/salesforceticket#.host.perfpmr.pax.gz
      "dirs"      dirs is a list of directories or perfdata directory name enclosed in quotes.
       When  -o is specified, you must be in the parent directory of the data director[y\ies] specified
          ex. perfpmr.sh -z /tmp/TS0000000.perfpmr.pax.gz -o perfdata
       If -o not specified, all files in current directory are archived.
       For ftp upload instructions, visit:
       https://www.ibm.com/support/pages/enhanced-customer-data-repository-ecurep-send-data-ftp
       NOTE: Blue Diamond clients should upload to the BD servers instead.
21:28:58-04/24/26 :     perfpmr.sh completed  :  execution_time: 1134.113 seconds

real    18m54.20s
user    0m23.89s
sys     0m17.22s

List of generated files

After the run completes, the collection directory contains a large number of files.

Output of ls -l:
# ls -l
total 265592
-rw-r--r--    1 root     system            0 Apr 24 21:11 24x7count.csv
-rw-r--r--    1 root     system           94 Apr 24 21:11 24x7count.err
-rw-r--r--    1 root     system          202 Apr 24 21:11 24x7count.int
-rw-r--r--    1 root     system          113 Apr 24 21:11 24x7count_sh.outerr
-rw-r--r--    1 root     system         3636 Apr 24 21:21 acf.out
-rw-r--r--    1 root     system          943 Apr 24 21:25 ahafs.conf
-rw-r--r--    1 root     system           53 Apr 24 21:11 aiostat.int
-rw-r--r--    1 root     system        57051 Apr 24 21:25 alog.boot
-rw-r--r--    1 root     system         9676 Apr 24 21:25 alog.console
drwxr-xr-x    2 root     system          256 Apr 24 21:25 aso_logs
drwxr-xr-x    2 root     system        12288 Apr 24 21:10 comptrace_dir
-rw-r--r--    1 root     system       360435 Apr 24 21:28 config.sum
-rw-------    1 root     system        24075 Apr 24 21:25 conslog.alog
-rw-r--r--    1 root     system         1214 Apr 24 21:25 crontab_l
drwxr-xr-x    2 root     system          256 Apr 24 21:25 crontabs
-rw-r--r--    1 root     system         6092 Apr 24 21:10 ctctrl.err
-rw-r--r--    1 root     system        22148 Apr 24 21:25 devsw.out
-rw-r--r--    1 root     system        91189 Apr 24 21:25 devtree.out
-rw-r--r--    1 root     system        13991 Apr 24 21:24 disk_kdb.out
-rw-r--r--    1 root     system           72 Apr 24 21:24 disk_qd_list
-rw-r--r--    1 root     system          372 Apr 24 21:24 disk_qdepth.out
-rw-r--r--    1 root     system         2492 Apr 24 21:25 dlpi.conf
-rw-r--r--    1 root     system          263 Apr 24 21:25 efs.out
-rw-r--r--    1 root     system          208 Apr 24 21:21 eidmon.after
-rw-r--r--    1 root     system          210 Apr 24 21:11 eidmon.before
-rw-r--r--    1 root     system           91 Apr 24 21:25 emc_powermt.txt
-rw-r--r--    1 root     system           31 Apr 24 21:24 emgr.out
-rw-r--r--    1 root     system         8192 Apr 24 21:24 errlog
-rw-r--r--    1 root     system        11184 Apr 24 21:24 errpt_a
-rw-r--r--    1 root     system       503089 Apr 24 21:24 errtmplt
-rw-r--r--    1 root     system         2074 Apr 24 21:25 etc_environment
-rw-r--r--    1 root     system         2320 Apr 24 21:25 etc_filesystems
-rw-r--r--    1 root     system         3359 Apr 24 21:25 etc_inittab
-rw-r--r--    1 root     system         4541 Apr 24 21:25 etc_netsvc.conf
-r-xr-xr-x    1 root     system         1814 Apr 24 21:25 etc_profile
-r-xr-xr--    1 root     system         3683 Apr 24 21:25 etc_rc
-rw-------    1 root     system        14348 Apr 24 21:25 etc_sec_ldap.cfg
-rw-r-----    1 root     system         1384 Apr 24 21:25 etc_security_limits
-rw-r--r--    1 root     system       344925 Apr 24 21:25 etc_services
-rw-r--r--    1 root     system         1569 Apr 24 21:21 fc.after
-rw-r--r--    1 root     system         1570 Apr 24 21:11 fc.before
-rw-r--r--    1 root     system         7567 Apr 24 21:21 fcstat.after
-rw-r--r--    1 root     system         7567 Apr 24 21:11 fcstat.before
-rw-r--r--    1 root     system        26322 Apr 24 21:21 fcstat2.csv
-rw-r--r--    1 root     system            0 Apr 24 21:11 fcstat2.err
-rw-r--r--    1 root     system        39628 Apr 24 21:21 fcstat2.out
-rw-r--r--    1 root     system          882 Apr 24 21:21 fcstat2.totals.after
-rw-r--r--    1 root     system          882 Apr 24 21:11 fcstat2.totals.before
-rw-r--r--    1 root     system        23046 Apr 24 21:22 filemon.sum
-rw-r--r--    1 root     system       103673 Apr 24 21:21 foldstat.out
-rw-r--r--    1 root     system         3033 Apr 24 21:24 genkex.out
-rw-r--r--    1 root     system        15169 Apr 24 21:24 genkld.out
-rw-r--r--    1 root     system     49876291 Apr 24 21:25 gennames.out
-rw-r--r--    1 root     system       142294 Apr 24 21:24 getevars.out
-rw-r--r--    1 root     system       929298 Apr 24 21:25 instfix.out
-rw-r--r--    1 root     system       175475 Apr 24 21:21 iomon.out
-rw-r--r--    1 root     system       181356 Apr 24 21:21 iostat.Dl
-rw-r--r--    1 root     system        76832 Apr 24 21:21 iostat.int
-rw-r--r--    1 root     system            0 Apr 24 21:11 iostat.p
-rw-r--r--    1 root     system          781 Apr 24 21:11 iostat.path
-rw-r--r--    1 root     system          205 Apr 24 21:21 iostat.sum
-rw-r--r--    1 root     system         3134 Apr 24 21:21 iostat.total
-rw-r--r--    1 root     system          672 Apr 24 21:21 ipsec.after
-rw-r--r--    1 root     system          673 Apr 24 21:11 ipsec.before
-rw-r--r--    1 root     system         2592 Apr 24 21:25 ipsec.out
-rw-r--r--    1 root     system        24373 Apr 24 21:22 iptrace.raw
-rw-r--r--    1 root     system          830 Apr 24 21:10 kdb.err
-rw-r--r--    1 root     system        54779 Apr 24 21:24 kdb.stats.out
-rw-r--r--    1 root     system           48 Apr 24 21:25 lcpu2bid.out
-rw-r--r--    1 root     system       286780 Apr 24 21:25 localhost_251121.topas
-rw-r--r--    1 root     system         5319 Apr 24 21:21 lparstat.E
-rw-r--r--    1 root     system         5979 Apr 24 21:21 lparstat.d
-rw-r--r--    1 root     system       516240 Apr 24 21:21 lparstat.int
-rw-r--r--    1 root     system         5632 Apr 24 21:21 lparstat.l
-rw-r--r--    1 root     system         1346 Apr 24 21:10 lparstat.nsp
-rw-r--r--    1 root     system         2527 Apr 24 21:11 lparstat.sum
drwxr-xr-x    2 root     system          256 Apr 24 21:25 lrulist_dir
-rw-r--r--    1 root     system         3215 Apr 24 21:25 lsconf.out
-rw-r--r--    1 root     system        88460 Apr 24 21:24 lslpp.Lc
-rw-r--r--    1 root     system        10396 Apr 24 21:21 lsmpio.after
-rw-r--r--    1 root     system        10398 Apr 24 21:11 lsmpio.before
-rw-r--r--    1 root     system          424 Apr 24 21:24 lsmpio.disks.out
-rw-r--r--    1 root     system          350 Apr 24 21:21 lsps.after
-rw-r--r--    1 root     system          350 Apr 24 21:11 lsps.before
-rw-r--r--    1 root     system          187 Apr 24 21:24 lspv.uuid.out
-rw-r--r--    1 root     system         1563 Apr 24 21:25 lsrset.out
-rw-r--r--    1 root     system           66 Apr 24 21:25 lssrad.out
-rw-r--r--    1 root     system          109 Apr 24 21:28 lsuser.out
-rw-r--r--    1 root     system            0 Apr 24 21:25 lswpar.out
-rw-r--r--    1 root     system       204800 Apr 24 21:25 lvmt.log
drwxr-xr-x    2 root     system         4096 Apr 24 21:25 mem_details_dir
-rw-r--r--    1 root     system        15666 Apr 24 21:24 mempools.out
-rw-r--r--    1 root     system         1791 Apr 24 21:10 memtrace.err
-rw-r--r--    1 root     system           57 Apr 24 21:24 microcode_levels.out
-rw-r--r--    1 root     system          109 Apr 24 21:25 mmfs.cfg
-rw-r--r--    1 root     system       245634 Apr 24 21:21 monitor.int
-rw-r--r--    1 root     system        51286 Apr 24 21:21 monitor.sum
-rw-r--r--    1 root     system       162819 Apr 24 21:21 mpstat.int
-rw-r--r--    1 root     system        24302 Apr 24 21:21 mpstat_h.int
-rw-r--r--    1 root     system       107100 Apr 24 21:21 mrq.out
-rw-r--r--    1 root     system          883 Apr 24 21:25 netbackup.cfg
-rw-r--r--    1 root     system          831 Apr 24 21:22 netstat.i.tmp
-rw-r--r--    1 root     system       171461 Apr 24 21:21 netstat.int
-rw-r--r--    1 root     system          253 Apr 24 21:21 netstat_sh.outerr
-rw-r--r--    1 root     system        10046 Apr 24 21:21 nfsstat.int
-rw-r--r--    1 root     system          220 Apr 24 21:21 nfsstat_sh.outerr
-rw-r--r--    1 root     system           86 Apr 24 21:25 nvme.log
-rw-r--r--    1 root     system         6633 Apr 24 21:25 nvme_data.out
drwxr-xr-x    2 root     system         4096 Apr 24 21:24 objrepos
-rw-r--r--    1 root     system         7140 Apr 24 21:21 pdtstat.out
-rw-------    1 root     system         5076 Apr 24 21:28 perfpmr.cfg
-rw-r--r--    1 root     system        24239 Apr 24 21:28 perfpmr.int
-rw-r--r--    1 root     system       526123 Apr 24 21:28 perfstat_trigger.premonitor.out
-rw-r--r--    1 root     system         3168 Apr 24 21:25 persistent.db
-rw-r--r--    1 root     system         5802 Apr 24 21:21 pile.after
-rw-r--r--    1 root     system         5802 Apr 24 21:11 pile.before
-rw-r--r--    1 root     system         2494 Apr 24 21:24 pile.out
-rw-r--r--    1 root     system        11470 Apr 24 21:21 pmapstat.out
-rw-r--r--    1 root     system       675740 Apr 24 21:28 pmlist.allgroups.txt
-rw-r--r--    1 root     system         3477 Apr 24 21:21 powerstat.out
-rw-r--r--    1 root     system        35220 Apr 24 21:21 ppda.out
-rw-rw----    1 root     system        87232 Apr 24 21:21 pprof.trace.raw
-rw-rw----    1 root     system      4097752 Apr 24 21:16 pprof.trace.raw-0
-rw-rw----    1 root     system      1014072 Apr 24 21:21 pprof.trace.raw-1
-rw-rw----    1 root     system       707168 Apr 24 21:21 pprof.trace.raw-2
-rw-rw----    1 root     system       590416 Apr 24 21:21 pprof.trace.raw-3
-rw-rw----    1 root     system       416872 Apr 24 21:21 pprof.trace.raw-4
-rw-rw----    1 root     system       412328 Apr 24 21:21 pprof.trace.raw-5
-rw-rw----    1 root     system       397352 Apr 24 21:21 pprof.trace.raw-6
-rw-rw----    1 root     system       395240 Apr 24 21:21 pprof.trace.raw-7
-rw-r--r--    1 root     system         4183 Apr 24 21:24 proc.cwd.out
-rw-r--r--    1 root     system            0 Apr 24 21:11 proc_sys_adapter.err
-rw-r--r--    1 root     system        32510 Apr 24 21:21 proc_sys_adapter.int.after
-rw-r--r--    1 root     system        32512 Apr 24 21:11 proc_sys_adapter.int.before
-rw-r--r--    1 root     system        15461 Apr 24 21:25 proctree.out
-rw-r--r--    1 root     system        23643 Apr 24 21:21 ps.int
-rw-r--r--    1 root     system        10865 Apr 24 21:21 ps.sum
-rw-r--r--    1 root     system        85088 Apr 24 21:21 psAfmo.after
-rw-r--r--    1 root     system        89788 Apr 24 21:11 psAfmo.before
drwxr-xr-x    2 root     system         4096 Apr 24 21:21 ps_output
-rw-r--r--    1 root     system        18965 Apr 24 21:21 psa.elfk
-rw-r--r--    1 root     system        21576 Apr 24 21:11 psb.elfk
-rw-r--r--    1 root     system         8338 Apr 24 21:10 psef.begin
-rw-r--r--    1 root     system         8644 Apr 24 21:28 psef.end
-rw-r--r--    1 root     system        52954 Apr 24 21:21 psemo.after
-rw-r--r--    1 root     system        56254 Apr 24 21:11 psemo.before
-rw-r--r--    1 root     system         1063 Apr 24 21:25 radinfo.out
-r-x------    1 root     system         7438 Apr 24 21:28 reorg.sh
-rw-r--r--    1 root     system        71995 Apr 24 21:21 rqi.out
-rw-r--r--    1 root     system       285846 Apr 24 21:21 sar.bin
-rw-r--r--    1 root     system        41119 Apr 24 21:21 sar.int
-rw-r--r--    1 root     system        39008 Apr 24 21:21 sar.sum
-rw-r--r--    1 root     system      1211359 Apr 24 21:25 scb.6.0
-rw-r--r--    1 root     system          705 Apr 24 21:25 smit.log
-rw-r--r--    1 root     system          520 Apr 24 21:25 smtctl.out
drwxr-xr-x   21 root     system         4096 Apr 24 21:25 snap_data
-rw-r--r--    1 root     system         3287 Apr 24 21:25 strtune.out
-rw-r--r--    1 root     system       102847 Apr 24 21:21 svmon.after
-rw-r--r--    1 root     system           93 Apr 24 21:21 svmon.after.S
-rw-r--r--    1 root     system       102847 Apr 24 21:11 svmon.before
-rw-r--r--    1 root     system           93 Apr 24 21:11 svmon.before.S
drwxr-xr-x    2 root     system          256 Apr 24 21:25 syslogs
-rw-r--r--    1 root     system          159 Apr 24 21:22 tcpdump.en0.err
-rw-r--r--    1 root     system        14308 Apr 24 21:22 tcpdump.raw.en0
-rw-r--r--    1 root     system       288927 Apr 24 21:28 tcpstat.out
-rw-r--r--    1 root     system       167892 Apr 24 21:25 testaix7304_260424.topas
-rw-r--r--    1 root     system        30832 Apr 24 21:25 th.kdb.out
-rw-r--r--    1 root     system         5407 Apr 24 21:24 tprof.sum
drwxr-xr-x    4 root     system          256 Apr 24 21:23 tprof_data
-rw-r--r--    1 root     system         7438 Apr 24 21:24 tprof_nopurr.prof
-rw-r--r--    1 root     system        46389 Apr 24 21:10 trace.begin.psef
-rw-r--r--    1 root     system          399 Apr 24 21:28 trace.crash.inode
-rw-r--r--    1 root     system        46769 Apr 24 21:10 trace.end.psef
-rw-r--r--    1 root     system      1920235 Apr 24 21:10 trace.fmt
-rw-r--r--    1 root     system      3977446 Apr 24 21:28 trace.j2.inode
-rw-r--r--    1 root     system           66 Apr 24 21:10 trace.lssrad.out
-rw-r--r--    1 root     system         7605 Apr 24 21:10 trace.maj_min2lv
-rw-r--r--    1 root     system          965 Apr 24 21:10 trace.mount
-rw-r--r--    1 root     system       373136 Apr 24 21:10 trace.nm
-rw-rw----    1 root     system      1757232 Apr 24 21:10 trace.raw
-rw-rw----    1 root     system      2371728 Apr 24 21:10 trace.raw-0
-rw-rw----    1 root     system       534984 Apr 24 21:10 trace.raw-1
-rw-rw----    1 root     system        75040 Apr 24 21:10 trace.raw-2
-rw-rw----    1 root     system        74384 Apr 24 21:10 trace.raw-3
-rw-rw----    1 root     system        47824 Apr 24 21:10 trace.raw-4
-rw-rw----    1 root     system        57032 Apr 24 21:10 trace.raw-5
-rw-rw----    1 root     system        47464 Apr 24 21:10 trace.raw-6
-rw-rw----    1 root     system        77480 Apr 24 21:10 trace.raw-7
-rw-rw----    1 root     system      1757520 Apr 24 21:10 trace.raw.lock
-rw-rw----    1 root     system     17965024 Apr 24 21:10 trace.raw.lock-0
-rw-rw----    1 root     system       674552 Apr 24 21:10 trace.raw.lock-1
-rw-rw----    1 root     system       212376 Apr 24 21:10 trace.raw.lock-2
-rw-rw----    1 root     system       200328 Apr 24 21:10 trace.raw.lock-3
-rw-rw----    1 root     system       140920 Apr 24 21:10 trace.raw.lock-4
-rw-rw----    1 root     system       132248 Apr 24 21:10 trace.raw.lock-5
-rw-rw----    1 root     system       114296 Apr 24 21:10 trace.raw.lock-6
-rw-rw----    1 root     system       194888 Apr 24 21:10 trace.raw.lock-7
-rw-r--r--    1 root     system     26251601 Apr 24 21:28 trace.syms
-rw-r--r--    1 root     system        19260 Apr 24 21:10 trace.tunables.out
-rw-r--r--    1 root     system        91984 Apr 24 21:10 trace_lparstatH.out
-rw-rw----    1 root     system      1756952 Apr 24 21:11 trc_419.raw
-rw-rw----    1 root     system       984512 Apr 24 21:11 trc_419.raw-0
-rw-rw----    1 root     system       137200 Apr 24 21:11 trc_419.raw-1
-rw-rw----    1 root     system        74984 Apr 24 21:11 trc_419.raw-2
-rw-rw----    1 root     system        95136 Apr 24 21:11 trc_419.raw-3
-rw-rw----    1 root     system        66624 Apr 24 21:11 trc_419.raw-4
-rw-rw----    1 root     system        68472 Apr 24 21:11 trc_419.raw-5
-rw-rw----    1 root     system        65216 Apr 24 21:11 trc_419.raw-6
-rw-rw----    1 root     system       102176 Apr 24 21:11 trc_419.raw-7
-rw-r--r--    1 root     system        49858 Apr 24 21:21 tstat.out
-rw-r--r--    1 root     system       129628 Apr 24 21:24 tunables.sum
-rw-r--r--    1 root     system        28685 Apr 24 21:24 tunables_lastboot
-rw-r--r--    1 root     system         1365 Apr 24 21:24 tunables_lastboot.log
-rw-------    1 root     system          419 Apr 24 21:24 tunables_nextboot
-rw-------    1 root     system          419 Apr 24 21:24 tunables_nextliveupdate
-rw-------    1 root     system          791 Apr 24 21:24 tunables_usermodified
-rw-r--r--    1 root     system        26452 Apr 24 21:24 tunablesx.sum
-rw-r--r--    1 root     system       109272 Apr 24 21:25 unix.what
-rw-r--r--    1 root     system        13668 Apr 24 21:21 ventstat.out
-rw-r--r--    1 root     system         6001 Apr 24 21:25 vfc_client.kdb.out
-rw-r--r--    1 root     system         1423 Apr 24 21:25 vfs.kdb
-rw-r--r--    1 root     system        29712 Apr 24 21:21 vmmstats_kdb.after
-rw-r--r--    1 root     system        29712 Apr 24 21:11 vmmstats_kdb.before
-rw-r--r--    1 root     system         2521 Apr 24 21:24 vmpools.out
-rw-r--r--    1 root     system          560 Apr 24 21:24 vmpools.save
-rw-r--r--    1 root     system       100743 Apr 24 21:21 vmstat.int
-rw-r--r--    1 root     system       465224 Apr 24 21:21 vmstat.psize.int
-rw-r--r--    1 root     system         1163 Apr 24 21:21 vmstat.sum
-rw-r--r--    1 root     system         2112 Apr 24 21:21 vmstat_s.out
-rw-r--r--    1 root     system         2475 Apr 24 21:21 vmstat_s.p.after
-rw-r--r--    1 root     system         2475 Apr 24 21:11 vmstat_s.p.before
-rw-r--r--    1 root     system         1083 Apr 24 21:21 vmstat_v.after
-rw-r--r--    1 root     system         1083 Apr 24 21:11 vmstat_v.before
-rw-r--r--    1 root     system         1802 Apr 24 21:21 vmstati.after
-rw-r--r--    1 root     system         1802 Apr 24 21:11 vmstati.before
-rw-r--r--    1 root     system          135 Apr 24 21:25 vnic.out
-rw-r--r--    1 root     system          135 Apr 24 21:21 vnicstats.after
-rw-r--r--    1 root     system          135 Apr 24 21:11 vnicstats.before
-rw-r--r--    1 root     system      4496747 Apr 24 21:25 vnode.kdb
-rw-r--r--    1 root     system            0 Apr 24 21:25 vxfs_tunables.out
-rw-r--r--    1 root     system          497 Apr 24 21:28 w.int

Notes When Using HACMP / PowerHA

When running perfpmr on an active cluster node where HACMP is in operation, the additional load can trigger an unintended failover.

In particular, if data is collected on the same disk used by the Dead Man Switch monitor, either extend the monitor's timeout value beforehand or collect to a different disk. Confirming with your AIX HA team before running is recommended.


Collecting on a VIOS

In environments where virtual devices are configured, it is recommended to collect perfpmr data on the VIOS side at the same time as on the client LPARs.

# Gain root privileges
$ oem_setup_env

# mkdir /tmp/perfdata
# cd /tmp/perfdata
# /opt/IBM/perfpmr/perfpmr.sh
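Because the tool now ships at a fixed path, the manual steps above are easy to wrap in a script. The sketch below is a hypothetical example, not taken from the README: the install path comes from this article, `600` is perfpmr's conventional collection duration in seconds, and the output-directory naming scheme is my own assumption.

```shell
#!/bin/sh
# Hypothetical wrapper around the fixed perfpmr install path.
# Run as root (on a VIOS, after oem_setup_env).
PERFPMR=/opt/IBM/perfpmr/perfpmr.sh

# Timestamped output directory on a local filesystem
# (naming is an assumption; avoid remote-mounted FS per the README)
OUTDIR=/tmp/perfdata_$(date +%Y%m%d_%H%M%S)
mkdir -p "$OUTDIR"
cd "$OUTDIR" || exit 1

if [ -x "$PERFPMR" ]; then
    # 600 seconds is the conventional perfpmr collection period
    "$PERFPMR" 600
else
    echo "perfpmr not found at $PERFPMR (requires AIX 7.3 TL4 or later)" >&2
fi
```

The guard on `-x "$PERFPMR"` lets the same script be staged on pre-TL4 systems without failing hard.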

Summary

With perfpmr integrated as a standard part of AIX 7.3 TL4:

  • Immediately usable: no download or extraction steps required
  • Simpler automation: callable directly via a fixed path
  • Version consistency: kept current through regular AIX updates
  • Official support: supported at the OS level

When a performance problem occurs, data collection can now begin much more smoothly.

That's all.

