An Introduction to Profilers
When you're programming on Linux, don't you sometimes want to know where your program spends its time, and how much?
I certainly do. So I did a bit of digging into profilers.
Three profilers that are easy to mix up
Here are three profilers; I'm writing this mostly to sort them out in my own head. Each has its own character, but when you first start out it's easy to confuse one with another.
gprof (GNU Profiler)
The first one up is gprof, the GNU Profiler. As the name suggests, it's GNU's profiler; it ships as part of GNU binutils.
perf (Linux profiling with performance counters)
A tool that profiles using the performance counters available on Linux.
gperftools (Google Performance Tools)
A profiler provided by Google.
Installation
gprof
By the time I thought about installing it, it was already there: it comes with binutils (according to the man page).
$ rpm -ql binutils|grep gprof
/usr/bin/gprof
perf
$ sudo yum install perf
gperftools
$ sudo yum install gperftools
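The above is for yum-based (RHEL-family) systems. On Debian/Ubuntu the package names differ; from memory (unverified, so check with apt search) it should be roughly:
$ sudo apt install linux-tools-$(uname -r) google-perftools libgoogle-perftools-dev
with gprof coming from binutils there as well, and (if I remember right) the pprof script installed under the name google-pprof.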
Basic usage
I used the following program for the profiling runs.
#include <stdio.h>

/* Two empty busy loops: loop2 runs ten times the iterations of loop1.
   Build without optimization; with -O2 the compiler would simply
   delete these loops. */
void loop1()
{
    int i, j;
    for (i = 0; i < 100000; i++)
        for (j = 0; j < 10000; j++)
            ;
}

void loop2()
{
    int i, j;
    for (i = 0; i < 1000000; i++)
        for (j = 0; j < 10000; j++)
            ;
}

int main()
{
    loop1();
    loop2();
    return 0;
}
Time to run them
With gprof
gprof works like this: compiling with -pg instruments the binary, and running it drops a gmon.out file in the current directory for gprof to read.
$ gcc -pg -o hello hello.c
$ ./hello
The result is as follows:
$ gprof ./hello gmon.out
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls s/call s/call name
91.17 12.27 12.27 1 12.27 12.27 loop2
10.16 13.64 1.37 1 1.37 1.37 loop1
% the percentage of the total running time of the
time program used by this function.
cumulative a running sum of the number of seconds accounted
seconds for by this function and those listed above it.
self the number of seconds accounted for by this
seconds function alone. This is the major sort for this
listing.
calls the number of times this function was invoked, if
this function is profiled, else blank.
self the average number of milliseconds spent in this
ms/call function per call, if this function is profiled,
else blank.
total the average number of milliseconds spent in this
ms/call function and its descendents per call, if this
function is profiled, else blank.
name the name of the function. This is the minor sort
for this listing. The index shows the location of
the function in the gprof listing. If the index is
in parenthesis it shows where it would appear in
the gprof listing if it were to be printed.
Copyright (C) 2012-2016 Free Software Foundation, Inc.
Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved.
Call graph (explanation follows)
granularity: each sample hit covers 2 byte(s) for 0.07% of 13.64 seconds
index % time self children called name
<spontaneous>
[1] 100.0 0.00 13.64 main [1]
12.27 0.00 1/1 loop2 [2]
1.37 0.00 1/1 loop1 [3]
-----------------------------------------------
12.27 0.00 1/1 main [1]
[2] 90.0 12.27 0.00 1 loop2 [2]
-----------------------------------------------
1.37 0.00 1/1 main [1]
[3] 10.0 1.37 0.00 1 loop1 [3]
-----------------------------------------------
This table describes the call tree of the program, and was sorted by
the total amount of time spent in each function and its children.
Each entry in this table consists of several lines. The line with the
index number at the left hand margin lists the current function.
The lines above it list the functions that called this function,
and the lines below it list the functions this one called.
This line lists:
index A unique number given to each element of the table.
Index numbers are sorted numerically.
The index number is printed next to every function name so
it is easier to look up where the function is in the table.
% time This is the percentage of the `total' time that was spent
in this function and its children. Note that due to
different viewpoints, functions excluded by options, etc,
these numbers will NOT add up to 100%.
self This is the total amount of time spent in this function.
children This is the total amount of time propagated into this
function by its children.
called This is the number of times the function was called.
If the function called itself recursively, the number
only includes non-recursive calls, and is followed by
a `+' and the number of recursive calls.
name The name of the current function. The index number is
printed after it. If the function is a member of a
cycle, the cycle number is printed between the
function's name and the index number.
For the function's parents, the fields have the following meanings:
self This is the amount of time that was propagated directly
from the function into this parent.
children This is the amount of time that was propagated from
the function's children into this parent.
called This is the number of times this parent called the
function `/' the total number of times the function
was called. Recursive calls to the function are not
included in the number after the `/'.
name This is the name of the parent. The parent's index
number is printed after it. If the parent is a
member of a cycle, the cycle number is printed between
the name and the index number.
If the parents of the function cannot be determined, the word
`<spontaneous>' is printed in the `name' field, and all the other
fields are blank.
For the function's children, the fields have the following meanings:
self This is the amount of time that was propagated directly
from the child into the function.
children This is the amount of time that was propagated from the
child's children to the function.
called This is the number of times the function called
this child `/' the total number of times the child
was called. Recursive calls by the child are not
listed in the number after the `/'.
name This is the name of the child. The child's index
number is printed after it. If the child is a
member of a cycle, the cycle number is printed
between the name and the index number.
If there are any cycles (circles) in the call graph, there is an
entry for the cycle-as-a-whole. This entry shows who called the
cycle (as parents) and the members of the cycle (as children.)
The `+' recursive calls entry shows the number of function calls that
were internal to the cycle, and the calls entry for each member shows,
for that member, how many times it was called from other members of
the cycle.
Copyright (C) 2012-2016 Free Software Foundation, Inc.
Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved.
Index by function name
[3] loop1 [2] loop2
A lot of this is noise, but every field comes with an explanation, and the timing information sits right at the top.
Since loop1 and loop2 are written so that their run times differ by roughly a factor of ten, this looks like a reasonable result.
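Incidentally, once you know what the fields mean, all of that boilerplate explanation can be suppressed with gprof's -b (brief) option, which also appears later in this post:
$ gprof -b ./hello gmon.out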
With perf
perf is used like this:
$ gcc -o hello hello.c
$ perf record ./hello
The profiling result is as follows:
$ perf report
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 53K of event 'cycles:uppp'
# Event count (approx.): 62054101085
#
# Overhead Command Shared Object Symbol
# ........ ....... ................ .......................
#
90.93% hello hello [.] loop2
9.06% hello hello [.] loop1
0.01% hello [unknown] [k] 0xffffffff8fb8b4ef
0.00% hello ld-2.17.so [.] _dl_relocate_object
#
# (Tip: Create an archive with symtabs to analyse on other machine: perf archive)
#
Here too you can see that loop2 is where the time goes.
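One more thing worth knowing: recording with -g captures call chains, so perf report can also show who called each hot function:
$ perf record -g ./hello
$ perf report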
To investigate cache misses, set cache-misses via the -e option. Like this:
$ perf record -e cache-misses ./hello
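If aggregate totals are enough, perf stat prints a counter summary instead of per-function samples (which events are available depends on the CPU; perf list enumerates them):
$ perf stat -e cache-references,cache-misses ./hello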
With gperftools
gperftools is used like this: link against libprofiler, then set the CPUPROFILE environment variable to the desired output file when running.
$ gcc -o hello hello.c -lprofiler
$ CPUPROFILE=hello.prof ./hello
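Two knobs from the gperftools documentation are handy here: CPUPROFILE_FREQUENCY changes the sampling rate (the default is 100 samples/sec), and if relinking with -lprofiler isn't an option, LD_PRELOAD-ing the library works too. The library path below is an assumption; adjust it to wherever libprofiler.so lives on your system:
$ CPUPROFILE=hello.prof CPUPROFILE_FREQUENCY=500 ./hello
$ LD_PRELOAD=/usr/lib64/libprofiler.so CPUPROFILE=hello.prof ./hello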
pprof visualizes the profiling result. For example, to view it in a web browser:
$ pprof --web ./hello ./hello.prof
To look at it on the command line:
$ pprof ./hello hello.prof
Using local file ./hello.
Using local file hello.prof.
Welcome to pprof! For help, type 'help'.
(pprof) top
Total: 1346 samples
1225 91.0% 91.0% 1225 91.0% loop2
121 9.0% 100.0% 121 9.0% loop1
0 0.0% 100.0% 1346 100.0% __libc_start_main
0 0.0% 100.0% 1346 100.0% _start
0 0.0% 100.0% 1346 100.0% main
(pprof) quit
It's a bit more work, but you can also export to PNG and other formats:
$ pprof --dot ./hello ./hello.prof > hello.dot
$ dot -Tpng hello.dot > hello.png
It produces a picture of the call graph.
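Since --dot emits plain Graphviz, any output format dot supports works just as well, for example SVG:
$ dot -Tsvg hello.dot > hello.svg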
Combining these three tools, it feels like you could get quite a lot done.
Does it work with OpenMP?
My end goal was the following:
- Profile to find where the program spends its time
- Parallelize only those hot spots with OpenMP
- Profile once more
- Look at the result and feel satisfied
But this doesn't quite seem to work.
Looking at perf's results
I fed an OpenMP-ized version of the program to perf.
#include <stdio.h>
#include <omp.h>

/* Same busy loops, but the two loops inside loop2 are now
   parallelized with OpenMP. */
void loop1()
{
    int i, j;
    for (i = 0; i < 100000; i++)
        for (j = 0; j < 10000; j++)
            ;
}

void loop2()
{
    int i, j;
    #pragma omp parallel for private(i, j)
    for (i = 0; i < 1000000; i++)
        for (j = 0; j < 10000; j++)
            ;
    #pragma omp parallel for private(i, j)
    for (i = 0; i < 100000; i++)
        for (j = 0; j < 10000; j++)
            ;
}

int main()
{
    loop1();
    loop2();
    return 0;
}
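As an aside, OpenMP picks the thread count at run time; for reproducible measurements it can be pinned with the standard OMP_NUM_THREADS environment variable:
$ OMP_NUM_THREADS=4 ./hello_omp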
$ gcc -fopenmp -o ./hello_omp ./hello_omp.c
$ perf record ./hello_omp
$ perf report |less
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 62K of event 'cycles:uppp'
# Event count (approx.): 69417742165
#
# Overhead Command Shared Object Symbol
# ........ ......... ................ ..............................
#
82.81% hello_omp hello_omp [.] loop2._omp_fn.0
8.26% hello_omp hello_omp [.] loop2._omp_fn.1
8.11% hello_omp hello_omp [.] loop1
0.79% hello_omp libgomp.so.1.0.0 [.] gomp_team_barrier_wait_end
0.01% hello_omp libgomp.so.1.0.0 [.] gomp_barrier_wait_end
0.01% hello_omp [unknown] [k] 0xffffffff8fb8b4ef
0.00% hello_omp ld-2.17.so [.] __libc_memalign
0.00% hello_omp libc-2.17.so [.] __clone
#
# (Tip: To see callchains in a more compact form: perf report -g folded)
#
Hmm. In short,
loop2._omp_fn.0
loop2._omp_fn.1
are presumably the two parallelized for loops, which the compiler has outlined into functions of their own; that is what perf ends up measuring.
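That these outlined functions really exist as symbols in the binary is easy to confirm with nm:
$ nm ./hello_omp | grep omp_fn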
Incidentally, this apparently has nothing to do with GCC in particular: perf reads the hardware counters regardless of compiler, so a clang build can be profiled just the same.
$ clang -fopenmp ./hello_omp.c -o hello_omp
$ perf record ./hello_omp
[ perf record: Woken up 8 times to write data ]
[ perf record: Captured and wrote 2.367 MB perf.data (61980 samples) ]
$ perf report |less
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 61K of event 'cycles:uppp'
# Event count (approx.): 68946068022
#
# Overhead Command Shared Object Symbol
# ........ ......... ................ .....................................................................
#
78.81% hello_omp hello_omp [.] .omp_outlined.
7.89% hello_omp hello_omp [.] .omp_outlined..1
7.88% hello_omp hello_omp [.] loop1
3.70% hello_omp libomp.so [.] __kmp_x86_pause
0.62% hello_omp libomp.so [.] __kmp_wait_template<kmp_flag_64<false, true>, true, false, true>
0.32% hello_omp libomp.so [.] __kmp_hardware_timestamp
0.30% hello_omp libomp.so [.] __kmp_wait_template<kmp_flag_64<false, true>, false, false, true>
0.24% hello_omp libomp.so [.] kmp_basic_flag_native<unsigned long long, true>::notdone_check
0.15% hello_omp libomp.so [.] kmp_flag_native<unsigned long long>::get
0.07% hello_omp libomp.so [.] flag_traits<unsigned long long>::tcr
0.01% hello_omp [unknown] [k] 0xffffffff8fb8b4ef
0.00% hello_omp libomp.so [.] __kmp_reserve_threads
0.00% hello_omp ld-2.17.so [.] strcmp
0.00% hello_omp libomp.so [.] __kmp_remove_one_handler
0.00% hello_omp [unknown] [k] 0xffffffff8fb95098
0.00% hello_omp ld-2.17.so [.] _dl_lookup_symbol_x
0.00% hello_omp libc-2.17.so [.] _IO_vsnprintf
0.00% hello_omp libomp.so [.] kmp_convert<unsigned long, int, false, false, false, true>::to
0.00% hello_omp ld-2.17.so [.] memcpy
0.00% hello_omp libomp.so [.] KMPNativeAffinity::Mask::zero
0.00% hello_omp libomp.so [.] __kmp_x2apicid_get_levels
0.00% hello_omp libomp.so [.] KMPNativeAffinity::Mask::next
0.00% hello_omp ld-2.17.so [.] _dl_sysdep_start
0.00% hello_omp libc-2.17.so [.] syscall
0.00% hello_omp libomp.so [.] KMPNativeAffinity::Mask::~Mask
#
# (Tip: Limit to show entries above 5% only: perf report --percent-limit 5)
#
Presumably, these two lines
78.81% hello_omp hello_omp [.] .omp_outlined.
7.89% hello_omp hello_omp [.] .omp_outlined..1
correspond to the outlined OpenMP functions, this time under clang's naming scheme.
With gperftools
pprof seems to manage this as well.
$ gcc -fopenmp -o ./hello_omp ./hello_omp.c -lprofiler
$ CPUPROFILE=hello_omp.prof ./hello_omp
PROFILE: interrupts/evictions/bytes = 1545/73/4368
$ pprof --text ./hello_omp ./hello_omp.prof
Using local file ./hello_omp.
Using local file ./hello_omp.prof.
Total: 1545 samples
1284 83.1% 83.1% 1291 83.6% loop2._omp_fn.0
128 8.3% 91.4% 128 8.3% loop2._omp_fn.1
122 7.9% 99.3% 122 7.9% loop1
10 0.6% 99.9% 11 0.7% do_spin (inline)
1 0.1% 100.0% 1 0.1% cpu_relax (inline)
0 0.0% 100.0% 18 1.2% GOMP_parallel
0 0.0% 100.0% 1405 90.9% __clone
0 0.0% 100.0% 140 9.1% __libc_start_main
0 0.0% 100.0% 140 9.1% _start
0 0.0% 100.0% 11 0.7% do_wait (inline)
0 0.0% 100.0% 11 0.7% gomp_team_barrier_wait_end
0 0.0% 100.0% 1405 90.9% gomp_thread_start
0 0.0% 100.0% 18 1.2% loop2
0 0.0% 100.0% 140 9.1% main
0 0.0% 100.0% 1405 90.9% start_thread
$ clang -fopenmp -o ./hello_omp ./hello_omp.c -lprofiler
$ CPUPROFILE=hello_omp.prof ./hello_omp
Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime!
PROFILE: interrupts/evictions/bytes = 1596/116/15432
$ pprof --text ./hello_omp ./hello_omp.prof
Using local file ./hello_omp.
Using local file ./hello_omp.prof.
Total: 1596 samples
1197 75.0% 75.0% 1349 84.5% .omp_outlined.
122 7.6% 82.6% 122 7.6% .omp_outlined..1
119 7.5% 90.1% 119 7.5% loop1
79 4.9% 95.1% 79 4.9% _mm_pause (inline)
23 1.4% 96.5% 111 7.0% bool __kmp_wait_template@8d440
17 1.1% 97.6% 17 1.1% __kmp_hardware_timestamp
12 0.8% 98.3% 47 2.9% bool __kmp_wait_template@8dba8
11 0.7% 99.0% 24 1.5% kmp_basic_flag_native::notdone_check
11 0.7% 99.7% 11 0.7% kmp_flag_native::get
2 0.1% 99.8% 2 0.1% __GI___sched_yield
2 0.1% 99.9% 2 0.1% flag_traits::tcr
1 0.1% 100.0% 80 5.0% __kmp_x86_pause
0 0.0% 100.0% 1102 69.0% __clone
0 0.0% 100.0% 152 9.5% __kmp_barrier
0 0.0% 100.0% 3 0.2% __kmp_fork_barrier
0 0.0% 100.0% 374 23.4% __kmp_fork_call
0 0.0% 100.0% 47 2.9% __kmp_hyper_barrier_gather
0 0.0% 100.0% 111 7.0% __kmp_hyper_barrier_release
0 0.0% 100.0% 1 0.1% __kmp_internal_join
0 0.0% 100.0% 1471 92.2% __kmp_invoke_microtask
0 0.0% 100.0% 1471 92.2% __kmp_invoke_task_func
0 0.0% 100.0% 3 0.2% __kmp_join_barrier
0 0.0% 100.0% 1 0.1% __kmp_join_call
0 0.0% 100.0% 1102 69.0% __kmp_launch_thread
0 0.0% 100.0% 1102 69.0% __kmp_launch_worker
0 0.0% 100.0% 2 0.1% __kmp_yield
0 0.0% 100.0% 152 9.5% __kmpc_barrier
0 0.0% 100.0% 375 23.5% __kmpc_fork_call
0 0.0% 100.0% 494 31.0% __libc_start_main
0 0.0% 100.0% 494 31.0% _start
0 0.0% 100.0% 152 9.5% int __kmp_barrier_template
0 0.0% 100.0% 158 9.9% kmp_flag_64::wait
0 0.0% 100.0% 375 23.5% loop2
0 0.0% 100.0% 494 31.0% main
0 0.0% 100.0% 1102 69.0% start_thread
And what about gprof?
Here's what happened when I tried the same thing with gprof.
$ gcc -fopenmp -pg -p -o hello_omp ./hello_omp.c
$ ./hello_omp
$ gprof -b ./hello_omp
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls s/call s/call name
92.86 14.27 14.27 main
8.01 15.50 1.23 1 1.23 1.23 loop1
0.00 15.50 0.00 1 0.00 0.00 loop2
Call graph
granularity: each sample hit covers 2 byte(s) for 0.06% of 15.50 seconds
index % time self children called name
<spontaneous>
[1] 100.0 14.27 1.23 main [1]
1.23 0.00 1/1 loop1 [2]
0.00 0.00 1/1 loop2 [3]
-----------------------------------------------
1.23 0.00 1/1 main [1]
[2] 7.9 1.23 0.00 1 loop1 [2]
-----------------------------------------------
0.00 0.00 1/1 main [1]
[3] 0.0 0.00 0.00 1 loop2 [3]
-----------------------------------------------
Index by function name
[2] loop1 [3] loop2 [1] main
Ouch: loop2 comes back at 0%, so it clearly isn't being measured. Presumably the parallelized work now lives in outlined functions executed by OpenMP's worker threads, which gprof's instrumentation cannot account for; notice how the time got lumped into main instead.
The other two tools cope to some extent, but either way it looks like a dedicated parallel profiler is needed once the code is parallelized, so I'll dig into that another time.
That's all for today.