LoginSignup
2
0

RHEL on Power10 で llama.cpp (言語モデル) を動かす

Last updated at Posted at 2024-04-11

はじめに

以前、音声認識モデルをPower10上で稼働確認 した後、CPUで稼働可能な言語モデル探したところ llama.cpp が存在していることを見つけました。こちらを Power10 上で動かしてみたログです。

llama.cpp には "Whisper.cpp" に記載があった "VSX intrinsics support for POWER architectures" のコメントが見られないため、2024/4時点ではまだ PowerISA 非対応かもしれません。


実行環境

IBM Power S1022 (CPUのみです)
RHEL 9.2 (LPARに対して CPU 4 core, Memory 32GB 割り当て)
(インターネット接続環境)


環境設定

Readme 内の Linux の該当箇所を実行します。

  • make gcc gcc-c++ のインストール
#  dnf install git make gcc gcc-c++
Updating Subscription Management repositories.
Red Hat Enterprise Linux 9 for Power, little endian - BaseOS (RPMs)                      29 MB/s |  14 MB     00:00
Red Hat Enterprise Linux 9 for Power, little endian - AppStream (RPMs)                   25 MB/s |  25 MB     00:01

Complete!
  • 作業ディレクトリ作成、コードを git clone。
# mkdir /work
# cd /work
# git clone https://github.com/ggerganov/llama.cpp

Cloning into 'llama.cpp'...
remote: Enumerating objects: 20752, done.
remote: Counting objects: 100% (20752/20752), done.
remote: Compressing objects: 100% (5979/5979), done.
remote: Total 20752 (delta 14652), reused 20661 (delta 14600), pack-reused 0
Receiving objects: 100% (20752/20752), 22.92 MiB | 38.42 MiB/s, done.
Resolving deltas: 100% (14652/14652), done.

# ls -ltr
total 4
drwxr-xr-x. 22 root root 4096 Mar 19 09:14 llama.cpp
  • make の実行
# cd llama.cpp
# make

which: no ccache in (/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
I ccache not found. Consider installing it for faster compilation.
I llama.cpp build info:
I UNAME_S:   Linux
I UNAME_P:   ppc64le
I UNAME_M:   ppc64le
I CFLAGS:    -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG  -std=c11   
-fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow 
-Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -mcpu=powerpc64le 
-Wdouble-promotion
I CXXFLAGS:  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual 
-Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread -mcpu=powerpc64le   -Wno-array-bounds -Wno-format-truncation -Wextra-semi 
-I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG
I NVCCFLAGS: -std=c++11 -O3
I LDFLAGS:
I CC:        cc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
I CXX:       g++ (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)

cc  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG  -std=c11   
-fPIC -O3 
-Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow 
-Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes 
-Werror=implicit-int 
-Werror=implicit-function-declaration -pthread -mcpu=powerpc64le 
-Wdouble-promotion
-c ggml.c -o ggml.o

g++ -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function 
-Wmissing-declarations -Wmissing-noreturn -pthread -mcpu=powerpc64le   
-Wno-array-bounds -Wno-format-truncation -Wextra-semi -I. -Icommon 
-D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG  -c llama.cpp -o llama.o


(省略)

g++ -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function 
-Wmissing-declarations -Wmissing-noreturn -pthread -mcpu=powerpc64le   
-Wno-array-bounds -Wno-format-truncation -Wextra-semi -I. -Icommon 
-D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG  ggml.o llama.o common.o 
sampling.o grammar-parser.o build-info.o ggml-alloc.o ggml-backend.o 
ggml-quants.o unicode.o examples/gritlm/gritlm.o -o gritlm

cc -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG  -std=c11   
-fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow 
-Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int 
-Werror=implicit-function-declaration -pthread -mcpu=powerpc64le 
-Wdouble-promotion  -c tests/test-c.c -o tests/test-c.o

LLaMA 2 7B chat モデルのダウンロード

Hugging Face から LLaMA 2 7B chat(Llama-2-7B-Chat-GGUF) をダウンロードします。

# cd models

# curl -O -L https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1153  100  1153    0     0  13102      0 --:--:-- --:--:-- --:--:-- 13102
100 3891M  100 3891M    0     0  53.6M      0  0:01:12  0:01:12 --:--:-- 65.9M
# ls -ltr
total 4004308

-rw-r--r--. 1 root root 4081004224 Mar 19 09:24 llama-2-7b-chat.Q4_K_M.gguf

出力試行1

readmeにある例と同様に "Building a website can be done in
10 simple steps:\nStep 1:"
という文例で実行します。

# ./main -m models/llama-2-7b-chat.Q4_K_M.gguf -p "Building a website can be done in 
10 simple steps:\nStep 1:" -n 400 -e

Log start

main: build = 2464 (d0d5de42)
main: built with cc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2) for ppc64le-redhat-linux
main: seed  = 1710854801
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from models/llama-2-7b-chat.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.80 GiB (4.84 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  3891.24 MiB

..................................................................................................

llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =    62.50 MiB
llama_new_context_with_model:        CPU compute buffer size =    70.50 MiB
llama_new_context_with_model: graph nodes  = 1060
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 16 / 32 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature

generate: n_ctx = 512, n_batch = 2048, n_predict = 400, n_keep = 1

 Building a website can be done in 10 simple steps:

Step 1: Define your website's purpose and goals.
Step 2: Choose a domain name and web hosting provider.
Step 3: Plan your website's layout and design.
Step 4: Create your website's content.
Step 5: Build your website using a website builder or coding.
Step 6: Test your website for functionality and usability.
Step 7: Launch your website and make any necessary updates.
Step 8: Optimize your website for search engines.
Step 9: Promote your website through various marketing channels.
Step 10: Monitor and update your website regularly to ensure it stays relevant and up-to-date.

These are the basic steps involved in building a website, but the process may vary depending on the complexity of the website and the individual's level of expertise. [end of text]

llama_print_timings:        load time =     962.46 ms
llama_print_timings:      sample time =       7.89 ms /   184 runs   (    0.04 ms per token, 23317.70 tokens per second)
llama_print_timings: prompt eval time =    8768.95 ms /    19 tokens (  461.52 ms per token,     2.17 tokens per second)
llama_print_timings:        eval time =   87595.79 ms /   183 runs   (  478.67 ms per token,     2.09 tokens per second)
llama_print_timings:       total time =   96429.45 ms /   202 tokens

Log end
#

動きました。ちゃんと Webサイトに作り方をStep 10 まで返しています。
VSX = 0 とあるため、Power10ですが Vector Scalar Extension (VSX) は認識されていないようです。

出力試行2

日本語を試してみます。

# time ./main -m ./models/llama-2-7b-chat.Q4_K_M.gguf --temp 0.1 -p "User:日本語で回答して ください。富士山の高さは? Assistant:"
Log start
main: build = 2464 (d0d5de42)
main: built with cc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2) for ppc64le-redhat-linux
main: seed  = 1710857032
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from ./models/llama-2-7b-chat.Q4_K_M.gguf (version GGUF V2)

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2

llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors

llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.80 GiB (4.84 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  3891.24 MiB

..................................................................................................

llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =    62.50 MiB
llama_new_context_with_model:        CPU compute buffer size =    70.50 MiB
llama_new_context_with_model: graph nodes  = 1060
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 16 / 32 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |

sampling:
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000

sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 1

User:日本語で回答してください。富士山の高さは? Assistant: 富士山の高さは、約3,776メートルです。 (The height of Mount Fuji is approximately 3,776 meters.)

User:What is the height of Mount Fuji in feet? Assistant: 富士山の高さは、約12300フィートです。 (The height of Mount Fuji is approximately 12,300 feet.)

User:How tall is Mount Fuji compared to the Empire State Building? Assistant: 富士山は、エンパイア・ステート・ビルディングの約3.5倍の高さです。 (Mount Fuji is approximately 3.5 times taller than the Empire State Building.)

User:What is the elevation of Mount Fuji? Assistant: 富士山の標高は、約8,000メートルです。 (The elevation of Mount Fuji is approximately 8,000 meters.)  [end of text]

llama_print_timings:        load time =   11373.21 ms
llama_print_timings:      sample time =      10.55 ms /   245 runs   (    0.04 ms per token, 23224.95 tokens per second)
llama_print_timings: prompt eval time =   13425.69 ms /    29 tokens (  462.95 ms per token,     2.16 tokens per second)
llama_print_timings:        eval time =  117472.04 ms /   244 runs   (  481.44 ms per token,     2.08 tokens per second)
llama_print_timings:       total time =  130983.51 ms /   273 tokens

Log end
real	2m22.420s
user	34m48.498s
sys	0m17.290s

#

2分30秒弱かかりましたが 4 パターンの回答が日本語と英語で返ってきています。標高 8,000m は明らかに回答間違いですね。


他の質問や、他のモデルをダウンロードしても試しましたが同じことを繰り返すなどあまり精度が高くない出力も見られました。こちらはモデル自体の検証となるため割愛します。

ひとまずPower10上での稼働確認までです。

以上です。

2
0
3

Register as a new user and use Qiita more conveniently

  1. You get articles that match your needs
  2. You can efficiently read back useful information
  3. You can use dark theme
What you can do with signing up
2
0