
Accuracy comparison of different TfidfVectorizer merging methods

Posted at 2024-10-02

Overview

In a previous article I looked into a way to merge TfidfVectorizer instances quickly, but I had not investigated the accuracy of the result, so I did that here.

* I am not fully confident in the calculation method and the reasoning, so this article is being updated as needed.

Background

In the referenced article mentioned in the overview, the document frequencies of each sub-corpus, and of the merged corpus as a whole, were reconstructed from the IDF arrays available as properties of the TfidfVectorizers, and then converted into the final IDF values. Compared with the baseline of simply refitting on the whole corpus, this reduced the time to roughly 1/20 for large corpora.

However, I had not evaluated whether the proposed method is equivalent to a straightforward merge in terms of accuracy. Because rounding and similar errors can occur when recovering document frequencies from IDF values, the search results may not be identical to those of the baseline.
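To make the rounding concern concrete, here is a minimal sketch (my own illustration, not part of the original procedure) of the document-frequency round trip using scikit-learn's smoothed IDF formula idf = ln((1 + n) / (1 + df)) + 1 and its inverse df = (1 + n) / exp(idf - 1) - 1; the values used are made up:

import numpy as np

# scikit-learn's smoothed IDF (smooth_idf=True, the default)
def idf_from_df(df, n_docs):
    return np.log((n_docs + 1) / (df + 1)) + 1

# Inverse used when merging; rounding is needed because floating-point
# error can leave the recovered value slightly off an integer.
def df_from_idf(idf, n_docs):
    return np.round((n_docs + 1) / np.exp(idf - 1) - 1).astype(int)

df = np.array([1, 10, 5000, 49999])
idf = idf_from_df(df, n_docs=50000)
print(df_from_idf(idf, n_docs=50000))  # expected to recover [1, 10, 5000, 49999]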

So, taking the baseline method as ground truth, I examined how far the search results of a vectorizer merged with the proposed method deviate from it.

Environment

  • M1 Mac, macOS 14.5
  • Python 3.11.2
  • scikit-learn 1.5.2

The code was run in the following virtual environment (note that the datasets library used below, and langchain-community used in the appendix, also need to be installed):

uv venv
source .venv/bin/activate
uv pip install scikit-learn

Approach

We evaluate how different the search results on a sample corpus are from those of the baseline method.
There seem to be quite a few ways to evaluate the similarity of search results, i.e., rankings (see the "References" section), but here I use the Jaccard coefficient, which gives the most intuitive picture.

The Jaccard coefficient is one measure of similarity between sets. It is the ratio of the size of the intersection to the size of the union and takes values from 0 to 1; the closer to 1, the more similar the two sets are judged to be.

Since it is a set similarity it cannot capture ordering, but by computing the Jaccard coefficient for several values of top_n we can still get a rough overall picture.
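For illustration, a minimal sketch of the Jaccard coefficient between two top-n result sets (the document ids here are made up):

# Jaccard coefficient between two top-n ranking sets (illustrative ids)
rank_a = [41999, 67131, 28380, 72676, 30345]   # hypothetical top-5 from method A
rank_b = [31598, 41999, 28380, 29535, 32067]   # hypothetical top-5 from method B

set_a, set_b = set(rank_a), set(rank_b)
jaccard = len(set_a & set_b) / len(set_a | set_b)
print(jaccard)  # 2 common ids out of 8 distinct ids -> 0.25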

Implementation

Merge function

Following the earlier article, we create a function that merges TfidfVectorizer instances: given a list of vectorizers and the list of corresponding sub-corpora, it returns a merged vectorizer.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def merge_tfidf_vectorizers(vectorizers, datasets, forced_tfidf_params=None) -> TfidfVectorizer:
    """
    Merge multiple TfidfVectorizer instances.

    :param vectorizers: list of TfidfVectorizer instances
    :param datasets: list of datasets, one per TfidfVectorizer
    :param forced_tfidf_params: TfidfVectorizer parameters to force-override
    :return: a new, merged TfidfVectorizer instance
    """
    # Recover document frequencies from IDF values
    def idf_to_doc_freq(idf, n_docs):
        df = (n_docs + 1) / np.exp(idf - 1) - 1
        return np.round(df).astype(int)

    # Merge vocabularies and document frequencies
    combined_vocabulary = set()
    combined_df = {}
    n_docs_combined = 0

    for vectorizer, dataset in zip(vectorizers, datasets):
        df = idf_to_doc_freq(vectorizer.idf_, len(dataset))
        vocab_df = dict(zip(vectorizer.get_feature_names_out(), df))

        combined_vocabulary.update(vectorizer.get_feature_names_out())
        for word, freq in vocab_df.items():
            combined_df[word] = combined_df.get(word, 0) + freq

        n_docs_combined += len(dataset)

    combined_vocabulary = sorted(combined_vocabulary)

    # Compute the new IDF values
    combined_idf = np.log((n_docs_combined + 1) / (np.array(list(combined_df.values())) + 1)) + 1

    # Copy the parameters of the first vectorizer; all vectorizers are assumed to share the same parameters
    tfidf_params = vectorizers[0].get_params()
    tfidf_params["vocabulary"] = combined_vocabulary  # only the vocabulary is replaced
    forced_tfidf_params = forced_tfidf_params or {}
    tfidf_params.update(forced_tfidf_params)  # apply any parameters that should be forcibly overridden

    # Create a new TfidfVectorizer instance and set its IDF values
    tfidf_combined = TfidfVectorizer(**tfidf_params)
    tfidf_combined.fit(["dummy"])  # initialize with dummy data
    tfidf_combined._tfidf.idf_ = combined_idf

    return tfidf_combined
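A minimal usage sketch with two made-up toy sub-corpora (pre-tokenized, so analyzer=lambda x: x is used, matching the experiments below):

# Toy sub-corpora of pre-tokenized documents (illustrative only)
corpus_1 = [["apple", "banana"], ["banana", "cherry"]]
corpus_2 = [["banana", "durian"], ["apple", "durian", "cherry"]]

v1 = TfidfVectorizer(analyzer=lambda x: x).fit(corpus_1)
v2 = TfidfVectorizer(analyzer=lambda x: x).fit(corpus_2)

merged = merge_tfidf_vectorizers([v1, v2], [corpus_1, corpus_2])
print(merged.get_feature_names_out())                   # vocabulary from both sub-corpora
print(merged.transform([["apple", "cherry"]]).toarray())  # weights under the merged vectorizer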

Test corpus

Prepare the test corpus.
We build two sub-corpora of 50,000 documents each from ag_news.

from datasets import load_dataset

# Load the AG News dataset
dataset = load_dataset('ag_news')

# Define sub-corpus size
sub_corpus_size = 50000

# Extract the text data from the dataset
corpus_a = [item['text'] for item in dataset['train'].select(range(sub_corpus_size))]  # First sub_corpus_size items for corpus_a
corpus_b = [item['text'] for item in dataset['train'].select(range(sub_corpus_size, 2 * sub_corpus_size))]  # Next sub_corpus_size items for corpus_b

# Tokenize the corpora
def tokenize(text):
    return text.split()

tokenized_corpus_a = [tokenize(doc) for doc in corpus_a]
tokenized_corpus_b = [tokenize(doc) for doc in corpus_b]

queries = [item['text'] for item in dataset['test'].select(range(1000))]
tokenized_queries = [tokenize(query) for query in queries]

Speed comparison

Although it is not the main topic, let's first check, as in the previous article, whether the proposed method still has a speed advantage.

import time
import numpy as np

iterations = 10

def measure_execution_time(func, *args, iterations=10, **kwargs):
    times = []
    for _ in range(iterations):
        start_time = time.time()
        func(*args, **kwargs)
        end_time = time.time()
        times.append(end_time - start_time)
    
    average_time = np.mean(times)
    std_dev_time = np.std(times)
    
    return average_time, std_dev_time

tfidfa = TfidfVectorizer(analyzer=lambda x: x)
tfidfb = TfidfVectorizer(analyzer=lambda x: x)
tfidfa.fit(tokenized_corpus_a)
tfidfb.fit(tokenized_corpus_b)


# Measure time for merging
average_time_merge, std_dev_time_merge = measure_execution_time(merge_tfidf_vectorizers, [tfidfa, tfidfb], [tokenized_corpus_a, tokenized_corpus_b], iterations=iterations)
print(f"merge_tfidf_vectorizers function over {iterations} iterations: {average_time_merge} ± {std_dev_time_merge} seconds")

# Measure time for initializing
tfidf_simple = TfidfVectorizer(analyzer=lambda x: x)
average_time_init, std_dev_time_init = measure_execution_time(tfidf_simple.fit, tokenized_corpus_a+tokenized_corpus_b)
print(f"merge by fitting simply over {iterations} iterations: {average_time_init} ± {std_dev_time_init} seconds")
merge_tfidf_vectorizers function over 10 iterations: 0.7000281333923339 ± 0.04681306379989129 seconds
merge by fitting simply over 10 iterations: 1.6661359786987304 ± 0.043753639556577295 seconds

The proposed method runs in about half the time of the baseline.
This is far from the roughly 1/20 reduction in the previous article, but the gap can be explained by the smaller corpus size here and by the fact that the previous experiment used Japanese while this one uses English, so the morphological-analysis time that was added on top of the baseline last time is absent here. In any case, this does not affect what we mainly want to verify this time.

Accuracy comparison

Now for the main topic: comparing accuracy.
We convert the corpus and the queries to vectors, compute the cosine-similarity matrix, and for each query obtain the corpus document ids sorted by similarity in descending order. We then take the sets of the top n documents and compute the Jaccard coefficient, for several values of n.
We use 1,000 queries from the ag_news test split and look at the average Jaccard coefficient.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

query_size = 1000

tfidfa = TfidfVectorizer(analyzer=lambda x: x)
tfidfb = TfidfVectorizer(analyzer=lambda x: x)
tfidfa.fit(tokenized_corpus_a)
tfidfb.fit(tokenized_corpus_b)
tfidf_list = [tfidfa, tfidfb]
datasets = [tokenized_corpus_a, tokenized_corpus_b]

merged_tfidf_fast = merge_tfidf_vectorizers(tfidf_list, datasets)

merged_tfidf_simple = TfidfVectorizer(analyzer=lambda x: x)
merged_tfidf_simple.fit(tokenized_corpus_a+tokenized_corpus_b)

# Transform the tokenized queries using both TfidfVectorizers
query_vectors_fast = merged_tfidf_fast.transform(tokenized_queries[:query_size])
query_vectors_simple = merged_tfidf_simple.transform(tokenized_queries[:query_size])

# Transform the corpus using both TfidfVectorizers
corpus_vectors_fast = merged_tfidf_fast.transform(tokenized_corpus_a + tokenized_corpus_b)
corpus_vectors_simple = merged_tfidf_simple.transform(tokenized_corpus_a + tokenized_corpus_b)

# Calculate cosine similarity between query vectors and corpus vectors
similarity_fast = cosine_similarity(query_vectors_fast, corpus_vectors_fast)
similarity_simple = cosine_similarity(query_vectors_simple, corpus_vectors_simple)

# Generate rankings (higher similarity -> higher rank)
rank_fast = np.argsort(similarity_fast, axis=1)[:, ::-1]
rank_simple = np.argsort(similarity_simple, axis=1)[:, ::-1]

# Define the number of top ranks to consider
top_ns = [1,2,3,4,5,6,7,8,9,10,100,1000]

for n in top_ns:
    jaccard_scores = []
    for i in range(len(tokenized_queries[:query_size])):
        set_fast = set(rank_fast[i, :n])
        set_simple = set(rank_simple[i, :n])
        intersection = set_fast.intersection(set_simple)
        union = set_fast.union(set_simple)
        jaccard = len(intersection) / len(union) if union else 0
        jaccard_scores.append(jaccard)
    
    average_jaccard = np.mean(jaccard_scores)
    std_dev_jaccard = np.std(jaccard_scores)
    print(f"Jaccard coefficient for top {n} ranks: {average_jaccard:.4f} ± {std_dev_jaccard:.4f}")

Jaccard coefficient for top 1 ranks: 0.4510 ± 0.4976
Jaccard coefficient for top 2 ranks: 0.3717 ± 0.3963
Jaccard coefficient for top 3 ranks: 0.3171 ± 0.3282
Jaccard coefficient for top 4 ranks: 0.2811 ± 0.2900
Jaccard coefficient for top 5 ranks: 0.2576 ± 0.2615
Jaccard coefficient for top 6 ranks: 0.2424 ± 0.2426
Jaccard coefficient for top 7 ranks: 0.2298 ± 0.2295
Jaccard coefficient for top 8 ranks: 0.2172 ± 0.2121
Jaccard coefficient for top 9 ranks: 0.2093 ± 0.2052
Jaccard coefficient for top 10 ranks: 0.2020 ± 0.1968
Jaccard coefficient for top 100 ranks: 0.1262 ± 0.1152
Jaccard coefficient for top 1000 ranks: 0.1228 ± 0.0749

While we are at it, let's also look at the raw top-10 results.

for i in range(10):
  print(f"query: {queries[i]}")
  print(rank_fast[i][:10])
  print(rank_simple[i][:10])
  print("")
query: Fears for T N pension after talks Unions representing workers at Turner   Newall say they are 'disappointed' after talks with stricken parent firm Federal Mogul.
[41999 67131 28380 72676 30345 71319 16029 32199 83808 28509]
[31598 83167 14276 31255 29535 32067 33377 33101 83808 41999]

query: The Race is On: Second Private Team Sets Launch Date for Human Spaceflight (SPACE.com) SPACE.com - TORONTO, Canada -- A second\team of rocketeers competing for the  #36;10 million Ansari X Prize, a contest for\privately funded suborbital space flight, has officially announced the first\launch date for its manned rocket.
[39168 49277 31605 67902 41109 57295 23407  5475 52314 98055]
[49303   156 48576 49070 43848   126 60002 50084 38685 45605]

query: Ky. Company Wins Grant to Study Peptides (AP) AP - A company founded by a chemistry researcher at the University of Louisville won a grant to develop a method of producing better peptides, which are short chains of amino acids, the building blocks of proteins.
[44416 23616  4036  4005 23268 56298 45787  9578 93142  9211]
[53420 81959   810  8179 81996 35431  4876 47611 64083 40573]

query: Prediction Unit Helps Forecast Wildfires (AP) AP - It's barely dawn when Mike Fitzpatrick starts his shift with a blur of colorful maps, figures and endless charts, but already he knows what the day will bring. Lightning will strike in places he expects. Winds will pick up, moist places will dry and flames will roar.
[59485 80620 71006 41487 25062 30520  7514 31457 80997 23835]
[11195 64759 67173 99488 93964  1059 12005 27301 98834 30520]

query: Calif. Aims to Limit Farm-Related Smog (AP) AP - Southern California's smog-fighting agency went after emissions of the bovine variety Friday, adopting the nation's first rules to reduce air pollution from dairy cow manure.
[ 4109 89915 84889 55288 16618  8981 87211 43235 38873 77128]
[37444 38873 39090 30788 54607 39323 37535  5636 83880  7797]

query: Open Letter Against British Copyright Indoctrination in Schools The British Department for Education and Skills (DfES) recently launched a "Music Manifesto" campaign, with the ostensible intention of educating the next generation of British musicians. Unfortunately, they also teamed up with the music industry (EMI, and various artists) to make this popular. EMI has apparently negotiated their end well, so that children in our schools will now be indoctrinated about the illegality of downloading music.The ignorance and audacity of this got to me a little, so I wrote an open letter to the DfES about it. Unfortunately, it's pedantic, as I suppose you have to be when writing to goverment representatives. But I hope you find it useful, and perhaps feel inspired to do something similar, if or when the same thing has happened in your area.
[75546 48506  1114  6428 39821 76155 65396 42327 62168 35063]
[  618 83203 14235 94143 82061 21100 19776 95337 99815 44577]

query: Loosing the War on Terrorism \\"Sven Jaschan, self-confessed author of the Netsky and Sasser viruses, is\responsible for 70 percent of virus infections in 2004, according to a six-month\virus roundup published Wednesday by antivirus company Sophos."\\"The 18-year-old Jaschan was taken into custody in Germany in May by police who\said he had admitted programming both the Netsky and Sasser worms, something\experts at Microsoft confirmed. (A Microsoft antivirus reward program led to the\teenager's arrest.) During the five months preceding Jaschan's capture, there\were at least 25 variants of Netsky and one of the port-scanning network worm\Sasser."\\"Graham Cluley, senior technology consultant at Sophos, said it was staggeri ...\\
[24040  1114 39282 46162 97694 76165  9985 26264 58414 56167]
[24040 36754 35156 41020 23125 33388 23451 22948 23037 33504]

query: FOAFKey: FOAF, PGP, Key Distribution, and Bloom Filters \\FOAF/LOAF  and bloom filters have a lot of interesting properties for social\network and whitelist distribution.\\I think we can go one level higher though and include GPG/OpenPGP key\fingerpring distribution in the FOAF file for simple web-of-trust based key\distribution.\\What if we used FOAF and included the PGP key fingerprint(s) for identities?\This could mean a lot.  You include the PGP key fingerprints within the FOAF\file of your direct friends and then include a bloom filter of the PGP key\fingerprints of your entire whitelist (the source FOAF file would of course need\to be encrypted ).\\Your whitelist would be populated from the social network as your client\discovered new identit ...\\
[56167 70248 74061 29690 69508 29630 21883 29640 16798 87897]
[59988 33697 37498 21805 27455 20865 44343 37473 66214 54982]

query: E-mail scam targets police chief Wiltshire Police warns about "phishing" after its fraud squad chief was targeted.
[68063 53107 49993 66758 91002 47159 11286 31009 81157 46969]
[50242 50245 24316 68063 45394 71344 30545 99731  4387 24362]

query: Card fraud unit nets 36,000 cards In its first two years, the UK's dedicated card fraud unit, has recovered 36,000 stolen cards and 171 arrests - and estimates it saved 65m.
[ 9688  3142 55506 33376 50220 65060 36815 57001 15087 55588]
[ 9688 49257 36434 75404 91498 28282 44014 40711  9396 48230]

Discussion

The Jaccard coefficient decreases gradually from about 0.4 at top-1 to about 0.2 at top-10.
A larger coefficient at small n means the highest-ranked results overlap more, which is a convenient tendency in practice.
Since the Jaccard coefficient has the size of the union in its denominator, converting it to the fraction of overlapping items out of n gives roughly 50% when only the top few results are taken (see the conversion sketch below).
These numbers are hard to call clearly usable or clearly unusable, but for a search application there are usually only one or two truly relevant documents, so as long as those are not missed the method may still be serviceable.
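For reference, when both methods return exactly n results and k of them coincide, the Jaccard coefficient is J = k / (2n - k), so the overlap fraction is k / n = 2J / (1 + J). A small sketch applying this conversion to the averaged values above (the conversion is exact per query; applying it to averaged Jaccard values is only a rough indication):

# Convert a Jaccard coefficient J between two size-n sets into the overlap fraction k/n.
# J = k / (2n - k)  =>  k / n = 2J / (1 + J)
def overlap_fraction(jaccard):
    return 2 * jaccard / (1 + jaccard)

for n, j in [(1, 0.4510), (3, 0.3171), (10, 0.2020)]:
    print(f"top {n}: Jaccard {j:.4f} -> overlap fraction {overlap_fraction(j):.2f}")
# top 1: 0.62, top 3: 0.48, top 10: 0.34 -- roughly 50% for the first few results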

Put differently, it may not matter much if the ordering among documents that are not similar to the query gets somewhat shuffled.
As a check, let's print, for each query, the maximum similarity under each method and whether the top-1 results match.

for i in range(query_size):
    max_idx_fast = np.argmax(similarity_fast[i])
    max_idx_simple = np.argmax(similarity_simple[i])
    is_match = max_idx_fast == max_idx_simple
    print(f"query_id: {i}, is_match: {is_match}, fast_score: {similarity_fast[i][max_idx_fast]:.4f}, simple_score: {similarity_simple[i][max_idx_simple]:.4f}")
query_id: 0, is_match: False, fast_score: 0.3213, simple_score: 0.2128
query_id: 1, is_match: False, fast_score: 0.4353, simple_score: 0.2820
query_id: 2, is_match: False, fast_score: 0.5547, simple_score: 0.2342
query_id: 3, is_match: False, fast_score: 0.3763, simple_score: 0.1357
query_id: 4, is_match: False, fast_score: 0.4933, simple_score: 0.2692
query_id: 5, is_match: False, fast_score: 0.5974, simple_score: 0.1990
query_id: 6, is_match: True, fast_score: 0.5732, simple_score: 0.3144
query_id: 7, is_match: False, fast_score: 0.5673, simple_score: 0.1986
query_id: 8, is_match: False, fast_score: 0.3025, simple_score: 0.2633
query_id: 9, is_match: True, fast_score: 0.2989, simple_score: 0.2287

The baseline method's similarities are not particularly high, around 0.3 or less, so it is possible that the corpus simply contained almost no documents similar to the queries. On the other hand, cosine similarities of high-dimensional sparse vectors are usually close to 0, so it is unclear whether a value around 0.3 is actually low. Judging this would probably require a qualitative analysis as well.
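As a rough illustration of that last point (a made-up sketch, not part of the experiment), the cosine similarity of two unrelated high-dimensional sparse vectors is typically close to 0:

from scipy.sparse import random as sparse_random
from sklearn.metrics.pairwise import cosine_similarity

# Two unrelated sparse vectors in a 100,000-dimensional space, ~30 nonzeros each
a = sparse_random(1, 100_000, density=0.0003, random_state=1)
b = sparse_random(1, 100_000, density=0.0003, random_state=2)
print(cosine_similarity(a, b))  # usually 0 or very close to 0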

References

Appendix

I also tried the same comparison of search results using langchain's TFIDFRetriever.
Since it does not affect the main conclusions much, I include it as an appendix.

Based on the following article, we create a function that merges TFIDFRetriever instances.

[langchain] Merging TFIDFRetriever

from langchain_community.retrievers import TFIDFRetriever

def merge_tfidf_retrievers(retrievers: list[TFIDFRetriever]):
    """
    Merge multiple TFIDFRetriever instances.

    :param retrievers: list of TFIDFRetriever instances
    :return: a new, merged TFIDFRetriever instance
    """
    # Merge the vectorizers
    vectorizers = [retriever.vectorizer for retriever in retrievers]
    merged_vectorizer = merge_tfidf_vectorizers(vectorizers, [retriever.docs for retriever in retrievers])
    # Merge the tfidf_arrays
    merged_tokenized_docs = []
    for retriever in retrievers:
        tokenized_docs = retriever.vectorizer.inverse_transform(retriever.tfidf_array)
        merged_tokenized_docs.extend(tokenized_docs)
    params = merged_vectorizer.get_params()
    merged_vectorizer.set_params(analyzer=lambda x: x)  # temporarily bypass the analyzer to save transform time
    merged_tfidf_array = merged_vectorizer.transform(merged_tokenized_docs)
    merged_vectorizer.set_params(analyzer=params["analyzer"])  # restore the analyzer
    # Merge the docs
    merged_docs = [doc for retriever in retrievers for doc in retriever.docs]
    # Create the merged TFIDFRetriever instance
    merged_retriever = TFIDFRetriever(docs=merged_docs, vectorizer=merged_vectorizer, tfidf_array=merged_tfidf_array,
                                      k=retrievers[0].k)  # k is assumed to be the same across all retrievers

    return merged_retriever

Using the function above, we create the fast and simple versions of the retriever.
k is set to 1000 so that the top 1000 results can be obtained.

retriever_a = TFIDFRetriever.from_texts(corpus_a)
retriever_b = TFIDFRetriever.from_texts(corpus_b)
retriever_fast = merge_tfidf_retrievers([retriever_a, retriever_b])
retriever_fast.k = 1000

retriever_simple = TFIDFRetriever.from_texts(corpus_a + corpus_b)
retriever_simple.k = 1000

We compute the average Jaccard coefficient over the 1,000 queries while varying top_n.

top_ns = [1,2,3,4,5,6,7,8,9,10,100,1000]

average_jaccards = []

query_size = 1000
all_texts_fast = []
all_texts_simple = []
for i in range(query_size):
    documents_fast = retriever_fast.invoke(queries[i])
    documents_simple = retriever_simple.invoke(queries[i])

    texts_simple = [document.page_content for document in documents_simple]  
    texts_fast = [document.page_content for document in documents_fast]
    all_texts_fast.append(texts_fast)
    all_texts_simple.append(texts_simple)

for n in top_ns:
    jaccard_scores = []
    for i in range(query_size):

        set_simple = set(all_texts_simple[i][:n])
        set_fast = set(all_texts_fast[i][:n])

        intersection = set_simple.intersection(set_fast)
        union = set_simple.union(set_fast)
        jaccard = len(intersection) / len(union) if union else 0
        jaccard_scores.append(jaccard)
    
    average_jaccard = np.mean(jaccard_scores)               
    std_dev_jaccard = np.std(jaccard_scores)
    average_jaccards.append((n, average_jaccard, std_dev_jaccard))
    print(f"Jaccard coefficient for top {n} ranks: {average_jaccard:.4f} ± {std_dev_jaccard:.4f}")
Jaccard coefficient for top 1 ranks: 0.4950 ± 0.5000
Jaccard coefficient for top 2 ranks: 0.4127 ± 0.3898
Jaccard coefficient for top 3 ranks: 0.3686 ± 0.3255
Jaccard coefficient for top 4 ranks: 0.3432 ± 0.2865
Jaccard coefficient for top 5 ranks: 0.3318 ± 0.2644
Jaccard coefficient for top 6 ranks: 0.3165 ± 0.2447
Jaccard coefficient for top 7 ranks: 0.3086 ± 0.2344
Jaccard coefficient for top 8 ranks: 0.3003 ± 0.2255
Jaccard coefficient for top 9 ranks: 0.2953 ± 0.2193
Jaccard coefficient for top 10 ranks: 0.2907 ± 0.2103
Jaccard coefficient for top 100 ranks: 0.2227 ± 0.1343
Jaccard coefficient for top 1000 ranks: 0.2295 ± 0.1044

Compared with the TfidfVectorizer results in the main part, the Jaccard coefficients are higher overall.
I had expected the two experiments to agree exactly, and I am not sure why the values differ.
Perhaps TFIDFRetriever.invoke does something special internally, or TfidfVectorizer and TFIDFRetriever differ in their default parameters.
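One way to start checking the second hypothesis would be to diff the vectorizer parameters of the two setups; a minimal sketch using only the objects already created above:

# Compare the TfidfVectorizer parameters used by TFIDFRetriever.from_texts
# with those of the plain TfidfVectorizer used in the main experiment.
params_retriever = retriever_simple.vectorizer.get_params()
params_plain = merged_tfidf_simple.get_params()

for key in sorted(set(params_retriever) | set(params_plain)):
    if params_retriever.get(key) != params_plain.get(key):
        print(key, params_retriever.get(key), params_plain.get(key))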
