
Python Programming: word2vec with Wikipedia Data {Part 2: Building the Model}

Posted at 2020-08-08

Introduction

This is the second article in a four-part series.

  1. Data acquisition & preprocessing
  2. Building the model ★ this article
  3. Using the model
  4. Applying the model

What this article covers

  • Building a word2vec model

word2vec
Word2vec Tutorial
Deep learning with word2vec and gensim
米googleの研究者が開発したWord2Vecで自然言語処理(独自データ)

What this article does not cover

  • How word2vec works
  • How to use the Python libraries
    • gensim ※ a Python library that implements distributed word representations (word vectors)

【まとめ】自然言語処理における単語分散表現(単語ベクトル)と文書分散表現(文書ベクトル)
word2vec(Skip-Gram Model)の仕組みを恐らく日本一簡潔にまとめてみたつもり
【Python】Word2Vecの使い方
gensim/word2vec.py
gensim: models.word2vec – Word2vec embeddings

Building the Model

From this point on, we return to working on Windows.

Building the word2vec model

First, install the required tools with the command below.
The second library, cython, is installed because the Word2vec Tutorial notes the following:

The workers parameter has only effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).

# Install gensim and cython
python3 -m pip install gensim cython==0.29.14

The model-building code is shown below.
The physical machine (Windows 10) has 8 cores (8 threads) and 32 GB of RAM (16 GB × 2), so I set the workers parameter to 6.
Only four parameters are specified explicitly here; for the full list, see gensim/word2vec.py on GitHub and gensim: models.word2vec – Word2vec embeddings.
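The article fixes workers at 6 for an 8-core machine. If you are running this on different hardware, one hedged, portable way to pick the value (not from the original article) is to derive it from the CPU count while leaving a couple of cores free for the OS:

```python
import os

# Hypothetical heuristic: use all logical cores minus two, but at least one.
# os.cpu_count() can return None on some platforms, hence the "or 1" guard.
workers = max(1, (os.cpu_count() or 1) - 2)
print(workers)  # 6 on the author's 8-thread machine
```

This is only a rule of thumb; the best value depends on disk throughput and what else the machine is doing during training.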

analyzeWiki_Word2Vec1
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
import logging

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

SIZE = 200       # dimensionality of the word vectors
WINDOWS = 10     # context window size
MIN_COUNT = 5    # ignore words that appear fewer than this many times
WORKERS = 6      # number of worker threads

# One sentence per line, tokens separated by spaces (the wakati file from Part 1)
sentences = LineSentence(r'..\data\jawiki_wakati.txt')
# Note: in gensim 4.x the "size" parameter was renamed to "vector_size"
model = Word2Vec(sentences, size=SIZE, window=WINDOWS, min_count=MIN_COUNT, workers=WORKERS)
model.save(r'..\result\jawiki_word2vec_sz%s_wndw%s.model' % (SIZE, WINDOWS))
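LineSentence simply treats each line of the input file as one sentence and splits it on whitespace, which is exactly the shape of the wakati-gaki (space-tokenized) file produced in Part 1. A minimal pure-Python stand-in (for illustration only; the real class also handles long-line chunking and compressed files) makes that contract visible:

```python
import os
import tempfile

def line_sentences(path):
    """Yield each non-empty line of the file as a list of whitespace-split tokens."""
    with open(path, encoding='utf-8') as f:
        for line in f:
            tokens = line.split()
            if tokens:
                yield tokens

# Demo with a tiny stand-in for the tokenized Wikipedia dump:
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False, encoding='utf-8') as f:
    f.write('吾輩 は 猫 で ある\n')
    f.write('名前 は まだ 無い\n')
    path = f.name

print(list(line_sentences(path)))
# [['吾輩', 'は', '猫', 'で', 'ある'], ['名前', 'は', 'まだ', '無い']]
os.remove(path)
```

Any iterable yielding lists of tokens can be passed to Word2Vec in place of LineSentence, which is useful when the corpus does not fit the one-sentence-per-line format.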

An excerpt of the run log:

2020-08-05 12:49:56,737 : INFO : collecting all words and their counts
2020-08-05 12:49:56,737 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2020-08-05 12:49:56,815 : INFO : PROGRESS: at sentence #10000, processed 377490 words, keeping 26581 word types
2020-08-05 12:49:56,904 : INFO : PROGRESS: at sentence #20000, processed 676091 words, keeping 40988 word types
2020-08-05 12:49:56,961 : INFO : PROGRESS: at sentence #30000, processed 891169 words, keeping 48463 word types
2020-08-05 12:49:57,018 : INFO : PROGRESS: at sentence #40000, processed 1138800 words, keeping 56069 word types
2020-08-05 12:49:57,099 : INFO : PROGRESS: at sentence #50000, processed 1427677 words, keeping 64453 word types
2020-08-05 12:49:57,160 : INFO : PROGRESS: at sentence #60000, processed 1706712 words, keeping 71925 word types
2020-08-05 12:49:57,237 : INFO : PROGRESS: at sentence #70000, processed 1955766 words, keeping 78716 word types
2020-08-05 12:49:57,304 : INFO : PROGRESS: at sentence #80000, processed 2249847 words, keeping 85841 word types
2020-08-05 12:49:57,393 : INFO : PROGRESS: at sentence #90000, processed 2569671 words, keeping 92234 word types
2020-08-05 12:49:57,481 : INFO : PROGRESS: at sentence #100000, processed 2917832 words, keeping 98565 word types
2020-08-05 12:49:57,563 : INFO : PROGRESS: at sentence #110000, processed 3248351 words, keeping 104647 word types
2020-08-05 12:49:57,635 : INFO : PROGRESS: at sentence #120000, processed 3518921 words, keeping 110144 word types
2020-08-05 12:49:57,720 : INFO : PROGRESS: at sentence #130000, processed 3866746 words, keeping 115584 word types
(snipped)
2020-08-05 13:49:44,346 : INFO : EPOCH 5 - PROGRESS: at 99.86% examples, 1181642 words/s, in_qsize 8, out_qsize 1
2020-08-05 13:49:45,221 : INFO : worker thread finished; awaiting finish of 5 more threads
2020-08-05 13:49:45,221 : INFO : worker thread finished; awaiting finish of 4 more threads
2020-08-05 13:49:45,221 : INFO : worker thread finished; awaiting finish of 3 more threads
2020-08-05 13:49:45,236 : INFO : worker thread finished; awaiting finish of 2 more threads
2020-08-05 13:49:45,237 : INFO : worker thread finished; awaiting finish of 1 more threads
2020-08-05 13:49:45,241 : INFO : worker thread finished; awaiting finish of 0 more threads
2020-08-05 13:49:45,242 : INFO : EPOCH - 5 : training on 1038942570 raw words (748395011 effective words) took 633.3s, 1181694 effective words/s
2020-08-05 13:49:45,242 : INFO : training on a 5194712850 raw words (3741906439 effective words) took 3156.1s, 1185593 effective words/s
2020-08-05 13:49:45,243 : INFO : saving Word2Vec object under ..\result\jawiki_word2vec_sz200_wndw10.model, separately None
2020-08-05 13:49:45,244 : INFO : storing np array 'vectors' to ..\result\jawiki_word2vec_sz200_wndw10.model.wv.vectors.npy
2020-08-05 13:50:08,206 : INFO : not storing attribute vectors_norm
2020-08-05 13:50:08,206 : INFO : storing np array 'syn1neg' to ..\result\jawiki_word2vec_sz200_wndw10.model.trainables.syn1neg.npy
2020-08-05 13:50:30,721 : INFO : not storing attribute cum_table
2020-08-05 13:50:34,565 : INFO : saved ..\result\jawiki_word2vec_sz200_wndw10.model

Incidentally, on this Windows 10 machine, building the model took about an hour.
(Broken down: about 5 minutes to read the data, plus 5 epochs of about 10 minutes each, for roughly 55 minutes in total.)

About the workers parameter
The sample code above was run with WORKERS = 6.
While this does speed up model building, its effect on reproducibility and accuracy deserves some verification.
If you are interested, the following papers are worth a look.

Parallelizing Word2Vec in Shared and Distributed Memory - IEEE Journals & Magazine
(PDF) Parallelizing Word2Vec in Multi-Core and Many-Core Architectures
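One reason multi-worker runs are not bit-for-bit reproducible (beyond thread scheduling itself) is that gensim's workers apply Hogwild-style lock-free updates in whatever order the threads happen to run, and floating-point addition is not associative. A tiny illustration of the underlying effect:

```python
# Summing the same values in a different order can give a different result,
# so gradient updates applied in a thread-dependent order need not agree:
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False: 0.6000000000000001 vs 0.6
```

For a deterministic (but slower) run, gensim's documentation suggests workers=1 together with a fixed seed.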

Summary

This article showed how to build a word2vec model from the preprocessed Wikipedia data.
