Sentiment Analysis of Japanese Text with BERT (TensorFlow Version)

Posted at 2020-06-22

I wanted a simple piece of TensorFlow code that classifies a single Japanese text, so I wrote one.

Goal

Given a single Japanese sentence, I want to classify it.
(Computational efficiency and sentence-pair classification are out of scope for now.)

texts = [
    "この犬は可愛いです",
    "その猫は気まぐれです",
    "あの蛇は苦手です"
]
labels = [1, 0, 0] # 1: like, 0: dislike

Environment

tensorflow==2.2.0
transformers==2.11.0

Pretrained models

transformers provides the following four pretrained Japanese models.
(cf. https://huggingface.co/transformers/pretrained_models.html)

A). cl-tohoku/bert-base-japanese
B). cl-tohoku/bert-base-japanese-whole-word-masking
C). cl-tohoku/bert-base-japanese-char
D). cl-tohoku/bert-base-japanese-char-whole-word-masking

A) and B) require MeCab. The whole-word-masking variants reportedly perform better.
cf. https://nlp.ist.i.kyoto-u.ac.jp/index.php?ku_bert_japanese
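
For A) and B), transformers also provides BertJapaneseTokenizer, which runs the MeCab-based word segmentation internally. A minimal sketch, assuming mecab-python3 is installed:

import transformers

# Hedged sketch: load the MeCab-based tokenizer for variants A)/B)
name = "cl-tohoku/bert-base-japanese-whole-word-masking"
tokenizer = transformers.BertJapaneseTokenizer.from_pretrained(name)
print(tokenizer.tokenize("この犬は可愛いです"))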

import transformers
name = "cl-tohoku/bert-base-japanese-char-whole-word-masking"
# The char variants work without MeCab
tokenizer = transformers.BertTokenizer.from_pretrained(name)
model = transformers.TFBertModel.from_pretrained(name)

Tokenization

Japanese words are not separated by spaces, so before the text can be used as input to BERT it has to be split into tokens with the tokenizer.

text = "この犬は可愛いです"
model_name = "cl-tohoku/bert-base-japanese"
tokenizer = transformers.BertTokenizer.from_pretrained(model_name)
print(tokenizer.tokenize(text))
# ['この', '犬', 'は', '可', '愛', 'い', '##で', '##す']

The model input requires the IDs corresponding to these tokens (the input_ids), which can be obtained with tokenizer.encode:

# Convert to input IDs (careful not to confuse these with token type IDs)
tokenizer.encode(text)
# [2, 70, 2928, 9, 441, 767, 21, 28455, 28484, 3]

When a single sentence is encoded by the tokenizer, the special tokens [CLS] and [SEP] are inserted at the beginning and end, respectively.

for input_id in tokenizer.encode(text):
    print("%d => %s" % (input_id, tokenizer.decode([input_id])))
"""
2 => [CLS]
70 => この
2928 => 犬
9 => は
441 => 可
767 => 愛
21 => い
28455 => ##で
28484 => ##す
3 => [SEP]
"""

encode_plus produces the attention_mask and token_type_ids at the same time.
(The attention_mask is simply set to 1 wherever input_ids is greater than 0, and since we only deal with single sentences, setting token_type_ids to all zeros should be fine.)


tokenizer.encode_plus(text, max_length=15, pad_to_max_length=True)
# {'input_ids': [2, 70, 2928, 9, 441, 767, 21, 28455, 28484, 3, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]}
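
Note that pad_to_max_length was deprecated in later transformers releases (3.x and up). With the 2.11.0 environment above the call shown works as-is; on newer versions the equivalent would be roughly:

# Sketch for newer transformers versions (assumption: >=3.x),
# where padding/truncation replace pad_to_max_length:
tokenizer.encode_plus(text, max_length=15, padding="max_length", truncation=True)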

Building the model

The official documentation seems to use TFBertForSequenceClassification, but I use TFBertModel here since it seems more versatile.

def build_model(model_name, num_classes, max_length):
    input_shape = (max_length, )
    input_ids = tf.keras.layers.Input(input_shape, dtype=tf.int32)
    attention_mask = tf.keras.layers.Input(input_shape, dtype=tf.int32)
    token_type_ids = tf.keras.layers.Input(input_shape, dtype=tf.int32)
    bert_model = transformers.TFBertModel.from_pretrained(model_name)
    # In transformers 2.x, TFBertModel returns (last_hidden_state, pooler_output)
    last_hidden_state, pooler_output = bert_model(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids
    )
    # Classify from the pooled [CLS] representation
    output = tf.keras.layers.Dense(num_classes, activation="softmax")(pooler_output)
    model = tf.keras.Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[output])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["acc"])
    return model
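
For reference, the TFBertForSequenceClassification route mentioned above would look roughly like the following sketch (under the transformers 2.x API; note this model already contains a classification head and outputs raw logits rather than softmax probabilities):

# Sketch: the classification-head variant; num_labels sets the head size.
clf = transformers.TFBertForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)
clf.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    # the head outputs logits, hence from_logits=True
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["acc"]
)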

Full code

The complete code, from training through prediction.

import numpy as np
import tensorflow as tf
import transformers
from sklearn.metrics import accuracy_score

# model_name can be any of the pretrained models listed here (cf. https://huggingface.co/transformers/pretrained_models.html)
model_name = "cl-tohoku/bert-base-japanese"
tokenizer = transformers.BertTokenizer.from_pretrained(model_name)

# Training data
train_texts = [
    "この犬は可愛いです",
    "その猫は気まぐれです",
    "あの蛇は苦手です"
]
train_labels = [1, 0, 0] # 1: like, 0: dislike

# Test data
test_texts = [
    "その猫はかわいいです",
    "どの鳥も嫌いです",
    "あのヤギは怖いです"
]
test_labels = [1, 0, 0]

# Convert a list of texts into input features for transformers
def to_features(texts, max_length):
    shape = (len(texts), max_length)
    # input_ids, attention_mask, and token_type_ids are explained in the glossary (cf. https://huggingface.co/transformers/glossary.html)
    input_ids = np.zeros(shape, dtype="int32")
    attention_mask = np.zeros(shape, dtype="int32")
    token_type_ids = np.zeros(shape, dtype="int32")
    for i, text in enumerate(texts):
        encoded_dict = tokenizer.encode_plus(text, max_length=max_length, pad_to_max_length=True)
        input_ids[i] = encoded_dict["input_ids"]
        attention_mask[i] = encoded_dict["attention_mask"]
        token_type_ids[i] = encoded_dict["token_type_ids"]
    return [input_ids, attention_mask, token_type_ids]

# Build a model that classifies a single text
def build_model(model_name, num_classes, max_length):
    input_shape = (max_length, )
    input_ids = tf.keras.layers.Input(input_shape, dtype=tf.int32)
    attention_mask = tf.keras.layers.Input(input_shape, dtype=tf.int32)
    token_type_ids = tf.keras.layers.Input(input_shape, dtype=tf.int32)
    bert_model = transformers.TFBertModel.from_pretrained(model_name)
    last_hidden_state, pooler_output = bert_model(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids
    )
    output = tf.keras.layers.Dense(num_classes, activation="softmax")(pooler_output)
    model = tf.keras.Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[output])
    optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
    model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["acc"])
    return model

num_classes = 2
max_length = 15
batch_size = 10
epochs = 3

x_train = to_features(train_texts, max_length)
y_train = tf.keras.utils.to_categorical(train_labels, num_classes=num_classes)
model = build_model(model_name, num_classes=num_classes, max_length=max_length)

# Train
model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs
)

# Predict
x_test = to_features(test_texts, max_length)
y_test = np.asarray(test_labels)
y_preda = model.predict(x_test)
y_pred = np.argmax(y_preda, axis=1)
print("Accuracy: %.5f" % accuracy_score(y_test, y_pred))