Building a Japanese DeBERTa Model with transformers and aozorabunko-clean


I decided to try building a Japanese DeBERTa model with transformers and aozorabunko-clean. However, every record in aozorabunko-clean clearly exceeds 512 tokens, so while writing the corpus to train.txt I reflow it into lines of fewer than 700 characters each. For the tokenizer, I borrow the idea of "making Sentencepiece segmentation MeCab-like": each line is segmented with fugashi and unidic-lite, the Joyo kanji are added, and a Unigram tokenizer is trained on the result.
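As a quick illustration of that MeCab-style segmentation, here is a minimal sketch (the sample sentence is my own, not taken from the corpus): fugashi with unidic-lite splits a sentence into space-separated morphemes, and it is this pre-segmented text that the Unigram trainer sees.

# Minimal sketch: wakati (space-separated) segmentation with fugashi + unidic-lite.
from fugashi import Tagger
tagger = Tagger("-Owakati")
print(tagger.parse("国境の長いトンネルを抜けると雪国であった。"))
# prints the morphemes separated by spaces, roughly:
# 国境 の 長い トンネル を 抜ける と 雪国 で あっ た 。

The full script, from corpus reflow to masked-language-model pretraining, follows.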

#! /usr/bin/python3
import os, datasets, urllib.request
from transformers import DebertaV2TokenizerFast, DebertaV2Config, DebertaV2ForMaskedLM, DataCollatorForLanguageModeling, TrainingArguments, Trainer
from tokenizers import Tokenizer, models, pre_tokenizers, normalizers, processors, decoders, trainers

# Reflow aozorabunko-clean into lines of fewer than 700 characters each,
# breaking after 。 and replacing ideographic spaces
with open("train.txt", "w", encoding="utf-8") as w:
  d, i = datasets.load_dataset("globis-university/aozorabunko-clean"), 0
  for t in d["train"]:
    for s in t["text"].replace("。", "。\n").replace("\u3000", " ").split("\n"):
      if i + len(s) < 700:
        print(s, end="", file=w)
        i += len(s)
      else:
        print("\n" + s, end="", file=w)
        i = len(s)
  print("", file=w)

# Pre-segment the corpus MeCab-style with fugashi and unidic-lite
os.system("fugashi -Owakati < train.txt > token.txt")

# Fetch the Joyo kanji so each of them is guaranteed a slot in the alphabet
with urllib.request.urlopen("https://www.unicode.org/wg2/iso10646/edition6/data/JapaneseCoreKanji.txt") as r:
  joyo = [chr(int(t, 16)) for t in r.read().decode().strip().split("\n") if not t.startswith("#")]

# Train a Unigram tokenizer on the pre-segmented text
spt = Tokenizer(models.Unigram())
spt.pre_tokenizer = pre_tokenizers.Sequence([pre_tokenizers.Whitespace(), pre_tokenizers.Punctuation()])
spt.normalizer = normalizers.Sequence([normalizers.Nmt(), normalizers.NFKC()])
spt.post_processor = processors.TemplateProcessing(single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[("[CLS]", 0), ("[SEP]", 2)])
spt.decoder = decoders.WordPiece(prefix="", cleanup=True)
spt.train(trainer=trainers.UnigramTrainer(vocab_size=65000, max_piece_length=4, initial_alphabet=joyo, special_tokens=["[CLS]", "[PAD]", "[SEP]", "[UNK]", "[MASK]"], unk_token="[UNK]", n_sub_iterations=2), files=["token.txt"])
spt.save("tokenizer.json")

# vocab_file is unused when tokenizer_file is given, hence the /dev/null dummy
tkz = DebertaV2TokenizerFast(tokenizer_file="tokenizer.json", split_by_punct=True, do_lower_case=False, keep_accents=True, vocab_file="/dev/null", model_max_length=512)

# DeBERTa-base configuration: 12 layers, hidden size 768, relative attention
t = tkz.convert_tokens_to_ids(["[CLS]", "[PAD]", "[SEP]", "[UNK]", "[MASK]"])
cfg = DebertaV2Config(hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, relative_attention=True, position_biased_input=False, pos_att_type=["p2c", "c2p"], max_position_embeddings=tkz.model_max_length, vocab_size=len(tkz), tokenizer_class=type(tkz).__name__, bos_token_id=t[0], pad_token_id=t[1], eos_token_id=t[2])
arg = TrainingArguments(num_train_epochs=3, per_device_train_batch_size=14, output_dir="/tmp", overwrite_output_dir=True, save_total_limit=2)

# Minimal dataset: every non-empty line of train.txt becomes one example
class ReadLineDS:
  def __init__(self, file, tokenizer):
    self.tokenizer = tokenizer
    with open(file, "r", encoding="utf-8") as r:
      self.lines = [s.strip() for s in r if s.strip()]
  def __len__(self):
    return len(self.lines)
  def __getitem__(self, i):
    # truncate, leaving room for [CLS] and [SEP]
    return self.tokenizer(self.lines[i], truncation=True, add_special_tokens=True, max_length=self.tokenizer.model_max_length - 2)

# Masked-language-model pretraining; the collator masks 15% of tokens by default
trn = Trainer(args=arg, data_collator=DataCollatorForLanguageModeling(tkz), model=DebertaV2ForMaskedLM(cfg), train_dataset=ReadLineDS("train.txt", tkz))
trn.train()
trn.save_model("deberta-base-aozorabunko-clean")
tkz.save_pretrained("deberta-base-aozorabunko-clean")

# Quick smoke test with the fill-mask pipeline
from transformers import pipeline
fmp = pipeline("fill-mask", "deberta-base-aozorabunko-clean")
print(fmp("夜の底が[MASK]なった。"))
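Incidentally, once tokenizer.json is saved, the tokenizer can be sanity-checked on its own before committing to the expensive training run. A minimal sketch, mirroring the constructor call in the script above:

# Minimal sketch: check the trained tokenizer in isolation.
# Assumes tokenizer.json was written by the script above.
from transformers import DebertaV2TokenizerFast
tkz = DebertaV2TokenizerFast(tokenizer_file="tokenizer.json", split_by_punct=True, do_lower_case=False, keep_accents=True, vocab_file="/dev/null", model_max_length=512)
print(tkz.tokenize("夜の底が白くなった。"))   # subword pieces
print(tkz("夜の底が白くなった。").input_ids)  # starts with 0 ([CLS]) and ends with 2 ([SEP])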

I pushed ahead with per_device_train_batch_size=14 on four NVIDIA A100-SXM4-40GB GPUs (an effective batch size of 4 × 14 = 56), but training did not finish in time for the outbound leg of the Hakone Ekiden: after about 5 hours and 40 minutes it produced the following output.

[{'score': 0.27027562260627747, 'token': 2108, 'token_str': '白く', 'sequence': '夜の底が白くなった。'},
 {'score': 0.18223485350608826, 'token': 3016, 'token_str': '黒く', 'sequence': '夜の底が黒くなった。'},
 {'score': 0.04062410816550255, 'token': 2184, 'token_str': '美しく', 'sequence': '夜の底が美しくなった。'},
 {'score': 0.04049951210618019, 'token': 110, 'token_str': 'どう', 'sequence': '夜の底がどうなった。'},
 {'score': 0.03461405262351036, 'token': 3268, 'token_str': 'わるく', 'sequence': '夜の底がわるくなった。'}]

The model fills the [MASK] in 「夜の底が[MASK]なった。」 ('the bottom of the night turned [MASK]') with 「白く」 (white), 「黒く」 (black), 「美しく」 (beautiful), 「どう」 (how), and 「わるく」 (bad), a result that does the DeBERTa model credit.
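For reference, what the fill-mask pipeline does can be reproduced by hand. A minimal sketch, assuming the deberta-base-aozorabunko-clean directory saved by the script above:

# Minimal sketch: fill-mask by hand instead of via pipeline().
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tkz = AutoTokenizer.from_pretrained("deberta-base-aozorabunko-clean")
mdl = AutoModelForMaskedLM.from_pretrained("deberta-base-aozorabunko-clean")
enc = tkz("夜の底が[MASK]なった。", return_tensors="pt")
with torch.no_grad():
  logits = mdl(**enc).logits
# locate the [MASK] position and take the five highest-scoring tokens
pos = (enc.input_ids[0] == tkz.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, pos[0]].topk(5).indices.tolist()
print(tkz.convert_ids_to_tokens(top5))  # e.g. ['白く', '黒く', '美しく', 'どう', 'わるく']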
