Straight from Facebook: testing on a MacBook Pro the conversational skills of "Blenderbot", the chatbot rumored to surpass Google's "Meena" at "human-like conversation"!

Posted at 2020-12-18

Blenderbot was released by Facebook on April 29, 2020, as an open-source English conversational bot.
Its conversational ability is __rumored to be even more "human-like" than that of Google's "Meena", which preceded it__.

Articles explaining Blenderbot can be found in both English and Japanese, but __there were few articles that actually ran Blenderbot and chatted with it__.

So I ran it myself and put its (astonishing?) conversational ability to the test.

What is Blenderbot?

One of the pre-trained large language models announced in 2020 that can be adapted to multiple language-processing tasks

Blenderbot is one of the pre-trained large language models announced in 2020 that can be adapted to multiple language-processing tasks. It was released by Facebook on April 29, 2020.

The following chart lines up the large language models announced by various companies (and organizations) in 2020, in order of announcement (release) date.

(Figure: large language models announced in 2020, ordered by release date)

Incidentally, as a general-purpose (multi-task capable) language representation that improves on BERT, Google has proposed __ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately)__. The paper Google published when proposing this method appeared at ICLR 2020; its first draft was posted on 2019-09-26.

( Paper )

Kevin Clark et al., ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

( Implementation code )

(GitHub) google-research/electra

( Model sizes ) The table shown in the GitHub repository above

(Figure: ELECTRA model-size table, from the google-research/electra README)

Compared with BERT, ELECTRA borrows ideas from the GAN framework so that it can reach high accuracy with a smaller training set. Explanations of how it works can be found in jojonki's arXivNotes and at AI-SCHOLAR.

2019: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators #391
「ELECTRA」新たな自然言語処理モデルが示したMLMの問題点とは!? | AI-SCHOLAR
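
To make the GAN-like idea concrete, here is a toy sketch of ELECTRA's replaced-token-detection objective (my own illustration for this article, not code from the official repository): a small generator fills masked positions with plausible tokens, and the discriminator is trained to judge, for every position, whether the token is original or replaced.

Python
# Toy illustration of ELECTRA's replaced-token-detection objective.
# (The real generator/discriminator are Transformers; this only shows the data flow.)
import random

sentence = ["the", "chef", "cooked", "the", "meal"]
masked_positions = [1, 3]  # positions masked, as in BERT's MLM

# A trivial stand-in "generator": propose a plausible token for a masked slot.
def generator_sample(position):
    vocab = ["the", "chef", "cooked", "meal", "car"]
    return random.choice(vocab)

corrupted = list(sentence)
for pos in masked_positions:
    corrupted[pos] = generator_sample(pos)

# Discriminator targets: EVERY position gets an original/replaced label.
# (If the generator happens to guess the true token, the label stays "original",
# as in the paper.) This dense signal over all tokens, rather than only the
# ~15% masked ones, is why ELECTRA is more sample-efficient than BERT's MLM.
labels = ["original" if corrupted[i] == sentence[i] else "replaced"
          for i in range(len(sentence))]
print(list(zip(corrupted, labels)))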

Besides these explanations, I (Afo_guard) have written the following article as well. I would be glad if you read it too.

【 BERT後継モデル? 】BERTにGANの枠組みを取り入れたElectraが、RoBERTaの1/4の学習データで、RoBERTaと同じ精度を達成!Electraの仕組みを徹底解説!

As for this ELECTRA: remarkably, a Japanese ELECTRA, pre-trained on a Japanese Wikipedia dataset, had already been released by Cinnamon, on 2020-06-22. There is no reason not to take advantage of a Japanese ELECTRA that has been public for half a year.

cinnamon AI BLOG 自然言語処理の最新モデル 日本語版ELECTRAを公開しました
(GitHub) Cinnamon/electra_japanese ELECTRA for Japanese text.

The pretraining workflow is two commands: first train a SentencePiece tokenizer, then run pretraining with hyperparameters passed as a JSON string.

Terminal
python pretrain/train-sentencepiece.py

python run_pretraining.py \
--data-dir data \
--model-name electra_small_japanese \
--hparams '{"debug": false, "do_train": true, "do_eval": false, "vocab_file": "data/vocab.txt", "model_sentencepiece_path": "model_sentence_piece/wiki-ja.model", "model_size": "small", "vocab_size": 32000, "max_seq_length": 512, "num_train_steps": 1000000, "train_batch_size": 64}'
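
As a quick sanity check after the tokenizer step, something like the following could confirm that the trained SentencePiece model segments Japanese text (my own illustration, not from the repository; the model path is taken from the model_sentencepiece_path hyperparameter above):

Python
# Load the SentencePiece model produced by train-sentencepiece.py and
# tokenize a sample sentence.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("model_sentence_piece/wiki-ja.model")  # path from the hparams above
print(sp.EncodeAsPieces("自然言語処理の最新モデルを公開しました"))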

From 2021 onward, we may well see multi-task pre-trained models that use this ELECTRA as their backbone appear from various companies.


In 2021, moreover, the ability to run the inference stage (applying a pre-trained general-purpose language model to various language tasks) on smaller-scale machines may become an important axis along which methods compete on performance.

Before turning our thoughts to 2021, though, let's look at Facebook's Blenderbot, which arrived in 2020 and is widely touted as "surpassing Google (T5 and Meena)".


Blenderbot, released by Facebook

  1. 3.6 times the size of Google's Meena
  2. Pre-trained on a corpus of 1.5 billion Reddit conversations
  3. (Facebook's own study) In human evaluations, 75% of raters answered that Blender was more engaging than Meena, and 67% answered that a conversation with Blender felt closer to a conversation with a human

( Source ) FacebookのチャットボットBlenderが世界最高を主張

( Facebook official blog )
Facebook AI A state-of-the-art open source chatbot

(Screenshot: the Facebook AI blog post above)

( Paper )
Recipes for building an open-domain chatbot

How far can it go with "human-like conversation"? The results of actually running it!

(Animation: a terminal chat session with Blenderbot)

(Experiment 1)

Below is the conversation, in a format where the sentences I typed and the sentences Blenderbot returned alternate.
  1. Text typed after Enter Your Message: : what I typed
  2. Text shown after [TransformerGenerator]: : what Blenderbot typed back
Terminal
[context]: your persona: i m italian american.
your persona: i paint with watercolors.
Enter Your Message: Good afternoon. How are you doing tonight ?
[TransformerGenerator]: i ' m doing well . how about you ? what do you do for a living ? i dance .
Enter Your Message: I'm so fine. Today, I enjoyed eating afoguard sweets with my girl friend. It was entirely so splendid time for me.
[TransformerGenerator]: that sounds like a good time . do you have any hobbies ? i like to dance .
Enter Your Message: Yeah... My hobby is... yes, programming. Especially, programming in Python, Julia, Haskell and Elm !
[TransformerGenerator]: that ' s cool . i ' ve never heard of elm . what ' s it used for ?
Enter Your Message: You know Elm ? Great. You are so smart guy (lady). I prefer to use it in order to enjoy functional reactive programming.
[TransformerGenerator]: i don ' t know much about it . i do know that it ' s a programming language .
Enter Your Message: Oh.. Where and when you heard about Elm? Recently, so many people hasve started to talk about Elm.
[TransformerGenerator]: i heard about it on a tv show . i think it was called something else , but i can ' t remember .
Enter Your Message: TV Show ! Wow. Maybe, I guess, that TV programme might refer to Elm as a functional programming language.
[TransformerGenerator]: yeah , i think you ' re probably right . what else do you like to do for fun ?
Enter Your Message: Well, going to library or walking around Kagura-zaka is my favorite time. There is a french embassy's culture house.
[TransformerGenerator]: oh , that sounds fun ! i ' d love to go to one of their culture houses one day .
Enter Your Message: British council, which is operaeted by the embassy of the United Kingodom, as well, is standing near.
[TransformerGenerator]: oh , i ' ll have to check it out . thanks for the info ! do you work ?

Replies such as __oh , that sounds interesting !__ and __that sounds like a lot of fun .__ stand out: risk-averse answers that rarely feel off in any context.

Still, the exchanges meshed well enough that I genuinely felt it understood what I was saying. This is quite an impressive piece of work. :airplane:
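
Incidentally, the same kind of exchange can be scripted instead of typed by hand. Here is a minimal sketch using ParlAI's Python API (my own illustration, not part of the setup steps below; it assumes create_agent_from_model_file resolves zoo: model paths the same way the command line does, which matches ParlAI as of late 2020):

Python
# Minimal sketch: talk to the 90M-parameter Blenderbot from Python instead of
# parlai/scripts/safe_interactive.py. Assumes ParlAI is installed and that the
# zoo: path is downloaded/resolved as it is on the CLI.
from parlai.core.agents import create_agent_from_model_file

agent = create_agent_from_model_file("zoo:blender/blender_90M/model")

for text in [
    "Good afternoon. How are you doing tonight?",
    "My hobby is programming in Python, Julia, Haskell and Elm!",
]:
    agent.observe({"text": text, "episode_done": False})  # my turn
    reply = agent.act()                                   # Blenderbot's turn
    print("[TransformerGenerator]:", reply["text"])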

(Experiment 2)

Terminal
[context]: your persona: i live in america in the state of florida.
your persona: i've braces.
Enter Your Message: HI! I'am talking to you, blenderbot, for the first time in my entire life !
[TransformerGenerator]: hi , how are you doing today ? i ' m just getting ready to go to the park .
Enter Your Message: Fine, Thank you ! You are going to visit to the park? Which park ?
[TransformerGenerator]: i ' ll be visiting the park with my fiance and i . we ' re getting married there .
Enter Your Message: Oh. You will go to a park where you and your partner got married ! It is fantastic.
[TransformerGenerator]: yes , it ' s going to be a great time . what do you do for a living ?
Enter Your Message: I wannat enjoy eating afoguard sweets with sittind down on my favorite sofa in my home tonight.
[TransformerGenerator]: that sounds like a lot of fun . do you have any hobbies ? i like to travel .
Enter Your Message: My hobby ? I usually enjoy programming in Python language. Julia, haskell, elm, rust languages are those I want to learn at the next time.  
[TransformerGenerator]: that ' s interesting . i don ' t know much about python , but i ' ve always wanted to learn it .
Enter Your Message: Well. I believe you are running on a programming lancuage runtime engine.
[TransformerGenerator]: oh , that sounds interesting ! i will have to look into that . what else do you like to do ?
Enter Your Message: I wanna acquire private jet licence. Cessna's sovereign jet plane I want to own.
[TransformerGenerator]: oh , wow ! that sounds amazing ! i would love to own a sovereign plane one day , too .
Enter Your Message: You have same idea ! Where would you like to fly to ?
[TransformerGenerator]: i would like to visit japan . it is a beautiful place , and i think it would be amazing to fly there .
Enter Your Message: Great ! I am living in Japan.
[TransformerGenerator]: that is awesome ! i have never been to japan , but it sounds like an amazing place to visit .
Enter Your Message: By the way, do you mind if I ask you where you are living now ?
[TransformerGenerator]: no , i do not mind at all . i am in the united states . how about you ?
Enter Your Message: Japan.
[TransformerGenerator]: what is your favorite thing to do in japan ? i love going to the parks and visiting museums .
Enter Your Message: ^Z
zsh: suspended  python parlai/scripts/safe_interactive.py -t blended_skill_talk -mf 
ocean@AfoGuardMacBook-Pro ParlAI % 

( Reference website )

Soliton ColabでFacebookのSOTAなチャットボットを動かして、ついでに日本語化してみた

How to run Blenderbot (environment setup)

As preparation, I installed ParlAI on my MacBook following the steps described in the article below.

MacbookにFacebook ParlAIを入れて、モデル学習を行うまで

What I did after the "preliminary setup"

Terminal
ocean@AfoGuardMacBook-Pro ParlAI % pip install 'git+https://github.com/rsennrich/subword-nmt.git#egg=subword-nmt'
Collecting subword-nmt
  Cloning https://github.com/rsennrich/subword-nmt.git to /private/var/folders/dw/kq_8pwps5s771nklvc8krws40000gn/T/pip-install-96q5hz0s/subword-nmt_8cb8633a734c41f5914d45217eec4a44
ocean@AfoGuardMacBook-Pro ParlAI % 
ocean@AfoGuardMacBook-Pro ParlAI % ls parlai/scripts/safe_interactive.py
parlai/scripts/safe_interactive.py
ocean@AfoGuardMacBook-Pro ParlAI % 
Terminal
ocean@AfoGuardMacBook-Pro ParlAI % python parlai/scripts/safe_interactive.py -t blended_skill_talk -mf zoo:blender/blender_90M/model

                                 /@&%###%&&@@#
                      .,*/((((##@@@&%%#%%&&@@@&%%#/*.
             #@@&&&%%%%##(((///*****//(((###%%%&&&@@@@@&&%#%%#.
         .%&@@@@@&&&%%%####((((////((((####%%%&&&@@@@@&&%%#%%####,
           ./,,#(//**,,.....,,,,***////((((########%%%%%%%%###(((
              /*(//**,,,....,,,,***////((((########%%%%%%%%###(#%*
               (*,...      ...,,,***//////((((((///////(/*...,/#@@@(
               **,,..         ...,,,,,,,,,,........,,*///*...*(#@@@@&&*
               ./,,..          ...,,,,,,,,,........,,*//*,...*#/,,,,,/%#
                (*,..          ...,,,,,,,,,........,,*//*,..,/(      .,#(
                **,..          ...,,,,,,,,,.........,*//*,..,((       .,(#
                 /*,..          ....,,,,,,,.....  ..,***,,,,(#         ..#&
                 **,..          ....,,,,,,,....   ..,***,,,*#.         .,%@
                 ./,...       B l e n d e r B o t ...***,,,*#          .*%@
                  /*,..          ...,,,,,,,....    .,**,,,,/#         ..(%/
                  /*,,..         ...,,,,,,,...    ..,*,,,,,(.         ..#&
                  ,/*,..         ...,,,,,,,...    ..,*,,,,*#         ..*%(
                   /*,..         ...,,,,,.....    ..,*,*,,/(         ..#&
                   /**,..        ...,,,,.....    ...,***,*(.       ,,(%.
                    (/*,,..      ....,,.....     ...,****(&@@@&&&#,
                     (/*,,...   .....,,......     ..,****#@,
                     *(/*,,/....*(###%(,(%%##(*.  ./,,**(
                      ,//**(,........,/((#.........*,**(
                      .(#//*,,,,,,.*.,/((%/,,.....,,*/@
                    ((######//****,/.,/(#%#***,***(&@@@@@(
                   *&%%#####%%%%%%%#//(#%&%%&&@@@@@@@@@@@@*
                   &&%%%###((((((####%%%%&&&&&@@@@@@@@@@&&@.
                  *##%%%##(((((((####%%%%%&&&&@@@@@@@@@&#/*,
                 .(##%#/,  .,*((##%%%&&&&%%%#####%&&@&&%#(/*.
                 /(###(,   .,*/(##%%%&&&&%%%######%&&&&%#(/*,
                */((((*.  ..,//((##%%%%%%%%#######%&&&&%%#(/*,
               .//(((/,   .,*//((###%%%%%%########%%&&&%%#((/,.
              .&####(((((((((######%%%%%%%%&&&&&&&@@@@@@@@@@@@@#
               *&#.   .*/((((#######%%%%%%&&&&&&&@@@@@#/.   (&/
23:09:14 | building data: /Users/ocean/ParlAI/data/models/blender/blender_90M/BST0B.tgz
23:09:14 | Downloading http://parl.ai/downloads/_models/blender/BST0B.tgz to /Users/ocean/ParlAI/data/models/blender/blender_90M/BST0B.tgz
Downloading BST0B.tgz: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 161M/161M [00:14<00:00, 11.1MB/s]
23:09:31 | Overriding opt["task"] to blended_skill_talk (previously: internal:blended_skill_talk,wizard_of_wikipedia,convai2,empathetic_dialogues)
23:09:31 | Overriding opt["model_file"] to /Users/ocean/ParlAI/data/models/blender/blender_90M/model (previously: /checkpoint/edinan/20200210/baseline_BST_retnref/lr=7.5e-06_attention-dropout=0.0_relu-dropout=0.0/model)
23:09:31 | Loading model with `--beam-block-full-context false`
23:09:31 | loading dictionary from /Users/ocean/ParlAI/data/models/blender/blender_90M/model.dict
23:09:31 | num words = 54944
23:09:31 | TransformerGenerator: full interactive mode on.
23:09:32 | DEPRECATED: XLM should only be used for backwards compatibility, as it involves a less-stable layernorm operation.
23:09:32 | Total parameters: 87,508,992 (87,508,992 trainable)
23:09:32 | Loading existing model params from /Users/ocean/ParlAI/data/models/blender/blender_90M/model
23:09:32 | Opt:
23:09:32 |     activation: gelu
23:09:32 |     adafactor_eps: '[1e-30, 0.001]'
23:09:32 |     adam_eps: 1e-08
23:09:32 |     add_p1_after_newln: False
23:09:32 |     aggregate_micro: False
23:09:32 |     allow_missing_init_opts: False
23:09:32 |     attention_dropout: 0.0
23:09:32 |     batchsize: 16
23:09:32 |     beam_block_full_context: False
23:09:32 |     beam_block_list_filename: None
23:09:32 |     beam_block_ngram: 3
23:09:32 |     beam_context_block_ngram: 3
23:09:32 |     beam_delay: 30
23:09:32 |     beam_length_penalty: 0.65
23:09:32 |     beam_min_length: 20
23:09:32 |     beam_size: 10
23:09:32 |     betas: '[0.9, 0.999]'
23:09:32 |     bpe_add_prefix_space: None
23:09:32 |     bpe_debug: False
23:09:32 |     bpe_dropout: None
23:09:32 |     bpe_merge: None
23:09:32 |     bpe_vocab: None
23:09:32 |     compute_tokenized_bleu: False
23:09:32 |     datapath: /Users/ocean/ParlAI/data
23:09:32 |     datatype: train
23:09:32 |     delimiter: '\n'
23:09:32 |     dict_class: parlai.core.dict:DictionaryAgent
23:09:32 |     dict_endtoken: __end__
23:09:32 |     dict_file: /Users/ocean/ParlAI/data/models/blender/blender_90M/model.dict
23:09:32 |     dict_include_test: False
23:09:32 |     dict_include_valid: False
23:09:32 |     dict_initpath: None
23:09:32 |     dict_language: english
23:09:32 |     dict_loaded: True
23:09:32 |     dict_lower: True
23:09:32 |     dict_max_ngram_size: -1
23:09:32 |     dict_maxexs: -1
23:09:32 |     dict_maxtokens: -1
23:09:32 |     dict_minfreq: 0
23:09:32 |     dict_nulltoken: __null__
23:09:32 |     dict_starttoken: __start__
23:09:32 |     dict_textfields: text,labels
23:09:32 |     dict_tokenizer: bpe
23:09:32 |     dict_unktoken: __unk__
23:09:32 |     display_add_fields: 
23:09:32 |     display_examples: False
23:09:32 |     display_partner_persona: True
23:09:32 |     display_prettify: False
23:09:32 |     download_path: None
23:09:32 |     dropout: 0.1
23:09:32 |     dynamic_batching: None
23:09:32 |     embedding_projection: random
23:09:32 |     embedding_size: 512
23:09:32 |     embedding_type: random
23:09:32 |     embeddings_scale: True
23:09:32 |     eval_batchsize: None
23:09:32 |     evaltask: None
23:09:32 |     ffn_size: 2048
23:09:32 |     force_fp16_tokens: True
23:09:32 |     fp16: True
23:09:32 |     fp16_impl: apex
23:09:32 |     gpu: -1
23:09:32 |     gradient_clip: 0.1
23:09:32 |     hide_labels: False
23:09:32 |     history_add_global_end_token: None
23:09:32 |     history_reversed: False
23:09:32 |     history_size: -1
23:09:32 |     image_cropsize: 224
23:09:32 |     image_mode: raw
23:09:32 |     image_size: 256
23:09:32 |     include_checked_sentence: True
23:09:32 |     include_initial_utterances: False
23:09:32 |     include_knowledge: True
23:09:32 |     include_knowledge_separator: False
23:09:32 |     include_personas: True
23:09:32 |     inference: beam
23:09:32 |     init_model: /checkpoint/parlai/zoo/new_reddit/newreddit_trained20190909_usedfordodeca/model
23:09:32 |     init_opt: None
23:09:32 |     interactive_mode: True
23:09:32 |     interactive_task: True
23:09:32 |     invsqrt_lr_decay_gamma: -1
23:09:32 |     label_truncate: 128
23:09:32 |     label_type: response
23:09:32 |     learn_positional_embeddings: True
23:09:32 |     learningrate: 7.5e-06
23:09:32 |     local_human_candidates_file: None
23:09:32 |     log_every_n_secs: 2
23:09:32 |     loglevel: info
23:09:32 |     lr_scheduler: reduceonplateau
23:09:32 |     lr_scheduler_decay: 0.5
23:09:32 |     lr_scheduler_patience: 3
23:09:32 |     max_lr_steps: -1
23:09:32 |     max_train_time: -1
23:09:32 |     metrics: default
23:09:32 |     model: transformer/generator
23:09:32 |     model_file: /Users/ocean/ParlAI/data/models/blender/blender_90M/model
23:09:32 |     model_parallel: False
23:09:32 |     momentum: 0
23:09:32 |     multitask_weights: '[1.0, 3.0, 3.0, 3.0]'
23:09:32 |     n_decoder_layers: -1
23:09:32 |     n_encoder_layers: -1
23:09:32 |     n_heads: 16
23:09:32 |     n_layers: 8
23:09:32 |     n_positions: 512
23:09:32 |     n_segments: 0
23:09:32 |     nesterov: True
23:09:32 |     no_cuda: False
23:09:32 |     num_epochs: -1
23:09:32 |     num_topics: 5
23:09:32 |     numthreads: 1
23:09:32 |     nus: [0.7]
23:09:32 |     optimizer: adamax
23:09:32 |     output_scaling: 1.0
23:09:32 |     override: "{'task': 'blended_skill_talk', 'model_file': '/Users/ocean/ParlAI/data/models/blender/blender_90M/model'}"
23:09:32 |     parlai_home: /private/home/edinan/ParlAI
23:09:32 |     person_tokens: False
23:09:32 |     rank_candidates: False
23:09:32 |     relu_dropout: 0.0
23:09:32 |     safe_personas_only: True
23:09:32 |     safety: all
23:09:32 |     save_after_valid: True
23:09:32 |     save_every_n_secs: 60.0
23:09:32 |     share_word_embeddings: True
23:09:32 |     short_final_eval: False
23:09:32 |     show_advanced_args: False
23:09:32 |     single_turn: False
23:09:32 |     skip_generation: False
23:09:32 |     special_tok_lst: None
23:09:32 |     split_lines: False
23:09:32 |     starttime: Feb10_07-25
23:09:32 |     task: blended_skill_talk
23:09:32 |     temperature: 1.0
23:09:32 |     tensorboard_log: False
23:09:32 |     text_truncate: 512
23:09:32 |     topk: 10
23:09:32 |     topp: 0.9
23:09:32 |     train_experiencer_only: False
23:09:32 |     truncate: -1
23:09:32 |     update_freq: 1
23:09:32 |     use_reply: label
23:09:32 |     validation_cutoff: 1.0
23:09:32 |     validation_every_n_epochs: 0.25
23:09:32 |     validation_every_n_secs: -1
23:09:32 |     validation_max_exs: 20000
23:09:32 |     validation_metric: ppl
23:09:32 |     validation_metric_mode: min
23:09:32 |     validation_patience: 15
23:09:32 |     validation_share_agent: False
23:09:32 |     variant: xlm
23:09:32 |     verbose: False
23:09:32 |     warmup_rate: 0.0001
23:09:32 |     warmup_updates: -1
23:09:32 |     weight_decay: None
23:09:32 | Current ParlAI commit: 99160674564847c8ed68bc21437eab8c9301e95d
Enter [DONE] if you want to end the episode, [EXIT] to quit.
23:09:32 | building data: /Users/ocean/ParlAI/data/OffensiveLanguage
23:09:32 | Downloading http://parl.ai/downloads/offensive_language/OffensiveLanguage.txt to /Users/ocean/ParlAI/data/OffensiveLanguage/OffensiveLanguage.txt
Downloading OffensiveLanguage.txt: 0.00B [00:01, ?B/s]
23:09:34 | building data: /Users/ocean/ParlAI/data/models/dialogue_safety/safety_models_v1.tgz
23:09:34 | Downloading http://parl.ai/downloads/_models/dialogue_safety/safety_models_v1.tgz to /Users/ocean/ParlAI/data/models/dialogue_safety/safety_models_v1.tgz
Downloading safety_models_v1.tgz: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.23G/2.23G [02:59<00:00, 12.4MB/s]
23:13:13 | Overriding opt["model"] to transformer/classifier (previously: transformer_classifier)
23:13:13 | Overriding opt["model_file"] to /Users/ocean/ParlAI/data/models/dialogue_safety/single_turn/model (previously: /checkpoint/edinan/20190828/safety_reddit/contiguous-dropout=0_multitask-weights=0.5,0.1,0.1,0.4,0.2_lr=5e-05_lr-scheduler-patience=3_lr-scheduler-decay=0.9_warmupupdates=1000/model)
23:13:13 | Overriding opt["print_scores"] to True (previously: False)
23:13:13 | Overriding opt["data_parallel"] to False (previously: True)
23:13:13 | loading dictionary from /Users/ocean/ParlAI/data/models/dialogue_safety/single_turn/model.dict
23:13:13 | num words = 54944
23:13:16 | Loading existing model parameters from /Users/ocean/ParlAI/data/models/dialogue_safety/single_turn/model
23:13:19 | Total parameters: 128,042,498 (128,042,498 trainable)
23:13:19 | creating task(s): blended_skill_talk
[ loading personas.. ]

  [NOTE: In the BST paper both partners have a persona.
         You can choose to ignore yours, the model never sees it.
         In the Blender paper, this was not used for humans.
         You can also turn personas off with --include-personas False]

[building data: /Users/ocean/ParlAI/data/blended_skill_talk]
23:13:19 | Downloading http://parl.ai/downloads/blended_skill_talk/blended_skill_talk.tar.gz to /Users/ocean/ParlAI/data/blended_skill_talk/blended_skill_talk.tar.gz
Downloading blended_skill_talk.tar.gz: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 38.1M/38.1M [00:05<00:00, 7.30MB/s]
23:13:26 | Downloading http://parl.ai/downloads/blended_skill_talk/personas_list.txt to /Users/ocean/ParlAI/data/blended_skill_talk/persona_list.txt
Downloading persona_list.txt: 0.00B [00:02, ?B/s]
23:13:28 | Downloading http://parl.ai/downloads/blended_skill_talk/topic_to_persona_list.txt to /Users/ocean/ParlAI/data/blended_skill_talk/topic_to_persona_list.txt
Downloading topic_to_persona_list.txt: 0.00B [00:01, ?B/s]
23:13:29 | Downloading http://parl.ai/downloads/blended_skill_talk/ed_persona_topicifier__train__both_sides.json to /Users/ocean/ParlAI/data/blended_skill_talk/ed_persona_topicifier__train__both_sides.json
Downloading ed_persona_topicifier__train__both_sides.json: 0.00B [00:04, ?B/s]
23:13:33 | Downloading http://parl.ai/downloads/blended_skill_talk/ed_persona_topicifier__train__experiencer_only.json to /Users/ocean/ParlAI/data/blended_skill_talk/ed_persona_topicifier__train__experiencer_only.json
Downloading ed_persona_topicifier__train__experiencer_only.json: 0.00B [00:03, ?B/s]
23:13:36 | Downloading http://parl.ai/downloads/blended_skill_talk/ed_persona_topicifier__valid__experiencer_only.json to /Users/ocean/ParlAI/data/blended_skill_talk/ed_persona_topicifier__valid__experiencer_only.json
Downloading ed_persona_topicifier__valid__experiencer_only.json: 0.00B [00:04, ?B/s]
23:13:41 | Downloading http://parl.ai/downloads/blended_skill_talk/ed_persona_topicifier__test__experiencer_only.json to /Users/ocean/ParlAI/data/blended_skill_talk/ed_persona_topicifier__test__experiencer_only.json
Downloading ed_persona_topicifier__test__experiencer_only.json: 0.00B [00:04, ?B/s]
23:13:46 | Downloading http://parl.ai/downloads/blended_skill_talk/safe_personas_2.txt to /Users/ocean/ParlAI/data/blended_skill_talk/safe_personas.txt
Downloading safe_personas.txt: 0.00B [00:01, ?B/s]
Loading /Users/ocean/ParlAI/data/blended_skill_talk/train.json.
Saving to /Users/ocean/ParlAI/data/blended_skill_talk/train.txt
Loading /Users/ocean/ParlAI/data/blended_skill_talk/valid.json.
Saving to /Users/ocean/ParlAI/data/blended_skill_talk/valid.txt
Loading /Users/ocean/ParlAI/data/blended_skill_talk/test.json.
Saving to /Users/ocean/ParlAI/data/blended_skill_talk/test.txt
[context]: your persona: i live in america in the state of florida.
your persona: i've braces.
Enter Your Message: 

The __Enter Your Message:__ prompt appeared!

I address Blenderbot in English, and the chat begins.

(The session that followed is the Experiment 2 transcript shown above.)
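
One detail worth noticing in the startup log above: the model decodes with beam search (inference: beam, beam_size: 10, beam_min_length: 20, beam_block_ngram: 3), which likely contributes to the safe, fluent replies seen in the experiments. Assuming the standard ParlAI generator flags are accepted by safe_interactive.py (a hedged sketch, not verified here), they could be overridden like this:

Terminal
python parlai/scripts/safe_interactive.py -t blended_skill_talk \
  -mf zoo:blender/blender_90M/model --beam-size 3 --beam-min-length 5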

I would also like to compare its accuracy against Google's Meena, OpenAI's GPT-3, and Google's T5.

As we surveyed at the start of this article, 2020, the year __Blenderbot__ was released, saw the announcement of multiple multi-task, (ultra-)large pre-trained language models based on the Transformer / BERT.

For each usage scenario (each combination of target data and task), I would like to pin down the strengths and weaknesses, the individual character, of each model.

If we keep each model's character in mind, then rather than reading and discarding the rapid-fire news of "trending SOTA models" as if chasing fashion, we can put each pre-trained language model to work as a practical tool for the data-analysis problems around us. (When using them commercially, don't forget to check each model's licensing terms.)

Google unveiled Meena on 2020/1/28, per the blog post below. (Implementation code apparently still unreleased?)
Google AI Blog Tuesday, January 28, 2020, Towards a Conversational Agent that Can Chat About…Anything
The paper behind the blog post above
Googleが発表したチャットボット"Meena"とは?
グーグルのチャットボットが先端AIで進化、人間相手にボケとツッコミを見せた
グーグル、どんな会話もできる「最高」のチャットボットを発表

Google has also released T5. (An updated version of the paper was published on 2020/7/28.)
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

https://github.com/google-research/text-to-text-transfer-transformer
@shotasakamotoさん 「T5(Text-to-Text Transfer Transformer)について少し説明してみる」
Googleの新たな自然言語処理AI「T5」の特徴とは?実際にAIとクイズで対決も可能
第7回 T5 によるテキスト生成の検証

GPT-3, released by OpenAI on 2020/5/28
OpenAI, Language Models are Few-Shot Learners
github.com/openai/gpt-3
GPT-3にチューリングテストを試してみる
Gigazine 文章生成AI「GPT-3」がRedditで1週間誰にも気付かれず人間と会話していたことが判明
Gigazine 超高精度な言語モデル「GPT-3」は本当に「人間そのもの」な会話ができるのか実験した結果は?

The release is still to come, but LINE has also announced plans to release a Japanese general-purpose (cross-task) large language model. The press release mentions "GPT-3" and "T5".

LINE、『英語に匹敵する』超巨大言語モデルを日本語で開発へ 用途に特化しない汎用型
LINE、NAVERと共同で、世界初、日本語に特化した超巨大言語モデルを開発 新規開発不要で、対話や翻訳などさまざまな日本語AIの生成を可能に

Conventional language models had to be trained individually by NLP engineers for each use case (Q&A, dialogue, and so on): task-specific language models.

General-purpose language models, on the other hand, are the kind of language model typified by OpenAI's "GPT"※1 and Google's "T5"※2.
Build a language model trained on vast amounts of language data (newspaper articles, encyclopedias, novels, code, and so on); then, simply by running few-shot learning*1 to set the context, it becomes able to perform all kinds of language processing (dialogue, translation, input completion, document generation, programming code, etc.), and individual use cases are expected to become easy to realize.
*1: Giving the model, for example, the opening of a blog post or a fragment of programming code. Based on this, it generates the string it judges most plausible. For instance, given a word such as "おはよう" ("good morning"), one can imagine it returning the string it judges most plausible from what it has learned, such as "おはようございます" (the polite form).
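
To make that description concrete, a GPT-3-style few-shot prompt looks roughly like the following (a hypothetical prompt written for this article, not LINE's actual model or API):

Python
# Hypothetical few-shot prompt in the GPT-3 style the press release describes:
# a couple of context-setting examples, then a new input for the model to complete.
prompt = """挨拶を丁寧な言い方に直す。

入力: おやすみ
出力: おやすみなさい

入力: おはよう
出力:"""

# A general-purpose model, given `prompt`, would be expected to continue it
# with the most plausible string it has learned, e.g. "おはようございます".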

For the Japanese-specialized general-purpose language model being developed this time, more than 175 billion parameters and more than 10 billion pages of Japanese data are planned for use as training data. This far exceeds the parameter counts and training volumes of the Japanese-based language models that exist in the world today; both will continue to be expanded.

This initiative is expected to raise the standard of Japanese AI dramatically and to greatly broaden what is possible with Japanese-language AI.
