
Installing the datasets library published by Hugging Face


###Installing the datasets library published by Hugging Face

(GitHub) huggingface/datasets

```shell:Terminal
% pip install datasets
Collecting datasets
  Downloading datasets-1.1.3-py3-none-any.whl (153 kB)
     |████████████████████████████████| 153 kB 9.0 MB/s 

( ...output omitted... )

Installing collected packages: pytz, dill, xxhash, tqdm, pyarrow, pandas, multiprocess, datasets
  Attempting uninstall: tqdm
    Found existing installation: tqdm 4.54.1
    Uninstalling tqdm-4.54.1:
      Successfully uninstalled tqdm-4.54.1
Successfully installed datasets-1.1.3 dill-0.3.3 multiprocess-0.70.11.1 pandas-1.1.5 pyarrow-2.0.0 pytz-2020.4 tqdm-4.49.0 xxhash-2.0.0
% python
```
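
Before starting the interactive session, a quick import check confirms the install went through. This is a minimal sketch added for completeness (it is not part of the original log); the version it prints should match the 1.1.3 reported by pip above.

```python
# Minimal post-install check (assumed sketch, not from the original log).
import datasets

# Should print 1.1.3, matching the version pip reported above.
print(datasets.__version__)
```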

###Calling it from Python 3.6.3

```python:Python3.6.3
>>> from datasets import list_datasets, load_dataset, list_metrics, load_metric
>>> print(list_datasets())
['aeslc', 'afrikaans_ner_corpus', 'ag_news', 'ai2_arc', 'ajgt_twitter_ar', 'allegro_reviews', 'allocine', 'amazon_reviews_multi', 'amazon_us_reviews', 'ambig_qa', 'amttl', 'anli', 'aqua_rat', 'arcd', 'arsentd_lev', 'art', 'arxiv_dataset', 'aslg_pc12', 'asnq', 'asset', 'autshumato', 'bible_para', 'big_patent', 'billsum', 'biomrc', 'blended_skill_talk', 'blimp', 'blog_authorship_corpus', 'bookcorpus', 'bookcorpusopen', 'boolq', 'break_data', 'bsd_ja_en', 'c3', 'c4', 'cail2018', 'capes', 'cawac', 'cdsc', 'cdt', 'cfq', 'chr_en', 'circa', 'civil_comments', 'clinc_oos', 'clue', 'cmrc2018', 'cnn_dailymail', 'coached_conv_pref', 'coarse_discourse', 'codah', 'code_search_net', 'com_qa', 'common_gen', 'commonsense_qa', 'compguesswhat', 'conceptnet5', 'conll2000', 'conll2002', 'conll2003', 'conv_ai', 'coqa', 'cornell_movie_dialog', 'cos_e', 'cosmos_qa', 'covid_qa_castorini', 'covid_qa_deepset', 'craigslist_bargains', 'crd3', 'crime_and_punish', 'crows_pairs', 'cs_restaurants', 'csv', 'curiosity_dialogs', 'daily_dialog', 'dane', 'danish_political_comments', 'dart', 'dbpedia_14', 'dbrd', 'deal_or_no_dialog', 'definite_pronoun_resolution', 'dengue_filipino', 'dialog_re', 'disaster_response_messages', 'discofuse', 'docred', 'doqa', 'dream', 'drop', 'dyk', 'e2e_nlg', 'e2e_nlg_cleaned', 'ecb', 'eitb_parcc', 'eli5', 'emea', 'emo', 'emotion', 'empathetic_dialogues', 'enriched_web_nlg', 'eraser_multi_rc', 'esnli', 'eth_py150_open', 'euronews', 'event2Mind', 'evidence_infer_treatment', 'exams', 'fake_news_filipino', 'farsi_news', 'fever', 'finer', 'flores', 'flue', 'fquad', 'gap', 'generated_reviews_enth', 'german_legal_entity_recognition', 'germaner', 'germeval_14', 'gigaword', 'glucose', 'glue', 'go_emotions', 'google_wellformed_query', 'grail_qa', 'great_code', 'guardian_authorship', 'gutenberg_time', 'hans', 'hansards', 'hard', 'hate_speech_filipino', 'hausa_voa_ner', 'hausa_voa_topics', 'health_fact', 'hebrew_sentiment', 'hellaswag', 'hkcancor', 'hotpot_qa', 'hybrid_qa', 'hyperpartisan_news_detection', 'id_nergrit_ner', 'imdb', 'imdb_urdu_reviews', 'indic_glue', 'inquisitive_qg', 'isixhosa_ner_corpus', 'isizulu_ner_corpus', 'iwslt2017', 'jeopardy', 'jfleg', 'jnlpba', 'json', 'kannada_news', 'kelm', 'kilt_tasks', 'kilt_wikipedia', 'kor_hate', 'kor_ner', 'kor_nli', 'kor_nlu', 'kor_qpair', 'labr', 'lambada', 'large_spanish_corpus', 'lc_quad', 'lener_br', 'librispeech_lm', 'limit', 'lince', 'linnaeus', 'lm1b', 'lst20', 'math_dataset', 'math_qa', 'matinf', 'mc_taco', 'med_hop', 'medal', 'medical_questions_pairs', 'metooma', 'metrec', 'mkb', 'mkqa', 'mlqa', 'mlsum', 'mocha', 'movie_rationales', 'mrqa', 'ms_marco', 'ms_terms', 'msr_genomics_kbcomp', 'msr_text_compression', 'msr_zhen_translation_parity', 'msra_ner', 'multi_news', 'multi_nli', 'multi_nli_mismatch', 'multi_woz_v22', 'multi_x_science_sum', 'mwsc', 'myanmar_news', 'natural_questions', 'ncbi_disease', 'nchlt', 'ncslgr', 'neural_code_search', 'newsgroup', 'newsph', 'newsph_nli', 'newsroom', 'nkjp-ner', 'nli_tr', 'norwegian_ner', 'nsmc', 'numer_sense', 'numeric_fused_head', 'oclar', 'offenseval2020_tr', 'onestop_english', 'open_subtitles', 'openbookqa', 'openwebtext', 'opinosis', 'opus100', 'opus_dogc', 'opus_elhuyar', 'opus_euconst', 'opus_finlex', 'opus_fiskmo', 'opus_infopankki', 'opus_memat', 'opus_montenegrinsubs', 'opus_openoffice', 'opus_sardware', 'opus_xhosanavy', 'orange_sum', 'pandas', 'para_crawl', 'paws', 'paws-x', 'pec', 'peoples_daily_ner', 'pg19', 'php', 'piaf', 'pib', 'piqa', 'poem_sentiment', 'polemo2', 'polyglot_ner', 'prachathai67k', 
'pragmeval', 'proto_qa', 'psc', 'pubmed_qa', 'py_ast', 'qa4mre', 'qa_zre', 'qangaroo', 'qanta', 'qasc', 'qed', 'qed_amara', 'quac', 'quail', 'quarel', 'quartz', 'quora', 'quoref', 'race', 're_dial', 'reclor', 'reddit', 'reddit_tifu', 'reuters21578', 'roman_urdu', 'ropes', 'rotten_tomatoes', 'sanskrit_classic', 'scan', 'scb_mt_enth_2020', 'schema_guided_dstc8', 'scicite', 'scielo', 'scientific_papers', 'scifact', 'sciq', 'scitail', 'scitldr', 'search_qa', 'sem_eval_2010_task_8', 'sem_eval_2014_task_1', 'sent_comp', 'sentiment140', 'sepedi_ner', 'sesotho_ner_corpus', 'setimes', 'setswana_ner_corpus', 'sharc', 'sharc_modified', 'simple_questions_v2', 'siswati_ner_corpus', 'smartdata', 'sms_spam', 'snli', 'social_bias_frames', 'social_i_qa', 'sogou_news', 'species_800', 'spider', 'squad', 'squad_es', 'squad_it', 'squad_kor_v1', 'squad_v1_pt', 'squad_v2', 'squadshifts', 'stsb_mt_sv', 'style_change_detection', 'super_glue', 'swag', 'swahili_news', 'swedish_ner_corpus', 'swedish_reviews', 'tab_fact', 'tamilmixsentiment', 'tanzil', 'tashkeela', 'taskmaster1', 'taskmaster2', 'taskmaster3', 'ted_hrlr', 'ted_multi', 'telugu_books', 'telugu_news', 'tep_en_fa_para', 'text', 'thainer', 'thaiqa_squad', 'thaisum', 'tilde_model', 'tiny_shakespeare', 'tlc', 'totto', 'trec', 'trivia_qa', 'tsac', 'tunizi', 'tuple_ie', 'turkish_ner', 'turku_ner_corpus', 'tweet_qa', 'tweets_ar_en_parallel', 'tydiqa', 'ubuntu_dialogs_corpus', 'udhr', 'um005', 'un_multi', 'un_pc', 'universal_dependencies', 'urdu_fake_news', 'urdu_sentiment_corpus', 'web_nlg', 'web_of_science', 'web_questions', 'weibo_ner', 'wiki40b', 'wiki_auto', 'wiki_bio', 'wiki_dpr', 'wiki_hop', 'wiki_qa', 'wiki_qa_ar', 'wiki_snippets', 'wiki_split', 'wikiann', 'wikicorpus', 'wikihow', 'wikipedia', 'wikisql', 'wikitext', 'wikitext_tl39', 'winograd_wsc', 'winogrande', 'wiqa', 'wisesight1000', 'wisesight_sentiment', 'wmt14', 'wmt15', 'wmt16', 'wmt17', 'wmt18', 'wmt19', 'wmt_t2t', 'wnut_17', 'wongnai_reviews', 'woz_dialogue', 'x_stance', 'xcopa', 'xglue', 'xnli', 'xor_tydi_qa', 'xquad', 'xquad_r', 'xsum', 'xtreme', 'yahoo_answers_qa', 'yahoo_answers_topics', 'yelp_polarity', 'yelp_review_full', 'yoruba_bbc_topics', 'yoruba_gv_ner', 'zest', 'Fraser/news-category-dataset', 'Fraser/python-lines', 'cdminix/mgb1', 'german-nlp-group/german_common_crawl', 'joelito/ler', 'joelito/sem_eval_2010_task_8', 'k-halid/ar', 'lhoestq/squad', 'mulcyber/europarl-mono', 'piEsposito/br-quad-2.0', 'piEsposito/br_quad_20', 'piEsposito/squad_20_ptbr', 'sshleifer/pseudo_bart_xsum']
>>> 
>>> squad_dataset = load_dataset('squad')
Downloading: 5.24kB [00:00, 2.88MB/s]                                                                                                                                                  
Downloading: 2.19kB [00:00, 1.22MB/s]                                                                                                                                                  
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /Users/ocean/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41...
Downloading: 30.3MB [00:00, 45.1MB/s]                                                                                                                                                  
Downloading: 4.85MB [00:00, 37.1MB/s]                                                                                                                                                  
Dataset squad downloaded and prepared to /Users/ocean/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41. Subsequent calls will reuse this data.
>>> 
>>> print(squad_dataset['train'][0])
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame'}
>>> 
>>> print(squad_dataset['train'][1])
{'answers': {'answer_start': [188], 'text': ['a copper statue of Christ']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f4190066117f', 'question': 'What is in front of the Notre Dame Main Building?', 'title': 'University_of_Notre_Dame'}
>>> 
>>> print(squad_dataset['train'][2])
{'answers': {'answer_start': [279], 'text': ['the Main Building']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661180', 'question': 'The Basilica of the Sacred heart at Notre Dame is beside to which structure?', 'title': 'University_of_Notre_Dame'}
>>> 
>>> print(list_metrics())
['accuracy', 'bertscore', 'bleu', 'bleurt', 'coval', 'f1', 'gleu', 'glue', 'indic_glue', 'meteor', 'precision', 'recall', 'rouge', 'sacrebleu', 'seqeval', 'squad', 'squad_v2', 'xnli']
>>> 
>>> squad_metric = load_metric('squad')
Downloading: 4.02kB [00:00, 2.22MB/s]                                                                                                                                                  
Downloading: 3.35kB [00:00, 2.05MB/s]                                                                                                                                                  
>>> 
>>> print(squad_metric)
Metric(name: "squad", features: {'predictions': {'id': Value(dtype='string', id=None), 'prediction_text': Value(dtype='string', id=None)}, 'references': {'id': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}}, usage: """
Computes SQuAD scores (F1 and EM).
Args:
    predictions: List of question-answers dictionaries with the following key-values:
        - 'id': id of the question-answer pair as given in the references (see below)
        - 'prediction_text': the text of the answer
    references: List of question-answers dictionaries with the following key-values:
        - 'id': id of the question-answer pair (see above),
        - 'answers': a Dict in the SQuAD dataset format
            {
                'text': list of possible texts for the answer, as a list of strings
                'answer_start': list of start positions for the answer, as a list of ints
            }
            Note that answer_start values are not taken into account to compute the metric.
Returns:
    'exact_match': Exact match (the normalized answer exactly match the gold answer)
    'f1': The F-score of predicted tokens versus the gold answer
""", stored examples: 0)
>>>
```
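
As a follow-up, the usage string printed by `print(squad_metric)` can be exercised directly. The sketch below is my addition (not part of the original session): it feeds the gold answer of the first SQuAD training example back in as the prediction, so both scores should come out at (or near) 100.

```python
from datasets import load_dataset, load_metric

# Both calls reuse the cached data downloaded in the session above.
squad_dataset = load_dataset('squad')
squad_metric = load_metric('squad')

# Treat the gold answer of the first training example as the model's prediction.
example = squad_dataset['train'][0]
predictions = [{'id': example['id'], 'prediction_text': example['answers']['text'][0]}]

# 'answers' stays in SQuAD format: {'text': [...], 'answer_start': [...]}
# (answer_start is ignored when computing the metric, per the usage string above).
references = [{'id': example['id'], 'answers': example['answers']}]

print(squad_metric.compute(predictions=predictions, references=references))
# Expected output: roughly {'exact_match': 100.0, 'f1': 100.0}
```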