言語処理100本ノック(2020): 35

"""
35. Word frequency
Find the words that appear in the text together with their frequencies, and sort them in descending order of frequency.

Input format (each sentence is a list of morpheme dicts, bracketed by BOS/EOS sentinels):
[[{'surface': '', 'base': '*', 'pos': 'BOS/EOS', 'pos1': '*'},
  {'surface': '一', 'base': '一', 'pos': '名詞', 'pos1': '数'},
  {'surface': '', 'base': '*', 'pos': 'BOS/EOS', 'pos1': '*'}],
 [{'surface': '', 'base': '*', 'pos': 'BOS/EOS', 'pos1': '*'},
  {'surface': '吾輩', 'base': '吾輩', 'pos': '名詞', 'pos1': '代名詞'},
  {'surface': 'は', 'base': 'は', 'pos': '助詞', 'pos1': '係助詞'},
  {'surface': '猫', 'base': '猫', 'pos': '名詞', 'pos1': '一般'},
  {'surface': 'で', 'base': 'だ', 'pos': '助動詞', 'pos1': '*'},
  {'surface': 'ある', 'base': 'ある', 'pos': '助動詞', 'pos1': '*'},
  {'surface': '。', 'base': '。', 'pos': '記号', 'pos1': '句点'},
  {'surface': '', 'base': '*', 'pos': 'BOS/EOS', 'pos1': '*'}],
"""
from collections import Counter
from typing import List, Tuple

import utils


def get_tf(sentence_list: List[List[dict]]) -> List[Tuple[str, int]]:
    # Drop the BOS/EOS sentinel at each end of a sentence, then count surface forms.
    words = [word["surface"] for sent in sentence_list for word in sent[1:-1]]
    return Counter(words).most_common()


data = utils.read_json("30_neko_mecab.json")  # MeCab parse saved as JSON in exercise 30
result = get_tf(data)
# First 10 entries of result:
# [('の', 9194),
#  ('。', 7486),
#  ('て', 6868),
#  ('、', 6772),
#  ('は', 6420),
#  ('に', 6243),
#  ('を', 6071),
#  ('と', 5508),
#  ('が', 5337),
#  ('た', 3988)]
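
For reference, Counter.most_common() called with no argument returns every distinct element as an (element, count) tuple sorted by count in descending order, which is exactly the output shape the exercise asks for. A toy run on made-up surface forms:

from collections import Counter

words = ["の", "猫", "の", "は", "猫", "の"]  # BOS/EOS already stripped
print(Counter(words).most_common())
# [('の', 3), ('猫', 2), ('は', 1)]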

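utils is a local helper module shared across this exercise series and is not shown in the post. A minimal sketch of what read_json is assumed to do, treating it as a thin wrapper around json.load (the actual helper may differ):

import json

def read_json(path: str) -> list:
    # Assumption: the file is a UTF-8 JSON dump of the MeCab parse from exercise 30.
    with open(path, encoding="utf-8") as f:
        return json.load(f)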