
Trying out morphological analysis in Swift

Posted at 2016-08-28

This article shows how to run morphological analysis on Japanese text using the standard iOS APIs 🔍✨
That said, at the moment these APIs can return part-of-speech names for English text, but for Japanese they can only do word segmentation (wakachi-gaki: writing a sentence with spaces inserted between words).
The code below can be copied and used as-is 👍🏻

Implementation

Using NSLinguisticTagger

struct Tokenizer {

    // MARK: - Properties
    private static let scheme = NSLinguisticTagSchemeTokenType
    private static let options: NSLinguisticTaggerOptions = [.OmitWhitespace, .OmitPunctuation, .JoinNames]

    // MARK: - Publics
    static func tokenize(text: String) -> [String] {
        let range = text.startIndex ..< text.endIndex
        var tokens: [String] = []

        text.enumerateLinguisticTagsInRange(range, scheme: scheme, options: options, orthography: nil) { (_, range, _, _) in
            let token = text.substringWithRange(range)
            tokens.append(token)
        }

        return tokens
    }
}

Tokenizer.tokenize("すもももももももものうち")
// ["すもも", "も", "もも", "も", "もも", "の", "うち"]
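As noted in the introduction, NSLinguisticTagger can also return part-of-speech names for English text. A minimal sketch in the same Swift 2-era style, swapping the scheme from NSLinguisticTagSchemeTokenType to NSLinguisticTagSchemeLexicalClass (the first closure parameter is then the lexical class name; the struct and method names here are just for illustration):

```swift
struct PartOfSpeechTagger {

    // MARK: - Properties
    private static let scheme = NSLinguisticTagSchemeLexicalClass
    private static let options: NSLinguisticTaggerOptions = [.OmitWhitespace, .OmitPunctuation, .JoinNames]

    // MARK: - Publics
    static func tag(text: String) -> [(token: String, tag: String)] {
        let range = text.startIndex ..< text.endIndex
        var results: [(token: String, tag: String)] = []

        text.enumerateLinguisticTagsInRange(range, scheme: scheme, options: options, orthography: nil) { (tag, range, _, _) in
            // `tag` is the lexical class name, e.g. "Noun", "Verb", "Pronoun"
            results.append((token: text.substringWithRange(range), tag: tag))
        }

        return results
    }
}

PartOfSpeechTagger.tag("Where there's smoke, there's fire.")
```

For Japanese input the same call only yields token boundaries, which is why the article sticks to NSLinguisticTagSchemeTokenType for tokenizing.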

Using CFStringTokenizer

struct Tokenizer {

    // MARK: - Properties
    private static let flag = UInt(kCFStringTokenizerUnitWord)
    private static let locale = CFLocaleCopyCurrent()

    // MARK: - Publics
    static func tokenize(text: String) -> [String] {
        let range = CFRangeMake(0, (text as NSString).length) // CFRange counts UTF-16 code units, not Characters
        let tokenizer = CFStringTokenizerCreate(kCFAllocatorDefault, text, range, flag, locale)
        var type = CFStringTokenizerAdvanceToNextToken(tokenizer), tokens: [String] = []

        while type != .None {
            let current = CFStringTokenizerGetCurrentTokenRange(tokenizer)
            let substring = (text as NSString).substringWithRange(NSRange(location: current.location, length: current.length))
            tokens.append(substring)
            type = CFStringTokenizerAdvanceToNextToken(tokenizer)
        }

        return tokens
    }
}

Tokenizer.tokenize("すもももももももものうち")
// ["すもも", "も", "もも", "も", "もも", "の", "うち"]

Using enumerateSubstringsInRange

struct Tokenizer {

    // MARK: - Publics
    static func tokenize(text: String) -> [String] {
        let range = text.startIndex ..< text.endIndex
        var tokens: [String] = []

        text.enumerateSubstringsInRange(range, options: .ByWords) { (substring, _, _, _) -> () in
            if let substring = substring {
                tokens.append(substring)
            }
        }

        return tokens
    }
}

Tokenizer.tokenize("すもももももももものうち")
// ["すもも", "も", "もも", "も", "もも", "の", "うち"]

Results

Japanese

Sentence: 火のない所に煙は立たぬ。
NSLinguisticTagger: 火, の, な, い, 所, に, 煙, は, 立, た, ぬ
CFStringTokenizer: 火, の, な, い, 所, に, 煙, は, 立, た, ぬ
enumerateSubstringsInRange: 火, の, ない, 所, に, 煙, は, 立た, ぬ

English

Sentence: Where there's smoke, there's fire.
NSLinguisticTagger: Where, there, ’s, smoke, there, ’s, fire
CFStringTokenizer: Where, there’s, smoke, there’s, fire
enumerateSubstringsInRange: Where, there’s, smoke, there’s, fire

Mixed

Sentence: experience🤑しながらgrowing up⤴️していくのが、僕のphilosophy🎩だから。
NSLinguisticTagger: experience, 🤑, し, ながら, growing, up, ⤴, , し, て, い, く, の, が, 僕, の, philosophy, 🎩, だ, から
CFStringTokenizer: experience, を, し, ながら, growing, up, し, て, い, く, の, が, 僕, の, philosophy, だ, から
enumerateSubstringsInRange: experience, を, し, ながら, growing, up, し, て, いく, の, が, 僕, の, philosophy, だ, から

Conclusion

I think the right choice depends on your purpose and use case.
In my case I also wanted to capture emoji, so I went with NSLinguisticTagger.
