
[Tag Suggestion with Machine Learning #2] Extending the Scraping Script

Hi, this is Chogo again. Today is a cool day, a good day for programming inside a warm home :)

So today's topic is scraping again. Before that, I'd like to explain the goal of this series. My goal is to build a tag suggestion system using a Bayesian machine learning method: it will learn from articles and the tags I have already put on them, then check new articles and suggest tags for them.
I have many things to learn along the way, so I don't know how many articles it will take to reach the goal; I will go one step at a time.
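To make the goal concrete, here is a rough sketch of the Bayesian scoring idea I have in mind, as a minimal naive Bayes classifier. Everything here is a placeholder of my own (function names, data shapes), not the final system:

```python
import math
from collections import defaultdict

def train(examples):
    """examples: list of (word_list, tag_set) pairs from already-tagged articles."""
    total = 0
    tag_counts = defaultdict(int)                        # articles carrying each tag
    word_counts = defaultdict(lambda: defaultdict(int))  # tag -> word -> count
    vocab = set()
    for words, tags in examples:
        total += 1
        vocab.update(words)
        for tag in tags:
            tag_counts[tag] += 1
            for w in words:
                word_counts[tag][w] += 1
    return total, tag_counts, word_counts, vocab

def suggest(words, model, top=3):
    """Rank tags by log P(tag) + sum of log P(word|tag), with add-one smoothing."""
    total, tag_counts, word_counts, vocab = model
    scores = {}
    for tag, n in tag_counts.items():
        denom = sum(word_counts[tag].values()) + len(vocab)
        score = math.log(float(n) / total)
        for w in words:
            score += math.log((word_counts[tag][w] + 1.0) / denom)
        scores[tag] = score
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

The scraped article text from this series would become the `word_list` side of the training pairs; the tags I already assigned become the `tag_set` side.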

OK, so today's topic is still scraping. In article #1 I explained how to scrape articles from Hatena Blog. However, that script only worked for Hatena Blog; I have to extend it to other web sites.

First, I'd like to show you the modified script.

    import re
    import urllib2
    from bs4 import BeautifulSoup

    # One row per site: [domain pattern, tag name, attribute name, attribute value]
    scraper = [
            ["hatenablog.com", "div", "class", "entry-content"],
            ["qiita.com", "section", "itemprop", "articleBody"]
            ]

    # Find the row whose domain appears in the target URL
    c = 0
    for domain in scraper:
        print url, domain[0]
        if re.search(domain[0], url):
            break
        c += 1

    response = urllib2.urlopen(url)
    html = response.read()

    soup = BeautifulSoup(html, "lxml")
    tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})

    # Concatenate the entry's children, stripping any leftover markup
    text = ""
    p = re.compile(r'<.*?>')
    for con in tag.contents:
        text += p.sub('', con.encode('utf8'))
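The last loop strips any markup left inside the entry with a regex. That step on its own looks like this (stdlib only; the sample string is made up for illustration):

```python
import re

TAG = re.compile(r'<.*?>')   # non-greedy: matches one tag at a time

def strip_tags(fragment):
    # Same substitution as p.sub('', ...) in the script above.
    return TAG.sub('', fragment)

print(strip_tags('<p>CONTENTS to <b>SCRAPE</b>!</p>'))   # CONTENTS to SCRAPE!
```

A regex is a blunt tool for HTML in general (it would trip on a literal `>` inside an attribute), but here it only cleans fragments BeautifulSoup has already isolated, so it is good enough.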

This script can scrape articles from Hatena Blog and Qiita. Below are the tags that wrap an entry on each site.

Hatena Blog:

    <div class="entry-content">
    CONTENTS to SCRAPE!
    </div>

Qiita:

    <div class="col-sm-9 itemsShowBody_articleColumn"><section class="markdownContent markdownContent-headingEnabled js-task-list-container clearfix position-relative js-task-list-enabled" id="item-xxx" itemprop="articleBody">
    CONTENTS to SCRAPE!
    </section></div>

So with BeautifulSoup, I wrote it up like this.
First, feed the elements for the soup...

    scraper = [ 
            ["hatenablog.com","div","class","entry-content"],
            ["qiita.com","section","itemprop", "articleBody"]
            ]

Then, have the soup!

    tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})

Good. Now that I can specify the soup elements for each web site, I can extend the script to scrape articles on other sites!
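One detail worth guarding: if no domain in the table matches the URL, the counter `c` runs past the end of the list and `scraper[c]` raises an IndexError. A small helper (my own naming, not from the script above) avoids that:

```python
import re

# Same selector table as in the script above.
scraper = [
    ["hatenablog.com", "div", "class", "entry-content"],
    ["qiita.com", "section", "itemprop", "articleBody"],
]

def find_config(url, table):
    """Return the [domain, tag, attr, value] row matching the URL, or None."""
    for entry in table:
        if re.search(entry[0], url):
            return entry
    return None   # unknown site: caller can skip it instead of crashing

config = find_config("https://qiita.com/chogo/items/123", scraper)
print(config[1], config[2])   # section itemprop
```

Returning `None` for an unknown site lets the caller log and skip the URL rather than aborting the whole crawl.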

Hey, Umemura here. It's cold today. On a day like this, no playing outside; it's programming in a warm room for me.

Now, continuing with scraping, but before that I'd like to explain my goal for this series. The goal is a tag suggestion system using machine learning: it learns from the bookmarked articles I have personally tagged, then estimates suitable tags for an article with a Bayesian method.
As I go along, it is becoming clear there is a lot I need to learn, so how far this series will go is undecided. I wonder if it will ever end.

Now, the main topic. Continuing from last time, this is about scraping.
Last time I wrote a script that extracts the entry part of Hatena Blog articles, but of course I also need to extract entries from other sites' articles. So the script needs to be modified to be more generic.

So, without further ado, here is the modified script.

    import re
    import urllib2
    from bs4 import BeautifulSoup

    # One row per site: [domain pattern, tag name, attribute name, attribute value]
    scraper = [
            ["hatenablog.com", "div", "class", "entry-content"],
            ["qiita.com", "section", "itemprop", "articleBody"]
            ]

    # Find the row whose domain appears in the target URL
    c = 0
    for domain in scraper:
        print url, domain[0]
        if re.search(domain[0], url):
            break
        c += 1

    response = urllib2.urlopen(url)
    html = response.read()

    soup = BeautifulSoup(html, "lxml")
    tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})

    # Concatenate the entry's children, stripping any leftover markup
    text = ""
    p = re.compile(r'<.*?>')
    for con in tag.contents:
        text += p.sub('', con.encode('utf8'))

This script extracts the entry part from Hatena Blog and Qiita articles. Each entry is wrapped in tags like the following.

Hatena Blog:

    <div class="entry-content">
    CONTENTS to SCRAPE!
    </div>

Qiita:

    <div class="col-sm-9 itemsShowBody_articleColumn"><section class="markdownContent markdownContent-headingEnabled js-task-list-container clearfix position-relative js-task-list-enabled" id="item-xxx" itemprop="articleBody">
    CONTENTS to SCRAPE!
    </section></div>

Then, specify the parts BeautifulSoup needs for tag matching, like this:

    scraper = [ 
            ["hatenablog.com","div","class","entry-content"],
            ["qiita.com","section","itemprop", "articleBody"]
            ]

Then, have the soup!

    tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})

And that's about it. If you add the tag information for a site whose articles you want to extract, you can apply this to other sites as well.

That's all for today, but this series still has a long way to go.
