
Collecting store latitude/longitude information with scrapy + splash (1)

Posted at 2017-02-19

#1. Introduction

We will collect store names, latitude/longitude, available services, and other details from the KFC Japan store listings.
Because the store information is generated with JavaScript, we put Splash between Scrapy and the site to render the pages.

See also: A list of precautions for web scraping

#2. Environment

- AWS EC2: Amazon Linux (2016.09 release), t2.micro
- Python 2.7.12

#3. Setup

```bash
# Build tools and headers needed to compile Scrapy's dependencies
sudo yum groupinstall "Development tools"
sudo yum install python-devel libffi-devel openssl-devel libxml2-devel libxslt-devel
sudo pip install scrapy
sudo pip install service_identity  # already present by default on Amazon Linux, so this one is optional

# Docker, which will run the Splash container
sudo yum -y install docker-io
sudo service docker start
sudo chkconfig docker on

# Scrapy <-> Splash integration
sudo pip install scrapy-splash

# Pull and start Splash (listens on port 8050)
docker pull scrapinghub/splash
docker run -p 8050:8050 scrapinghub/splash
```

Splash is handled entirely by Docker. Easy.
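
To confirm that Splash is actually rendering pages before wiring it into Scrapy, you can hit its render.html endpoint directly. A minimal sketch, assuming the requests library is installed (the target URL is just an example):

```python
# -*- coding: utf-8 -*-
# Sanity check: ask the local Splash container to render a page.
import requests

resp = requests.get(
    'http://localhost:8050/render.html',
    params={'url': 'http://www.kfc.co.jp/', 'wait': 0.5},
)
print(resp.status_code)  # 200 means Splash rendered the page
print(len(resp.text))    # HTML after the page's JavaScript has run
```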

(References)
- Notes on getting started with Scrapy, a Python crawler
- Easy scraping of JavaScript-rendered pages with scrapy-splash
- GitHub (scrapy-splash)

#4. scrapy
##Creating the project
Generate the project and spider skeletons. This produces the standard Scrapy layout: scrapy.cfg at the top level and a KFCShopSpider package containing items.py, settings.py, and a spiders/ directory.

```bash
export PRJ_NAME=KFCShopSpider
scrapy startproject ${PRJ_NAME}
cd ./${PRJ_NAME}/${PRJ_NAME}/spiders
scrapy genspider ${PRJ_NAME} kfc.co.jp
```
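
For reference, the generated skeleton looks roughly like this (a sketch; the exact template varies between Scrapy versions). We will replace it with the real spider below.

```python
# -*- coding: utf-8 -*-
import scrapy

class KfcshopspiderSpider(scrapy.Spider):
    name = "KFCShopSpider"
    allowed_domains = ["kfc.co.jp"]
    start_urls = ['http://kfc.co.jp/']

    def parse(self, response):
        pass
```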

##Defining items
Define the fields you want to collect.
This time: store name, address, map_url (which carries the latitude/longitude), and the availability of each service.

```python:~/KFCShopSpider/KFCShopSpider/items.py
# -*- coding: utf-8 -*-

import scrapy

class KFCShopspiderItem(scrapy.Item):
    name = scrapy.Field()          # store name
    address = scrapy.Field()       # street address
    map_url = scrapy.Field()       # map link that embeds the latitude/longitude
    DriveThrough = scrapy.Field()  # drive-through available?
    Parking = scrapy.Field()       # parking available?
    Delivery = scrapy.Field()      # delivery available?
    Wlan = scrapy.Field()          # Wi-Fi available?
```
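
Items support dict-style access, and XPath's extract() returns a list of strings, so after the crawl each field holds a list. A quick illustration with hypothetical values:

```python
from KFCShopSpider.items import KFCShopspiderItem

item = KFCShopspiderItem()
item['name'] = [u'KFC Example Store']        # extract() would return a list like this
item['address'] = [u'1-2-3 Example, Tokyo']
print(dict(item))  # items convert cleanly to plain dicts for export
```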

##Initial settings
Configure the crawler so it does not put load on the target site.
At a minimum, set USER_AGENT, ROBOTSTXT_OBEY, and DOWNLOAD_DELAY.

```python:~/KFCShopSpider/KFCShopSpider/settings.py
# -*- coding: utf-8 -*-

BOT_NAME = 'KFCShopSpider'
SPIDER_MODULES = ['KFCShopSpider.spiders']
NEWSPIDER_MODULE = 'KFCShopSpider.spiders'

# Identify yourself and stay polite to the target site
USER_AGENT = 'KFCShopSpider (+http://www.yourdomain.com)'
ROBOTSTXT_OBEY = True
DOWNLOAD_DELAY = 3

# Middlewares required by scrapy-splash
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# Where the Splash container is listening
SPLASH_URL = 'http://localhost:8050/'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
```

##The spider

```python:~/KFCShopSpider/KFCShopSpider/spiders/KFCShop_spider.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider
from scrapy_splash import SplashRequest
from ..items import KFCShopspiderItem

class KFCShopSpider(CrawlSpider):
    name = "KFCShopSpider"
    allowed_domains = ["kfc.co.jp"]

    start_urls = []
    shop_url_home = 'http://www.kfc.co.jp/search/fuken.html?t=attr_con&kencode='
    # Use the search-result page of each of the 47 prefectures as a starting point.
    for i in range(1, 48):
        prfct_id = '{0:02d}'.format(i)
        url = shop_url_home + prfct_id
        start_urls.append(url)

    # Fetch responses that Splash has already rendered.
    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse, args={'wait': 0.5})

    def parse(self, response):
        # Narrow down to each store entry (one list element per store).
        stores = response.xpath('//ul[@id="outShop"]/li')
        for store in stores:
            item = KFCShopspiderItem()
            # Use XPaths relative to each store node.
            item['address']      = store.xpath('./span[@class="scAddress"]/text()[1]').extract()
            item['map_url']      = store.xpath('./ul/li[2]/div/a/@href').extract()
            item['DriveThrough'] = store.xpath('./span[@class="scIcon"]/img[contains(./@src,"check04")]/@alt').extract()
            item['Parking']      = store.xpath('./span[@class="scIcon"]/img[contains(./@src,"check05")]/@alt').extract()
            item['Delivery']     = store.xpath('./span[@class="scIcon"]/img[contains(./@src,"check02")]/@alt').extract()
            item['Wlan']         = store.xpath('./span[@class="scIcon"]/img[contains(./@src,"check03")]/@alt').extract()
            yield item

        # Grab the link behind each result page's 'next' button and call parse on it.
        next_page = response.xpath('//li[@class="next"]/a/@href')
        if next_page:
            # The 'next' link appears both above and below the store list, so take only the first match.
            url = response.urljoin(next_page[0].extract())
            yield SplashRequest(url, self.parse)
```

#5. Debugging

Chrome's developer tools (F12) are handy for checking XPaths.
In the Elements view, right-click the element you want to inspect, then Copy > Copy XPath.

```bash
scrapy shell "http://localhost:8050/render.html?url={url to render}"
```
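
Inside the shell, `response` holds the Splash-rendered page, so you can try this article's XPath expressions directly:

```python
# Interactive checks inside the scrapy shell
stores = response.xpath('//ul[@id="outShop"]/li')
len(stores)  # number of store entries on this page
stores[0].xpath('./span[@class="scAddress"]/text()[1]').extract()
response.xpath('//li[@class="next"]/a/@href').extract_first()
```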

#6. Running the crawl

It took about 10 minutes to collect information on a little over 1,000 stores.

```bash
scrapy crawl ${PRJ_NAME} -o hoge.csv
```
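
The latitude/longitude still has to be pulled out of map_url afterwards. A post-processing sketch, assuming the map link embeds the coordinates as a comma-separated "lat,lng" pair somewhere in the URL (an assumption; verify against your crawled data):

```python
# -*- coding: utf-8 -*-
# Hypothetical post-processing of hoge.csv: find a "lat,lng" decimal pair
# in each map_url. The regex assumes plausible coordinate ranges for Japan.
import csv
import re

COORD_RE = re.compile(r'(\d{1,2}\.\d+),(\d{2,3}\.\d+)')

with open('hoge.csv') as f:
    for row in csv.DictReader(f):
        m = COORD_RE.search(row.get('map_url', ''))
        if m:
            print('%s: lat=%s, lng=%s' % (row.get('name', ''), m.group(1), m.group(2)))
```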