
I want to get NPB team data with Python, part 2 (team names and PDF paths)

Cleaned it up a bit. The part where the same print statement is written over and over can definitely be made cleaner.

getURL.py
import bs4
import requests
url = requests.get('http://jpbpa.net/register/')
url.raise_for_status()
soup = bs4.BeautifulSoup(url.text, "html.parser")
elems = soup.select('a')
for elem in elems:
    team = elem.getText()
    path = elem.get('href')
    repath = path.replace('..', 'http://jpbpa.net')
    # elem.getText() = team name, elem.get('href') = path to the PDF
    if elem.getText() == "ロッテ":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "ソフトバンク":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "西武":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "楽天":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "オリックス":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "日本ハム":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "広島":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "阪神":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "DeNA":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "巨人":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "中日":
        print('{}({})'.format(team, repath))
    elif elem.getText() == "ヤクルト":
        print('{}({})'.format(team, repath))
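The twelve identical branches above can be collapsed with a set membership test. Here is a sketch of that idea; the helper name `team_links` is my own, and pulling the parsing into a function that takes the HTML as a string also guards against anchors that have no `href` at all.

```python
import bs4

# The 12 NPB team names exactly as they appear in the link text
# (taken from the branches in the original script).
TEAMS = {"ロッテ", "ソフトバンク", "西武", "楽天", "オリックス", "日本ハム",
         "広島", "阪神", "DeNA", "巨人", "中日", "ヤクルト"}

def team_links(html):
    """Yield (team name, absolute PDF path) for anchors whose text is a team name."""
    soup = bs4.BeautifulSoup(html, "html.parser")
    for elem in soup.select('a'):
        team = elem.getText()
        path = elem.get('href')
        if team in TEAMS and path:  # skip anchors without an href
            yield team, path.replace('..', 'http://jpbpa.net')

# Usage against the live page:
# import requests
# res = requests.get('http://jpbpa.net/register/')
# res.raise_for_status()
# for team, pdf in team_links(res.text):
#     print('{}({})'.format(team, pdf))
```

One `if team in TEAMS` replaces the whole elif chain, and adding a new team is a one-line change to the set.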