
Data Every Day: Top Women Chess Players


tldr

This post works through Kaggle's Top Women Chess Players dataset, following along with Chess Grandmaster Prediction - Data Every Day #023.

The execution environment is Google Colaboratory.

Imports

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import sklearn.preprocessing as sp
from sklearn.model_selection import train_test_split

from sklearn.linear_model import LogisticRegression

from sklearn.metrics import f1_score

Downloading the data

Mount Google Drive.

from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).

Initialize and authenticate the Kaggle API client.
The credentials are stored as kaggle.json in Google Drive (/content/drive/My Drive/Colab Notebooks/Kaggle).

import os
kaggle_path = "/content/drive/My Drive/Colab Notebooks/Kaggle"
os.environ['KAGGLE_CONFIG_DIR'] = kaggle_path

from kaggle.api.kaggle_api_extended import KaggleApi
api = KaggleApi()
api.authenticate() 

Download the data with the Kaggle API.

dataset_id = 'vikasojha98/top-women-chess-players'
dataset = api.dataset_list_files(dataset_id)
file_name = dataset.files[0].name
file_path = os.path.join(api.get_default_download_dir(), file_name)
file_path
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.10 / client 1.5.9)
'/content/top_women_chess_players_aug_2020.csv'
api.dataset_download_file(dataset_id, file_name, force=True, quiet=False)
Downloading top_women_chess_players_aug_2020.csv to /content
100%|██████████| 459k/459k [00:00<00:00, 53.4MB/s]
True

Loading the data

Read the downloaded CSV file with Pandas.

data = pd.read_csv(file_path)
data
Fide id Name Federation Gender Year_of_birth Title Standard_Rating Rapid_rating Blitz_rating Inactive_flag
0 700070 Polgar, Judit HUN F 1976.0 GM 2675 2646.0 2736.0 wi
1 8602980 Hou, Yifan CHN F 1994.0 GM 2658 2621.0 2601.0 NaN
2 5008123 Koneru, Humpy IND F 1987.0 GM 2586 2483.0 2483.0 NaN
3 4147103 Goryachkina, Aleksandra RUS F 1998.0 GM 2582 2502.0 2441.0 NaN
4 700088 Polgar, Susan HUN F 1969.0 GM 2577 NaN NaN wi
... ... ... ... ... ... ... ... ... ... ...
8548 3302288 Reinkens, Natalia BOL F NaN NaN 1801 NaN NaN wi
8549 343960 Saffova, Michaela CZE F 1994.0 NaN 1801 1791.0 1765.0 NaN
8550 5038294 Shetye, Siddhali IND F 1992.0 NaN 1801 1884.0 1824.0 wi
8551 2072491 Trakru, Priya USA F 2001.0 WFM 1801 NaN NaN wi
8552 4666399 Vorpahl, Sina Fleur GER F 1982.0 NaN 1801 NaN NaN wi

8553 rows × 10 columns

Preparation

Removing unnecessary columns

data = data.drop(['Fide id', 'Name', 'Gender'], axis=1)

Handling missing values

data.isnull().sum()
Federation            0
Year_of_birth       292
Title              5435
Standard_Rating       0
Rapid_rating       4945
Blitz_rating       5081
Inactive_flag      2701
dtype: int64
data.dtypes
Federation          object
Year_of_birth      float64
Title               object
Standard_Rating      int64
Rapid_rating       float64
Blitz_rating       float64
Inactive_flag       object
dtype: object

Fill the null numeric cells with the column mean.

numerical_features = ['Year_of_birth', 'Standard_Rating', 'Rapid_rating', 'Blitz_rating']
for column in numerical_features:
    data[column] = data[column].fillna(data[column].mean())
data.isnull().sum()
Federation            0
Year_of_birth         0
Title              5435
Standard_Rating       0
Rapid_rating          0
Blitz_rating          0
Inactive_flag      2701
dtype: int64
data['Title'].unique()
array(['GM', 'IM', 'WGM', 'FM', 'WFM', 'WIM', nan, 'CM', 'WCM', 'WH'],
      dtype=object)
data['Inactive_flag'].unique()
array(['wi', nan], dtype=object)

'Inactive_flag' contains 'wi' only for inactive players, so fill the NA values with 'wa'.

data['Inactive_flag'] = data['Inactive_flag'].fillna('wa')
One-hot encode the Title column with get_dummies.

title_dummies = pd.get_dummies(data['Title'])
title_dummies
CM FM GM IM WCM WFM WGM WH WIM
0 0 0 1 0 0 0 0 0 0
1 0 0 1 0 0 0 0 0 0
2 0 0 1 0 0 0 0 0 0
3 0 0 1 0 0 0 0 0 0
4 0 0 1 0 0 0 0 0 0
... ... ... ... ... ... ... ... ... ...
8548 0 0 0 0 0 0 0 0 0
8549 0 0 0 0 0 0 0 0 0
8550 0 0 0 0 0 0 0 0 0
8551 0 0 0 0 0 1 0 0 0
8552 0 0 0 0 0 0 0 0 0

8553 rows × 9 columns

title_dummies.sum()
CM        8
FM       36
GM       37
IM      119
WCM     247
WFM    1545
WGM     316
WH        1
WIM     809
dtype: int64

Since this time we are predicting whether a player is a Grandmaster, keep only the GM column.

data = pd.concat([data, title_dummies['GM']], axis=1)
data = data.drop('Title', axis=1)
data.isnull().sum()
Federation         0
Year_of_birth      0
Standard_Rating    0
Rapid_rating       0
Blitz_rating       0
Inactive_flag      0
GM                 0
dtype: int64

Encoding

data['Inactive_flag'].unique()
array(['wi', 'wa'], dtype=object)
For the binary values we use `LabelEncoder`.

encoder = sp.LabelEncoder()
data['Inactive_flag'] = encoder.fit_transform(data['Inactive_flag'])
data
Federation Year_of_birth Standard_Rating Rapid_rating Blitz_rating Inactive_flag GM
0 HUN 1976.000000 2675 2646.000000 2736.000000 1 1
1 CHN 1994.000000 2658 2621.000000 2601.000000 0 1
2 IND 1987.000000 2586 2483.000000 2483.000000 0 1
3 RUS 1998.000000 2582 2502.000000 2441.000000 0 1
4 HUN 1969.000000 2577 1931.680155 1925.155242 1 1
... ... ... ... ... ... ... ...
8548 BOL 1985.291732 1801 1931.680155 1925.155242 1 0
8549 CZE 1994.000000 1801 1791.000000 1765.000000 0 0
8550 IND 1992.000000 1801 1884.000000 1824.000000 1 0
8551 USA 2001.000000 1801 1931.680155 1925.155242 1 0
8552 GER 1982.000000 1801 1931.680155 1925.155242 1 0

8553 rows × 7 columns
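As a quick check (not in the original notebook), LabelEncoder assigns codes in sorted label order, so 'wa' becomes 0 and 'wi' becomes 1, which matches the table above:

# classes_ lists the labels in sorted order; the position is the encoded value.
print(encoder.classes_)  # ['wa' 'wi']  ->  'wa' = 0, 'wi' = 1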

data['Federation'].unique()
array(['HUN', 'CHN', 'IND', 'RUS', 'UKR', 'LTU', 'GEO', 'KAZ', 'IRI',
       'GER', 'SWE', 'BUL', 'TUR', 'GRE', 'AZE', 'FRA', 'ROU', 'USA',
       'MGL', 'POL', 'BLR', 'QAT', 'ESP', 'ENG', 'INA', 'ARM', 'CZE',
       'PER', 'SRB', 'NED', 'SCO', 'UZB', 'ITA', 'CUB', 'VIE', 'ECU',
       'AUS', 'ARG', 'CRO', 'SVK', 'SGP', 'ISR', 'LUX', 'SLO', 'EST',
       'CAN', 'LAT', 'AUT', 'SUI', 'MNC', 'MDA', 'BRA', 'BEL', 'COL',
       'PHI', 'PAR', 'BRU', 'MEX', 'BIH', 'MAS', 'NOR', 'MNE', 'TKM',
       'IRL', 'VEN', 'EGY', 'IRQ', 'FIN', 'BOL', 'DEN', 'MKD', 'KGZ',
       'ESA', 'CHI', 'RSA', 'FID', 'UAE', 'LBN', 'MYA', 'ISL', 'BAN',
       'POR', 'KSA', 'NAM', 'URU', 'ALG', 'WLS', 'PUR', 'ALB', 'KOR',
       'TJK', 'SRI', 'JAM', 'ANG', 'NGR', 'BAR', 'BER', 'ZIM', 'BOT',
       'JPN', 'DOM', 'CRC', 'SYR', 'GUA', 'SEY', 'JOR', 'NZL', 'MAR',
       'MAC', 'TTO', 'NCA', 'ZAM', 'PAN', 'THA', 'GCI', 'AHO', 'HKG',
       'MLT', 'HON', 'LBA', 'SUR', 'UGA', 'CPV', 'MAD'], dtype=object)
The Federation column has well over 100 categories, so here we simply drop it.

data = data.drop('Federation', axis=1)

Splitting the data

Split the data into X and y.

y = data['GM']
X = data.drop('GM', axis=1)

Scaling

Scale the data into the 0-1 range with MinMaxScaler.

scaler = sp.MinMaxScaler()
X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
X
Year_of_birth Standard_Rating Rapid_rating Blitz_rating Inactive_flag
0 0.622222 1.000000 1.000000 1.000000 1.0
1 0.822222 0.980549 0.982419 0.914394 0.0
2 0.744444 0.898169 0.885373 0.839569 0.0
3 0.866667 0.893593 0.898734 0.812936 0.0
4 0.544444 0.887872 0.497665 0.485831 1.0
... ... ... ... ... ...
8548 0.725464 0.000000 0.497665 0.485831 1.0
8549 0.822222 0.000000 0.398734 0.384274 0.0
8550 0.800000 0.000000 0.464135 0.421687 1.0
8551 0.900000 0.000000 0.497665 0.485831 1.0
8552 0.688889 0.000000 0.497665 0.485831 1.0

8553 rows × 5 columns

Training

Split the data into training data and test data.

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8)
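As an aside (not part of the original notebook), GM examples are so rare that a random split can leave very few positives in the test set; passing stratify=y keeps the class ratio the same in both splits. This is shown only as an alternative, and the results below use the plain split above.

# Hypothetical alternative: a stratified split so the tiny GM class is
# represented proportionally in both the training and the test set.
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, train_size=0.8, stratify=y, random_state=0)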
Since this is a prediction of whether or not a player is a Grandmaster, we use logistic regression.
model = LogisticRegression()
model.fit(X_train, y_train)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=100,
                   multi_class='auto', n_jobs=None, penalty='l2',
                   random_state=None, solver='lbfgs', tol=0.0001, verbose=0,
                   warm_start=False)

Results

Model accuracy

print(f'Model Accuracy: {model.score(X_test, y_test)}')
Model Accuracy: 0.9959088252483927

The accuracy looks very good, but let's also look at the F1 score.

y_pred = model.predict(X_test)
print(f'Model F1 Score: {f1_score(y_test, y_pred)}')
Model F1 Score: 0.2222222222222222
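To see where the F1 score is being lost, one could also print the confusion matrix and per-class report (a quick extra check, not part of the original notebook):

from sklearn.metrics import confusion_matrix, classification_report

# Rows are the true classes (0 = not GM, 1 = GM), columns are the predictions.
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))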

The F1 score is low. To begin with, what is the balance of 0s and 1s here?

print(f'Percent Grandmaster: {y_test.sum() / len(y)}')
Percent Grandmaster: 0.0009353443236291359

As this shows, there is a large imbalance between the 0s and 1s. In a case like this, even a model that simply classifies everyone as not a Grandmaster gets a very high accuracy. I think a much larger number of samples would be needed.
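One way to make that point concrete (my own sketch, not in the original notebook) is scikit-learn's DummyClassifier, which ignores the features and always predicts the majority class; its accuracy should be almost as high as the logistic regression's, while its F1 score is 0 because it never predicts a Grandmaster.

from sklearn.dummy import DummyClassifier

# Baseline that always predicts the majority class (0, "not a Grandmaster").
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_train, y_train)

print(f'Baseline Accuracy: {baseline.score(X_test, y_test)}')
print(f'Baseline F1 Score: {f1_score(y_test, baseline.predict(X_test))}')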
