
Trying logistic regression for the first time with the Titanic data

Posted at 2020-12-30

#Purpose
My background is basically infrastructure, so I have hardly touched anything other than server-side programs, but I am also involved with big-data platforms and wanted to get at least a rough idea of what the data scientists there are doing. So I tried survival prediction on the well-worn Titanic data. The goal is to understand, roughly, how logistic regression analysis is done in Python.

#Environment

- Server running Python
  - EC2: t2.micro
  - OS: Red Hat Enterprise Linux 8 (HVM), SSD Volume Type
  - Disk: General Purpose SSD (gp2), 10 GB
  - Python: 3.7

#Procedure
(Setting up the Python runtime environment is omitted.)

The work was carried out in the following steps.
1. Prepare the data
2. Inspect the training data
3. Preprocess the training data
4. Build the model
5. Inspect the test data
6. Preprocess the test data
7. Predict with the model

##1. Preparing the data
The data was downloaded from here and placed on the EC2 instance.

##2. Inspecting the training data

| Variable | Definition | Notes |
|:--|:--|:--|
| PassengerId | Passenger ID | Primary key |
| Survived | Survival outcome | 0 = died, 1 = survived (target variable) |
| Pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd |
| Name | Name | |
| Sex | Sex | male, female |
| Age | Age | |
| SibSp | Number of siblings/spouses aboard | 0, 1, 2, 3, 4, 5, 8 |
| Parch | Number of parents/children aboard | 0, 1, 2, 3, 4, 5, 6, 9 |
| Ticket | Ticket number | |
| Fare | Fare | |
| Cabin | Cabin number | |
| Embarked | Port of embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |

###Reading the data into a DataFrame

```python
import pandas as pd

# Load the training data
df_train = pd.read_csv("train.csv")
```

###Checking for missing values

Age, Cabin, and Embarked contain missing values.

```python
df_train.isnull().sum()
```

```
PassengerId      0
Survived         0
Pclass           0
Name             0
Sex              0
Age            177
SibSp            0
Parch            0
Ticket           0
Fare             0
Cabin          687
Embarked         2
dtype: int64
```

```python
df_train.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Name           891 non-null object
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          204 non-null object
Embarked       889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
```

To keep the rows that contain missing values rather than dropping them, the gaps have to be filled somehow.
Age is numeric, so filling it with either the mean or the median seemed reasonable; somewhat arbitrarily I went with the median this time. (Really this should be decided by actually looking at the data; see the sketch after the cross-tabulations below.)
For Embarked and Cabin the missing entries are filled with 'U' and 'Unknown' respectively.
Embarked can then simply be turned into dummy variables, but Cabin has so many distinct values that dummy-encoding it directly looked impractical, and whether the cabin is Unknown or not seemed to show a different trend, so it is encoded as 0 for Unknown and 1 for everything else.
The cross-tabulations below check these trends on the data after imputation.

```python
pd.crosstab(df_train['Embarked'], df_train['Survived'])
```

```
Survived    0    1
Embarked
C          75   93
Q          47   30
S         427  217
U           0    2
```

```python
pd.crosstab(df_train['Cabin'], df_train['Survived'])
```

```
Survived    0    1
Cabin
A10         1    0
A14         1    0
A16         0    1
A19         1    0
A20         0    1
A23         0    1
A24         1    0
A26         0    1
A31         0    1
A32         1    0
A34         0    1
A36         1    0
A5          1    0
A6          0    1
A7          1    0
B101        0    1
B102        1    0
B18         0    2
B19         1    0
B20         0    2
B22         1    1
B28         0    2
B3          0    1
B30         1    0
B35         0    2
B37         1    0
B38         1    0
B39         0    1
B4          0    1
B41         0    1
...       ...  ...
E121        0    2
E17         0    1
E24         0    2
E25         0    2
E31         1    0
E33         0    2
E34         0    1
E36         0    1
E38         1    0
E40         0    1
E44         1    1
E46         1    0
E49         0    1
E50         0    1
E58         1    0
E63         1    0
E67         1    1
E68         0    1
E77         1    0
E8          0    2
F E69       0    1
F G63       1    0
F G73       2    0
F2          1    2
F33         0    3
F38         1    0
F4          0    2
G6          2    2
T           1    0
Unknown   481  206
```
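As a side note on the mean-vs-median choice for Age: one quick way to "actually look at the data" before deciding would be to compare the two values and check how skewed the distribution is. This is only a sketch of such a check, not something from the original write-up; it assumes the `df_train` loaded above.

```python
# Compare the two candidate fill values and look at the shape of the distribution
print(df_train['Age'].describe())
print("mean:  ", df_train['Age'].mean())
print("median:", df_train['Age'].median())
print("skew:  ", df_train['Age'].skew())
```

A clearly skewed distribution, or one with strong outliers, would favour the median as the fill value.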

##3. Preprocessing the training data
Apply the preprocessing described above.

```python
# Fill missing Age values with the median
df_train['Age'] = df_train['Age'].fillna(df_train['Age'].median())
# Fill missing Cabin values with 'Unknown'
df_train['Cabin'] = df_train['Cabin'].fillna('Unknown')
# Fill missing Embarked values with 'U' (Unknown)
df_train['Embarked'] = df_train['Embarked'].fillna('U')
```
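As a quick sanity check (not in the original article), it is worth confirming that no missing values remain after the fills:

```python
# Every column should now report zero missing values
print(df_train.isnull().sum())
```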

Create dummy variables from the imputed data.

```python
df_train_dummies = pd.get_dummies(df_train, columns=['Sex','Pclass','SibSp','Parch','Embarked'])
df_train_dummies.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 30 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Name           891 non-null object
Age            891 non-null float64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          891 non-null object
Sex_female     891 non-null uint8
Sex_male       891 non-null uint8
Pclass_1       891 non-null uint8
Pclass_2       891 non-null uint8
Pclass_3       891 non-null uint8
SibSp_0        891 non-null uint8
SibSp_1        891 non-null uint8
SibSp_2        891 non-null uint8
SibSp_3        891 non-null uint8
SibSp_4        891 non-null uint8
SibSp_5        891 non-null uint8
SibSp_8        891 non-null uint8
Parch_0        891 non-null uint8
Parch_1        891 non-null uint8
Parch_2        891 non-null uint8
Parch_3        891 non-null uint8
Parch_4        891 non-null uint8
Parch_5        891 non-null uint8
Parch_6        891 non-null uint8
Embarked_C     891 non-null uint8
Embarked_Q     891 non-null uint8
Embarked_S     891 non-null uint8
Embarked_U     891 non-null uint8
dtypes: float64(2), int64(2), object(3), uint8(23)
memory usage: 68.8+ KB
```

Standardize the numeric Age and Fare columns.

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
df_train_dummies['Age_scale'] = scaler.fit_transform(df_train_dummies.loc[:, ['Age']])
df_train_dummies['Fare_scale'] = scaler.fit_transform(df_train_dummies.loc[:, ['Fare']])
df_train_dummies.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 32 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Name           891 non-null object
Age            891 non-null float64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          891 non-null object
Sex_female     891 non-null uint8
Sex_male       891 non-null uint8
Pclass_1       891 non-null uint8
Pclass_2       891 non-null uint8
Pclass_3       891 non-null uint8
SibSp_0        891 non-null uint8
SibSp_1        891 non-null uint8
SibSp_2        891 non-null uint8
SibSp_3        891 non-null uint8
SibSp_4        891 non-null uint8
SibSp_5        891 non-null uint8
SibSp_8        891 non-null uint8
Parch_0        891 non-null uint8
Parch_1        891 non-null uint8
Parch_2        891 non-null uint8
Parch_3        891 non-null uint8
Parch_4        891 non-null uint8
Parch_5        891 non-null uint8
Parch_6        891 non-null uint8
Embarked_C     891 non-null uint8
Embarked_Q     891 non-null uint8
Embarked_S     891 non-null uint8
Embarked_U     891 non-null uint8
Age_scale      891 non-null float64
Fare_scale     891 non-null float64
dtypes: float64(4), int64(2), object(3), uint8(23)
memory usage: 82.7+ KB
```
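For reference, StandardScaler rescales each column to zero mean and unit variance, i.e. z = (x − μ) / σ computed per column. A minimal standalone illustration of that (the toy values are made up for the example, not from the article):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[22.0], [38.0], [26.0], [35.0]])
z = StandardScaler().fit_transform(x)

# Same result computed by hand: (x - mean) / std (population std, ddof=0)
z_manual = (x - x.mean()) / x.std()
print(np.allclose(z, z_manual))  # True
```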

For Cabin, create a variable that is 0 for Unknown and 1 for everything else.

```python
df_train_dummies['Cabin_New'] = df_train_dummies['Cabin'].map(lambda x: 0 if x == 'Unknown' else 1).astype(int)
df_train_dummies.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 33 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Name           891 non-null object
Age            891 non-null float64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          891 non-null object
Sex_female     891 non-null uint8
Sex_male       891 non-null uint8
Pclass_1       891 non-null uint8
Pclass_2       891 non-null uint8
Pclass_3       891 non-null uint8
SibSp_0        891 non-null uint8
SibSp_1        891 non-null uint8
SibSp_2        891 non-null uint8
SibSp_3        891 non-null uint8
SibSp_4        891 non-null uint8
SibSp_5        891 non-null uint8
SibSp_8        891 non-null uint8
Parch_0        891 non-null uint8
Parch_1        891 non-null uint8
Parch_2        891 non-null uint8
Parch_3        891 non-null uint8
Parch_4        891 non-null uint8
Parch_5        891 non-null uint8
Parch_6        891 non-null uint8
Embarked_C     891 non-null uint8
Embarked_Q     891 non-null uint8
Embarked_S     891 non-null uint8
Embarked_U     891 non-null uint8
Age_scale      891 non-null float64
Fare_scale     891 non-null float64
Cabin_New      891 non-null int64
dtypes: float64(4), int64(3), object(3), uint8(23)
memory usage: 89.7+ KB
```

##4. Building the model
Build a model using the variables created above.

```python
from sklearn.linear_model import LogisticRegression

# Explanatory variables
X = df_train_dummies[['Sex_female','Sex_male','Pclass_1','Pclass_2','Pclass_3','SibSp_0','SibSp_1','SibSp_2','SibSp_3','SibSp_4','SibSp_5','SibSp_8','Parch_0','Parch_1','Parch_2','Parch_3','Parch_4','Parch_5','Parch_6','Embarked_C','Embarked_Q','Embarked_S','Embarked_U','Age_scale','Fare_scale','Cabin_New']]
# Target variable
y = df_train_dummies['Survived']

model = LogisticRegression()
result = model.fit(X, y)
```

Evaluate the model.

```python
# Check the coefficients
result.coef_
```

```
array([[ 1.08240551, -1.49531858,  0.44672894,  0.1123565 , -0.9719985 ,
         0.76315352,  0.84866325,  0.44745114, -0.8278121 , -0.42535544,
        -0.51058583, -0.70842761,  0.22631714,  0.45736976,  0.05137953,
         0.30703108, -0.71378859, -0.41660858, -0.3246134 , -0.05695347,
        -0.04812828, -0.47921984,  0.17138853, -0.47504073,  0.08458894,
         0.83782699]])
```

This shows that sex has by far the largest influence.
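The raw coef_ array is hard to read on its own; a small helper (not part of the original article) that pairs each coefficient with its feature name makes the comparison above easier. It assumes the `result` and `X` defined just before.

```python
import pandas as pd

# Pair each coefficient with its column name and sort by absolute size,
# so the strongest effects (e.g. Sex_male / Sex_female) come first
coef = pd.Series(result.coef_[0], index=X.columns)
print(coef.reindex(coef.abs().sort_values(ascending=False).index))
```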

```python
model.score(X, y)
```

```
0.8181818181818182
```
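Note that `model.score(X, y)` is the accuracy on the same data the model was trained on, so it tends to be optimistic. One simple way to get a less biased estimate would be cross-validation; a minimal sketch (not in the original article, same X and y as above):

```python
from sklearn.model_selection import cross_val_score

# 5-fold cross-validated accuracy on the training data
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(scores.mean(), scores.std())
```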

##5. Inspecting the test data

```python
df_test = pd.read_csv("test.csv")
```

###Checking for missing values

Age, Cabin, and Fare contain missing values.

```python
df_test.isnull().sum()
```

```
PassengerId      0
Pclass           0
Name             0
Sex              0
Age             86
SibSp            0
Parch            0
Ticket           0
Fare             1
Cabin          327
Embarked         0
dtype: int64
```

```python
df_test.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId    418 non-null int64
Pclass         418 non-null int64
Name           418 non-null object
Sex            418 non-null object
Age            332 non-null float64
SibSp          418 non-null int64
Parch          418 non-null int64
Ticket         418 non-null object
Fare           417 non-null float64
Cabin          91 non-null object
Embarked       418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
```

##6. Preprocessing the test data
Apply the same preprocessing as for the training data. Fare is also filled with the median, like Age.

```python
# Fill missing Age values with the median (taken from the training data)
df_test['Age'] = df_test['Age'].fillna(df_train['Age'].median())
# Fill missing Cabin values with 'Unknown'
df_test['Cabin'] = df_test['Cabin'].fillna('Unknown')
# Fill missing Fare values with the median
df_test['Fare'] = df_test['Fare'].fillna(df_test['Fare'].median())
```

Create dummy variables from the imputed data.

```python
df_test_dummies = pd.get_dummies(df_test, columns=['Sex','Pclass','SibSp','Parch','Embarked'])
```

Since the test data has no missing Embarked values, an Embarked_U column filled entirely with 0 is added.

```python
df_test_dummies['Embarked_U'] = 0
df_test_dummies.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 29 columns):
PassengerId    418 non-null int64
Name           418 non-null object
Age            418 non-null float64
Ticket         418 non-null object
Fare           418 non-null float64
Cabin          418 non-null object
Sex_female     418 non-null uint8
Sex_male       418 non-null uint8
Pclass_1       418 non-null uint8
Pclass_2       418 non-null uint8
Pclass_3       418 non-null uint8
SibSp_0        418 non-null uint8
SibSp_1        418 non-null uint8
SibSp_2        418 non-null uint8
SibSp_3        418 non-null uint8
SibSp_4        418 non-null uint8
SibSp_5        418 non-null uint8
SibSp_8        418 non-null uint8
Parch_0        418 non-null uint8
Parch_1        418 non-null uint8
Parch_2        418 non-null uint8
Parch_3        418 non-null uint8
Parch_4        418 non-null uint8
Parch_5        418 non-null uint8
Parch_6        418 non-null uint8
Parch_9        418 non-null uint8
Embarked_C     418 non-null uint8
Embarked_Q     418 non-null uint8
Embarked_S     418 non-null uint8
dtypes: float64(2), int64(1), object(3), uint8(23)
memory usage: 29.1+ KB
```
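Adding Embarked_U by hand works here, but a more general way to keep test-side dummy columns aligned with the training ones is pandas' `reindex`. The snippet below is only an illustration of the pattern with toy placeholder frames (in this article's flow one would apply it after all derived columns such as Age_scale, Fare_scale, and Cabin_New exist on both sides, and excluding Survived):

```python
import pandas as pd

# Toy frames just to illustrate the idea
train_df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
test_df = pd.DataFrame({'a': [5], 'c': [6]})

# Give test_df exactly the training columns: 'b' is created and filled with 0,
# while the test-only column 'c' (think Parch_9 here) is dropped.
test_df = test_df.reindex(columns=train_df.columns, fill_value=0)
print(test_df)
```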

Standardize the numeric Age and Fare columns.

```python
# Note: this refits the scaler on the test data; in typical workflows one would
# fit on the training data and only call transform() on the test data.
df_test_dummies['Age_scale'] = scaler.fit_transform(df_test_dummies.loc[:, ['Age']])
df_test_dummies['Fare_scale'] = scaler.fit_transform(df_test_dummies.loc[:, ['Fare']])
df_test_dummies.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 31 columns):
PassengerId    418 non-null int64
Name           418 non-null object
Age            418 non-null float64
Ticket         418 non-null object
Fare           418 non-null float64
Cabin          418 non-null object
Sex_female     418 non-null uint8
Sex_male       418 non-null uint8
Pclass_1       418 non-null uint8
Pclass_2       418 non-null uint8
Pclass_3       418 non-null uint8
SibSp_0        418 non-null uint8
SibSp_1        418 non-null uint8
SibSp_2        418 non-null uint8
SibSp_3        418 non-null uint8
SibSp_4        418 non-null uint8
SibSp_5        418 non-null uint8
SibSp_8        418 non-null uint8
Parch_0        418 non-null uint8
Parch_1        418 non-null uint8
Parch_2        418 non-null uint8
Parch_3        418 non-null uint8
Parch_4        418 non-null uint8
Parch_5        418 non-null uint8
Parch_6        418 non-null uint8
Parch_9        418 non-null uint8
Embarked_C     418 non-null uint8
Embarked_Q     418 non-null uint8
Embarked_S     418 non-null uint8
Age_scale      418 non-null float64
Fare_scale     418 non-null float64
dtypes: float64(4), int64(1), object(3), uint8(23)
memory usage: 35.6+ KB
```

For Cabin, create a variable that is 0 for Unknown and 1 for everything else.

```python
df_test_dummies['Cabin_New'] = df_test_dummies['Cabin'].map(lambda x: 0 if x == 'Unknown' else 1).astype(int)
```

##7. Predicting with the model

```python
# Define the data used for prediction
df_test_dummies_x = df_test_dummies[['Sex_female','Sex_male','Pclass_1','Pclass_2','Pclass_3','SibSp_0','SibSp_1','SibSp_2','SibSp_3','SibSp_4','SibSp_5','SibSp_8','Parch_0','Parch_1','Parch_2','Parch_3','Parch_4','Parch_5','Parch_6','Embarked_C','Embarked_Q','Embarked_S','Embarked_U','Age_scale','Fare_scale','Cabin_New']]
# Run the prediction
predict = model.predict(df_test_dummies_x)
```

Write the predictions to a CSV file.

```python
output_csv = pd.concat([df_test_dummies['PassengerId'], pd.Series(predict)], axis=1)
output_csv.columns = ['PassengerId', 'Survived']
output_csv.to_csv('./submition.csv', index=False)
```
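An optional sanity check (not in the original article) before uploading: confirm the file has the expected 418 rows and glance at the predicted class balance.

```python
import pandas as pd

# Quick check of the submission file written above
check = pd.read_csv('./submition.csv')
print(len(check))                        # expect 418 rows
print(check['Survived'].value_counts())  # predicted class balance
```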

##Closing notes
This was very much a "just give it a try" exercise, but I submitted the result to Kaggle.
The score was 0.76076.
As of December 30, 2020, that placed 14,284th.
