Frequently Used Snippets for Data Analysis [First-Move LightGBM Edition]

Posted at 2024-03-12

Introduction

When building a machine learning model, the first step is to create a baseline.
The conventional first move seems to be to simply throw LightGBM at the problem, so in this article I collect, for my own reference, snippets covering everything from running a first-move LightGBM model to making a submission.

Preprocessing is covered in this article:
https://qiita.com/arima_/items/f56f15c13726a74dc1ad

Note that the code below assumes the Titanic dataset.

Classification tasks

Creating the dataset (common)

# Prepare missing-value-free data for the first-move LightGBM
x_train = df_train[['Pclass', 'Fare']]
y_train = df_train['Survived']
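If you want to try the selection above without the actual competition file, a minimal stand-in for `df_train` can be built by hand. The rows below are hypothetical values of my own, not the real Titanic data; in a Kaggle notebook you would instead load the provided CSV (e.g. `pd.read_csv("../input/titanic/train.csv")`):

```python
import pandas as pd

# Minimal stand-in for df_train (hypothetical values, not the real Titanic file)
df_train = pd.DataFrame({
    "Pclass":   [3, 1, 3, 2],
    "Fare":     [7.25, 71.28, 8.05, 13.00],
    "Survived": [0, 1, 0, 1],
})

# The same selection as above: missing-value-free features plus the target
x_train = df_train[["Pclass", "Fare"]]
y_train = df_train["Survived"]
print(x_train.shape, y_train.shape)  # → (4, 2) (4,)
```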

LightGBM with cross-validation (classification only)

# Load libraries and set parameters
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score
import lightgbm as lgb

params = {
    "boosting_type": "gbdt",
    "objective": "binary",
    "metric": "auc",
    "learning_rate": 0.1,
    "num_leaves": 16,
    "n_estimators": 100000,
    "random_state": 42,
    "importance_type": "gain",
    "verbose": 1
}
# Training
metrics = []
imp = pd.DataFrame()

n_splits = 5

cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)

for nfold, (train_idx, val_idx) in enumerate(cv.split(x_train, y_train)):
    print("-"*10, nfold, "-"*10)

    x_tr, y_tr = x_train.iloc[train_idx], y_train.iloc[train_idx]
    x_va, y_va = x_train.iloc[val_idx], y_train.iloc[val_idx]

    print("train:", x_tr.shape, y_tr.shape)
    print("valid:", x_va.shape, y_va.shape)

    model = lgb.LGBMClassifier(**params)
    model.fit(
        x_tr, 
        y_tr, 
        eval_set=[(x_tr, y_tr), (x_va, y_va)],
        callbacks = [
            lgb.callback.early_stopping(stopping_rounds=100),
            lgb.callback.log_evaluation(period=100)
            ])

    y_tr_pred = model.predict(x_tr)
    y_va_pred = model.predict(x_va)

    metric_tr = accuracy_score(y_tr, y_tr_pred)
    metric_va = accuracy_score(y_va, y_va_pred)

    print(metric_tr, metric_va)

    metrics.append([nfold, metric_tr, metric_va])

    _imp = pd.DataFrame({
        "col": x_train.columns,
        "imp": model.feature_importances_,
        "nfold": nfold
        })

    imp = pd.concat([imp, _imp], axis=0, ignore_index=True)
# Check the accuracy
metrics = np.array(metrics)

print("[tr]: {:.3f}+-{:.3f}".format(metrics[:, 1].mean(), metrics[:, 1].std()))
print("[va]: {:.3f}+-{:.3f}".format(metrics[:, 2].mean(), metrics[:, 2].std()))
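A check that often complements the per-fold scores is an out-of-fold (OOF) accuracy: each sample is predicted by the one model that never saw it, giving a single score over the whole training set. Below is a minimal sketch of the mechanics only; the toy data and the "predict odd numbers as class 1" rule are stand-ins of my own for `x_train`, `y_train`, and the trained fold models:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

# Toy data standing in for x_train / y_train (hypothetical, just to show the mechanics)
X = np.arange(20).reshape(-1, 1)
y = np.array([0, 1] * 10)

oof = np.zeros(len(X), dtype=int)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in cv.split(X, y):
    # In the real loop this would be model.predict(x_va); a dummy parity
    # rule stands in for the fold's LightGBM model here
    oof[val_idx] = (X[val_idx, 0] % 2 == 1).astype(int)

print(accuracy_score(y, oof))  # → 1.0 (the toy rule matches the toy labels exactly)
```

In the real loop you would fill `oof[val_idx] = model.predict(x_va)` inside each fold and compute `accuracy_score(y_train, oof)` once at the end.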

Checking the results

imp = imp.groupby("col")["imp"].mean().reset_index()
imp = imp.sort_values(by="imp", ascending=False)
imp
# Visualize the importances
import matplotlib.pyplot as plt

## Visualize with matplotlib
plt.figure(figsize=(10, 6))
plt.barh(imp['col'], imp['imp'], color='skyblue')
plt.xlabel('Importance')
plt.ylabel('Features')
plt.title('Feature Importance')
plt.gca().invert_yaxis()  # Reverse the y-axis so more important features appear at the top
plt.show()
# Assumes a trained LightGBM model object
# model = your trained LightGBM model

# Visualize the first decision tree
ax = lgb.plot_tree(model, tree_index=0, figsize=(100, 40), show_info=['split_gain', 'internal_value', 'internal_count', 'leaf_count'])
plt.show()

Submit

# Example from the Kaggle "home-credit" competition
# When submitting probabilities between 0 and 1
y_pred = model.predict_proba(X_test)[:, 1]

# Create the dataframe
submission = pd.DataFrame({
    "case_id": X_test.reset_index()["case_id"],
    "score": y_pred
}).set_index('case_id')

# Save as a csv file
submission.to_csv("./submission.csv")
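One thing to note: the CV loop trains one model per fold, but the submit snippet uses only `model`, i.e. the last fold's model. A common alternative is to keep every fold's model and average their predicted probabilities. The helper below is my own sketch (the `models` list and function name are not from the article):

```python
import numpy as np

# Hypothetical helper (not in the article): average the positive-class
# probabilities from every fold's model instead of using only the last one
def average_fold_proba(models, X):
    probas = [m.predict_proba(X)[:, 1] for m in models]
    return np.mean(probas, axis=0)

# Usage: append each fitted model inside the CV loop (models.append(model)),
# then build the submission score from the averaged probabilities:
# y_pred = average_fold_proba(models, X_test)
```

Averaging the folds usually gives a slightly more stable score than a single fold's model, since each model saw a different 80% of the training data.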