
A function for cross-validation (k-fold) model saving and accuracy checking


This is an upgrade of the function from yesterday's article; this time it also supports regression.

The function

import pandas as pd
from sklearn.base import clone
from sklearn.metrics import classification_report
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

def closs_val_model(model, x, y, cv=50, cla=True):
    ylist = list(set(y.values.astype("str")))
    models = []
    acc = []
    for i in range(cv):
        # fold i is the test block; everything else is training data
        start = int(i * len(x) / cv)
        end = int((i + 1) * len(x) / cv)
        x_test = x.iloc[start:end]
        y_test = y.iloc[start:end]
        x_train = pd.concat([x.iloc[:start], x.iloc[end:]])
        y_train = pd.concat([y.iloc[:start], y.iloc[end:]])
        fold_model = clone(model)  # fresh, unfitted copy for each fold
        fold_model.fit(x_train, y_train)
        y_pred = fold_model.predict(x_test)
        if cla:
            rep = classification_report(y_test, y_pred, output_dict=True)
            tmp = []
            for label in ylist:
                try:
                    tmp.append(rep[label]["precision"])
                    tmp.append(rep[label]["recall"])
                    tmp.append(rep[label]["f1-score"])
                    tmp.append(rep[label]["support"])
                except KeyError:
                    # this label did not appear in the fold's test set
                    tmp.extend([None, None, None, None])
            tmp.append(rep["accuracy"])
            acc.append(tmp)
            models.append([fold_model, rep["accuracy"]])
            columns = [label + "_" + col
                       for label in ylist
                       for col in ["precision", "recall", "f1-score", "support"]]
            columns.append("accuracy")
            df_acc = pd.DataFrame(acc, columns=columns)
        else:
            mse = mean_squared_error(y_test, y_pred)
            tmp = [r2_score(y_test, y_pred),
                   mean_absolute_error(y_test, y_pred),
                   mse,
                   mse ** 0.5]
            acc.append(tmp)
            df_acc = pd.DataFrame(acc, columns=["R2", "MAE", "MSE", "RMSE"])
            models.append([fold_model, tmp[0]])
    return df_acc, models, df_acc.describe()

By default the function runs 50-fold cross-validation and assumes a classification task.
To use it for regression, set the `cla` argument to `False`.
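The intent of the splitting inside the loop is that fold i tests a contiguous block of rows and trains on everything else. A minimal sketch of that fold arithmetic, with toy values for the row count and fold count:

```python
# Sketch of the fold boundaries: with cv folds over n rows, fold i
# tests rows [i*n/cv, (i+1)*n/cv) and trains on the remainder.
# n and cv here are toy values for illustration.
n, cv = 10, 5
for i in range(cv):
    start = int(i * n / cv)
    end = int((i + 1) * n / cv)
    test_rows = list(range(start, end))
    train_rows = [r for r in range(n) if r < start or r >= end]
    print(i, test_rows, train_rows)
```

Every row lands in exactly one test fold, and the test/train blocks never overlap.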

Usage example

import pandas as pd
from lightgbm import LGBMRegressor

df = pd.read_csv("boston.csv")
y = df["PRICE"]
x = df.drop("PRICE", axis=1)
model = LGBMRegressor()
df_acc, models, df_dcb = closs_val_model(model, x, y, cv=50, cla=False)
df_dcb
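`df_dcb` is just `df_acc.describe()`, i.e. per-metric summary statistics across the folds. A minimal illustration with made-up fold scores (not the real output of the run above):

```python
import pandas as pd

# toy per-fold metrics, standing in for the real df_acc
df_acc = pd.DataFrame({"R2": [0.8, 0.7, 0.9], "MAE": [2.1, 2.5, 1.9]})

# describe() gives count/mean/std/min/quartiles/max per metric column
df_dcb = df_acc.describe()
print(df_dcb)
```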

A reasonable way to read the results is to check R2 for the goodness of fit and MAE for the size of the error.
Also, since regression predicts concrete values rather than class labels, the number of result rows always equals the number of cross-validation folds.
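Since `models` holds a `[fitted_model, score]` pair per fold, one natural use is to keep the fold whose model scored best. A minimal sketch, with strings standing in for actual fitted estimators:

```python
# models is a list of [fitted_model, score] pairs, one per fold, as
# returned by closs_val_model (score is accuracy or R2). The strings
# here are placeholders for real fitted models.
models = [["model_a", 0.71], ["model_b", 0.84], ["model_c", 0.78]]

# pick the pair with the highest score
best_model, best_score = max(models, key=lambda pair: pair[1])
print(best_model, best_score)  # -> model_b 0.84
```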

Summary

I wonder if this will actually be useful for anything?
