Analysis of KNeighborsRegressor

import numpy as np
import matplotlib.pyplot as plt
import mglearn
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

# Assumed data setup: the wave dataset used in mglearn's regression examples
X, y = mglearn.datasets.make_wave(n_samples=40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# subplots(nrows, ncols) creates a grid of Axes; here 1 row x 3 columns.
# figsize=(width, height) in inches; Matplotlib's default is (6.4, 4.8).

line = np.linspace(-3, 3, 1000).reshape(-1, 1)
# linspace(start, stop, num) returns num evenly spaced values as an ndarray.
# reshape(-1, 1) makes it a single-column matrix; -1 lets NumPy infer the row count.

for n_neighbors, ax in zip([1, 3, 9], axes):
    # compare models with 1, 3, and 9 neighbors
    reg = KNeighborsRegressor(n_neighbors=n_neighbors)  # create the estimator

    reg.fit(X_train, y_train)  # fit the model

    ax.plot(line, reg.predict(line))
    ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
    ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)

    ax.set_title(
            "{} neighbor(s)\ntrain score: {:.2f}  test score: {:.2f}".format(
                    n_neighbors, reg.score(X_train, y_train),
                    reg.score(X_test, y_test)))
    ax.set_xlabel("feature")
    ax.set_ylabel("target")

axes[1].legend(["model predictions", "training data/target", "test data/target"], loc="best");
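Under the hood, KNeighborsRegressor predicts each query point as the average target of its k nearest training points. A minimal NumPy sketch of that idea (simplified to 1-D features and brute-force distance search; `knn_regress` and the toy data are illustrative, not scikit-learn's implementation):

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, n_neighbors):
    """Predict each query point as the mean target of its k nearest training points."""
    preds = []
    for x in X_query:
        dists = np.abs(X_train - x).ravel()          # distances to all training points
        nearest = np.argsort(dists)[:n_neighbors]    # indices of the k closest
        preds.append(y_train[nearest].mean())        # average their targets
    return np.array(preds)

# toy 1-D data (a stand-in for mglearn's make_wave)
rng = np.random.RandomState(0)
X_train = np.sort(rng.uniform(-3, 3, 20)).reshape(-1, 1)
y_train = np.sin(X_train).ravel() + rng.normal(scale=0.1, size=20)
```

With `n_neighbors=1` the prediction at any training point is exactly its own target, which is why the k=1 panel scores 1.00 on the training set while larger k produces a smoother, lower-variance curve.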