
Using LightGBM with DeepChem

Introduction

In addition to scikit-learn's prediction algorithms, DeepChem can also use XGBoost and similar libraries, as shown in the reference URL. In fact, MoleculeNet, a benchmark built on DeepChem, includes XGBoost among the existing methods it compares against. These days there is also a demand to include LightGBM in such comparisons, so I wrote some code to use LightGBM from DeepChem.

Environment

  • Python 3.6
  • DeepChem 2.2.1.dev54
  • scikit-learn 0.21.2
  • lightgbm 2.2.3

Source

The code is shown below. It is mostly adapted from the reference URL, so I will skip a detailed explanation.
Run it from the command line, specifying the CSV file, the target-variable column, and the SMILES column.

import argparse
import deepchem as dc
from lightgbm import LGBMRegressor


def model_builder(model_params, model_dir):
    estimator = LGBMRegressor()
    estimator.set_params(**model_params)
    return dc.models.SklearnModel(estimator, model_dir)


def main():

    parser = argparse.ArgumentParser()
    parser.add_argument("-train", type=str, required=True, help="trainig data file(csv)")
    parser.add_argument("-target_col", type=str, required=True)
    parser.add_argument("-smiles_col", type=str, default="smiles")
    args = parser.parse_args()

    # Featurize molecules with RDKit descriptors
    featurizer = dc.feat.RDKitDescriptors()

    # Load the training data
    loader = dc.data.CSVLoader(tasks=[args.target_col],
                               smiles_field=args.smiles_col,
                               featurizer=featurizer)

    dataset = loader.featurize(args.train)

    # Randomly split into train/validation/test (80/10/10)
    splitter = dc.splits.RandomSplitter(dataset)

    train_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split(dataset, frac_train=0.8,
                                                                                 frac_valid=0.1, frac_test=0.1)
    # Normalize the target variable, using statistics from the training set
    transformers = [
        dc.trans.NormalizationTransformer(transform_y=True,
                                          dataset=train_dataset)
    ]

    for transformer in transformers:
        train_dataset = transformer.transform(train_dataset)
        valid_dataset = transformer.transform(valid_dataset)
        test_dataset = transformer.transform(test_dataset)

    print(train_dataset.X.shape)
    print(valid_dataset.X.shape)
    print(test_dataset.X.shape)

    # Grid-search the LightGBM hyperparameters, selecting by R^2 on the validation set
    metric = dc.metrics.Metric(dc.metrics.r2_score)
    optimizer = dc.hyper.HyperparamOpt(model_builder, verbose=True)
    params_dict = {
        "num_leaves": [50, 100, 150, 200],
        "max_depth": [50],
        "min_data_in_leaf": [50],
        "learning_rate": [0.01],
        "n_estimators": [2000, 2500, 3000],
        "reg_lambda": [0],
    }
    best_model, best_model_hyperparams, all_model_results = optimizer.hyperparam_search(params_dict, train_dataset, valid_dataset, transformers, metric=metric)


if __name__ == "__main__":
    main()

Conclusion

Any estimator that conforms to the scikit-learn interface should work the same way.
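Concretely, the contract that dc.models.SklearnModel relies on is duck-typed: an object with set_params, fit, and predict is enough. A minimal sketch with a hypothetical estimator (MeanRegressor and its mean-predicting logic are my own illustration, not from DeepChem or the article) showing that any such object fits the model_builder pattern:

```python
# Hypothetical minimal estimator implementing the scikit-learn-style
# interface (set_params / fit / predict) that SklearnModel expects.
class MeanRegressor:
    def __init__(self, offset=0.0):
        self.offset = offset
        self.mean_ = None

    def get_params(self, deep=True):
        return {"offset": self.offset}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y):
        # "Learn" the training-set mean of the target
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        # Predict mean + offset for every sample
        return [self.mean_ + self.offset for _ in X]


def model_builder(model_params, model_dir):
    # Same pattern as in the script: swap in any scikit-learn-style
    # estimator (LGBMRegressor, RandomForestRegressor, ...).
    estimator = MeanRegressor()
    estimator.set_params(**model_params)
    return estimator  # in the real script: dc.models.SklearnModel(estimator, model_dir)


est = model_builder({"offset": 1.0}, "/tmp/model").fit([[0], [1]], [2.0, 4.0])
print(est.predict([[5]]))  # [4.0]  (mean 3.0 + offset 1.0)
```

Since HyperparamOpt only calls model_builder with a parameter dict, nothing else needs to change when the estimator is swapped.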

References
