@maruten


Tuning hyperparameters for time-series forecasting, including those of the preprocessing step

What I want to solve

I want to use KerasTuner to tune a GRU's hyperparameters together with the preprocessing parameters window_size and batch_size. In the code below, I would like to pass validation_data to tuner.search(), but I get the error shown below.

Problem / error encountered

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Relevant source code

import tensorflow as tf
import keras_tuner
from keras_tuner import HyperModel, Objective


def windowed_dataset_test(series, window_size, batch_size):
    # Slice the series into overlapping windows of window_size + 1 values,
    # then split each window into (inputs, next-step target) pairs.
    dataset = tf.data.Dataset.from_tensor_slices(series)
    dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
    dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
    dataset = dataset.map(lambda window: (window[:-1], window[-1]))
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset
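
# Illustration (not from the original post): a quick shape check of the
# windowed pipeline on a toy series of ten values.
import numpy as np

demo = windowed_dataset_test(np.arange(10, dtype="float32"),
                             window_size=3, batch_size=2)
for x_batch, y_batch in demo.take(1):
    print(x_batch.shape, y_batch.shape)  # -> (2, 3) (2,)
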
class MyHyperModel(HyperModel):
    def build(self, hp):
        strategy = tf.distribute.MirroredStrategy()
        window_size = hp.Int('window_size', min_value=7, max_value=90, step=10)
        with strategy.scope():
            model = tf.keras.models.Sequential()
            for i in range(hp.Int("num_layers", min_value=1, max_value=5, step=1)):  # tuning the num of layers
                model.add(tf.keras.layers.GRU(
                    units=hp.Int("units", min_value=10, max_value=500, step=10),  # tuning the num of units
                    input_shape=[window_size, 1],  # window_size is the hyperparameter defined above
                    return_sequences=True,
                ))
            model.add(tf.keras.layers.Dense(1))
            learning_rate = hp.Float("lr", min_value=1e-8, max_value=1e-2, sampling="log")  # tuning the learning rate
            model.compile(
                optimizer=tf.optimizers.SGD(learning_rate=learning_rate, momentum=hp.Float('momentum', min_value=0, max_value=0.9, step=0.1)),
                # tuning the momentum
                loss="mse",
                metrics="mse")
        return model

    def fit(self, hp, model, x, **kwargs):
        # Window the training series with the batch_size and window_size
        # values currently being tuned before handing it to model.fit().
        batch_size = hp.Choice('batch_size', [8, 16, 32, 64, 128])
        x = windowed_dataset_test(x, hp.get('window_size'), batch_size)

        return model.fit(x, **kwargs)

tuner = keras_tuner.BayesianOptimization(
    MyHyperModel(),
    objective=Objective('mse', direction='min'),
    max_trials=5,
    overwrite=True,
    directory="hyperprameter_tuning",
    project_name="GRU_only_price_baysian",
)

series_price = df['nationwide_price']
x_train = series_price['2010-03-13':'2021-12-31'].to_numpy()
x_test = series_price['2022-01-01':].to_numpy()
val_split = int(len(x_train) * 0.9)  # or any other split ratio
x_val = x_train[val_split:]
x_train = x_train[:val_split]
with tf.device('/device:GPU:0'):
    tuner.search(x_train, epochs=3, validation_data=x_val)

hypermodel = MyHyperModel()
tuner.results_summary()

What I tried myself

If I omit the validation data, training proceeds without problems, but the model ends up severely underfitted.
I would also like to turn the validation data into windowed data using the window_size and batch_size that are being tuned.
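
One possible approach (a sketch, not from the original thread) is to window the validation data inside MyHyperModel.fit() with the same tuned hyperparameters, so that Keras receives a ready-made tf.data.Dataset rather than a bare NumPy array; it is the bare array that trips Keras's "if validation_data:" truth-value check in the traceback below.

    def fit(self, hp, model, x, **kwargs):
        batch_size = hp.Choice('batch_size', [8, 16, 32, 64, 128])
        window_size = hp.get('window_size')
        x = windowed_dataset_test(x, window_size, batch_size)
        # Assumes tuner.search() forwarded the raw validation array via
        # kwargs; window it the same way before model.fit() sees it.
        val = kwargs.pop('validation_data', None)
        if val is not None:
            kwargs['validation_data'] = windowed_dataset_test(
                val, window_size, batch_size)
        return model.fit(x, **kwargs)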


1 Answer

The error message is too short to tell where it occurs, so could you provide the full traceback? Feel free to redact information such as your username.


Comments

  1. @maruten (Questioner)

    My apologies; here is the full traceback:

    RuntimeError Traceback (most recent call last)
    in ()
    49 x_train = x_train[:val_split]
    50 with tf.device('/device:GPU:0'):
    ---> 51 tuner.search(x_train, epochs=3, validation_data=x_val)
    52
    53 hypermodel = MyHyperModel()

    4 frames
    /usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/oracle.py in _check_consecutive_failures(self)
    532 consecutive_failures = 0
    533 if consecutive_failures == self.max_consecutive_failed_trials:
    --> 534 raise RuntimeError(
    535 "Number of consecutive failures exceeded the limit "
    536 f"of {self.max_consecutive_failed_trials}.\n"

    RuntimeError: Number of consecutive failures exceeded the limit of 3.
    Traceback (most recent call last):
    File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/base_tuner.py", line 273, in _try_run_and_update_trial
    self._run_and_update_trial(trial, *fit_args, **fit_kwargs)
    File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/base_tuner.py", line 238, in _run_and_update_trial
    results = self.run_trial(trial, *fit_args, **fit_kwargs)
    File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/tuner.py", line 314, in run_trial
    obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
    File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/tuner.py", line 233, in _build_and_fit_model
    results = self.hypermodel.fit(hp, model, *args, **kwargs)
    File "", line 33, in fit
    return model.fit(x, **kwargs)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 1664, in fit
    if validation_data:
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

  2. If I remember correctly, validation_data has to be passed as a tuple, so if you really want to pass only x_val, wouldn't something like validation_data=(x_val,) work?

    Incidentally, if it doesn't run even when you execute it without Keras Tuner, then it is not a Keras Tuner problem. Please properly isolate where the problem lies.

    1. Actually, I suspect it won't run anyway, because the model's output isn't set up properly.
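
       For reference, a sketch of one way to fix that (not from the thread): make only the non-final GRU layers return sequences, so that Dense(1) receives a single vector per sample rather than a whole sequence.

       num_layers = hp.Int("num_layers", min_value=1, max_value=5, step=1)
       model.add(tf.keras.layers.InputLayer(input_shape=(window_size, 1)))
       for i in range(num_layers):
           model.add(tf.keras.layers.GRU(
               units=hp.Int("units", min_value=10, max_value=500, step=10),
               # Intermediate layers pass the full sequence on; the final
               # layer returns only its last state, matching the scalar target.
               return_sequences=(i < num_layers - 1),
           ))
       model.add(tf.keras.layers.Dense(1))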
