@Kone7


I was trying to do anomaly detection on images, but an AttributeError is raised and I can't get any further. Please tell me how to resolve it.

What I want to solve

I'm a programming beginner.
https://qiita.com/michelle0915/items/28bc5b844bd0d7ab597b
Using the post above as a reference, I tried to run anomaly detection on my own images, but I don't really understand the code for the part that localizes the anomalies, so even when it throws an error I can't tell what is wrong.

How should I modify it to make it work properly?

Problem / error encountered

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/var/folders/vm/snb6b63s01b67s_d0m_qb5q40000gp/T/ipykernel_28079/2542085184.py in <module>
      1 # Reconstruct the images
----> 2 z_points = encoder.predict(original_images)
      3 reconst_images = decoder.predict(z_points)
      4 
      5 # Compute the difference from the original images

~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
    907         max_queue_size=max_queue_size,
    908         workers=workers,
--> 909         use_multiprocessing=use_multiprocessing)
    910 
    911   def reset_metrics(self):

~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in predict(self, model, x, batch_size, verbose, steps, callbacks, **kwargs)
    713     batch_size = model._validate_or_infer_batch_size(batch_size, steps, x)
    714     x, _, _ = model._standardize_user_data(
--> 715         x, check_steps=True, steps_name='steps', steps=steps)
    716     return predict_loop(
    717         model,

~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2470           feed_input_shapes,
   2471           check_batch_axis=False,  # Don't enforce the batch size.
-> 2472           exception_prefix='input')
   2473 
   2474     # Get typespecs for the input data and sanitize it if necessary.

~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    517   if shapes is not None:
    518     data = [
--> 519         standardize_single_array(x, shape) for (x, shape) in zip(data, shapes)
    520     ]
    521   else:

~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in <listcomp>(.0)
    517   if shapes is not None:
    518     data = [
--> 519         standardize_single_array(x, shape) for (x, shape) in zip(data, shapes)
    520     ]
    521   else:

~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_single_array(x, expected_shape)
    442         'Expected an array data type but received an integer: {}'.format(x))
    443 
--> 444   if (x.shape is not None and len(x.shape) == 1 and
    445       (expected_shape is None or len(expected_shape) != 1)):
    446     if tensor_util.is_tensor(x):

AttributeError: 'str' object has no attribute 'shape'

Relevant source code

# Imports (inferred from the code below; not shown in the original post)
import glob
import cv2
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose, Dense,
                                     Flatten, Reshape, Activation, LeakyReLU)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# Load & preprocess the training data
train_images = glob.glob("//*.png")
train = []
for i in train_images:
    image = cv2.imread(i)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    train.append(image)

train = np.array(train)
train = train.astype('float32') / 255
# Training hyperparameters
LEARNING_RATE = 0.0005
BATCH_SIZE = 8
Z_DIM = 50
EPOCHS = 3
# Encoder
encoder_input = Input(shape=(128,128,3), name='encoder_input')
x = encoder_input
x = Conv2D(filters=32, kernel_size=3, strides=1, padding='same', name='encoder_conv_0')(x)
x = LeakyReLU()(x)
x = Conv2D(filters=32, kernel_size=3, strides=1, padding='same', name='encoder_conv_0_1')(x)
x = LeakyReLU()(x)
x = Conv2D(filters=64, kernel_size=3, strides=2, padding='same', name='encoder_conv_1')(x)
x = LeakyReLU()(x)
x = Conv2D(filters=64, kernel_size=3, strides=2, padding='same', name='encoder_conv_2')(x)
x = LeakyReLU()(x)
x = Conv2D(filters=64, kernel_size=3, strides=1, padding='same', name='encoder_conv_3')(x)
x = LeakyReLU()(x)
shape_before_flattening = K.int_shape(x)[1:]
x = Flatten()(x)
encoder_output = Dense(Z_DIM, name='encoder_output')(x)
encoder = Model(encoder_input, encoder_output)
# Decoder
decoder_input = Input(shape=(Z_DIM,), name='decoder_input')
x = Dense(np.prod(shape_before_flattening))(decoder_input)
x = Reshape(shape_before_flattening)(x)
x = Conv2DTranspose(filters=64, kernel_size=3, strides=1, padding='same', name='decoder_conv_t_0')(x)
x = LeakyReLU()(x)
x = Conv2DTranspose(filters=64, kernel_size=3, strides=2, padding='same', name='decoder_conv_t_1')(x)
x = LeakyReLU()(x)
x = Conv2DTranspose(filters=32, kernel_size=3, strides=2, padding='same', name='decoder_conv_t_2')(x)
x = LeakyReLU()(x)
x = Conv2DTranspose(filters=32, kernel_size=3, strides=1, padding='same', name='decoder_conv_t_2_5')(x)
x = LeakyReLU()(x)
x = Conv2DTranspose(filters=3, kernel_size=3, strides=1, padding='same', name='decoder_conv_t_3')(x)
x = Activation('sigmoid')(x)
decoder_output = x
decoder = Model(decoder_input, decoder_output)
# Connect the encoder and decoder
model_input = encoder_input
model_output = decoder(encoder_output)
model = Model(model_input, model_output)
# Training settings (optimizer and loss function)
optimizer = Adam(learning_rate=LEARNING_RATE)

def r_loss(y_true, y_pred):
  return K.mean(K.square(y_true - y_pred), axis=[1,2,3])

model.compile(optimizer=optimizer, loss=r_loss)
# Run training
model.fit(
    train,
    train,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS
)
# Load the test images
original_images = glob.glob("//*.png")
test = []
for n in original_images:
    img = cv2.imread(n)
    img= cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    test.append(img)

test = np.array(test)
test = test.astype('float32') / 255
# Reconstruct the images
z_points = encoder.predict(original_images)
reconst_images = decoder.predict(z_points)

# Compute the difference from the original images
diff_images = np.absolute(reconst_images - original_images)

The error is raised here.

What I tried myself

I suspected that the problem is that the part below ends up being treated as strings, but I couldn't work out how to deal with it.

original_images = glob.glob("//*.png")
test = []
for n in original_images:
    img = cv2.imread(n)
    img= cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    test.append(img)

test = np.array(test)
test = test.astype('float32') / 255
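
For reference, a quick type check along these lines (a sketch reusing the variables above; the exact shape depends on your images) would confirm that glob.glob only returns file-path strings, while test is the numeric array the encoder expects:

# original_images holds file-path strings returned by glob.glob,
# while test is the preprocessed float32 image array.
print(type(original_images[0]))  # <class 'str'>
print(test.dtype, test.shape)    # float32 (num_images, height, width, 3)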

1 Answer

I'm not sure whether this is right, but looking at the program in the linked article, I believe it uses the processed test rather than original_images.
original_images is simply the list of selected image files, so its elements are of type str. The img part appears to read each image file and convert it into an np.array so that it matches a format the encoder can process.
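
For illustration, a minimal sketch of that fix (reusing the variable names from the question; untested against the original data):

# Feed the preprocessed array, not the list of file paths.
z_points = encoder.predict(test)
reconst_images = decoder.predict(z_points)

# Compute the difference against the same preprocessed array.
diff_images = np.absolute(reconst_images - test)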


Comments

  1. @Kone7

    Questioner

    You're right: I convert the images to an np.array and then feed in the unconverted version anyway.
    I hadn't noticed that...
    Thank you for your answer.
