There's a GitHub repository that supports a bunch of semantic segmentation models, so I'm using it to see what performance I can get. DeepLabV3+ and the like.


Purpose

I found a GitHub repository that supports a bunch of semantic segmentation models, so I'm using it to see what performance I can get.
The repository in question is here:
https://github.com/luyanger1799/Amazing-Semantic-Segmentation

Before the results

The results below for Run 1 and Run 2 are wrong!!!
Sorry.

        Data Augmentation Rate --> 0.0

With this setting, augmentation never takes effect at all.
My mistake!!!!!!
(Somebody could have told me...)
I'm redoing these runs; please bear with me.
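
To spell out why: as I understand it, the pipeline draws a random number per image and only applies the transforms when the draw falls below the Data Augmentation Rate, so a rate of 0.0 leaves rotation/zoom/flip configured but never triggered. A minimal sketch of that gating logic (my reading of the behavior, not the repository's actual code):

import random
import numpy as np

def maybe_augment(image, label, data_aug_rate=0.0):
    # The transforms only run when a per-image random draw falls below
    # data_aug_rate. With data_aug_rate == 0.0 the branch is never taken,
    # so the rotation/zoom/flip settings are silently ignored.
    if random.random() < data_aug_rate:
        image = np.fliplr(image)   # stand-in for the real transform chain
        label = np.fliplr(label)
    return image, label

# With a rate of 0.0 the output is always identical to the input.
img = np.zeros((256, 256, 3)); lbl = np.zeros((256, 256))
assert (maybe_augment(img, lbl, 0.0)[0] == img).all()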

Results

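All runs below are compared on val_mean_io_u. I take this to be Keras's MeanIoU computed over the 32 CamVid classes (an assumption based on the metric name in the logs; I haven't checked the repository's exact metric code). Roughly, it accumulates a confusion matrix over all pixels and averages the per-class IoU:

import tensorflow as tf

# MeanIoU expects integer class indices; it accumulates a confusion matrix
# and averages intersection-over-union across the classes that appear.
miou = tf.keras.metrics.MeanIoU(num_classes=32)
y_true = [[0, 0, 1], [1, 2, 2]]   # toy ground-truth label map
y_pred = [[0, 1, 1], [1, 2, 0]]   # toy predicted label map
miou.update_state(y_true, y_pred)
print(float(miou.result()))       # mean IoU over the classes present
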
[Invalid] Run 1 (exploring the effect of augmentation) <-- wrong, as noted above.

First, the reference run for this article.

The settings are as follows.

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet50
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 8
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.0
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 5.0
        Zoom --> [0.1]
        Channel Shift --> 0.0

The first 10 epochs. The best val_mean_io_u was 0.2916.

Epoch 1/300
52/52 [==============================] - 61s 1s/step - loss: 89854.7969 - mean_io_u: 0.0889 - val_loss: 4838029.0000 - val_mean_io_u: 0.0375

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00029999177814635854.
Epoch 2/300
52/52 [==============================] - 54s 1s/step - loss: 56099.2969 - mean_io_u: 0.1373 - val_loss: 626711.7500 - val_mean_io_u: 0.0610

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00029996711348705303.
Epoch 3/300
52/52 [==============================] - 66s 1s/step - loss: 48637.7344 - mean_io_u: 0.1652 - val_loss: 584699.3125 - val_mean_io_u: 0.0639

Epoch 00004: LearningRateScheduler reducing learning rate to 0.0002999260087268414.
Epoch 4/300
52/52 [==============================] - 91s 2s/step - loss: 43062.8906 - mean_io_u: 0.1899 - val_loss: 82439.8672 - val_mean_io_u: 0.1594

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0002998684683733238.
Epoch 5/300
52/52 [==============================] - 119s 2s/step - loss: 40139.8750 - mean_io_u: 0.2087 - val_loss: 48446.8047 - val_mean_io_u: 0.2208

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0002997944987364483.
Epoch 6/300
52/52 [==============================] - 157s 3s/step - loss: 36975.4180 - mean_io_u: 0.2235 - val_loss: 48954.8555 - val_mean_io_u: 0.2482

Epoch 00007: LearningRateScheduler reducing learning rate to 0.00029970410792781927.
Epoch 7/300
52/52 [==============================] - 191s 4s/step - loss: 34668.2188 - mean_io_u: 0.2472 - val_loss: 42038.1367 - val_mean_io_u: 0.2570

Epoch 00008: LearningRateScheduler reducing learning rate to 0.00029959730585980745.
Epoch 8/300
52/52 [==============================] - 218s 4s/step - loss: 32943.8516 - mean_io_u: 0.2600 - val_loss: 37517.3281 - val_mean_io_u: 0.2845

Epoch 00009: LearningRateScheduler reducing learning rate to 0.00029947410424446315.
Epoch 9/300
52/52 [==============================] - 249s 5s/step - loss: 30667.6152 - mean_io_u: 0.2835 - val_loss: 36129.5820 - val_mean_io_u: 0.2520

Epoch 00010: LearningRateScheduler reducing learning rate to 0.00029933451659223183.
Epoch 10/300
52/52 [==============================] - 262s 5s/step - loss: 29595.3750 - mean_io_u: 0.2999 - val_loss: 36362.9062 - val_mean_io_u: 0.2916



The last 10 epochs. The best val_mean_io_u was 0.4908.

Epoch 00291: LearningRateScheduler reducing learning rate to 9.214417895274202e-07.
Epoch 291/300
52/52 [==============================] - 361s 7s/step - loss: 6189.0298 - mean_io_u: 0.7719 - val_loss: 48670.3945 - val_mean_io_u: 0.4908

Epoch 00292: LearningRateScheduler reducing learning rate to 7.654834077681545e-07.
Epoch 292/300
52/52 [==============================] - 361s 7s/step - loss: 6234.5962 - mean_io_u: 0.7714 - val_loss: 38513.7070 - val_mean_io_u: 0.4161

Epoch 00293: LearningRateScheduler reducing learning rate to 6.258957555368196e-07.
Epoch 293/300
52/52 [==============================] - 361s 7s/step - loss: 6169.3179 - mean_io_u: 0.7722 - val_loss: 50993.7188 - val_mean_io_u: 0.3860

Epoch 00294: LearningRateScheduler reducing learning rate to 5.026941401925086e-07.
Epoch 294/300
52/52 [==============================] - 361s 7s/step - loss: 6170.3516 - mean_io_u: 0.7707 - val_loss: 53318.4062 - val_mean_io_u: 0.4438

Epoch 00295: LearningRateScheduler reducing learning rate to 3.958920721806797e-07.
Epoch 295/300
52/52 [==============================] - 361s 7s/step - loss: 6187.1567 - mean_io_u: 0.7709 - val_loss: 52796.4141 - val_mean_io_u: 0.4067

Epoch 00296: LearningRateScheduler reducing learning rate to 3.055012635516537e-07.
Epoch 296/300
52/52 [==============================] - 361s 7s/step - loss: 6196.3765 - mean_io_u: 0.7724 - val_loss: 46177.6641 - val_mean_io_u: 0.4249

Epoch 00297: LearningRateScheduler reducing learning rate to 2.3153162667618543e-07.
Epoch 297/300
52/52 [==============================] - 361s 7s/step - loss: 6186.9629 - mean_io_u: 0.7720 - val_loss: 41963.2383 - val_mean_io_u: 0.4324

Epoch 00298: LearningRateScheduler reducing learning rate to 1.7399127315854657e-07.
Epoch 298/300
52/52 [==============================] - 361s 7s/step - loss: 6207.0586 - mean_io_u: 0.7711 - val_loss: 53273.1016 - val_mean_io_u: 0.4153

Epoch 00299: LearningRateScheduler reducing learning rate to 1.3288651294691732e-07.
Epoch 299/300
52/52 [==============================] - 361s 7s/step - loss: 6169.7905 - mean_io_u: 0.7702 - val_loss: 45624.3047 - val_mean_io_u: 0.4071

Epoch 00300: LearningRateScheduler reducing learning rate to 1.0822185364145413e-07.
Epoch 300/300
52/52 [==============================] - 361s 7s/step - loss: 6178.2095 - mean_io_u: 0.7718 - val_loss: 51067.3398 - val_mean_io_u: 0.3722
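
A side note on the learning rate: judging from the values the LearningRateScheduler prints (about 3e-4 at the start, 0.00015005 at epoch 151, about 1.08e-7 at epoch 300), the schedule looks like a cosine decay from 3e-4 down to 1e-7 over the 300 epochs. The sketch below reproduces the logged values; it is my reconstruction, not the repository's code:

import math

def cosine_lr(epoch, max_lr=3e-4, min_lr=1e-7, num_epochs=300):
    # Keras's LearningRateScheduler passes a 0-based epoch index.
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * epoch / num_epochs))

print(cosine_lr(1))    # ~0.000299991778..., matches "Epoch 00002" in the log
print(cosine_lr(150))  # 0.00015005, matches "Epoch 00151"
print(cosine_lr(299))  # ~1.082e-07, matches "Epoch 00300"

# A function like this can be hooked in via tf.keras.callbacks.LearningRateScheduler(cosine_lr).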

[Invalid] Run 2 (more augmentation) <-- wrong, as noted above.

The settings are as follows. (★ marks the values changed from Run 1.)

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet50
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 8
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.0
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 10.0★
        Zoom --> [0.2]★
        Channel Shift --> 0.1★

The first 10 epochs. The best val_mean_io_u was 0.3243. A good start, or just luck?

Epoch 1/300
52/52 [==============================] - 63s 1s/step - loss: 89454.6719 - mean_io_u: 0.0912 - val_loss: 3623744.7500 - val_mean_io_u: 0.0438

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00029999177814635854.
Epoch 2/300
52/52 [==============================] - 92s 2s/step - loss: 56037.2383 - mean_io_u: 0.1385 - val_loss: 448488.5000 - val_mean_io_u: 0.0565

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00029996711348705303.
Epoch 3/300
52/52 [==============================] - 343s 7s/step - loss: 47359.6797 - mean_io_u: 0.1658 - val_loss: 113093.2969 - val_mean_io_u: 0.1230

Epoch 00004: LearningRateScheduler reducing learning rate to 0.0002999260087268414.
Epoch 4/300
52/52 [==============================] - 363s 7s/step - loss: 43799.0898 - mean_io_u: 0.1870 - val_loss: 64006.3242 - val_mean_io_u: 0.1854

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0002998684683733238.
Epoch 5/300
52/52 [==============================] - 363s 7s/step - loss: 39738.7617 - mean_io_u: 0.2064 - val_loss: 53173.1133 - val_mean_io_u: 0.2211

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0002997944987364483.
Epoch 6/300
52/52 [==============================] - 363s 7s/step - loss: 37022.7500 - mean_io_u: 0.2232 - val_loss: 42792.3008 - val_mean_io_u: 0.2476

Epoch 00007: LearningRateScheduler reducing learning rate to 0.00029970410792781927.
Epoch 7/300
52/52 [==============================] - 362s 7s/step - loss: 36197.5508 - mean_io_u: 0.2375 - val_loss: 48736.1406 - val_mean_io_u: 0.2034

Epoch 00008: LearningRateScheduler reducing learning rate to 0.00029959730585980745.
Epoch 8/300
52/52 [==============================] - 364s 7s/step - loss: 33251.0586 - mean_io_u: 0.2575 - val_loss: 45869.1484 - val_mean_io_u: 0.2790

Epoch 00009: LearningRateScheduler reducing learning rate to 0.00029947410424446315.
Epoch 9/300
52/52 [==============================] - 363s 7s/step - loss: 31285.2715 - mean_io_u: 0.2837 - val_loss: 41630.4805 - val_mean_io_u: 0.3243

Epoch 00010: LearningRateScheduler reducing learning rate to 0.00029933451659223183.
Epoch 10/300
52/52 [==============================] - 361s 7s/step - loss: 29771.8164 - mean_io_u: 0.3023 - val_loss: 34137.9453 - val_mean_io_u: 0.2640

Run 3 (enabling augmentation)

The settings are as follows.

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet50
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 8
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.5★
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 10.0★
        Zoom --> 0.2★
        Channel Shift --> 0.1

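As an aside, my reading of these numbers: rotation is in degrees, zoom is a fraction of scale, and channel shift is a fractional intensity shift. In standard Keras terms the same settings would look roughly like the following (an illustration of the conventional meaning of these parameters, not the repository's own augmentation code):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Roughly equivalent to "Rotation 10.0 / Zoom 0.2 / Channel Shift 0.1 /
# Horizontal Flip True" above, expressed with Keras's ImageDataGenerator.
augmenter = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=False,
    rotation_range=10.0,      # degrees
    zoom_range=0.2,           # scale sampled from [0.8, 1.2]
    channel_shift_range=0.1,  # per-channel shift (assumes pixel values in [0, 1])
)
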
The progress looks like this:

Epoch 79/300
52/52 [==] - 360s 7s/step - loss: 18545.3730 - mean_io_u: 0.4889 - val_loss: 47864.3594 - val_mean_io_u: 0.3305

Epoch 00080: LearningRateScheduler reducing learning rate to 0.00025154755165391495.
Epoch 80/300
52/52 [==] - 360s 7s/step - loss: 18201.3906 - mean_io_u: 0.4930 - val_loss: 31374.0371 - val_mean_io_u: 0.4434

Epoch 00081: LearningRateScheduler reducing learning rate to 0.00025038613442351075.
Epoch 81/300
52/52 [==] - 360s 7s/step - loss: 19358.6328 - mean_io_u: 0.4808 - val_loss: 25203.5059 - val_mean_io_u: 0.4636

Epoch 00082: LearningRateScheduler reducing learning rate to 0.0002492137142052816.
Epoch 82/300
52/52 [==] - 360s 7s/step - loss: 18497.4629 - mean_io_u: 0.4865 - val_loss: 37067.1367 - val_mean_io_u: 0.3622

Epoch 00083: LearningRateScheduler reducing learning rate to 0.00024803041956831634.
Epoch 83/300
52/52 [==] - 360s 7s/step - loss: 20073.7539 - mean_io_u: 0.4628 - val_loss: 40759.3125 - val_mean_io_u: 0.4400

Epoch 00084: LearningRateScheduler reducing learning rate to 0.0002468363802742064.
Epoch 84/300
52/52 [==] - 360s 7s/step - loss: 17822.4258 - mean_io_u: 0.4938 - val_loss: 34194.4219 - val_mean_io_u: 0.3999

Epoch 00085: LearningRateScheduler reducing learning rate to 0.000245631727262816.
Epoch 85/300
52/52 [==] - 363s 7s/step - loss: 18174.8242 - mean_io_u: 0.4932 - val_loss: 33820.6367 - val_mean_io_u: 0.3827

Epoch 00086: LearningRateScheduler reducing learning rate to 0.0002444165926379231.
Epoch 86/300
52/52 [==] - 360s 7s/step - loss: 19058.8965 - mean_io_u: 0.4800 - val_loss: 35308.7734 - val_mean_io_u: 0.4374

Not bad.

Run 4 (augmentation enabled, with slightly different values)

The settings are as follows.

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet50
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 8
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.5
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 5.0
        Zoom --> 0.1
        Channel Shift --> 0.1

The progress looks like this:

Epoch 00117: LearningRateScheduler reducing learning rate to 0.0002023183784959062.
Epoch 117/300
52/52 [==] - 363s 7s/step - loss: 16876.9238 - mean_io_u: 0.5123 - val_loss: 32107.6934 - val_mean_io_u: 0.4336

Epoch 00118: LearningRateScheduler reducing learning rate to 0.00020084375114078147.
Epoch 118/300
52/52 [==] - 362s 7s/step - loss: 15673.7490 - mean_io_u: 0.5248 - val_loss: 36437.9805 - val_mean_io_u: 0.4470

Epoch 00119: LearningRateScheduler reducing learning rate to 0.00019936355367845054.
Epoch 119/300
52/52 [==] - 362s 7s/step - loss: 15959.3994 - mean_io_u: 0.5219 - val_loss: 40523.1992 - val_mean_io_u: 0.4580

Epoch 00120: LearningRateScheduler reducing learning rate to 0.0001978779484292456.
Epoch 120/300
52/52 [==] - 362s 7s/step - loss: 15329.0020 - mean_io_u: 0.5312 - val_loss: 43972.2344 - val_mean_io_u: 0.3720

Epoch 00121: LearningRateScheduler reducing learning rate to 0.00019638709830652336.
Epoch 121/300
52/52 [==] - 362s 7s/step - loss: 15349.1074 - mean_io_u: 0.5255 - val_loss: 36461.8984 - val_mean_io_u: 0.4222

Epoch 00122: LearningRateScheduler reducing learning rate to 0.00019489116679880018.
Epoch 122/300
52/52 [==] - 362s 7s/step - loss: 15965.5615 - mean_io_u: 0.5266 - val_loss: 27753.3066 - val_mean_io_u: 0.4364

Epoch 00123: LearningRateScheduler reducing learning rate to 0.0001933903179518235.
Epoch 123/300
52/52 [==] - 363s 7s/step - loss: 16275.4512 - mean_io_u: 0.5250 - val_loss: 23859.1914 - val_mean_io_u: 0.4963

Epoch 00124: LearningRateScheduler reducing learning rate to 0.00019188471635058244.
Epoch 124/300
52/52 [==] - 363s 7s/step - loss: 15704.1211 - mean_io_u: 0.5308 - val_loss: 34267.9141 - val_mean_io_u: 0.3939

Epoch 00125: LearningRateScheduler reducing learning rate to 0.00019037452710125906.
Epoch 125/300
52/52 [==] - 363s 7s/step - loss: 15373.4512 - mean_io_u: 0.5292 - val_loss: 37838.2383 - val_mean_io_u: 0.4265

Epoch 00126: LearningRateScheduler reducing learning rate to 0.000188859915813123.
Epoch 126/300
52/52 [==] - 362s 7s/step - loss: 16315.5293 - mean_io_u: 0.5243 - val_loss: 32161.3086 - val_mean_io_u: 0.4616

Epoch 00127: LearningRateScheduler reducing learning rate to 0.00018734104858036995.
Epoch 127/300
52/52 [==] - 363s 7s/step - loss: 16153.8672 - mean_io_u: 0.5211 - val_loss: 31863.3027 - val_mean_io_u: 0.4547

Epoch 00128: LearningRateScheduler reducing learning rate to 0.0001858180919639082.
Epoch 128/300
52/52 [==] - 362s 7s/step - loss: 15989.2305 - mean_io_u: 0.5274 - val_loss: 31548.3965 - val_mean_io_u: 0.4523

Epoch 00129: LearningRateScheduler reducing learning rate to 0.00018429121297309283.
Epoch 129/300
52/52 [==] - 362s 7s/step - loss: 16574.8555 - mean_io_u: 0.5251 - val_loss: 36970.0547 - val_mean_io_u: 0.4592

Epoch 00130: LearningRateScheduler reducing learning rate to 0.00018276057904741157.
Epoch 130/300
52/52 [==] - 363s 7s/step - loss: 15560.6650 - mean_io_u: 0.5310 - val_loss: 28971.3867 - val_mean_io_u: 0.4283

Epoch 00131: LearningRateScheduler reducing learning rate to 0.000181226358038123.
Epoch 131/300
52/52 [==] - 362s 7s/step - loss: 15101.3955 - mean_io_u: 0.5385 - val_loss: 42130.1641 - val_mean_io_u: 0.4547

Epoch 00132: LearningRateScheduler reducing learning rate to 0.00017968871818984994.
Epoch 132/300
52/52 [==] - 363s 7s/step - loss: 14920.5439 - mean_io_u: 0.5449 - val_loss: 35276.6406 - val_mean_io_u: 0.4440

Epoch 00133: LearningRateScheduler reducing learning rate to 0.0001781478281221294.
Epoch 133/300
52/52 [==] - 362s 7s/step - loss: 15017.3975 - mean_io_u: 0.5361 - val_loss: 31138.0371 - val_mean_io_u: 0.4209

Not bad. That said, the difference from Run 3 is not all that big.

Run 5 (switching the ResNet backbone: ResNet50 → ResNet101; batch size reduced to 4)

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet101
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 4
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.5
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 5.0
        Zoom --> 0.1
        Channel Shift --> 0.1

Epoch 51/300
105/105 [==] - 439s 4s/step - loss: 20754.2754 - mean_io_u: 0.4675 - val_loss: 35808.4609 - val_mean_io_u: 0.3351

Epoch 00052: LearningRateScheduler reducing learning rate to 0.0002791182669492413.
Epoch 52/300
105/105 [==] - 439s 4s/step - loss: 21148.6484 - mean_io_u: 0.4660 - val_loss: 24649.5918 - val_mean_io_u: 0.4525

Epoch 00053: LearningRateScheduler reducing learning rate to 0.00027831187081106795.
Epoch 53/300
105/105 [==] - 439s 4s/step - loss: 21197.6836 - mean_io_u: 0.4637 - val_loss: 38648.6953 - val_mean_io_u: 0.4103

Epoch 00054: LearningRateScheduler reducing learning rate to 0.0002774914093133802.
Epoch 54/300
105/105 [==] - 439s 4s/step - loss: 21719.2695 - mean_io_u: 0.4615 - val_loss: 36425.7266 - val_mean_io_u: 0.3813

Epoch 00055: LearningRateScheduler reducing learning rate to 0.00027665697242902713.
Epoch 55/300
105/105 [==] - 439s 4s/step - loss: 20564.6484 - mean_io_u: 0.4682 - val_loss: 27104.0098 - val_mean_io_u: 0.3563

Epoch 00056: LearningRateScheduler reducing learning rate to 0.0002758086516634163.
Epoch 56/300
105/105 [==] - 439s 4s/step - loss: 21082.5332 - mean_io_u: 0.4665 - val_loss: 36071.4336 - val_mean_io_u: 0.3828

Epoch 00057: LearningRateScheduler reducing learning rate to 0.0002749465400444794.
Epoch 57/300
105/105 [==] - 439s 4s/step - loss: 20906.1387 - mean_io_u: 0.4670 - val_loss: 29424.6289 - val_mean_io_u: 0.3872

Epoch 00058: LearningRateScheduler reducing learning rate to 0.0002740707321124705.
Epoch 58/300
105/105 [==] - 439s 4s/step - loss: 20159.6309 - mean_io_u: 0.4746 - val_loss: 32868.6758 - val_mean_io_u: 0.4212

Epoch 00059: LearningRateScheduler reducing learning rate to 0.0002731813239095989.
Epoch 59/300
105/105 [==] - 439s 4s/step - loss: 30735.2520 - mean_io_u: 0.3735 - val_loss: 100592.9219 - val_mean_io_u: 0.2188

Epoch 00060: LearningRateScheduler reducing learning rate to 0.00027227841296949664.
Epoch 60/300
105/105 [==] - 439s 4s/step - loss: 23701.4707 - mean_io_u: 0.4360 - val_loss: 45182.4414 - val_mean_io_u: 0.3620

Epoch 00061: LearningRateScheduler reducing learning rate to 0.00027136209830652333.
Epoch 61/300
105/105 [==] - 439s 4s/step - loss: 22356.2168 - mean_io_u: 0.4572 - val_loss: 35422.8008 - val_mean_io_u: 0.4299

Epoch 00062: LearningRateScheduler reducing learning rate to 0.0002704324804049076.
Epoch 62/300
105/105 [==] - 439s 4s/step - loss: 20632.4336 - mean_io_u: 0.4744 - val_loss: 21433.0664 - val_mean_io_u: 0.4589

Epoch 00063: LearningRateScheduler reducing learning rate to 0.0002694896612077282.
Epoch 63/300
105/105 [==] - 439s 4s/step - loss: 20519.9043 - mean_io_u: 0.4735 - val_loss: 31553.0039 - val_mean_io_u: 0.4271

Epoch 00064: LearningRateScheduler reducing learning rate to 0.00026853374410573476.
Epoch 64/300
105/105 [==] - 439s 4s/step - loss: 20935.2773 - mean_io_u: 0.4713 - val_loss: 28307.8750 - val_mean_io_u: 0.4411

Epoch 00065: LearningRateScheduler reducing learning rate to 0.00026756483392600966.
Epoch 65/300
105/105 [==] - 439s 4s/step - loss: 19320.7598 - mean_io_u: 0.4790 - val_loss: 32983.2422 - val_mean_io_u: 0.3830

Epoch 00066: LearningRateScheduler reducing learning rate to 0.0002665830369204728.
Epoch 66/300
105/105 [==] - 439s 4s/step - loss: 19557.0469 - mean_io_u: 0.4836 - val_loss: 28492.5684 - val_mean_io_u: 0.4533

Epoch 00067: LearningRateScheduler reducing learning rate to 0.00026558846075422956.
Epoch 67/300
105/105 [==] - 439s 4s/step - loss: 18712.0879 - mean_io_u: 0.4891 - val_loss: 28251.6973 - val_mean_io_u: 0.4499

Epoch 00068: LearningRateScheduler reducing learning rate to 0.0002645812144937646.
Epoch 68/300
105/105 [==] - 442s 4s/step - loss: 19172.1816 - mean_io_u: 0.4865 - val_loss: 33057.2656 - val_mean_io_u: 0.4645

Epoch 00069: LearningRateScheduler reducing learning rate to 0.00026356140859498087.
Epoch 69/300
105/105 [==] - 439s 4s/step - loss: 18947.2773 - mean_io_u: 0.4874 - val_loss: 27134.8066 - val_mean_io_u: 0.4332

Epoch 00070: LearningRateScheduler reducing learning rate to 0.0002625291548910874.
Epoch 70/300
105/105 [==] - 439s 4s/step - loss: 19917.4180 - mean_io_u: 0.4809 - val_loss: 30916.6211 - val_mean_io_u: 0.4319

Epoch 00071: LearningRateScheduler reducing learning rate to 0.00026148456658033525.
Epoch 71/300
105/105 [==] - 441s 4s/step - loss: 19324.2188 - mean_io_u: 0.4826 - val_loss: 23521.9668 - val_mean_io_u: 0.4877

Epoch 00072: LearningRateScheduler reducing learning rate to 0.00026042775821360413.
Epoch 72/300
105/105 [==] - 439s 4s/step - loss: 18859.3301 - mean_io_u: 0.4872 - val_loss: 28276.8438 - val_mean_io_u: 0.4266

Epoch 00073: LearningRateScheduler reducing learning rate to 0.00025935884568184064.
Epoch 73/300
105/105 [==] - 439s 4s/step - loss: 19002.8145 - mean_io_u: 0.4871 - val_loss: 32405.9004 - val_mean_io_u: 0.4064

Epoch 00074: LearningRateScheduler reducing learning rate to 0.0002582779462033494.
Epoch 74/300
105/105 [==] - 439s 4s/step - loss: 20063.4043 - mean_io_u: 0.4799 - val_loss: 61415.6641 - val_mean_io_u: 0.2587

Epoch 00075: LearningRateScheduler reducing learning rate to 0.0002571851783109388.
Epoch 75/300
105/105 [==] - 439s 4s/step - loss: 20974.9668 - mean_io_u: 0.4674 - val_loss: 37915.5391 - val_mean_io_u: 0.3546

Epoch 00076: LearningRateScheduler reducing learning rate to 0.00025608066183892275.
Epoch 76/300
105/105 [==] - 439s 4s/step - loss: 19704.2754 - mean_io_u: 0.4837 - val_loss: 28454.9336 - val_mean_io_u: 0.4140

Epoch 00077: LearningRateScheduler reducing learning rate to 0.00025496451790997917.
Epoch 77/300
105/105 [==] - 442s 4s/step - loss: 19353.9805 - mean_io_u: 0.4883 - val_loss: 29147.9941 - val_mean_io_u: 0.4403

Epoch 00078: LearningRateScheduler reducing learning rate to 0.00025383686892186747.
Epoch 78/300
105/105 [==] - 439s 4s/step - loss: 18555.5781 - mean_io_u: 0.4943 - val_loss: 33496.7188 - val_mean_io_u: 0.4315

Epoch 00079: LearningRateScheduler reducing learning rate to 0.00025269783853400685.
Epoch 79/300
105/105 [==] - 439s 4s/step - loss: 18904.0332 - mean_io_u: 0.4918 - val_loss: 34029.1016 - val_mean_io_u: 0.3891

Epoch 00080: LearningRateScheduler reducing learning rate to 0.00025154755165391495.
Epoch 80/300
105/105 [==] - 439s 4s/step - loss: 18675.8184 - mean_io_u: 0.4976 - val_loss: 31313.4941 - val_mean_io_u: 0.3922

Epoch 00081: LearningRateScheduler reducing learning rate to 0.00025038613442351075.
Epoch 81/300
105/105 [==] - 439s 4s/step - loss: 18792.6855 - mean_io_u: 0.4952 - val_loss: 30992.4316 - val_mean_io_u: 0.4525

Epoch 00082: LearningRateScheduler reducing learning rate to 0.0002492137142052816.
Epoch 82/300
105/105 [==] - 439s 4s/step - loss: 18501.2676 - mean_io_u: 0.5000 - val_loss: 25794.7461 - val_mean_io_u: 0.4364

Epoch 00083: LearningRateScheduler reducing learning rate to 0.00024803041956831634.
Epoch 83/300
105/105 [==] - 439s 4s/step - loss: 19045.1777 - mean_io_u: 0.4892 - val_loss: 32127.4961 - val_mean_io_u: 0.4550

Epoch 00084: LearningRateScheduler reducing learning rate to 0.0002468363802742064.
Epoch 84/300
105/105 [==] - 439s 4s/step - loss: 18327.3652 - mean_io_u: 0.5005 - val_loss: 27739.3125 - val_mean_io_u: 0.4778

Epoch 00085: LearningRateScheduler reducing learning rate to 0.000245631727262816.
Epoch 85/300
105/105 [==] - 439s 4s/step - loss: 17754.4766 - mean_io_u: 0.5002 - val_loss: 28532.9180 - val_mean_io_u: 0.4875

Epoch 00086: LearningRateScheduler reducing learning rate to 0.0002444165926379231.
Epoch 86/300
105/105 [==] - 439s 4s/step - loss: 17839.9180 - mean_io_u: 0.5031 - val_loss: 30797.6836 - val_mean_io_u: 0.4736

Epoch 00087: LearningRateScheduler reducing learning rate to 0.00024319110965273264.
Epoch 87/300
105/105 [==] - 439s 4s/step - loss: 18355.6367 - mean_io_u: 0.4975 - val_loss: 40958.4688 - val_mean_io_u: 0.4513

Epoch 00088: LearningRateScheduler reducing learning rate to 0.0002419554126952638.
Epoch 88/300
105/105 [==] - 439s 4s/step - loss: 18732.6699 - mean_io_u: 0.4837 - val_loss: 27917.7129 - val_mean_io_u: 0.4678

Epoch 00089: LearningRateScheduler reducing learning rate to 0.00024070963727361311.
Epoch 89/300
105/105 [==] - 439s 4s/step - loss: 16974.5566 - mean_io_u: 0.5161 - val_loss: 27659.2441 - val_mean_io_u: 0.4578

Epoch 00090: LearningRateScheduler reducing learning rate to 0.00023945392000109406.
Epoch 90/300
105/105 [==] - 439s 4s/step - loss: 17880.8027 - mean_io_u: 0.5086 - val_loss: 33514.3477 - val_mean_io_u: 0.4030

Epoch 00091: LearningRateScheduler reducing learning rate to 0.00023818839858125631.
Epoch 91/300
105/105 [==] - 439s 4s/step - loss: 17840.9590 - mean_io_u: 0.5088 - val_loss: 37918.2656 - val_mean_io_u: 0.3954

Epoch 00092: LearningRateScheduler reducing learning rate to 0.00023691321179278464.
Epoch 92/300
105/105 [==] - 439s 4s/step - loss: 18588.4316 - mean_io_u: 0.5025 - val_loss: 40244.8008 - val_mean_io_u: 0.4129

Epoch 00093: LearningRateScheduler reducing learning rate to 0.00023562849947428055.
Epoch 93/300
105/105 [==] - 439s 4s/step - loss: 19025.6387 - mean_io_u: 0.4988 - val_loss: 29079.4004 - val_mean_io_u: 0.4428

Epoch 00094: LearningRateScheduler reducing learning rate to 0.00023433440250892698.
Epoch 94/300
105/105 [==] - 441s 4s/step - loss: 17575.1172 - mean_io_u: 0.5111 - val_loss: 29741.3945 - val_mean_io_u: 0.4217

Epoch 00095: LearningRateScheduler reducing learning rate to 0.00023303106280903944.
Epoch 95/300
105/105 [==] - 439s 4s/step - loss: 17676.0742 - mean_io_u: 0.5118 - val_loss: 39939.6328 - val_mean_io_u: 0.4386

Epoch 00096: LearningRateScheduler reducing learning rate to 0.00023171862330050329.
Epoch 96/300
105/105 [==] - 441s 4s/step - loss: 17121.9590 - mean_io_u: 0.5116 - val_loss: 29737.2188 - val_mean_io_u: 0.5074

Epoch 00097: LearningRateScheduler reducing learning rate to 0.00023039722790710052.
Epoch 97/300
105/105 [==] - 439s 4s/step - loss: 16902.2910 - mean_io_u: 0.5219 - val_loss: 40803.8086 - val_mean_io_u: 0.4419

Epoch 00098: LearningRateScheduler reducing learning rate to 0.0002290670215347268.
Epoch 98/300
105/105 [==] - 439s 4s/step - loss: 18266.8066 - mean_io_u: 0.5088 - val_loss: 38198.0547 - val_mean_io_u: 0.4305

Epoch 00099: LearningRateScheduler reducing learning rate to 0.00022772815005550086.
Epoch 99/300
105/105 [==] - 442s 4s/step - loss: 17128.5957 - mean_io_u: 0.5155 - val_loss: 25639.7383 - val_mean_io_u: 0.4487

Epoch 00100: LearningRateScheduler reducing learning rate to 0.00022638076029176817.
Epoch 100/300
105/105 [==] - 439s 4s/step - loss: 18106.3809 - mean_io_u: 0.5102 - val_loss: 32126.6250 - val_mean_io_u: 0.5040

Epoch 00101: LearningRateScheduler reducing learning rate to 0.00022502499999999998.
Epoch 101/300
105/105 [==] - 439s 4s/step - loss: 16706.1816 - mean_io_u: 0.5222 - val_loss: 26992.8965 - val_mean_io_u: 0.4040

Epoch 00102: LearningRateScheduler reducing learning rate to 0.00022366101785459034.
Epoch 102/300
105/105 [==] - 439s 4s/step - loss: 16880.5645 - mean_io_u: 0.5211 - val_loss: 33327.0391 - val_mean_io_u: 0.4451

Epoch 00103: LearningRateScheduler reducing learning rate to 0.00022228896343155218.
Epoch 103/300
105/105 [==] - 439s 4s/step - loss: 17056.1016 - mean_io_u: 0.5218 - val_loss: 26874.1836 - val_mean_io_u: 0.4479

Epoch 00104: LearningRateScheduler reducing learning rate to 0.00022090898719211464.
Epoch 104/300
105/105 [==] - 439s 4s/step - loss: 17030.3848 - mean_io_u: 0.5206 - val_loss: 33408.6328 - val_mean_io_u: 0.4553

Epoch 00105: LearningRateScheduler reducing learning rate to 0.00021952124046622327.
Epoch 105/300
105/105 [==] - 439s 4s/step - loss: 17607.9121 - mean_io_u: 0.5163 - val_loss: 31780.9844 - val_mean_io_u: 0.4763

Epoch 00106: LearningRateScheduler reducing learning rate to 0.000218125875435945.
Epoch 106/300
105/105 [==] - 439s 4s/step - loss: 18413.9883 - mean_io_u: 0.5059 - val_loss: 29617.6406 - val_mean_io_u: 0.4220

Epoch 00107: LearningRateScheduler reducing learning rate to 0.00021672304511877985.
Epoch 107/300
105/105 [==] - 439s 4s/step - loss: 17377.0508 - mean_io_u: 0.5217 - val_loss: 32673.9883 - val_mean_io_u: 0.4035

Epoch 00108: LearningRateScheduler reducing learning rate to 0.0002153129033508805.
Epoch 108/300
105/105 [==] - 439s 4s/step - loss: 16938.3281 - mean_io_u: 0.5204 - val_loss: 30095.5469 - val_mean_io_u: 0.4556

Epoch 00109: LearningRateScheduler reducing learning rate to 0.00021389560477018263.
Epoch 109/300
105/105 [==] - 439s 4s/step - loss: 16809.3828 - mean_io_u: 0.5247 - val_loss: 42255.5508 - val_mean_io_u: 0.3945

Epoch 00110: LearningRateScheduler reducing learning rate to 0.00021247130479944715.
Epoch 110/300
105/105 [==] - 439s 4s/step - loss: 16895.0078 - mean_io_u: 0.5218 - val_loss: 34139.1914 - val_mean_io_u: 0.4332

Epoch 00111: LearningRateScheduler reducing learning rate to 0.00021104015962921622.
Epoch 111/300
105/105 [==] - 439s 4s/step - loss: 17304.8828 - mean_io_u: 0.5222 - val_loss: 30055.7598 - val_mean_io_u: 0.4552

Epoch 00112: LearningRateScheduler reducing learning rate to 0.0002096023262006853.
Epoch 112/300
105/105 [==] - 439s 4s/step - loss: 17113.0742 - mean_io_u: 0.5180 - val_loss: 50182.1328 - val_mean_io_u: 0.3320

Epoch 00113: LearningRateScheduler reducing learning rate to 0.00020815796218849283.
Epoch 113/300
105/105 [==] - 439s 4s/step - loss: 17079.6914 - mean_io_u: 0.5191 - val_loss: 28678.8125 - val_mean_io_u: 0.5060

Epoch 00114: LearningRateScheduler reducing learning rate to 0.00020670722598342914.
Epoch 114/300
105/105 [==] - 439s 4s/step - loss: 16533.6133 - mean_io_u: 0.5296 - val_loss: 36502.8125 - val_mean_io_u: 0.4180

Epoch 00115: LearningRateScheduler reducing learning rate to 0.00020525027667506746.
Epoch 115/300
105/105 [==] - 439s 4s/step - loss: 16671.6680 - mean_io_u: 0.5281 - val_loss: 31694.0723 - val_mean_io_u: 0.4586

Epoch 00116: LearningRateScheduler reducing learning rate to 0.00020378727403431773.
Epoch 116/300
105/105 [==] - 439s 4s/step - loss: 16059.1523 - mean_io_u: 0.5346 - val_loss: 36608.8867 - val_mean_io_u: 0.4365

Epoch 00117: LearningRateScheduler reducing learning rate to 0.0002023183784959062.
Epoch 117/300
105/105 [==] - 439s 4s/step - loss: 16285.6836 - mean_io_u: 0.5314 - val_loss: 34845.7578 - val_mean_io_u: 0.4614

Epoch 00118: LearningRateScheduler reducing learning rate to 0.00020084375114078147.
Epoch 118/300
105/105 [==] - 439s 4s/step - loss: 17148.4414 - mean_io_u: 0.5179 - val_loss: 31063.0938 - val_mean_io_u: 0.4196

Epoch 00119: LearningRateScheduler reducing learning rate to 0.00019936355367845054.
Epoch 119/300
105/105 [==] - 439s 4s/step - loss: 18019.8027 - mean_io_u: 0.5157 - val_loss: 30122.4219 - val_mean_io_u: 0.4677

Epoch 00120: LearningRateScheduler reducing learning rate to 0.0001978779484292456.
Epoch 120/300
105/105 [==] - 439s 4s/step - loss: 17251.8750 - mean_io_u: 0.5203 - val_loss: 32635.9121 - val_mean_io_u: 0.4971

Epoch 00121: LearningRateScheduler reducing learning rate to 0.00019638709830652336.
Epoch 121/300
105/105 [==] - 439s 4s/step - loss: 16250.2646 - mean_io_u: 0.5269 - val_loss: 36267.1016 - val_mean_io_u: 0.4771

Epoch 00122: LearningRateScheduler reducing learning rate to 0.00019489116679880018.
Epoch 122/300
105/105 [==] - 442s 4s/step - loss: 16181.2207 - mean_io_u: 0.5300 - val_loss: 22203.4297 - val_mean_io_u: 0.5335

Epoch 00123: LearningRateScheduler reducing learning rate to 0.0001933903179518235.
Epoch 123/300
105/105 [==] - 442s 4s/step - loss: 16958.2773 - mean_io_u: 0.5224 - val_loss: 31023.5469 - val_mean_io_u: 0.4794

Epoch 00124: LearningRateScheduler reducing learning rate to 0.00019188471635058244.
Epoch 124/300
105/105 [==] - 439s 4s/step - loss: 15756.4961 - mean_io_u: 0.5373 - val_loss: 35082.0195 - val_mean_io_u: 0.4197

Epoch 00125: LearningRateScheduler reducing learning rate to 0.00019037452710125906.
Epoch 125/300
105/105 [==] - 439s 4s/step - loss: 16814.4590 - mean_io_u: 0.5277 - val_loss: 34495.8633 - val_mean_io_u: 0.4346

Epoch 00126: LearningRateScheduler reducing learning rate to 0.000188859915813123.
Epoch 126/300
105/105 [==] - 439s 4s/step - loss: 17091.5977 - mean_io_u: 0.5220 - val_loss: 33207.6406 - val_mean_io_u: 0.4411

Epoch 00127: LearningRateScheduler reducing learning rate to 0.00018734104858036995.
Epoch 127/300
105/105 [==] - 439s 4s/step - loss: 16243.8096 - mean_io_u: 0.5276 - val_loss: 38811.0781 - val_mean_io_u: 0.4554

Epoch 00128: LearningRateScheduler reducing learning rate to 0.0001858180919639082.
Epoch 128/300
105/105 [==] - 439s 4s/step - loss: 17294.2188 - mean_io_u: 0.5242 - val_loss: 25867.3496 - val_mean_io_u: 0.5094

Epoch 00129: LearningRateScheduler reducing learning rate to 0.00018429121297309283.
Epoch 129/300
105/105 [==] - 439s 4s/step - loss: 15691.8193 - mean_io_u: 0.5382 - val_loss: 33632.3008 - val_mean_io_u: 0.3933

Epoch 00130: LearningRateScheduler reducing learning rate to 0.00018276057904741157.
Epoch 130/300
105/105 [==] - 439s 4s/step - loss: 16711.9922 - mean_io_u: 0.5244 - val_loss: 26973.7695 - val_mean_io_u: 0.5059

Epoch 00131: LearningRateScheduler reducing learning rate to 0.000181226358038123.
Epoch 131/300
105/105 [==] - 439s 4s/step - loss: 15986.9053 - mean_io_u: 0.5332 - val_loss: 25013.0625 - val_mean_io_u: 0.4985

Epoch 00132: LearningRateScheduler reducing learning rate to 0.00017968871818984994.
Epoch 132/300
105/105 [==] - 439s 4s/step - loss: 14841.0586 - mean_io_u: 0.5500 - val_loss: 34063.1523 - val_mean_io_u: 0.3936

Epoch 00133: LearningRateScheduler reducing learning rate to 0.0001781478281221294.
Epoch 133/300
105/105 [==] - 439s 4s/step - loss: 15922.7646 - mean_io_u: 0.5356 - val_loss: 39326.2734 - val_mean_io_u: 0.4607

Epoch 00134: LearningRateScheduler reducing learning rate to 0.00017660385681092153.
Epoch 134/300
105/105 [==] - 439s 4s/step - loss: 15496.8467 - mean_io_u: 0.5408 - val_loss: 30490.5430 - val_mean_io_u: 0.4853

Epoch 00135: LearningRateScheduler reducing learning rate to 0.00017505697357007952.
Epoch 135/300
105/105 [==] - 440s 4s/step - loss: 15590.5410 - mean_io_u: 0.5368 - val_loss: 41027.1172 - val_mean_io_u: 0.3790

Epoch 00136: LearningRateScheduler reducing learning rate to 0.00017350734803278262.
Epoch 136/300
105/105 [==] - 442s 4s/step - loss: 16203.2998 - mean_io_u: 0.5323 - val_loss: 29206.0820 - val_mean_io_u: 0.4849

Epoch 00137: LearningRateScheduler reducing learning rate to 0.00017195515013293362.
Epoch 137/300
105/105 [==] - 439s 4s/step - loss: 15366.5479 - mean_io_u: 0.5419 - val_loss: 43269.9414 - val_mean_io_u: 0.4547

Epoch 00138: LearningRateScheduler reducing learning rate to 0.00017040055008652393.
Epoch 138/300
105/105 [==] - 439s 4s/step - loss: 15838.4717 - mean_io_u: 0.5369 - val_loss: 38508.1680 - val_mean_io_u: 0.4348

Epoch 00139: LearningRateScheduler reducing learning rate to 0.0001688437183729674.
Epoch 139/300
105/105 [==] - 439s 4s/step - loss: 16346.1699 - mean_io_u: 0.5324 - val_loss: 38645.0078 - val_mean_io_u: 0.4444

Epoch 00140: LearningRateScheduler reducing learning rate to 0.00016728482571640535.
Epoch 140/300
105/105 [==] - 439s 4s/step - loss: 16047.3330 - mean_io_u: 0.5365 - val_loss: 42424.5977 - val_mean_io_u: 0.4213

Epoch 00141: LearningRateScheduler reducing learning rate to 0.00016572404306698467.
Epoch 141/300
105/105 [==] - 439s 4s/step - loss: 15129.7031 - mean_io_u: 0.5431 - val_loss: 32691.6191 - val_mean_io_u: 0.4485

Epoch 00142: LearningRateScheduler reducing learning rate to 0.00016416154158211123.
Epoch 142/300
105/105 [==] - 439s 4s/step - loss: 15502.5469 - mean_io_u: 0.5399 - val_loss: 29551.8164 - val_mean_io_u: 0.4484

Epoch 00143: LearningRateScheduler reducing learning rate to 0.0001625974926076807.
Epoch 143/300
105/105 [==] - 439s 4s/step - loss: 15238.3672 - mean_io_u: 0.5422 - val_loss: 32409.0820 - val_mean_io_u: 0.5215

Epoch 00144: LearningRateScheduler reducing learning rate to 0.00016103206765928835.
Epoch 144/300
105/105 [==] - 439s 4s/step - loss: 14863.4492 - mean_io_u: 0.5477 - val_loss: 35020.7578 - val_mean_io_u: 0.4357

Epoch 00145: LearningRateScheduler reducing learning rate to 0.00015946543840342055.
Epoch 145/300
105/105 [==] - 439s 4s/step - loss: 15653.0801 - mean_io_u: 0.5367 - val_loss: 28742.7930 - val_mean_io_u: 0.4910

Epoch 00146: LearningRateScheduler reducing learning rate to 0.00015789777663862943.
Epoch 146/300
105/105 [==] - 439s 4s/step - loss: 15151.9258 - mean_io_u: 0.5452 - val_loss: 30015.6309 - val_mean_io_u: 0.5141

Epoch 00147: LearningRateScheduler reducing learning rate to 0.0001563292542766935.
Epoch 147/300
105/105 [==] - 439s 4s/step - loss: 15686.4990 - mean_io_u: 0.5366 - val_loss: 26746.8066 - val_mean_io_u: 0.5204

Epoch 00148: LearningRateScheduler reducing learning rate to 0.0001547600433237653.
Epoch 148/300
105/105 [==] - 439s 4s/step - loss: 14849.4512 - mean_io_u: 0.5457 - val_loss: 26787.9004 - val_mean_io_u: 0.5005

Epoch 00149: LearningRateScheduler reducing learning rate to 0.00015319031586150938.
Epoch 149/300
105/105 [==] - 439s 4s/step - loss: 15384.6621 - mean_io_u: 0.5408 - val_loss: 42342.4141 - val_mean_io_u: 0.3995

Epoch 00150: LearningRateScheduler reducing learning rate to 0.00015162024402823104.
Epoch 150/300
105/105 [==] - 440s 4s/step - loss: 15260.9531 - mean_io_u: 0.5421 - val_loss: 44724.9922 - val_mean_io_u: 0.4080

Epoch 00151: LearningRateScheduler reducing learning rate to 0.00015005.
Epoch 151/300
105/105 [==] - 440s 4s/step - loss: 15703.0400 - mean_io_u: 0.5371 - val_loss: 42014.1406 - val_mean_io_u: 0.3670

Epoch 00152: LearningRateScheduler reducing learning rate to 0.00014847975597176893.
Epoch 152/300
105/105 [==] - 439s 4s/step - loss: 14999.8779 - mean_io_u: 0.5477 - val_loss: 41360.4258 - val_mean_io_u: 0.4130

Epoch 00153: LearningRateScheduler reducing learning rate to 0.00014690968413849065.
Epoch 153/300
105/105 [==] - 439s 4s/step - loss: 15147.6094 - mean_io_u: 0.5442 - val_loss: 44195.5430 - val_mean_io_u: 0.4053

Epoch 00154: LearningRateScheduler reducing learning rate to 0.00014533995667623465.
Epoch 154/300
105/105 [==] - 439s 4s/step - loss: 15379.9580 - mean_io_u: 0.5374 - val_loss: 38679.0078 - val_mean_io_u: 0.4564

Epoch 00155: LearningRateScheduler reducing learning rate to 0.0001437707457233065.
Epoch 155/300
105/105 [==] - 439s 4s/step - loss: 15133.6992 - mean_io_u: 0.5424 - val_loss: 33237.2891 - val_mean_io_u: 0.4706

Epoch 00156: LearningRateScheduler reducing learning rate to 0.00014220222336137057.
Epoch 156/300
105/105 [==] - 439s 4s/step - loss: 15792.9209 - mean_io_u: 0.5378 - val_loss: 42421.9766 - val_mean_io_u: 0.4122

Epoch 00157: LearningRateScheduler reducing learning rate to 0.00014063456159657948.
Epoch 157/300
105/105 [==] - 439s 4s/step - loss: 14866.0439 - mean_io_u: 0.5469 - val_loss: 35057.2266 - val_mean_io_u: 0.4576

Epoch 00158: LearningRateScheduler reducing learning rate to 0.00013906793234071165.
Epoch 158/300
105/105 [==] - 439s 4s/step - loss: 14899.1494 - mean_io_u: 0.5486 - val_loss: 34308.3750 - val_mean_io_u: 0.4540

Epoch 00159: LearningRateScheduler reducing learning rate to 0.00013750250739231928.
Epoch 159/300
105/105 [==] - 439s 4s/step - loss: 14986.5469 - mean_io_u: 0.5424 - val_loss: 32706.8066 - val_mean_io_u: 0.5045

Epoch 00160: LearningRateScheduler reducing learning rate to 0.00013593845841788877.
Epoch 160/300
105/105 [==] - 441s 4s/step - loss: 15825.5137 - mean_io_u: 0.5364 - val_loss: 28209.9414 - val_mean_io_u: 0.5429

Epoch 00161: LearningRateScheduler reducing learning rate to 0.00013437595693301539.
Epoch 161/300
105/105 [==] - 439s 4s/step - loss: 15648.6895 - mean_io_u: 0.5423 - val_loss: 37491.4844 - val_mean_io_u: 0.4430

Epoch 00162: LearningRateScheduler reducing learning rate to 0.00013281517428359466.
Epoch 162/300
105/105 [==] - 439s 4s/step - loss: 15473.0303 - mean_io_u: 0.5438 - val_loss: 40086.1602 - val_mean_io_u: 0.4003

Epoch 00163: LearningRateScheduler reducing learning rate to 0.00013125628162703257.
Epoch 163/300
105/105 [==] - 439s 4s/step - loss: 14534.6484 - mean_io_u: 0.5505 - val_loss: 37669.7188 - val_mean_io_u: 0.5280

Epoch 00164: LearningRateScheduler reducing learning rate to 0.00012969944991347607.
Epoch 164/300
105/105 [==] - 439s 4s/step - loss: 14084.4209 - mean_io_u: 0.5533 - val_loss: 42714.6250 - val_mean_io_u: 0.4664

Epoch 00165: LearningRateScheduler reducing learning rate to 0.0001281448498670664.
Epoch 165/300
105/105 [==] - 439s 4s/step - loss: 14980.8652 - mean_io_u: 0.5475 - val_loss: 35214.4766 - val_mean_io_u: 0.4721

Epoch 00166: LearningRateScheduler reducing learning rate to 0.0001265926519672174.
Epoch 166/300
105/105 [==] - 439s 4s/step - loss: 15107.9053 - mean_io_u: 0.5475 - val_loss: 40163.2617 - val_mean_io_u: 0.4441

Epoch 00167: LearningRateScheduler reducing learning rate to 0.00012504302642992043.
Epoch 167/300
105/105 [==] - 439s 4s/step - loss: 15310.4600 - mean_io_u: 0.5428 - val_loss: 43188.4688 - val_mean_io_u: 0.4248

Epoch 00168: LearningRateScheduler reducing learning rate to 0.0001234961431890785.
Epoch 168/300
105/105 [==] - 439s 4s/step - loss: 14590.9004 - mean_io_u: 0.5512 - val_loss: 34130.9609 - val_mean_io_u: 0.4800

Epoch 00169: LearningRateScheduler reducing learning rate to 0.00012195217187787059.
Epoch 169/300
105/105 [==] - 439s 4s/step - loss: 15064.8652 - mean_io_u: 0.5298 - val_loss: 48833.5391 - val_mean_io_u: 0.3706

Epoch 00170: LearningRateScheduler reducing learning rate to 0.00012041128181015001.
Epoch 170/300
105/105 [==] - 439s 4s/step - loss: 14781.8125 - mean_io_u: 0.5481 - val_loss: 29504.9023 - val_mean_io_u: 0.4256

Epoch 00171: LearningRateScheduler reducing learning rate to 0.00011887364196187697.
Epoch 171/300
105/105 [==] - 439s 4s/step - loss: 14996.9404 - mean_io_u: 0.5446 - val_loss: 29586.9902 - val_mean_io_u: 0.4676

Epoch 00172: LearningRateScheduler reducing learning rate to 0.00011733942095258844.
Epoch 172/300
105/105 [==] - 439s 4s/step - loss: 15171.8896 - mean_io_u: 0.5439 - val_loss: 35399.5000 - val_mean_io_u: 0.4251

Epoch 00173: LearningRateScheduler reducing learning rate to 0.00011580878702690719.
Epoch 173/300
105/105 [==] - 439s 4s/step - loss: 15654.2832 - mean_io_u: 0.5389 - val_loss: 36140.3047 - val_mean_io_u: 0.4393

Epoch 00174: LearningRateScheduler reducing learning rate to 0.00011428190803609182.
Epoch 174/300
105/105 [==] - 439s 4s/step - loss: 14080.2959 - mean_io_u: 0.5555 - val_loss: 36942.0859 - val_mean_io_u: 0.4264

Epoch 00175: LearningRateScheduler reducing learning rate to 0.00011275895141963004.
Epoch 175/300
105/105 [==] - 439s 4s/step - loss: 13979.7373 - mean_io_u: 0.5575 - val_loss: 25810.1699 - val_mean_io_u: 0.5380

Epoch 00176: LearningRateScheduler reducing learning rate to 0.00011124008418687702.
Epoch 176/300
105/105 [==] - 439s 4s/step - loss: 13951.3096 - mean_io_u: 0.5577 - val_loss: 32986.9883 - val_mean_io_u: 0.4925

Epoch 00177: LearningRateScheduler reducing learning rate to 0.00010972547289874094.
Epoch 177/300
105/105 [==] - 439s 4s/step - loss: 14600.2334 - mean_io_u: 0.5481 - val_loss: 36212.9297 - val_mean_io_u: 0.4245

Epoch 00178: LearningRateScheduler reducing learning rate to 0.00010821528364941755.
Epoch 178/300
105/105 [==] - 439s 4s/step - loss: 15454.8037 - mean_io_u: 0.5417 - val_loss: 27372.0566 - val_mean_io_u: 0.5221

Epoch 00179: LearningRateScheduler reducing learning rate to 0.00010670968204817645.
Epoch 179/300
105/105 [==] - 439s 4s/step - loss: 14609.6143 - mean_io_u: 0.5489 - val_loss: 45445.5781 - val_mean_io_u: 0.4194

Epoch 00180: LearningRateScheduler reducing learning rate to 0.00010520883320119981.
Epoch 180/300
105/105 [==] - 439s 4s/step - loss: 14325.7695 - mean_io_u: 0.5538 - val_loss: 38379.3516 - val_mean_io_u: 0.4185

Epoch 00181: LearningRateScheduler reducing learning rate to 0.00010371290169347664.
Epoch 181/300
105/105 [==] - 439s 4s/step - loss: 15173.7168 - mean_io_u: 0.5472 - val_loss: 38813.4648 - val_mean_io_u: 0.4471

Epoch 00182: LearningRateScheduler reducing learning rate to 0.0001022220515707544.
Epoch 182/300
105/105 [==] - 439s 4s/step - loss: 15060.0059 - mean_io_u: 0.5458 - val_loss: 32711.5840 - val_mean_io_u: 0.4740

Epoch 00183: LearningRateScheduler reducing learning rate to 0.00010073644632154944.
Epoch 183/300
105/105 [==] - 439s 4s/step - loss: 14538.9590 - mean_io_u: 0.5515 - val_loss: 27709.0254 - val_mean_io_u: 0.4558

Epoch 00184: LearningRateScheduler reducing learning rate to 9.925624885921855e-05.
Epoch 184/300
105/105 [==] - 439s 4s/step - loss: 14644.1387 - mean_io_u: 0.5504 - val_loss: 28857.8711 - val_mean_io_u: 0.5074

Epoch 00185: LearningRateScheduler reducing learning rate to 9.778162150409383e-05.
Epoch 185/300
105/105 [==] - 439s 4s/step - loss: 14537.2686 - mean_io_u: 0.5535 - val_loss: 30100.6758 - val_mean_io_u: 0.4568

Epoch 00186: LearningRateScheduler reducing learning rate to 9.631272596568224e-05.
Epoch 186/300
105/105 [==] - 439s 4s/step - loss: 14919.8242 - mean_io_u: 0.5493 - val_loss: 32299.7461 - val_mean_io_u: 0.4748

Epoch 00187: LearningRateScheduler reducing learning rate to 9.484972332493257e-05.
Epoch 187/300
105/105 [==] - 468s 4s/step - loss: 14790.5566 - mean_io_u: 0.5492 - val_loss: 34132.8086 - val_mean_io_u: 0.4599

Epoch 00188: LearningRateScheduler reducing learning rate to 9.339277401657084e-05.
Epoch 188/300
105/105 [==] - 439s 4s/step - loss: 15337.8887 - mean_io_u: 0.5428 - val_loss: 34241.4297 - val_mean_io_u: 0.4788

Epoch 00189: LearningRateScheduler reducing learning rate to 9.194203781150716e-05.
Epoch 189/300
105/105 [==] - 439s 4s/step - loss: 14176.1562 - mean_io_u: 0.5522 - val_loss: 26045.0352 - val_mean_io_u: 0.5178

Epoch 00190: LearningRateScheduler reducing learning rate to 9.049767379931463e-05.
Epoch 190/300
105/105 [==] - 439s 4s/step - loss: 14273.8350 - mean_io_u: 0.5571 - val_loss: 32763.4414 - val_mean_io_u: 0.5322

Epoch 00191: LearningRateScheduler reducing learning rate to 8.905984037078375e-05.
Epoch 191/300
105/105 [==] - 439s 4s/step - loss: 14640.9912 - mean_io_u: 0.5338 - val_loss: 40018.9766 - val_mean_io_u: 0.4519

Epoch 00192: LearningRateScheduler reducing learning rate to 8.762869520055285e-05.
Epoch 192/300
105/105 [==] - 439s 4s/step - loss: 14759.4453 - mean_io_u: 0.5493 - val_loss: 33715.3281 - val_mean_io_u: 0.4344

Epoch 00193: LearningRateScheduler reducing learning rate to 8.620439522981734e-05.
Epoch 193/300
105/105 [==] - 439s 4s/step - loss: 13657.6465 - mean_io_u: 0.5630 - val_loss: 31174.0781 - val_mean_io_u: 0.4388

Epoch 00194: LearningRateScheduler reducing learning rate to 8.478709664911951e-05.
Epoch 194/300
105/105 [==] - 439s 4s/step - loss: 14163.4580 - mean_io_u: 0.5557 - val_loss: 30409.0059 - val_mean_io_u: 0.4512

Epoch 00195: LearningRateScheduler reducing learning rate to 8.337695488122015e-05.
Epoch 195/300
105/105 [==] - 439s 4s/step - loss: 13633.5303 - mean_io_u: 0.5604 - val_loss: 30746.8555 - val_mean_io_u: 0.4689

Epoch 00196: LearningRateScheduler reducing learning rate to 8.197412456405496e-05.
Epoch 196/300
105/105 [==] - 439s 4s/step - loss: 14424.0098 - mean_io_u: 0.5509 - val_loss: 37804.8555 - val_mean_io_u: 0.4550

Epoch 00197: LearningRateScheduler reducing learning rate to 8.057875953377676e-05.
Epoch 197/300
105/105 [==] - 439s 4s/step - loss: 13904.6211 - mean_io_u: 0.5537 - val_loss: 26515.2500 - val_mean_io_u: 0.5394

Epoch 00198: LearningRateScheduler reducing learning rate to 7.919101280788536e-05.
Epoch 198/300
105/105 [==] - 439s 4s/step - loss: 14204.1436 - mean_io_u: 0.5584 - val_loss: 30020.1211 - val_mean_io_u: 0.5011

Epoch 00199: LearningRateScheduler reducing learning rate to 7.781103656844776e-05.
Epoch 199/300
105/105 [==] - 439s 4s/step - loss: 14643.4521 - mean_io_u: 0.5498 - val_loss: 31174.7871 - val_mean_io_u: 0.4424

Epoch 00200: LearningRateScheduler reducing learning rate to 7.643898214540963e-05.
Epoch 200/300
105/105 [==] - 439s 4s/step - loss: 14203.6465 - mean_io_u: 0.5557 - val_loss: 30767.7656 - val_mean_io_u: 0.5028

Epoch 00201: LearningRateScheduler reducing learning rate to 7.507499999999997e-05.
Epoch 201/300
105/105 [==] - 439s 4s/step - loss: 13700.2197 - mean_io_u: 0.5625 - val_loss: 41935.9883 - val_mean_io_u: 0.4834

Epoch 00202: LearningRateScheduler reducing learning rate to 7.371923970823181e-05.
Epoch 202/300
105/105 [==] - 439s 4s/step - loss: 14243.2266 - mean_io_u: 0.5527 - val_loss: 45971.5117 - val_mean_io_u: 0.3965

Epoch 00203: LearningRateScheduler reducing learning rate to 7.237184994449914e-05.
Epoch 203/300
105/105 [==] - 439s 4s/step - loss: 14570.1348 - mean_io_u: 0.5486 - val_loss: 36789.5000 - val_mean_io_u: 0.4306

Epoch 00204: LearningRateScheduler reducing learning rate to 7.10329784652732e-05.
Epoch 204/300
105/105 [==] - 439s 4s/step - loss: 13828.4834 - mean_io_u: 0.5593 - val_loss: 42057.9688 - val_mean_io_u: 0.4318

Epoch 00205: LearningRateScheduler reducing learning rate to 6.970277209289948e-05.
Epoch 205/300
105/105 [==] - 439s 4s/step - loss: 14754.8467 - mean_io_u: 0.5485 - val_loss: 39776.7734 - val_mean_io_u: 0.3951

Epoch 00206: LearningRateScheduler reducing learning rate to 6.838137669949673e-05.
Epoch 206/300
105/105 [==] - 439s 4s/step - loss: 13831.8857 - mean_io_u: 0.5552 - val_loss: 41408.0000 - val_mean_io_u: 0.4243

Epoch 00207: LearningRateScheduler reducing learning rate to 6.706893719096057e-05.
Epoch 207/300
105/105 [==] - 439s 4s/step - loss: 14524.7061 - mean_io_u: 0.5532 - val_loss: 48952.7969 - val_mean_io_u: 0.3976

Epoch 00208: LearningRateScheduler reducing learning rate to 6.576559749107305e-05.
Epoch 208/300
105/105 [==] - 439s 4s/step - loss: 14147.7705 - mean_io_u: 0.5547 - val_loss: 33560.4258 - val_mean_io_u: 0.4895

Epoch 00209: LearningRateScheduler reducing learning rate to 6.447150052571948e-05.
Epoch 209/300
105/105 [==] - 439s 4s/step - loss: 14234.3896 - mean_io_u: 0.5516 - val_loss: 29967.4727 - val_mean_io_u: 0.4647

Epoch 00210: LearningRateScheduler reducing learning rate to 6.318678820721529e-05.
Epoch 210/300
105/105 [==] - 439s 4s/step - loss: 13664.8105 - mean_io_u: 0.5649 - val_loss: 38095.2734 - val_mean_io_u: 0.4894

Epoch 00211: LearningRateScheduler reducing learning rate to 6.191160141874366e-05.
Epoch 211/300
105/105 [==] - 439s 4s/step - loss: 14412.4248 - mean_io_u: 0.5549 - val_loss: 33475.1875 - val_mean_io_u: 0.4442

Epoch 00212: LearningRateScheduler reducing learning rate to 6.0646079998905885e-05.
Epoch 212/300
105/105 [==] - 439s 4s/step - loss: 14136.7988 - mean_io_u: 0.5590 - val_loss: 32640.9531 - val_mean_io_u: 0.4696

Epoch 00213: LearningRateScheduler reducing learning rate to 5.939036272638689e-05.
Epoch 213/300
105/105 [==] - 440s 4s/step - loss: 14590.7520 - mean_io_u: 0.5527 - val_loss: 34158.7539 - val_mean_io_u: 0.5053

Epoch 00214: LearningRateScheduler reducing learning rate to 5.8144587304736207e-05.
Epoch 214/300
105/105 [==] - 439s 4s/step - loss: 14252.8418 - mean_io_u: 0.5518 - val_loss: 42583.5117 - val_mean_io_u: 0.4413

Epoch 00215: LearningRateScheduler reducing learning rate to 5.6908890347267367e-05.
Epoch 215/300
105/105 [==] - 439s 4s/step - loss: 14509.8428 - mean_io_u: 0.5488 - val_loss: 39850.9844 - val_mean_io_u: 0.4784

Epoch 00216: LearningRateScheduler reducing learning rate to 5.56834073620769e-05.
Epoch 216/300
105/105 [==] - 439s 4s/step - loss: 14100.8154 - mean_io_u: 0.5572 - val_loss: 34432.9297 - val_mean_io_u: 0.4208

Epoch 00217: LearningRateScheduler reducing learning rate to 5.446827273718397e-05.
Epoch 217/300
105/105 [==] - 439s 4s/step - loss: 13779.1221 - mean_io_u: 0.5590 - val_loss: 33797.9766 - val_mean_io_u: 0.4345

Epoch 00218: LearningRateScheduler reducing learning rate to 5.326361972579363e-05.
Epoch 218/300
105/105 [==] - 439s 4s/step - loss: 14427.8564 - mean_io_u: 0.5499 - val_loss: 38356.6836 - val_mean_io_u: 0.3999

Epoch 00219: LearningRateScheduler reducing learning rate to 5.206958043168373e-05.
Epoch 219/300
105/105 [==] - 439s 4s/step - loss: 14423.2373 - mean_io_u: 0.5496 - val_loss: 34448.3672 - val_mean_io_u: 0.4738

Epoch 00220: LearningRateScheduler reducing learning rate to 5.088628579471837e-05.
Epoch 220/300
105/105 [==] - 439s 4s/step - loss: 13499.8672 - mean_io_u: 0.5649 - val_loss: 40659.5977 - val_mean_io_u: 0.4423

Epoch 00221: LearningRateScheduler reducing learning rate to 4.9713865576489204e-05.
Epoch 221/300
105/105 [==] - 439s 4s/step - loss: 13214.4062 - mean_io_u: 0.5645 - val_loss: 37023.3906 - val_mean_io_u: 0.4415

Epoch 00222: LearningRateScheduler reducing learning rate to 4.8552448346085036e-05.
Epoch 222/300
105/105 [==] - 439s 4s/step - loss: 13621.0322 - mean_io_u: 0.5641 - val_loss: 39336.8359 - val_mean_io_u: 0.5182

Epoch 00223: LearningRateScheduler reducing learning rate to 4.740216146599312e-05.
Epoch 223/300
105/105 [==] - 439s 4s/step - loss: 14423.4717 - mean_io_u: 0.5503 - val_loss: 36699.8633 - val_mean_io_u: 0.4428

Epoch 00224: LearningRateScheduler reducing learning rate to 4.6263131078132514e-05.
Epoch 224/300
105/105 [==] - 439s 4s/step - loss: 14319.3672 - mean_io_u: 0.5513 - val_loss: 31253.1133 - val_mean_io_u: 0.4316

Epoch 00225: LearningRateScheduler reducing learning rate to 4.513548209002085e-05.
Epoch 225/300
105/105 [==] - 439s 4s/step - loss: 13973.7246 - mean_io_u: 0.5550 - val_loss: 40381.6328 - val_mean_io_u: 0.3943

Epoch 00226: LearningRateScheduler reducing learning rate to 4.4019338161077204e-05.
Epoch 226/300
105/105 [==] - 439s 4s/step - loss: 13087.0059 - mean_io_u: 0.5683 - val_loss: 38013.1484 - val_mean_io_u: 0.4428

Epoch 00227: LearningRateScheduler reducing learning rate to 4.291482168906117e-05.
Epoch 227/300
105/105 [==] - 439s 4s/step - loss: 14050.5684 - mean_io_u: 0.5552 - val_loss: 41338.3867 - val_mean_io_u: 0.4950

Epoch 00228: LearningRateScheduler reducing learning rate to 4.182205379665059e-05.
Epoch 228/300
105/105 [==] - 439s 4s/step - loss: 13758.3418 - mean_io_u: 0.5599 - val_loss: 38963.6328 - val_mean_io_u: 0.4688

Epoch 00229: LearningRateScheduler reducing learning rate to 4.074115431815937e-05.
Epoch 229/300
105/105 [==] - 439s 4s/step - loss: 13745.5801 - mean_io_u: 0.5608 - val_loss: 53638.7617 - val_mean_io_u: 0.3923

Epoch 00230: LearningRateScheduler reducing learning rate to 3.9672241786395886e-05.
Epoch 230/300
105/105 [==] - 439s 4s/step - loss: 13849.7422 - mean_io_u: 0.5598 - val_loss: 36536.1836 - val_mean_io_u: 0.4519

Epoch 00231: LearningRateScheduler reducing learning rate to 3.8615433419664717e-05.
Epoch 231/300
105/105 [==] - 439s 4s/step - loss: 13429.1416 - mean_io_u: 0.5643 - val_loss: 39454.6641 - val_mean_io_u: 0.4574

Epoch 00232: LearningRateScheduler reducing learning rate to 3.75708451089126e-05.
Epoch 232/300
105/105 [==] - 439s 4s/step - loss: 14164.0566 - mean_io_u: 0.5528 - val_loss: 38913.8359 - val_mean_io_u: 0.4487

Epoch 00233: LearningRateScheduler reducing learning rate to 3.653859140501914e-05.
Epoch 233/300
105/105 [==] - 439s 4s/step - loss: 13199.7168 - mean_io_u: 0.5677 - val_loss: 43196.8789 - val_mean_io_u: 0.4751

Epoch 00234: LearningRateScheduler reducing learning rate to 3.551878550623541e-05.
Epoch 234/300
105/105 [==] - 439s 4s/step - loss: 14364.4043 - mean_io_u: 0.5510 - val_loss: 48101.1953 - val_mean_io_u: 0.4453

Epoch 00235: LearningRateScheduler reducing learning rate to 3.4511539245770414e-05.
Epoch 235/300
105/105 [==] - 439s 4s/step - loss: 13883.1836 - mean_io_u: 0.5573 - val_loss: 38235.3203 - val_mean_io_u: 0.4665

Epoch 00236: LearningRateScheduler reducing learning rate to 3.3516963079527215e-05.
Epoch 236/300
105/105 [==] - 439s 4s/step - loss: 13697.2041 - mean_io_u: 0.5609 - val_loss: 29029.4883 - val_mean_io_u: 0.5325

Epoch 00237: LearningRateScheduler reducing learning rate to 3.2535166073990334e-05.
Epoch 237/300
105/105 [==] - 439s 4s/step - loss: 13871.2295 - mean_io_u: 0.5575 - val_loss: 42103.8438 - val_mean_io_u: 0.4221

Epoch 00238: LearningRateScheduler reducing learning rate to 3.156625589426525e-05.
Epoch 238/300
105/105 [==] - 439s 4s/step - loss: 14293.4121 - mean_io_u: 0.5559 - val_loss: 49225.9609 - val_mean_io_u: 0.3955

Epoch 00239: LearningRateScheduler reducing learning rate to 3.0610338792271784e-05.

Run 6 (switching the ResNet backbone further, to ResNet152)

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet152
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 4
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.2
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 5.0
        Zoom --> 0.1
        Channel Shift --> 0.1

Epoch 00282: LearningRateScheduler reducing learning rate to 3.0583373654728226e-06.
Epoch 282/300
105/105 [==] - 336s 3s/step - loss: 9642.9912 - mean_io_u: 0.6224 - val_loss: 42120.7891 - val_mean_io_u: 0.4211

Epoch 00283: LearningRateScheduler reducing learning rate to 2.7560267532331423e-06.
Epoch 283/300
105/105 [==] - 343s 3s/step - loss: 9774.2246 - mean_io_u: 0.6220 - val_loss: 39625.9453 - val_mean_io_u: 0.4381

Epoch 00284: LearningRateScheduler reducing learning rate to 2.4698685850121796e-06.
Epoch 284/300
105/105 [==] - 347s 3s/step - loss: 9940.4307 - mean_io_u: 0.6170 - val_loss: 38310.8945 - val_mean_io_u: 0.5161

Epoch 00285: LearningRateScheduler reducing learning rate to 2.1998942412777778e-06.
Epoch 285/300
105/105 [==] - 347s 3s/step - loss: 9922.2578 - mean_io_u: 0.6183 - val_loss: 38778.2109 - val_mean_io_u: 0.4697

Epoch 00286: LearningRateScheduler reducing learning rate to 1.9461333277591077e-06.
Epoch 286/300
105/105 [==] - 348s 3s/step - loss: 10017.0273 - mean_io_u: 0.6204 - val_loss: 33931.5859 - val_mean_io_u: 0.4451

Epoch 00287: LearningRateScheduler reducing learning rate to 1.7086136721998994e-06.
Epoch 287/300
105/105 [==] - 367s 3s/step - loss: 10207.9473 - mean_io_u: 0.6153 - val_loss: 36609.2266 - val_mean_io_u: 0.4528

Epoch 00288: LearningRateScheduler reducing learning rate to 1.4873613213070326e-06.
Epoch 288/300
105/105 [==] - 338s 3s/step - loss: 9705.6338 - mean_io_u: 0.6212 - val_loss: 39513.2461 - val_mean_io_u: 0.4077

Epoch 00289: LearningRateScheduler reducing learning rate to 1.2824005378940589e-06.
Epoch 289/300
105/105 [==] - 340s 3s/step - loss: 9609.6250 - mean_io_u: 0.6251 - val_loss: 35499.0898 - val_mean_io_u: 0.4380

Epoch 00290: LearningRateScheduler reducing learning rate to 1.093753798220553e-06.
Epoch 290/300
105/105 [==] - 332s 3s/step - loss: 10076.4199 - mean_io_u: 0.6170 - val_loss: 42613.4648 - val_mean_io_u: 0.4285

Epoch 00291: LearningRateScheduler reducing learning rate to 9.214417895274202e-07.
Epoch 291/300
105/105 [==] - 337s 3s/step - loss: 9577.9072 - mean_io_u: 0.6227 - val_loss: 33922.4375 - val_mean_io_u: 0.4640

Epoch 00292: LearningRateScheduler reducing learning rate to 7.654834077681545e-07.
Epoch 292/300
105/105 [==] - 354s 3s/step - loss: 9645.2383 - mean_io_u: 0.6223 - val_loss: 35155.5859 - val_mean_io_u: 0.4655

Epoch 00293: LearningRateScheduler reducing learning rate to 6.258957555368196e-07.
Epoch 293/300
105/105 [==] - 338s 3s/step - loss: 9825.3242 - mean_io_u: 0.6220 - val_loss: 38539.3555 - val_mean_io_u: 0.4084

Epoch 00294: LearningRateScheduler reducing learning rate to 5.026941401925086e-07.
Epoch 294/300
105/105 [==] - 340s 3s/step - loss: 10266.7412 - mean_io_u: 0.6160 - val_loss: 42251.1406 - val_mean_io_u: 0.3362

Epoch 00295: LearningRateScheduler reducing learning rate to 3.958920721806797e-07.
Epoch 295/300
105/105 [==] - 355s 3s/step - loss: 9449.6855 - mean_io_u: 0.6282 - val_loss: 46164.1172 - val_mean_io_u: 0.4545

Epoch 00296: LearningRateScheduler reducing learning rate to 3.055012635516537e-07.
Epoch 296/300
105/105 [==] - 335s 3s/step - loss: 9814.1074 - mean_io_u: 0.6232 - val_loss: 39850.5703 - val_mean_io_u: 0.4511

Epoch 00297: LearningRateScheduler reducing learning rate to 2.3153162667618543e-07.
Epoch 297/300
105/105 [==] - 377s 4s/step - loss: 9784.8115 - mean_io_u: 0.6213 - val_loss: 32465.5430 - val_mean_io_u: 0.4685

Epoch 00298: LearningRateScheduler reducing learning rate to 1.7399127315854657e-07.
Epoch 298/300
105/105 [==] - 348s 3s/step - loss: 10461.2939 - mean_io_u: 0.6065 - val_loss: 42474.8828 - val_mean_io_u: 0.4353

Epoch 00299: LearningRateScheduler reducing learning rate to 1.3288651294691732e-07.
Epoch 299/300
105/105 [==] - 344s 3s/step - loss: 9802.5967 - mean_io_u: 0.6220 - val_loss: 26617.5840 - val_mean_io_u: 0.5529

Epoch 00300: LearningRateScheduler reducing learning rate to 1.0822185364145413e-07.
Epoch 300/300
105/105 [==] - 337s 3s/step - loss: 10342.7090 - mean_io_u: 0.6079 - val_loss: 38630.7422 - val_mean_io_u: 0.4652

Part 7 (changing the augmentation settings relative to the previous run)


***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> DeepLabV3Plus
Base Model --> ResNet152
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 4
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.5
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 15.0
        Zoom --> 0.1
        Channel Shift --> 0.1

Epoch 00188: LearningRateScheduler reducing learning rate to 9.339277401657084e-05.
Epoch 188/300
105/105 [==] - 527s 5s/step - loss: 15350.4150 - mean_io_u: 0.5448 - val_loss: 25813.9512 - val_mean_io_u: 0.4830

Epoch 00189: LearningRateScheduler reducing learning rate to 9.194203781150716e-05.
Epoch 189/300
105/105 [==] - 527s 5s/step - loss: 14895.3438 - mean_io_u: 0.5475 - val_loss: 35445.7461 - val_mean_io_u: 0.4441

Epoch 00190: LearningRateScheduler reducing learning rate to 9.049767379931463e-05.
Epoch 190/300
105/105 [==] - 527s 5s/step - loss: 14428.6836 - mean_io_u: 0.5522 - val_loss: 51286.9258 - val_mean_io_u: 0.3764

Epoch 00191: LearningRateScheduler reducing learning rate to 8.905984037078375e-05.
Epoch 191/300
105/105 [==] - 527s 5s/step - loss: 15003.8516 - mean_io_u: 0.5479 - val_loss: 42903.3438 - val_mean_io_u: 0.4295

Epoch 00192: LearningRateScheduler reducing learning rate to 8.762869520055285e-05.
Epoch 192/300
105/105 [==] - 527s 5s/step - loss: 15446.7051 - mean_io_u: 0.5407 - val_loss: 37077.4375 - val_mean_io_u: 0.4162

Epoch 00193: LearningRateScheduler reducing learning rate to 8.620439522981734e-05.
Epoch 193/300
105/105 [==] - 527s 5s/step - loss: 15499.0967 - mean_io_u: 0.5430 - val_loss: 27281.6191 - val_mean_io_u: 0.4473

Epoch 00194: LearningRateScheduler reducing learning rate to 8.478709664911951e-05.
Epoch 194/300
105/105 [==] - 527s 5s/step - loss: 15826.6797 - mean_io_u: 0.5339 - val_loss: 31886.1836 - val_mean_io_u: 0.4124

Epoch 00195: LearningRateScheduler reducing learning rate to 8.337695488122015e-05.
Epoch 195/300
105/105 [==] - 527s 5s/step - loss: 15473.5068 - mean_io_u: 0.5385 - val_loss: 40918.0039 - val_mean_io_u: 0.4251

Epoch 00196: LearningRateScheduler reducing learning rate to 8.197412456405496e-05.
Epoch 196/300
105/105 [==] - 528s 5s/step - loss: 15493.5537 - mean_io_u: 0.5425 - val_loss: 33786.1367 - val_mean_io_u: 0.4137

Epoch 00197: LearningRateScheduler reducing learning rate to 8.057875953377676e-05.
Epoch 197/300
105/105 [==] - 527s 5s/step - loss: 15747.3652 - mean_io_u: 0.5360 - val_loss: 34643.0234 - val_mean_io_u: 0.4354

Epoch 00198: LearningRateScheduler reducing learning rate to 7.919101280788536e-05.
Epoch 198/300
105/105 [==] - 527s 5s/step - loss: 15162.3525 - mean_io_u: 0.5432 - val_loss: 47692.2109 - val_mean_io_u: 0.4041

Epoch 00199: LearningRateScheduler reducing learning rate to 7.781103656844776e-05.
Epoch 199/300
105/105 [==] - 527s 5s/step - loss: 15366.5439 - mean_io_u: 0.5403 - val_loss: 37444.9453 - val_mean_io_u: 0.4167

Epoch 00200: LearningRateScheduler reducing learning rate to 7.643898214540963e-05.
Epoch 200/300
105/105 [==] - 527s 5s/step - loss: 15176.5488 - mean_io_u: 0.5426 - val_loss: 30566.3789 - val_mean_io_u: 0.4977

Epoch 00201: LearningRateScheduler reducing learning rate to 7.507499999999997e-05.
Epoch 201/300
105/105 [==] - 527s 5s/step - loss: 15491.7168 - mean_io_u: 0.5437 - val_loss: 43589.2109 - val_mean_io_u: 0.4045

Epoch 00202: LearningRateScheduler reducing learning rate to 7.371923970823181e-05.
Epoch 202/300
105/105 [==] - 527s 5s/step - loss: 15268.3857 - mean_io_u: 0.5424 - val_loss: 36250.5547 - val_mean_io_u: 0.4588

Epoch 00203: LearningRateScheduler reducing learning rate to 7.237184994449914e-05.
Epoch 203/300
105/105 [==] - 527s 5s/step - loss: 15150.2100 - mean_io_u: 0.5467 - val_loss: 25390.1094 - val_mean_io_u: 0.4964

Epoch 00204: LearningRateScheduler reducing learning rate to 7.10329784652732e-05.
Epoch 204/300
105/105 [==] - 527s 5s/step - loss: 14917.7900 - mean_io_u: 0.5478 - val_loss: 42530.0859 - val_mean_io_u: 0.3851

Epoch 00205: LearningRateScheduler reducing learning rate to 6.970277209289948e-05.
Epoch 205/300
105/105 [==] - 527s 5s/step - loss: 15090.8779 - mean_io_u: 0.5463 - val_loss: 32042.3184 - val_mean_io_u: 0.5052

Epoch 00206: LearningRateScheduler reducing learning rate to 6.838137669949673e-05.
Epoch 206/300
105/105 [==] - 527s 5s/step - loss: 14266.7305 - mean_io_u: 0.5533 - val_loss: 44365.8516 - val_mean_io_u: 0.4154

Epoch 00207: LearningRateScheduler reducing learning rate to 6.706893719096057e-05.
Epoch 207/300
105/105 [==] - 527s 5s/step - loss: 15473.3535 - mean_io_u: 0.5420 - val_loss: 40047.4453 - val_mean_io_u: 0.4887

Epoch 00208: LearningRateScheduler reducing learning rate to 6.576559749107305e-05.
Epoch 208/300
105/105 [==] - 527s 5s/step - loss: 14821.1270 - mean_io_u: 0.5519 - val_loss: 38655.5898 - val_mean_io_u: 0.4361

Not a particularly interesting result.

Part 8 (switching to a UNet with a VGG19 base)

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> UNet
Base Model --> VGG19
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 4
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.5
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 15.0
        Zoom --> 0.1
        Channel Shift --> 0.1
Epoch 00278: LearningRateScheduler reducing learning rate to 4.4284147275500865e-06.
Epoch 278/300
105/105 [==] - 262s 2s/step - loss: 13041.4004 - mean_io_u: 0.5759 - val_loss: 36836.8672 - val_mean_io_u: 0.4096

Epoch 00279: LearningRateScheduler reducing learning rate to 4.061843514169614e-06.
Epoch 279/300
105/105 [==] - 262s 2s/step - loss: 13144.2178 - mean_io_u: 0.5734 - val_loss: 30244.1719 - val_mean_io_u: 0.4554

Epoch 00280: LearningRateScheduler reducing learning rate to 3.7112815472848386e-06.
Epoch 280/300
105/105 [==] - 262s 2s/step - loss: 12966.4160 - mean_io_u: 0.5748 - val_loss: 33730.5156 - val_mean_io_u: 0.4099

Epoch 00281: LearningRateScheduler reducing learning rate to 3.3767672699658532e-06.
Epoch 281/300
105/105 [==] - 262s 2s/step - loss: 12453.0010 - mean_io_u: 0.5844 - val_loss: 36161.7305 - val_mean_io_u: 0.4274

Epoch 00282: LearningRateScheduler reducing learning rate to 3.0583373654728226e-06.
Epoch 282/300
105/105 [==] - 262s 2s/step - loss: 12499.7373 - mean_io_u: 0.5800 - val_loss: 18748.4023 - val_mean_io_u: 0.5515

Epoch 00283: LearningRateScheduler reducing learning rate to 2.7560267532331423e-06.
Epoch 283/300
105/105 [==] - 262s 2s/step - loss: 13150.8936 - mean_io_u: 0.5765 - val_loss: 25297.3223 - val_mean_io_u: 0.5206

Epoch 00284: LearningRateScheduler reducing learning rate to 2.4698685850121796e-06.
Epoch 284/300
105/105 [==] - 262s 2s/step - loss: 13421.9834 - mean_io_u: 0.5681 - val_loss: 28623.0254 - val_mean_io_u: 0.5018

Epoch 00285: LearningRateScheduler reducing learning rate to 2.1998942412777778e-06.
Epoch 285/300
105/105 [==] - 262s 2s/step - loss: 13142.8691 - mean_io_u: 0.5742 - val_loss: 32247.8555 - val_mean_io_u: 0.4657

Epoch 00286: LearningRateScheduler reducing learning rate to 1.9461333277591077e-06.
Epoch 286/300
105/105 [==] - 262s 2s/step - loss: 12577.5742 - mean_io_u: 0.5820 - val_loss: 33489.8945 - val_mean_io_u: 0.4923

Epoch 00287: LearningRateScheduler reducing learning rate to 1.7086136721998994e-06.
Epoch 287/300
105/105 [==] - 262s 2s/step - loss: 13352.0547 - mean_io_u: 0.5728 - val_loss: 26837.2656 - val_mean_io_u: 0.4971

Epoch 00288: LearningRateScheduler reducing learning rate to 1.4873613213070326e-06.
Epoch 288/300
105/105 [==] - 262s 2s/step - loss: 12696.1904 - mean_io_u: 0.5782 - val_loss: 26888.3340 - val_mean_io_u: 0.4703

Epoch 00289: LearningRateScheduler reducing learning rate to 1.2824005378940589e-06.
Epoch 289/300
105/105 [==] - 262s 2s/step - loss: 13743.4229 - mean_io_u: 0.5652 - val_loss: 25023.8652 - val_mean_io_u: 0.5094

Epoch 00290: LearningRateScheduler reducing learning rate to 1.093753798220553e-06.
Epoch 290/300
105/105 [==] - 262s 2s/step - loss: 13030.3672 - mean_io_u: 0.5754 - val_loss: 29277.4004 - val_mean_io_u: 0.4536

Epoch 00291: LearningRateScheduler reducing learning rate to 9.214417895274202e-07.
Epoch 291/300
105/105 [==] - 262s 2s/step - loss: 13075.4102 - mean_io_u: 0.5749 - val_loss: 28388.0273 - val_mean_io_u: 0.4484

Epoch 00292: LearningRateScheduler reducing learning rate to 7.654834077681545e-07.
Epoch 292/300
105/105 [==] - 262s 2s/step - loss: 12329.8584 - mean_io_u: 0.5826 - val_loss: 28064.2461 - val_mean_io_u: 0.5163

Epoch 00293: LearningRateScheduler reducing learning rate to 6.258957555368196e-07.
Epoch 293/300
105/105 [==] - 262s 2s/step - loss: 12244.7539 - mean_io_u: 0.5866 - val_loss: 27515.9336 - val_mean_io_u: 0.5570

Epoch 00294: LearningRateScheduler reducing learning rate to 5.026941401925086e-07.
Epoch 294/300
105/105 [==] - 262s 2s/step - loss: 12972.5166 - mean_io_u: 0.5735 - val_loss: 27562.1309 - val_mean_io_u: 0.4888

Epoch 00295: LearningRateScheduler reducing learning rate to 3.958920721806797e-07.
Epoch 295/300
105/105 [==] - 262s 2s/step - loss: 12098.0410 - mean_io_u: 0.5870 - val_loss: 30404.6992 - val_mean_io_u: 0.4871

Epoch 00296: LearningRateScheduler reducing learning rate to 3.055012635516537e-07.
Epoch 296/300
105/105 [==] - 262s 2s/step - loss: 13095.4434 - mean_io_u: 0.5764 - val_loss: 30395.1992 - val_mean_io_u: 0.5303

Epoch 00297: LearningRateScheduler reducing learning rate to 2.3153162667618543e-07.
Epoch 297/300
105/105 [==] - 262s 2s/step - loss: 13845.8545 - mean_io_u: 0.5634 - val_loss: 26944.0059 - val_mean_io_u: 0.4717

Epoch 00298: LearningRateScheduler reducing learning rate to 1.7399127315854657e-07.
Epoch 298/300
105/105 [==] - 262s 2s/step - loss: 13043.8018 - mean_io_u: 0.5749 - val_loss: 34509.7266 - val_mean_io_u: 0.4517

Epoch 00299: LearningRateScheduler reducing learning rate to 1.3288651294691732e-07.
Epoch 299/300
105/105 [==] - 262s 2s/step - loss: 13175.7002 - mean_io_u: 0.5716 - val_loss: 29966.4961 - val_mean_io_u: 0.4763

Epoch 00300: LearningRateScheduler reducing learning rate to 1.0822185364145413e-07.
Epoch 300/300
105/105 [==] - 262s 2s/step - loss: 13899.7666 - mean_io_u: 0.5674 - val_loss: 30677.0117 - val_mean_io_u: 0.5009

Part 9 (switching to VGG16)

***** Begin training *****
Dataset --> CamVid
Num Images --> 421
Model --> UNet
Base Model --> VGG16
Crop Height --> 256
Crop Width --> 256
Num Epochs --> 300
Initial Epoch --> 0
Batch Size --> 4
Num Classes --> 32
Data Augmentation:
        Data Augmentation Rate --> 0.5
        Vertical Flip --> False
        Horizontal Flip --> True
        Brightness Alteration --> None
        Rotation --> 15.0
        Zoom --> 0.1
        Channel Shift --> 0.1

Epoch 79/300
105/105 [==] - 231s 2s/step - loss: 21093.9590 - mean_io_u: 0.4881 - val_loss: 24889.8906 - val_mean_io_u: 0.4581

Epoch 00080: LearningRateScheduler reducing learning rate to 0.00025154755165391495.
Epoch 80/300
105/105 [==] - 231s 2s/step - loss: 21166.4648 - mean_io_u: 0.4843 - val_loss: 27520.2227 - val_mean_io_u: 0.4448

Epoch 00081: LearningRateScheduler reducing learning rate to 0.00025038613442351075.
Epoch 81/300
105/105 [==] - 231s 2s/step - loss: 20243.4414 - mean_io_u: 0.4927 - val_loss: 23198.8301 - val_mean_io_u: 0.5450

Epoch 00082: LearningRateScheduler reducing learning rate to 0.0002492137142052816.
Epoch 82/300
105/105 [==] - 231s 2s/step - loss: 21469.2344 - mean_io_u: 0.4804 - val_loss: 28869.9902 - val_mean_io_u: 0.4203

Epoch 00083: LearningRateScheduler reducing learning rate to 0.00024803041956831634.
Epoch 83/300
105/105 [==] - 231s 2s/step - loss: 19920.8145 - mean_io_u: 0.4965 - val_loss: 30445.1348 - val_mean_io_u: 0.3874

Epoch 00084: LearningRateScheduler reducing learning rate to 0.0002468363802742064.
Epoch 84/300
105/105 [==] - 231s 2s/step - loss: 19678.6504 - mean_io_u: 0.5019 - val_loss: 32337.4023 - val_mean_io_u: 0.4333

Epoch 00085: LearningRateScheduler reducing learning rate to 0.000245631727262816.
Epoch 85/300
105/105 [==] - 231s 2s/step - loss: 20097.7812 - mean_io_u: 0.4944 - val_loss: 29057.7617 - val_mean_io_u: 0.4194

Epoch 00086: LearningRateScheduler reducing learning rate to 0.0002444165926379231.
Epoch 86/300
105/105 [==] - 231s 2s/step - loss: 20126.7305 - mean_io_u: 0.4947 - val_loss: 28013.0059 - val_mean_io_u: 0.4356

Epoch 00087: LearningRateScheduler reducing learning rate to 0.00024319110965273264.
Epoch 87/300
105/105 [==] - 231s 2s/step - loss: 21490.9336 - mean_io_u: 0.4830 - val_loss: 43573.7695 - val_mean_io_u: 0.3367

Epoch 00088: LearningRateScheduler reducing learning rate to 0.0002419554126952638.
Epoch 88/300
105/105 [==] - 231s 2s/step - loss: 20464.9824 - mean_io_u: 0.4899 - val_loss: 30465.7402 - val_mean_io_u: 0.4155

Epoch 00089: LearningRateScheduler reducing learning rate to 0.00024070963727361311.
Epoch 89/300
105/105 [==] - 231s 2s/step - loss: 19254.0059 - mean_io_u: 0.5030 - val_loss: 34606.4609 - val_mean_io_u: 0.4253

Epoch 00090: LearningRateScheduler reducing learning rate to 0.00023945392000109406.
Epoch 90/300
105/105 [==] - 231s 2s/step - loss: 19508.3613 - mean_io_u: 0.5028 - val_loss: 23513.0078 - val_mean_io_u: 0.4602

Epoch 00091: LearningRateScheduler reducing learning rate to 0.00023818839858125631.
Epoch 91/300
105/105 [==] - 231s 2s/step - loss: 20069.9258 - mean_io_u: 0.4982 - val_loss: 29752.0742 - val_mean_io_u: 0.3745

Epoch 00092: LearningRateScheduler reducing learning rate to 0.00023691321179278464.
Epoch 92/300
105/105 [==] - 231s 2s/step - loss: 18906.5195 - mean_io_u: 0.5077 - val_loss: 25509.8203 - val_mean_io_u: 0.4449

Summary

Nothing in particular yet; I have only just started writing this up.
These results really ought to be graphed. If good results come out, I will think about how to present them.
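
As a first step toward graphing, here is a minimal sketch that parses the console-log format pasted above and plots val_mean_io_u per epoch. The file name train_log.txt is hypothetical (a saved copy of the training output).

import re
import matplotlib.pyplot as plt

pattern = re.compile(r"val_mean_io_u: ([0-9.]+)")

values = []
with open("train_log.txt", encoding="utf-8") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            values.append(float(m.group(1)))

plt.plot(range(1, len(values) + 1), values)
plt.xlabel("epoch")
plt.ylabel("val_mean_io_u")
plt.show()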

Notes (for myself)

Example commands

python train.py --model FCN-8s --base_model ResNet50

python train.py --model UNet --base_model ResNet152  <--- NG (this combination did not run)

python train.py --model UNet --base_model VGG19  --h_flip True --rotation 15 --channel_shift 0.1 --data_aug_rate 0.5

python train.py --model DeepLabV3Plus --base_model ResNet152  --h_flip True --rotation 15 --channel_shift 0.1 --data_aug_rate 0.5

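
About the "LearningRateScheduler reducing learning rate to ..." lines that appear throughout the logs: the decay runs from roughly 3e-4 at the start down to about 1e-7 by epoch 300, which looks like a cosine-style schedule. The sketch below is an assumption, not the repository's exact scheduler, showing how such a callback is typically set up in Keras.

import math
import tensorflow as tf

INITIAL_LR = 3e-4
TOTAL_EPOCHS = 300

def cosine_decay(epoch, lr):
    # cosine annealing from INITIAL_LR down toward 0 over TOTAL_EPOCHS
    return 0.5 * INITIAL_LR * (1.0 + math.cos(math.pi * epoch / TOTAL_EPOCHS))

lr_callback = tf.keras.callbacks.LearningRateScheduler(cosine_decay, verbose=1)
# model.fit(..., callbacks=[lr_callback])  # passed alongside the other callbacks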