
Temperature Prediction with RNN/LSTM [Part 1: Implementation]

Posted at 2022-08-07

Introduction

In this post I build a model that predicts hourly temperatures in Tokyo using an RNN and an LSTM.
The model is implemented in PyTorch.
Part 1 covers the implementation; from Part 2 onward I plan to tune the hyperparameters and discuss the results.

If you spot any mistakes or have advice, comments are very welcome.

Preparing the Data

Hourly weather data for Tokyo is downloaded via the historical weather data search on the Japan Meteorological Agency (JMA) website.
Hourly records are consistently available only from 1990 onward, so the script scrapes everything from then until today.

download.py
import csv
import time
import datetime
import requests
from bs4 import BeautifulSoup

# Start date for scraping
year = 1990
month = 1
day = 1

# One-day timedelta used to step through the dates
td = datetime.timedelta(days=1)

# dl_days: how many days of data to fetch
date = datetime.date(year, month, day)
today = datetime.date.today()
dl_days = (today - date).days

# CSV header
header = ['year', 'month', 'day', 'hour', 'atmosphere1', 'atmosphere2', 'rain', 'temperature', 'dew point', 'vapor pressure', 'humidity']

# Create an empty CSV file and write the header
with open('data.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)

# Fetch the data
for i in range(1, dl_days+1):
    # Display a progress bar
    progress = int(round(i / dl_days / 4 * 100))
    pro_bar = ('=' * progress) + (' ' * (25-progress))
    print('\rDownloading table of {0}   [{1}] {2}%'.format(str(date), pro_bar, round(i / dl_days * 100, 1)), end='')

    y = date.year
    m = date.month
    d = date.day
    
    # Build the URL for this date
    url = 'https://www.data.jma.go.jp/obd/stats/etrn/view/hourly_s1.php?prec_no=44&block_no=47662&year=' + str(y) + '&month=' + str(m) + '&day=' + str(d) + '&view='
    
    res = requests.get(url)
    soup = BeautifulSoup(res.text, 'html.parser')
    
    table = soup.find_all('tr')
    with open('data.csv', 'a', encoding='utf-8', newline='') as f:
        writer = csv.writer(f)
        for row in table:
            csvRow = [y, m, d]
            # Data rows have 17 elements; treat anything else as a non-data row
            if len(row) != 17:
                continue
            # For now we only need the first 8 columns (up to humidity)
            for c in range(8):
                text = row.find_all(['td', 'th'])[c].get_text()
                if text == '--':
                    text = 0
                csvRow.append(text)
            writer.writerow(csvRow)

    # Wait between requests to avoid putting too much load on the server
    time.sleep(3)
    date += td

The dataset is large and the download takes quite a while, so you can move the start date later if needed.
With a 3-second wait, fetching one year of data takes a little under 20 minutes.
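
As a rough lower bound on the total runtime (a back-of-the-envelope sketch; it ignores request and parsing time):

```python
import datetime

# One request per day with a 3 s sleep each: the sleep time alone for the full range
dl_days = (datetime.date.today() - datetime.date(1990, 1, 1)).days
print(f"{dl_days} days -> at least {dl_days * 3 / 3600:.1f} hours of sleep time")
```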

Preprocessing the Data

Some values are missing due to instrument failures, maintenance, and so on.
Cells that cannot be converted to numbers are first replaced with NaN and then filled in by linear interpolation using interpolate()[1] (each gap becomes the value midway between its neighbors).

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('data.csv')

# Replace every cell that cannot be parsed as a number with NaN
# (pd.to_numeric also casts each column to float64, including 'temperature')
for col in ['temperature', 'atmosphere1', 'atmosphere2', 'rain', 'dew point']:
    df[col] = pd.to_numeric(df[col], errors='coerce')

df = df.interpolate()
df.head()
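
To see what interpolate() does to a gap, here is a minimal sketch on made-up values (not the actual data):

```python
import numpy as np
import pandas as pd

# A single missing value becomes the midpoint of its neighbors
print(pd.Series([10.0, np.nan, 14.0]).interpolate().tolist())        # [10.0, 12.0, 14.0]

# Consecutive gaps are filled linearly between the surrounding values
print(pd.Series([0.0, np.nan, np.nan, 3.0]).interpolate().tolist())  # [0.0, 1.0, 2.0, 3.0]
```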

Creating the Dataset/DataLoader

from sklearn.preprocessing import MinMaxScaler

import torch
from torch.utils.data import DataLoader

y = df['temperature'].values
x = np.arange(len(y))

# Split into train/test data
# With roughly 280,000 samples in total, the training fraction can be large
train_rate = 0.95
train_size = int(len(y)*train_rate)
test_size = len(y) - train_size

y_train = y[0:train_size]
y_test = y[train_size:len(y)]

print("train size: {}, test size: {} ".format(len(y_train), len(y_test)))
# train size: 271434, test size: 14286 

# The values stay within a limited range and there are no outliers, so min-max normalization is used
ms = MinMaxScaler()
y = y.reshape(-1, 1)
y_train = y_train.reshape(-1, 1)
y_test = y_test.reshape(-1, 1)
y_ms = ms.fit_transform(y)
y_train_ms = ms.transform(y_train)
y_test_ms = ms.transform(y_test)
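# Note: the scaler is fit on the full series and then applied to both splits,
# so train and test share the same [0, 1] mapping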

# Create the Dataset/DataLoader
# Parameters
# time_step: number of consecutive samples used to predict the next value
time_step = 12
BATCH_SIZE = 1024
BATCH_SIZE_test = 512

n_sample = len(y) - time_step
n_sample_train = train_size - time_step
n_sample_test = test_size - time_step

input_data = np.zeros((n_sample, time_step, 1))
input_data_train = np.zeros((n_sample_train, time_step, 1))
correct_data_train = np.zeros((n_sample_train, 1))
input_data_test = np.zeros((n_sample_test, time_step, 1))
correct_data_test = np.zeros((n_sample_test, 1))

for i in range(n_sample):
    input_data[i] = y_ms[i:i+time_step]

for i in range(n_sample_train):
    input_data_train[i] = y_train_ms[i:i+time_step]
    correct_data_train[i] = y_train_ms[i+time_step]

for i in range(n_sample_test):
    input_data_test[i] = y_test_ms[i:i+time_step]
    correct_data_test[i] = y_test_ms[i+time_step]
    
input_data_tensor = torch.tensor(input_data, dtype=torch.float)
input_data_train_tensor = torch.tensor(input_data_train, dtype=torch.float)
correct_data_train_tensor = torch.tensor(correct_data_train, dtype=torch.float)
input_data_test_tensor = torch.tensor(input_data_test, dtype=torch.float)
correct_data_test_tensor = torch.tensor(correct_data_test, dtype=torch.float)

dataset_train = torch.utils.data.TensorDataset(input_data_train_tensor, correct_data_train_tensor)
dataset_test = torch.utils.data.TensorDataset(input_data_test_tensor, correct_data_test_tensor)
train_loader = DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset_test, batch_size=BATCH_SIZE_test, shuffle=True)
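
As a quick sanity check on the loader (a sketch; the shapes assume BATCH_SIZE = 1024 and time_step = 12 as set above):

```python
x, t = next(iter(train_loader))
print(x.shape, t.shape)  # torch.Size([1024, 12, 1]) torch.Size([1024, 1])
```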

Model Design, Loss Function, and Optimizer

The loss function is MSE (mean squared error) and the optimizer is Adam.
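For reference, the loss on a batch of $N$ predictions $\hat{y}_i$ against targets $y_i$ is

```math
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2
```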
# Build the model
import torch.nn as nn
import torch.nn.functional as F
from torch import optim

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

class RNN(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(RNN, self).__init__()

        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, x):
        x = x.to(device)
        y_rnn, h = self.rnn(x, None)
        y = self.fc(y_rnn[:, -1, :])
        return y

class LSTM(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(LSTM, self).__init__()

        self.lstm = nn.LSTM(input_size, hidden_dim, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size) 

    def forward(self, x):
        x = x.to(device)
        y_lstm, h = self.lstm(x, None) 
        y = self.fc(y_lstm[:, -1, :]) 
        return y

# Parameter settings
n_inputs = 1
n_outputs = 1
n_hidden = 128
n_layers = 1
LEARNING_RATE = 0.00005

model = LSTM(n_inputs, n_outputs, n_hidden, n_layers).to(device)
loss_fnc = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
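
To verify the output shape, a dummy forward pass can help (a small sketch using the parameters above; forward() already moves its input to the device):

```python
dummy = torch.zeros(8, time_step, 1)  # (batch, sequence length, features)
print(model(dummy).shape)  # torch.Size([8, 1]): one predicted value per sequence
```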

Training

import matplotlib as mpl
import time
# Measure how long training takes
start = time.time()

EPOCHS = 30
record_loss_train = []
record_loss_test = []

for i in range(1, EPOCHS+1):
    model.train()
    loss_train = 0
    # Training step
    for j, (x, t) in enumerate(train_loader):
        pred_y = model(x)
        loss = loss_fnc(pred_y, t.to(device))
        loss_train += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    loss_train /= j+1
    record_loss_train.append(loss_train)

    # Measure the loss on the test data
    model.eval()
    loss_test = 0
    for j, (x, t) in enumerate(test_loader):
        with torch.no_grad():
            pred_y = model(x)
            loss = loss_fnc(pred_y, t.to(device))
            loss_test += loss.item()
    loss_test /= j+1
    record_loss_test.append(loss_test)

    print("Epoch:", i, "\t", "Loss_Train:", loss_train, "\t", "Loss_Test:", loss_test)
    
# Plot the loss per epoch
plt.plot(range(len(record_loss_train)), record_loss_train, label="train loss")
plt.plot(range(len(record_loss_test)), record_loss_test, label="test loss")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Error")


elapsed_time = time.time() - start
print('time: {}'.format(elapsed_time) + '[s]')

Prediction

Starting from an observed window of time_step values, the model predicts the next hour, appends that prediction to the window, and repeats; this is recursive multi-step forecasting.

# Hour at which the prediction starts; pick an index inside the training data
hour_id = 221000

# How many hours ahead to predict
pred_range = 144
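# Seed the forecast with the observed window ending at hour_id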
predicted = input_data_tensor[hour_id-time_step+1].numpy()
model.eval()
with torch.no_grad():
    for k in range(pred_range):
        x = torch.tensor(predicted[-time_step:], dtype=torch.float)
        x = x.reshape(1, time_step, 1)
        pred_y = model(x)
        predicted = np.append(predicted, pred_y.cpu().numpy(), axis=0)
predicted = predicted.reshape(-1, 1)
predicted = ms.inverse_transform(predicted)

plt.plot(range(hour_id-time_step+1, hour_id+pred_range+1), y[(hour_id-time_step+1):(hour_id+pred_range+1)], label='Correct')
plt.plot(range(hour_id+1, hour_id+pred_range+1), predicted[-pred_range:], label='pred_y')
plt.show()

Hyperparameters

Collecting the hyperparameters at the top of the script made them easier to tune.

# Hyperparameters -------------------------------------
time_step = 240
BATCH_SIZE = 1024
BATCH_SIZE_test = 512
n_hidden = 128
n_layers = 1
LEARNING_RATE = 0.00005
EPOCHS = 100
# -----------------------------------------------------

For Analyzing the Results

Using the test data, build a DataFrame of the average prediction error for each hour ahead.

# Reduce the amount of computation by a factor of div
div = 5
# Up to how many hours ahead to compute the average error
pred_hour = 24
n_data = int((n_sample_test - pred_hour) / div) + 1
model.eval()
loss_data = np.zeros((pred_hour, 1))

for i in range(n_data):
    print('\r i: {}'.format(i*div), end='')
    predicted = input_data_test_tensor[div*i].numpy()
    with torch.no_grad():
        for j in range(pred_hour):
            x = torch.tensor(predicted[-time_step:], dtype=torch.float)
            x = x.reshape(1, time_step, 1)
            pred_y = model(x)
            predicted = np.append(predicted, pred_y.cpu().numpy(), axis=0)
    predicted = predicted.reshape(-1, 1)
    predicted = ms.inverse_transform(predicted)
    loss_ = abs(predicted - y_test[div*i:div*i+time_step+pred_hour])
    loss_data = np.append(loss_data, loss_[-pred_hour:], axis=1)

loss_data = loss_data[:, 1:].T
zero_series = np.zeros((n_data, 1))
loss_data = np.append(zero_series, loss_data, axis=1)
loss_df = pd.DataFrame(loss_data)

plt.figure(figsize=(12, 6))
plt.plot(range(1, pred_hour+2), loss_df.mean(), label='average error')
plt.boxplot(loss_data, whis=1000, labels=range(0, pred_hour+1))
plt.xlabel('hour')
plt.ylabel('error[℃]')
plt.legend()
plt.savefig('model1/model1_time_loss.jpg', dpi=100)
plt.show()

loss_df.describe()

Conclusion

In the next posts I will examine the results while tuning the parameters and trying different models.


[1] pandas.DataFrame.interpolate
