
Nowcasting Japan's GDP with FactorMIDAS (uMIDAS)

Posted at 2021-05-02

Purpose

A continuation of the previous article.

A reproduction of [1].

Background

Factor analysis

Factor analysis and principal component analysis are well-known multivariate techniques. Factors can be estimated with the principal factor method, the principal component method, or maximum likelihood [2]. Among these, "the principal component method without rotation gives the same result as principal component analysis" [2]. This article therefore derives the factors with principal component analysis and sparse principal component analysis. Throughout, we work with the general factor model


x_t = \Lambda F_t + e_t

Principal component analysis

General model [3]


\hat{V} = \underset{V_k}{\operatorname{argmin}} \sum_{i=1}^n \| x_i - \hat{x}_i \|^2 \quad \text{s.t.} \quad \hat{x}_i = V_k V_k^T x_i.

Factor model [4]


V(\hat{F}, \hat{\Lambda}) = (NT)^{-1} \sum_{i=1}^{N} \sum_{t=1}^{T} (x_{it} - \hat{\lambda}_i \hat{F}_t)^2
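
Estimating a single factor by minimizing this objective is equivalent to taking the first principal component score. A minimal sketch on simulated data (all variable names here are illustrative, not taken from [1]):

import numpy as np
from sklearn.decomposition import PCA

# Simulated panel: T = 120 months, N = 2 series driven by one common factor
rng = np.random.default_rng(0)
F = rng.standard_normal(120)                      # latent factor F_t
X = np.outer(F, [0.9, 0.8]) + 0.3 * rng.standard_normal((120, 2))

# Standardize, then use the first principal component score as F_hat
Z = (X - X.mean(axis=0)) / X.std(axis=0)
F_hat = PCA(n_components=1).fit_transform(Z).ravel()

print(abs(np.corrcoef(F, F_hat)[0, 1]))           # close to 1 (up to sign)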

Sparse principal component analysis

General model [3]


(\hat{A}, \hat{B}) = \underset{A,B}{\operatorname{argmin}} \left\{ \sum_{i=1}^n \| x_i - A B^T x_i \|^2 + \lambda_2 \sum_{j=1}^k \| \beta_j \|^2 + \sum_{j=1}^k \lambda_{1,j} \| \beta_j \|_1 \right\}.

Factor model [1]


V^{LASSO}(F, \Lambda; X, \psi_T) = \frac{1}{nT} \left\{ \sum_{i=1}^n \sum_{t=1}^T (x_{it} - \lambda_i F_t)^2 + \psi_T \sum_{i=1}^n | \lambda_i | \right\}.
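
sklearn's SparsePCA, used later in this article, implements this kind of L1 penalty on the loadings. As a quick illustration on simulated data (the alpha value is arbitrary), PCA keeps all loadings dense while the L1 penalty shrinks weak loadings toward exactly zero:

import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Two informative columns sharing a common component, two pure-noise columns
rng = np.random.default_rng(0)
common = rng.standard_normal(200)
X = rng.standard_normal((200, 4))
X[:, 0] += 2 * common
X[:, 1] += 2 * common

print(PCA(n_components=1).fit(X).components_)        # dense loadings on all 4 columns
print(SparsePCA(n_components=1, alpha=1.0,
                random_state=0).fit(X).components_)  # noise loadings shrunk to (near) zero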

The factors obtained with the methods above are then used for nowcasting with the following models.

Factor bridge equation (factor mixed-frequency approach) [1]


y_t = \alpha + \sum_{i=1}^{N} \beta_i x^Q_{i,t} + \sum_{i=1}^{M} \gamma_i F^Q_{i,t} + \varepsilon _t \\
x^Q_{i,t} = \frac{1}{3}(x^M_{i,3t} + 2 x^M_{i,3t-1} + 3 x^M_{i,3t-2} + 2 x^M_{i,3t-3} + x^M_{i,3t-4}) \\
F^Q_{i,t} = \frac{1}{3}(F^M_{i,3t} + 2 F^M_{i,3t-1} + 3 F^M_{i,3t-2} + 2 F^M_{i,3t-3} + F^M_{i,3t-4}) \\
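
A minimal helper mirroring this aggregation (the function name is mine; it expects at least five monthly year-on-year values, the last being the quarter's final month):

import numpy as np

def quarterly_yoy(x_monthly):
    # Triangular weights (1, 2, 3, 2, 1) / 3 over the last five monthly values,
    # exactly as in the bridge equation above
    x = np.asarray(x_monthly, dtype=float)
    return (x[-1] + 2*x[-2] + 3*x[-3] + 2*x[-4] + x[-5]) / 3

print(quarterly_yoy([1.0, 1.2, 0.8, 1.1, 0.9]))  # (0.9 + 2.2 + 2.4 + 2.4 + 1.0) / 3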

FactorMIDAS (factor mixed-data sampling model) [1]


y_t = \alpha + \sum_{i=1}^{k_1} \sum_{j=0}^{l_{1,i}} \beta_{i,j} x^M_{i,3t - j} + \sum_{i=1}^{k_2} \sum_{j=0}^{l_{2,i}} \gamma_{i,j} F^M_{i,3t - j} + \varepsilon _t
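
Since the model is unrestricted (uMIDAS), every monthly lag enters as its own regressor, so the design matrix simply skip-samples the monthly series. A sketch for one series (the function name and lag count are illustrative):

import numpy as np

def umidas_design(x_monthly, n_quarters, lags=3):
    # Row t holds x_{3t}, x_{3t-1}, ..., x_{3t-lags+1}: each monthly lag
    # becomes its own column with an unrestricted coefficient beta_{i,j}
    x = np.asarray(x_monthly, dtype=float)
    return np.array([x[3*t - lags:3*t][::-1] for t in range(1, n_quarters + 1)])

print(umidas_design(np.arange(1, 13), n_quarters=4))
# [[ 3.  2.  1.]
#  [ 6.  5.  4.]
#  [ 9.  8.  7.]
#  [12. 11. 10.]]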

The indices used are the Indices of Industrial Production (production index, total industry; hereafter iip), the Indices of Tertiary Industry Activity (total tertiary industry; hereafter ita), and the quarterly preliminary GDP estimates. All series were taken from e-Stat and preprocessed before use.

Finally, the repository is here.

Factor bridge equation

Principal component analysis


from sklearn.linear_model import LassoLarsIC, LinearRegression
import numpy as np
import pandas as pd
# Import sklearn's scaler for standardization
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA, PCA
import matplotlib.pyplot as plt

gdp = pd.read_csv('import/GDP_JAPAN.csv',
                  parse_dates=['DATE'], index_col='DATE')
iip = pd.read_csv('import/IIP.csv', parse_dates=['DATE'], index_col='DATE')
ita = pd.read_csv('import/ITA.csv', parse_dates=['DATE'], index_col='DATE')

gdp.index = pd.to_datetime(gdp.index, format='%m/%d/%Y').strftime('%Y-%m-01')
iip.index = pd.to_datetime(iip.index, format='%m/%d/%Y').strftime('%Y-%m-01')
ita.index = pd.to_datetime(ita.index, format='%m/%d/%Y').strftime('%Y-%m-01')

dfX = pd.concat([iip['IIP_YOY'], ita['ITA_YOY']], axis=1)

# Fit the scaler (learn each column's mean and standard deviation)
sc = StandardScaler()
sc.fit(dfX)

# Apply the standardization
z = sc.transform(dfX)

dfX_std = pd.DataFrame(z, columns=dfX.columns)

# Principal component analysis
transformer = PCA(n_components=1, random_state=0)
transformer.fit(z)
X_transformed = transformer.transform(z)

# Preprocessing
dfX_factor = pd.DataFrame(X_transformed, columns=['FACTOR'])
dfX_factor.index = iip.index
dfX_std.index = iip.index
df_std_factor = pd.merge(dfX_factor, dfX_std, left_index=True,
                         right_index=True, how='outer')
dfX_std_factor = pd.merge(df_std_factor, gdp, left_index=True,
                          right_index=True, how='outer')
dfX_std_factor = dfX_std_factor[['GDP_CYOY', 'IIP_YOY', 'ITA_YOY', 'FACTOR']]
dfX_std_factor = dfX_std_factor[dfX_std_factor.index != '2013-01-01']
dfX_std_factor['PERIOD'] = pd.to_datetime(dfX_std_factor.index.to_series()).apply(
    lambda x: 3 if x.month in [1, 4, 7, 10] else (1 if x.month in [2, 5, 8, 11] else 2))

# bridge_factor: build the quarterly bridge regressors
bridge_factor = pd.DataFrame(
    columns=['BRIDGE_IIP_YOY', 'BRIDGE_ITA_YOY', 'BRIDGE_FACTOR'])
x = np.array([])
y = np.array([])
z = np.array([])
flag = False
for date, IIP_YOY, ITA_YOY, FACTOR in zip(dfX_std_factor.index, dfX_std_factor['IIP_YOY'], dfX_std_factor['ITA_YOY'], dfX_std_factor['FACTOR']):

    x = np.append(x, IIP_YOY)
    y = np.append(y, ITA_YOY)
    z = np.append(z, FACTOR)
    if not flag:
        if date == '2013-07-01':
            flag = True

    if flag:
        x3t = x[-1]
        x3tm1 = x[-2]
        x3tm2 = x[-3]
        x3tm3 = x[-4]
        x3tm4 = x[-5]
        xt = (x3t + 2*x3tm1 + 3*x3tm2 + 2*x3tm3 + x3tm4)/3

        y3t = y[-1]
        y3tm1 = y[-2]
        y3tm2 = y[-3]
        y3tm3 = y[-4]
        y3tm4 = y[-5]
        yt = (y3t + 2*y3tm1 + 3*y3tm2 + 2*y3tm3 + y3tm4)/3

        z3t = z[-1]
        z3tm1 = z[-2]
        z3tm2 = z[-3]
        z3tm3 = z[-4]
        z3tm4 = z[-5]
        zt = (z3t + 2*z3tm1 + 3*z3tm2 + 2*z3tm3 + z3tm4)/3
        record = pd.Series([xt, yt, zt],
                           index=bridge_factor.columns, name=date)
        # DataFrame.append was removed in pandas 2.x; pd.concat is the replacement
        bridge_factor = pd.concat([bridge_factor, record.to_frame().T])

bridge_factor.index.name = 'DATE'
df_bridge = pd.merge(gdp, bridge_factor,
                     left_index=True, right_index=True, how='outer')
df_bridge = df_bridge.dropna()


# Drop the target variable; the remaining columns become the predictors X
X_bridge = df_bridge.drop("GDP_CYOY", axis=1)
# Extract the target variable as Y
Y_bridge = df_bridge["GDP_CYOY"]

model_bridge = LinearRegression()

model_bridge.fit(X_bridge, Y_bridge)

# Extract the fitted parameters
# Regression coefficients
reg_bridge_a_0 = model_bridge.coef_[0]
reg_bridge_a_1 = model_bridge.coef_[1]
reg_bridge_a_2 = model_bridge.coef_[2]

# Intercept
reg_bridge_b = model_bridge.intercept_


df_bridge['NOWCAST'] = df_bridge.apply(
    lambda x: reg_bridge_b + reg_bridge_a_0*x['BRIDGE_IIP_YOY'] + reg_bridge_a_1*x['BRIDGE_ITA_YOY'] + reg_bridge_a_2*x['BRIDGE_FACTOR'], axis=1)


df_bridge_new = df_bridge.copy()
df_bridge_new = df_bridge_new.drop('BRIDGE_IIP_YOY', axis=1)
df_bridge_new = df_bridge_new.drop('BRIDGE_ITA_YOY', axis=1)
df_bridge_new = df_bridge_new.drop('BRIDGE_FACTOR', axis=1)

(Figure: nowcast_bridge_PCA.png)
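
Continuing from the code above, one simple check of the in-sample fit is the RMSE between NOWCAST and GDP_CYOY (a sketch, not part of the original script):

import numpy as np

# Root mean squared error of the in-sample nowcast
rmse = np.sqrt(((df_bridge['NOWCAST'] - df_bridge['GDP_CYOY']) ** 2).mean())
print(f'in-sample RMSE: {rmse:.3f}')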

Sparse principal component analysis


from sklearn.linear_model import LassoLarsIC, LinearRegression
import numpy as np
import pandas as pd
# Import sklearn's scaler for standardization
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA, PCA
import matplotlib.pyplot as plt

gdp = pd.read_csv('import/GDP_JAPAN.csv',
                  parse_dates=['DATE'], index_col='DATE')
iip = pd.read_csv('import/IIP.csv', parse_dates=['DATE'], index_col='DATE')
ita = pd.read_csv('import/ITA.csv', parse_dates=['DATE'], index_col='DATE')

gdp.index = pd.to_datetime(gdp.index, format='%m/%d/%Y').strftime('%Y-%m-01')
iip.index = pd.to_datetime(iip.index, format='%m/%d/%Y').strftime('%Y-%m-01')
ita.index = pd.to_datetime(ita.index, format='%m/%d/%Y').strftime('%Y-%m-01')

dfX = pd.concat([iip['IIP_YOY'], ita['ITA_YOY']], axis=1)

# Fit the scaler (learn each column's mean and standard deviation)
sc = StandardScaler()
sc.fit(dfX)

# Apply the standardization
z = sc.transform(dfX)

dfX_std = pd.DataFrame(z, columns=dfX.columns)

# Sparse principal component analysis
transformer = SparsePCA(n_components=1, random_state=0)
transformer.fit(z)
X_transformed = transformer.transform(z)

# Preprocessing
dfX_factor = pd.DataFrame(X_transformed, columns=['FACTOR'])
dfX_factor.index = iip.index
dfX_std.index = iip.index
df_std_factor = pd.merge(dfX_factor, dfX_std, left_index=True,
                         right_index=True, how='outer')
dfX_std_factor = pd.merge(df_std_factor, gdp, left_index=True,
                          right_index=True, how='outer')
dfX_std_factor = dfX_std_factor[['GDP_CYOY', 'IIP_YOY', 'ITA_YOY', 'FACTOR']]
dfX_std_factor = dfX_std_factor[dfX_std_factor.index != '2013-01-01']
dfX_std_factor['PERIOD'] = pd.to_datetime(dfX_std_factor.index.to_series()).apply(
    lambda x: 3 if x.month in [1, 4, 7, 10] else (1 if x.month in [2, 5, 8, 11] else 2))

# bridge_factor: build the quarterly bridge regressors
bridge_factor = pd.DataFrame(
    columns=['BRIDGE_IIP_YOY', 'BRIDGE_ITA_YOY', 'BRIDGE_FACTOR'])
x = np.array([])
y = np.array([])
z = np.array([])
flag = False
for date, IIP_YOY, ITA_YOY, FACTOR in zip(dfX_std_factor.index, dfX_std_factor['IIP_YOY'], dfX_std_factor['ITA_YOY'], dfX_std_factor['FACTOR']):

    x = np.append(x, IIP_YOY)
    y = np.append(y, ITA_YOY)
    z = np.append(z, FACTOR)
    if not flag:
        if date == '2013-07-01':
            flag = True

    if flag:
        x3t = x[-1]
        x3tm1 = x[-2]
        x3tm2 = x[-3]
        x3tm3 = x[-4]
        x3tm4 = x[-5]
        xt = (x3t + 2*x3tm1 + 3*x3tm2 + 2*x3tm3 + x3tm4)/3

        y3t = y[-1]
        y3tm1 = y[-2]
        y3tm2 = y[-3]
        y3tm3 = y[-4]
        y3tm4 = y[-5]
        yt = (y3t + 2*y3tm1 + 3*y3tm2 + 2*y3tm3 + y3tm4)/3

        z3t = z[-1]
        z3tm1 = z[-2]
        z3tm2 = z[-3]
        z3tm3 = z[-4]
        z3tm4 = z[-5]
        zt = (z3t + 2*z3tm1 + 3*z3tm2 + 2*z3tm3 + z3tm4)/3
        record = pd.Series([xt, yt, zt],
                           index=bridge_factor.columns, name=date)
        # DataFrame.append was removed in pandas 2.x; pd.concat is the replacement
        bridge_factor = pd.concat([bridge_factor, record.to_frame().T])

bridge_factor.index.name = 'DATE'
df_bridge = pd.merge(gdp, bridge_factor,
                     left_index=True, right_index=True, how='outer')
df_bridge = df_bridge.dropna()


# Drop the target variable; the remaining columns become the predictors X
X_bridge = df_bridge.drop("GDP_CYOY", axis=1)
# Extract the target variable as Y
Y_bridge = df_bridge["GDP_CYOY"]

model_bridge = LinearRegression()

model_bridge.fit(X_bridge, Y_bridge)

# Extract the fitted parameters
# Regression coefficients
reg_bridge_a_0 = model_bridge.coef_[0]
reg_bridge_a_1 = model_bridge.coef_[1]
reg_bridge_a_2 = model_bridge.coef_[2]

# Intercept
reg_bridge_b = model_bridge.intercept_


df_bridge['NOWCAST'] = df_bridge.apply(
    lambda x: reg_bridge_b + reg_bridge_a_0*x['BRIDGE_IIP_YOY'] + reg_bridge_a_1*x['BRIDGE_ITA_YOY'] + reg_bridge_a_2*x['BRIDGE_FACTOR'], axis=1)


df_bridge_new = df_bridge.copy()
df_bridge_new = df_bridge_new.drop('BRIDGE_IIP_YOY', axis=1)
df_bridge_new = df_bridge_new.drop('BRIDGE_ITA_YOY', axis=1)
df_bridge_new = df_bridge_new.drop('BRIDGE_FACTOR', axis=1)


(Figure: nowcast_bridge.png)

FactorMIDAS

Principal component analysis


from sklearn.linear_model import LassoLarsIC, LinearRegression
import numpy as np
import pandas as pd
# Import sklearn's scaler for standardization
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA, PCA
import matplotlib.pyplot as plt

gdp = pd.read_csv('import/GDP_JAPAN.csv',
                  parse_dates=['DATE'], index_col='DATE')
iip = pd.read_csv('import/IIP.csv', parse_dates=['DATE'], index_col='DATE')
ita = pd.read_csv('import/ITA.csv', parse_dates=['DATE'], index_col='DATE')

gdp.index = pd.to_datetime(gdp.index, format='%m/%d/%Y').strftime('%Y-%m-01')
iip.index = pd.to_datetime(iip.index, format='%m/%d/%Y').strftime('%Y-%m-01')
ita.index = pd.to_datetime(ita.index, format='%m/%d/%Y').strftime('%Y-%m-01')

dfX = pd.concat([iip['IIP_YOY'], ita['ITA_YOY']], axis=1)

# Fit the scaler (learn each column's mean and standard deviation)
sc = StandardScaler()
sc.fit(dfX)

# Apply the standardization
z = sc.transform(dfX)

dfX_std = pd.DataFrame(z, columns=dfX.columns)

# Principal component analysis
transformer = PCA(n_components=1, random_state=0)
transformer.fit(z)
X_transformed = transformer.transform(z)

# Preprocessing
dfX_factor = pd.DataFrame(X_transformed, columns=['FACTOR'])
dfX_factor.index = iip.index
dfX_std.index = iip.index
df_std_factor = pd.merge(dfX_factor, dfX_std, left_index=True,
                         right_index=True, how='outer')
dfX_std_factor = pd.merge(df_std_factor, gdp, left_index=True,
                          right_index=True, how='outer')
dfX_std_factor = dfX_std_factor[['GDP_CYOY', 'IIP_YOY', 'ITA_YOY', 'FACTOR']]
dfX_std_factor = dfX_std_factor[dfX_std_factor.index != '2013-01-01']
dfX_std_factor['PERIOD'] = pd.to_datetime(dfX_std_factor.index.to_series()).apply(
    lambda x: 3 if x.month in [1, 4, 7, 10] else (1 if x.month in [2, 5, 8, 11] else 2))


# factor_MIDAS: each quarter's three monthly values become separate regressors
df_factor = pd.DataFrame(columns=[
    'GDP_CYOY', 'IIP_YOY_Q1', 'IIP_YOY_Q2', 'IIP_YOY_Q3', 'ITA_YOY_Q1', 'ITA_YOY_Q2', 'ITA_YOY_Q3', 'FACTOR_Q1', 'FACTOR_Q2', 'FACTOR_Q3'])
for date, GDP_CYOY, IIP_YOY, ITA_YOY, FACTOR, PERIOD in zip(dfX_std_factor.index, dfX_std_factor.GDP_CYOY, dfX_std_factor.IIP_YOY, dfX_std_factor.ITA_YOY, dfX_std_factor.FACTOR, dfX_std_factor.PERIOD):

    if PERIOD == 1:
        q1_iip = IIP_YOY
        q1_ita = ITA_YOY
        q1_factor = FACTOR
    elif PERIOD == 2:
        q2_iip = IIP_YOY
        q2_ita = ITA_YOY
        q2_factor = FACTOR
    else:
        record = pd.Series([GDP_CYOY, q1_iip, q2_iip, IIP_YOY, q1_ita, q2_ita, ITA_YOY, q1_factor, q2_factor, FACTOR],
                           index=df_factor.columns, name=date)
        # DataFrame.append was removed in pandas 2.x; pd.concat is the replacement
        df_factor = pd.concat([df_factor, record.to_frame().T])

df_factor.index.name = 'DATE'

# Drop the target variable; the remaining columns become the predictors X
X_factor = df_factor.drop("GDP_CYOY", axis=1)
# Extract the target variable as Y
Y_factor = df_factor["GDP_CYOY"]

model_factor = LinearRegression()

model_factor.fit(X_factor, Y_factor)

# Extract the fitted parameters
# Regression coefficients
reg_factor_a_0 = model_factor.coef_[0]
reg_factor_a_1 = model_factor.coef_[1]
reg_factor_a_2 = model_factor.coef_[2]
reg_factor_a_3 = model_factor.coef_[3]
reg_factor_a_4 = model_factor.coef_[4]
reg_factor_a_5 = model_factor.coef_[5]
reg_factor_a_6 = model_factor.coef_[6]
reg_factor_a_7 = model_factor.coef_[7]
reg_factor_a_8 = model_factor.coef_[8]

# Intercept
reg_factor_b = model_factor.intercept_

df_factor['NOWCAST'] = df_factor.apply(lambda x: reg_factor_b + reg_factor_a_0*x['IIP_YOY_Q1'] + reg_factor_a_1*x['IIP_YOY_Q2'] + reg_factor_a_2*x['IIP_YOY_Q3'] + reg_factor_a_3 *
                                       x['ITA_YOY_Q1'] + reg_factor_a_4*x['ITA_YOY_Q2'] + reg_factor_a_5*x['ITA_YOY_Q3'] + reg_factor_a_6*x['FACTOR_Q1'] + reg_factor_a_7*x['FACTOR_Q2'] + reg_factor_a_8*x['FACTOR_Q3'], axis=1)

df_factor_new = df_factor.copy()
df_factor_new = df_factor_new.drop('IIP_YOY_Q1', axis=1)
df_factor_new = df_factor_new.drop('IIP_YOY_Q2', axis=1)
df_factor_new = df_factor_new.drop('IIP_YOY_Q3', axis=1)
df_factor_new = df_factor_new.drop('ITA_YOY_Q1', axis=1)
df_factor_new = df_factor_new.drop('ITA_YOY_Q2', axis=1)
df_factor_new = df_factor_new.drop('ITA_YOY_Q3', axis=1)
df_factor_new = df_factor_new.drop('FACTOR_Q1', axis=1)
df_factor_new = df_factor_new.drop('FACTOR_Q2', axis=1)
df_factor_new = df_factor_new.drop('FACTOR_Q3', axis=1)

(Figure: nowcast_factor_PCA.png)

Sparse principal component analysis


from sklearn.linear_model import LassoLarsIC, LinearRegression
import numpy as np
import pandas as pd
# Import sklearn's scaler for standardization
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA, PCA
import matplotlib.pyplot as plt

gdp = pd.read_csv('import/GDP_JAPAN.csv',
                  parse_dates=['DATE'], index_col='DATE')
iip = pd.read_csv('import/IIP.csv', parse_dates=['DATE'], index_col='DATE')
ita = pd.read_csv('import/ITA.csv', parse_dates=['DATE'], index_col='DATE')

gdp.index = pd.to_datetime(gdp.index, format='%m/%d/%Y').strftime('%Y-%m-01')
iip.index = pd.to_datetime(iip.index, format='%m/%d/%Y').strftime('%Y-%m-01')
ita.index = pd.to_datetime(ita.index, format='%m/%d/%Y').strftime('%Y-%m-01')

dfX = pd.concat([iip['IIP_YOY'], ita['ITA_YOY']], axis=1)

# Fit the scaler (learn each column's mean and standard deviation)
sc = StandardScaler()
sc.fit(dfX)

# Apply the standardization
z = sc.transform(dfX)

dfX_std = pd.DataFrame(z, columns=dfX.columns)

# Sparse principal component analysis
transformer = SparsePCA(n_components=1, random_state=0)
transformer.fit(z)
X_transformed = transformer.transform(z)


# Preprocessing
dfX_factor = pd.DataFrame(X_transformed, columns=['FACTOR'])
dfX_factor.index = iip.index
dfX_std.index = iip.index
df_std_factor = pd.merge(dfX_factor, dfX_std, left_index=True,
                         right_index=True, how='outer')
dfX_std_factor = pd.merge(df_std_factor, gdp, left_index=True,
                          right_index=True, how='outer')
dfX_std_factor = dfX_std_factor[['GDP_CYOY', 'IIP_YOY', 'ITA_YOY', 'FACTOR']]
dfX_std_factor = dfX_std_factor[dfX_std_factor.index != '2013-01-01']
dfX_std_factor['PERIOD'] = pd.to_datetime(dfX_std_factor.index.to_series()).apply(
    lambda x: 3 if x.month in [1, 4, 7, 10] else (1 if x.month in [2, 5, 8, 11] else 2))


# factor_MIDAS: each quarter's three monthly values become separate regressors
df_factor = pd.DataFrame(columns=[
    'GDP_CYOY', 'IIP_YOY_Q1', 'IIP_YOY_Q2', 'IIP_YOY_Q3', 'ITA_YOY_Q1', 'ITA_YOY_Q2', 'ITA_YOY_Q3', 'FACTOR_Q1', 'FACTOR_Q2', 'FACTOR_Q3'])
for date, GDP_CYOY, IIP_YOY, ITA_YOY, FACTOR, PERIOD in zip(dfX_std_factor.index, dfX_std_factor.GDP_CYOY, dfX_std_factor.IIP_YOY, dfX_std_factor.ITA_YOY, dfX_std_factor.FACTOR, dfX_std_factor.PERIOD):

    if PERIOD == 1:
        q1_iip = IIP_YOY
        q1_ita = ITA_YOY
        q1_factor = FACTOR
    elif PERIOD == 2:
        q2_iip = IIP_YOY
        q2_ita = ITA_YOY
        q2_factor = FACTOR
    else:
        record = pd.Series([GDP_CYOY, q1_iip, q2_iip, IIP_YOY, q1_ita, q2_ita, ITA_YOY, q1_factor, q2_factor, FACTOR],
                           index=df_factor.columns, name=date)
        # DataFrame.append was removed in pandas 2.x; pd.concat is the replacement
        df_factor = pd.concat([df_factor, record.to_frame().T])

df_factor.index.name = 'DATE'

# Drop the target variable; the remaining columns become the predictors X
X_factor = df_factor.drop("GDP_CYOY", axis=1)
# Extract the target variable as Y
Y_factor = df_factor["GDP_CYOY"]

model_factor = LinearRegression()

model_factor.fit(X_factor, Y_factor)

# Extract the fitted parameters
# Regression coefficients
reg_factor_a_0 = model_factor.coef_[0]
reg_factor_a_1 = model_factor.coef_[1]
reg_factor_a_2 = model_factor.coef_[2]
reg_factor_a_3 = model_factor.coef_[3]
reg_factor_a_4 = model_factor.coef_[4]
reg_factor_a_5 = model_factor.coef_[5]
reg_factor_a_6 = model_factor.coef_[6]
reg_factor_a_7 = model_factor.coef_[7]
reg_factor_a_8 = model_factor.coef_[8]

# Intercept
reg_factor_b = model_factor.intercept_

df_factor['NOWCAST'] = df_factor.apply(lambda x: reg_factor_b + reg_factor_a_0*x['IIP_YOY_Q1'] + reg_factor_a_1*x['IIP_YOY_Q2'] + reg_factor_a_2*x['IIP_YOY_Q3'] + reg_factor_a_3 *
                                       x['ITA_YOY_Q1'] + reg_factor_a_4*x['ITA_YOY_Q2'] + reg_factor_a_5*x['ITA_YOY_Q3'] + reg_factor_a_6*x['FACTOR_Q1'] + reg_factor_a_7*x['FACTOR_Q2'] + reg_factor_a_8*x['FACTOR_Q3'], axis=1)

df_factor_new = df_factor.copy()
df_factor_new = df_factor_new.drop('IIP_YOY_Q1', axis=1)
df_factor_new = df_factor_new.drop('IIP_YOY_Q2', axis=1)
df_factor_new = df_factor_new.drop('IIP_YOY_Q3', axis=1)
df_factor_new = df_factor_new.drop('ITA_YOY_Q1', axis=1)
df_factor_new = df_factor_new.drop('ITA_YOY_Q2', axis=1)
df_factor_new = df_factor_new.drop('ITA_YOY_Q3', axis=1)
df_factor_new = df_factor_new.drop('FACTOR_Q1', axis=1)
df_factor_new = df_factor_new.drop('FACTOR_Q2', axis=1)
df_factor_new = df_factor_new.drop('FACTOR_Q3', axis=1)


(Figure: nowcast_factor.png)

References

[1]

[2]

[3]

[4]
