
# Predicting the rise or fall five minutes ahead from one-minute FX data


# My first deep-learning implementation

## Implementation

```python
from google.colab import files

import pandas as pd
import io
dataM1 = pd.read_csv('/content/drive/My Drive/out_2018usdjpy.csv', sep = ",")

import random
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer
from sklearn import preprocessing
import time
import keras
# time the preprocessing of the CSV data
time1 = time.time()

dataM2 = dataM1.dropna()  # drop rows with missing values
data1 = dataM2.values     # convert to a NumPy array
print(data1.shape)

col = 11  # index of the target column; columns 1 to col-1 are the features

X = data1[col:, 1:col]  # feature matrix
y = data1[col:, col:]   # target data
print(X.shape)
hl = y  # alias for the target array
print(hl.shape)
time2= time.time()
time3 = time2-time1
print(time3)
sc=preprocessing.StandardScaler()
sc.fit(X)
X_std = sc.transform(X)  # standardize the features

X_train, X_test, y_train, y_test = train_test_split(X_std, hl.reshape(-1,), test_size=0.3, random_state=1)  # split into training and test sets

print(X_train.shape[0])
print(X_train.shape[1])

print(X_train.shape)
print(y_train.shape)

np.random.seed(123)
tf.set_random_seed(123)

time4 = time.time()

y_train_onehot = keras.utils.to_categorical(y_train)

model = keras.models.Sequential()

# The model.add(keras.layers.Dense(...)) calls were truncated in the original
# post; the hidden-layer width (300) is reconstructed from the input_dim of the
# later fragments, and the output width is taken from the one-hot target.
model.add(keras.layers.Dense(
    units=300,
    input_dim=X_train.shape[1],
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    activation="tanh"))

for _ in range(4):  # four more 300-unit tanh hidden layers
    model.add(keras.layers.Dense(
        units=300,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
        activation="tanh"))

model.add(keras.layers.Dense(
    units=y_train_onehot.shape[1],  # one output unit per class
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    activation="softmax"))

sgd_optimizer = keras.optimizers.SGD(lr=0.01,decay = 1e-7,momentum= .9)

model.compile(optimizer= sgd_optimizer,loss='categorical_crossentropy')

history = model.fit(X_train,
y_train_onehot,
batch_size = 64,
epochs = 0,  # note: with zero epochs, fit() performs no weight updates
verbose= 1,
validation_split = 0.1
)

y_train_pred = model.predict_classes(X_train,verbose =0)
print("first 3 predictions: ",y_train_pred[:3])

correct_preds = np.sum(y_train == y_train_pred,axis = 0)

time5 = time.time()

print(time5-time4)

train_acc = correct_preds / y_train.shape[0]

print("training accuracy: %.2f%%" % (train_acc * 100))

y_test_pred = model.predict_classes(X_test,verbose =0)

correct_preds2 = np.sum(y_test == y_test_pred,axis = 0)

test_acc = correct_preds2 / y_test.shape[0]

print("test accuracy: %.2f%%" % (test_acc * 100))
```
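The post never shows how the CSV's target column was built. Below is a minimal sketch of one plausible labeling scheme, with a hypothetical `close` column standing in for the unknown CSV layout: each minute is labeled 1 if the close five minutes (rows) later is higher, else 0.

```python
import pandas as pd

# Hypothetical one-minute close prices; the article's CSV layout is not shown,
# so the column name "close" and the values here are assumptions.
df = pd.DataFrame({"close": [110.00, 110.02, 110.01, 110.05, 110.03,
                             110.04, 110.06, 110.02, 110.01, 110.07]})

horizon = 5  # predict the close 5 minutes (rows) ahead
future = df["close"].shift(-horizon)

# 1 = price is higher after 5 minutes, 0 = lower or unchanged
df["label"] = (future > df["close"]).astype(int)

# Rows with no 5-minutes-ahead value fall off the end and are dropped
df = df[future.notna()]
print(df)
```

Feeding such labels alongside the feature columns reproduces the `X`/`y` split the script reads from column `col` onward.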

## Results

training accuracy: 50.15%
test accuracy: 50.09%
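Incidentally, the script fits `StandardScaler` on the full dataset before splitting, which leaks test-set statistics into preprocessing. A leak-free variant (shown here with synthetic stand-in data, since the real features aren't available) fits the scaler on the training split only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))       # stand-in for the FX feature matrix
y = rng.integers(0, 2, size=1000)     # stand-in up/down labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

# Fit the scaler on the training split only, then apply it to both splits,
# so no test-set statistics leak into preprocessing.
sc = StandardScaler().fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)

print(X_train_std.shape, X_test_std.shape)  # per-feature means are ~0 on the training split
```

With near-chance accuracy the leak hardly matters here, but it is the usual pattern to follow.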

## Thoughts

My impression: well, of course this doesn't make money. A random walk lives up to its name, you might say. As the results show, 50% on this classification task means exactly that.
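That intuition can be checked against a trivial baseline: with roughly balanced up/down labels, always predicting the majority class already scores about 50%, so 50.15% is indistinguishable from chance. A quick sketch with synthetic labels standing in for the FX targets:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10000)  # synthetic balanced up/down labels

majority = np.bincount(y).argmax()     # always predict the most common class
baseline_acc = np.mean(y == majority)  # accuracy of that constant predictor
print(f"majority-class baseline: {baseline_acc:.2%}")
```

Any model worth trading on would need to beat this baseline by a margin large enough to cover spreads and slippage.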
