# Overview

I looked into the standard Chainer workflow, using an autoencoder as the example.

# Imports

```
import numpy as np
from chainer import datasets, iterators
from chainer import optimizers
from chainer import Chain
from chainer import training
from chainer.training import extensions
import chainer.functions as F
import chainer.links as L
import chainer
import matplotlib.pyplot as plt
```

# Iterating over the training dataset

Use `get_mnist` from `datasets`. Each element is an `(image, label)` tuple; only the images are needed for an autoencoder.

```
train, test = datasets.get_mnist()
train = train[0:1000]
train = [i[0] for i in train]
```
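For reference, `get_mnist()` returns pairs of a flat 784-dimensional `float32` image and an integer label. A minimal numpy sketch of the extraction step above, with random data standing in for MNIST (not Chainer's actual loader):

```
import numpy as np

# Stand-in for datasets.get_mnist(): a list of (image, label) tuples,
# each image a flat 784-dimensional float32 vector in [0, 1).
rng = np.random.default_rng(0)
train = [(rng.random(784).astype(np.float32), int(label))
         for label in rng.integers(0, 10, size=1000)]

# Keep only the images; the labels are discarded.
train = [i[0] for i in train]

print(len(train))      # 1000
print(train[0].shape)  # (784,)
print(train[0].dtype)  # float32
```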

# Preprocessing minibatches

Use `TupleDataset` from `datasets`. Passing `train` twice makes each image its own reconstruction target.

```
train = datasets.TupleDataset(train, train)
train_iter = iterators.SerialIterator(train, 100)
test = test[0:25]
```
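`TupleDataset(train, train)` zips its argument sequences, so indexing yields `(input, target)` pairs with `target == input`. A minimal stand-in class sketching that behavior (not Chainer's implementation):

```
import numpy as np

# Minimal stand-in for chainer.datasets.TupleDataset:
# zips equal-length sequences into tuples on indexing.
class TupleDataset:
    def __init__(self, *datasets):
        self._datasets = datasets

    def __len__(self):
        return len(self._datasets[0])

    def __getitem__(self, i):
        return tuple(d[i] for d in self._datasets)

images = [np.random.rand(784).astype(np.float32) for _ in range(10)]
pairs = TupleDataset(images, images)

x, t = pairs[0]
assert np.array_equal(x, t)  # input and target are the same image
```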

# Forward/backward computation of the neural network

Use `Classifier`, but with `mean_squared_error` as the loss function. Accuracy is meaningless for reconstruction, so disable it.

```
class Autoencoder(Chain):
    def __init__(self):
        super(Autoencoder, self).__init__(
            encoder=L.Linear(784, 80),
            decoder=L.Linear(80, 784))

    def __call__(self, x, hidden=False):
        h = F.relu(self.encoder(x))
        if hidden:
            return h
        else:
            return F.relu(self.decoder(h))

model = L.Classifier(Autoencoder(), lossfun=F.mean_squared_error)
model.compute_accuracy = False
```
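The forward pass is just 784 → 80 → 784 with a ReLU after each linear layer. A numpy sketch of the shapes and the mean-squared-error loss (random weights, for illustration only, not trained):

```
import numpy as np

def relu(x):
    return np.maximum(x, 0)

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((80, 784)).astype(np.float32) * 0.01
b_enc = np.zeros(80, dtype=np.float32)
W_dec = rng.standard_normal((784, 80)).astype(np.float32) * 0.01
b_dec = np.zeros(784, dtype=np.float32)

x = rng.random((100, 784)).astype(np.float32)  # one minibatch
h = relu(x @ W_enc.T + b_enc)                  # hidden code: (100, 80)
y = relu(h @ W_dec.T + b_dec)                  # reconstruction: (100, 784)

loss = np.mean((y - x) ** 2)                   # mean squared error
print(h.shape, y.shape, loss)
```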

# Updating the parameters

```
optimizer = optimizers.Adam()
optimizer.setup(model)
updater = training.StandardUpdater(train_iter, optimizer, device=-1)
trainer = training.Trainer(updater, (80, 'epoch'), out="result")
```

# Logging intermediate results

Extensions must be registered before `trainer.run()` is called; otherwise nothing gets logged.

```
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(['epoch', 'main/loss']))

trainer.run()
```