RNN > RECURRENT NEURAL NETWORKS TUTORIAL, PART 1 – INTRODUCTION TO RNNS > Notes missing from the translated version

Posted at 2016-11-05

I came across an interesting article on Recurrent Neural Networks (RNNs):
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/

A Japanese translation is available here:
http://qiita.com/kiminaka/items/87afd4a433dc655d8cfd

Reading through the translation, I noticed that several passages are missing; it does not appear to be a complete translation.

Specifically, I could not find a corresponding translation for the following passage from the WHAT ARE RNNS? section.

I have highlighted the parts I found personally interesting.

There are a few things to note here:

You can think of the hidden state $s_t$ as the memory of the network. $s_t$ captures information about what happened in all the previous time steps. The output at step $o_t$ is calculated solely based on the memory at time $t$. As briefly mentioned above, it’s a bit more complicated in practice because $s_t$ typically can’t capture information from too many time steps ago.

Unlike a traditional deep neural network, which uses different parameters at each layer, a RNN shares the same parameters ($U$, $V$, $W$ above) across all steps. This reflects the fact that we are performing the same task at each step, just with different inputs. This greatly reduces the total number of parameters we need to learn.

The above diagram has outputs at each time step, but depending on the task this may not be necessary. For example, when predicting the sentiment of a sentence we may only care about the final output, not the sentiment after each word. Similarly, we may not need inputs at each time step. The main feature of an RNN is its hidden state, which captures some information about a sequence.
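The parameter sharing and hidden-state mechanics described above can be sketched in a few lines of numpy. This is a minimal illustration, not code from the tutorial: the dimensions and weight initialization are hypothetical, and it follows the tutorial's formulation $s_t = \tanh(Ux_t + Ws_{t-1})$, $o_t = \mathrm{softmax}(Vs_t)$.

```python
import numpy as np

# Hypothetical sizes for illustration only.
vocab_size, hidden_size = 8, 4
rng = np.random.default_rng(0)

# The same U, V, W are shared across every time step (unlike a
# traditional deep network, which has separate weights per layer).
U = rng.normal(scale=0.1, size=(hidden_size, vocab_size))
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(inputs):
    """inputs: list of word indices; returns per-step outputs and states."""
    s = np.zeros(hidden_size)          # initial hidden state (the "memory")
    outputs, states = [], []
    for t in inputs:
        x = np.zeros(vocab_size)
        x[t] = 1.0                     # one-hot encoding of the input word
        s = np.tanh(U @ x + W @ s)     # memory: current input + previous state
        o = softmax(V @ s)             # output depends only on the memory at t
        outputs.append(o)
        states.append(s)
    return outputs, states

outs, states = forward([1, 3, 5])
```

For a sentiment-classification task, one would simply read off only the final element of `outs` instead of the whole list, exactly as the note above describes.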

Aside
Qiita now lets you apply color inside math expressions as well. I'm not sure whether this has always been possible, but it could be handy.
