
The principle behind W-Net (Stacked U-Net) (a quick summary of the arXiv paper)


Overview of the W-Net architecture

In particular, we design a new architecture which we call W-Net, and which ties two fully convolutional network (FCN) architectures (each similar to the UNet architecture) together into a single autoencoder.

-> So the architecture ties two FCNs together into a single autoencoder.

The first FCN encodes an input image, using fully convolutional layers, into a k-way soft segmentation.

-> The first FCN encodes the input image into a k-way soft segmentation.

The second FCN reverses this process, going from the segmentation layer back to a reconstructed image.

-> The second FCN decodes the soft segmentation back into a reconstructed image.

We jointly minimize both the reconstruction error of the autoencoder as well as a “soft” normalized cut loss function on the encoding layer

-> The reconstruction error of the autoencoder and the "soft" normalized cut loss on the encoding layer (presumably the output of the 1st FCN) are minimized jointly.

In order to achieve state-of-the-art results, we further appropriately postprocess this initial segmentation in two steps: we first apply a fully connected conditional random field (CRF) [20, 6] smoothing on the outputted segments, and we second apply the hierarchical merging method of [2] to obtain a final segmentation, shown in Figure 1.

-> Hierarchical segmentation is applied to the output of the 1st FCN, and the result is treated as the final segmentation handed to the decoder. (corrected on 2019-05-29)
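
To make the structure concrete, here is a minimal PyTorch-style sketch of my own, not the paper's exact architecture (the paper uses full U-Net modules with skip connections and separable convolutions); `TinyFCN`, the layer widths, and `k=20` are placeholder assumptions.

```python
# Minimal sketch of the W-Net idea: U_Enc maps the image to a K-way soft
# segmentation via a softmax, and U_Dec reconstructs the image from it.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Stand-in for one U-Net-like fully convolutional branch."""
    def __init__(self, in_ch, out_ch, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 1),
        )
    def forward(self, x):
        return self.net(x)

class WNet(nn.Module):
    def __init__(self, in_ch=3, k=20):
        super().__init__()
        self.u_enc = TinyFCN(in_ch, k)   # image -> K-way soft segmentation
        self.u_dec = TinyFCN(k, in_ch)   # soft segmentation -> reconstructed image

    def forward(self, x):
        seg = torch.softmax(self.u_enc(x), dim=1)  # per-pixel class probabilities
        recon = self.u_dec(seg)
        return seg, recon
```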

Techniques related to W-Net (Stacked U-Net)

①Unsupervised Segmentation - Felzenszwalb and Huttenlocher’s graph-based method, Shi and Malik’s Normalized Cuts, Comaniciu and Meer’s Mean Shift

②Deep CNNs in Semantic Segmentation - FCN

In a FCN, fully connected layers of standard convolutional neural networks (CNNs) are transformed as convolution layers with kernels that cover the entire input region.
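
My own minimal illustration of this "convolutionalization" (not code from any of the cited papers): a fully connected layer over a fixed 7x7x512 feature map can be rewritten as a 7x7 convolution with the same weights, after which larger inputs yield a spatial map of scores.

```python
# A fully connected head acting on a 7x7x512 feature map is equivalent to a 7x7
# convolution with the same weights, which then also accepts larger inputs.
import torch
import torch.nn as nn

fc = nn.Linear(512 * 7 * 7, 10)            # classic CNN classification head
conv = nn.Conv2d(512, 10, kernel_size=7)   # "convolutionalized" equivalent

with torch.no_grad():
    conv.weight.copy_(fc.weight.view(10, 512, 7, 7))
    conv.bias.copy_(fc.bias)

x = torch.randn(1, 512, 7, 7)
print(torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-5))  # True
x_big = torch.randn(1, 512, 15, 15)
print(conv(x_big).shape)  # torch.Size([1, 10, 9, 9]) -- a coarse score map
```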

③Encoder-decoders


The encoder Enc maps the input (e.g. an image patch) to a compact feature representation, and then the decoder Dec reproduces the input from its lower-dimensional representation. In this paper, we design an encoder such that the input is mapped to a dense pixelwise segmentation layer with same spatial size rather than a low-dimensional space. The decoder then performs a reconstruction from the dense prediction layer.

Unfamiliar terms to study

  • CRF energy

  • supervised loss

  • mean-field approximate inference for the CRF as a Recurrent Neural Network (CRFRNN)

Feature map:

The input and output data of a convolutional layer. The data fed into a layer is the input feature map, and the data the layer produces is the output feature map.

How the paper describes U-Net and W-Net

[27] presents a U-shaped architecture consisting of a contracting path to capture context and a symmetric expanding path that enables precise localization. In this paper, we modify and extend the architecture described in [27] to a W-shaped network such that it reconstructs the original input images and also predicts a segmentation map without any labeling information.
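
As a reminder of what that U-shape means in code, here is a deliberately tiny U-Net-style sketch of my own, far shallower than the architecture in [27], showing the contracting path, the expanding path, and a skip connection.

```python
# A minimal U-Net-style block: contracting path, expanding path, and a skip
# connection that concatenates encoder features for precise localization.
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=2, width=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(width, width * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(width * 2, width, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(width, out_ch, 1)

    def forward(self, x):
        e = self.enc(x)               # contracting path
        m = self.mid(self.down(e))
        u = self.up(m)                # expanding path
        u = torch.cat([u, e], dim=1)  # skip connection
        return self.head(self.dec(u))

print(MiniUNet()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```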

## Separable convolution

A depthwise separable convolution operation consists of a depthwise convolution and a pointwise convolution.

A depthwise convolution performs spatial convolutions independently over each channel, and then a pointwise convolution projects the channels produced by the depthwise convolution onto a new channel space.
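
A small PyTorch sketch of this, assuming nothing beyond the standard `nn.Conv2d` grouping mechanism (my example, not the paper's code):

```python
# Depthwise separable convolution: a per-channel spatial convolution
# (groups=in_ch) followed by a 1x1 pointwise convolution that mixes channels.
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # depthwise: each input channel is convolved with its own spatial kernel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        # pointwise: 1x1 convolution projects onto a new channel space
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 32, 32)
print(SeparableConv2d(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```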

Variants of W-Net: Stacked U-Net and Parallel U-Net? (added 2019-05-29)

There seem to be two kinds: a Stacked U-Net, which chains U-Net architectures together, and a Parallel U-Net, which applies U-Nets in parallel to a stereo image.

Both names are my own coinage; I summarize them below.

Stacked U-Net

This seems to mean "encoding the input data into a rough feature map and then generating a segmentation," repeated for a number of different feature maps. (A CNN as the 1st FCN?? The authors presumably treat this part as a pre-trained U-Net.)

Several pre-trained U-Nets are gathered, the (raw data, mask) pairs they generate are used as training data, and these are fed into a further encoder-decoder to determine the final segmentation.

Parallel U-Net

It seems that applying U-Nets to a stereo image is also sometimes called W-Net. I am still investigating this, and whether it is accurate is unclear.

Is modeling with an encoder-decoder network Bayesian?

Because my grasp of the frequentist and Bayesian viewpoints was shaky, I had been thinking of a single U-Net, which produces just one segmentation model, as frequentist, and of W-Net as Bayesian.

In other words, I thought the model serving as the ground truth might be the segmentation model produced by a conventional single U-Net.

(Looking back, VAEs are themselves grounded in Bayesian ideas, so this may not have been an apt way to put it.)

## Algorithm 1: Minibatch stochastic gradient descent training of W-Net

```
1: procedure W-NET(X; U_Enc, U_Dec)
2:   for number of training iterations do
3:     Sample a minibatch of new input images x
4:     Update U_Enc by minimizing J_soft-Ncut
5:       (only update U_Enc)
6:     Update the whole W-Net by minimizing J_reconstr
7:       (update both U_Enc and U_Dec)
8:   return U_Enc
```
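
A hedged PyTorch sketch of this alternating update scheme, reusing the hypothetical `WNet` class from the earlier sketch; `loader` and `soft_ncut_loss` are assumed helpers, and this is my reading of Algorithm 1, not the authors' code.

```python
# Alternating updates: first U_Enc alone on J_soft-Ncut, then the whole W-Net
# on J_reconstr. `loader` is any iterable of image batches; `soft_ncut_loss`
# is a helper implementing J_soft-Ncut (see the toy sketch later in this post).
import torch
import torch.nn.functional as F

model = WNet(in_ch=3, k=20)
opt_enc = torch.optim.SGD(model.u_enc.parameters(), lr=1e-3)  # updates U_Enc only
opt_all = torch.optim.SGD(model.parameters(), lr=1e-3)        # updates U_Enc and U_Dec

for x in loader:
    # Step 1: update U_Enc alone by minimizing J_soft-Ncut
    seg = torch.softmax(model.u_enc(x), dim=1)
    loss_ncut = soft_ncut_loss(x, seg)
    opt_enc.zero_grad()
    loss_ncut.backward()
    opt_enc.step()

    # Step 2: update the whole W-Net by minimizing J_reconstr
    seg, recon = model(x)
    loss_rec = F.mse_loss(recon, x, reduction="sum")  # ||X - U_Dec(U_Enc(X))||^2
    opt_all.zero_grad()
    loss_rec.backward()
    opt_all.step()
```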

## Algorithm 2: Postprocessing

```
1: procedure POSTPROCESSING(x; U_Enc, CRF, Pb)
2:   x = U_Enc(x)
3:     # get the hidden representation of x
4:   x = CRF(x)
5:     # fine-grained boundaries with a fully connected CRF
6:   x = Pb(x)
7:     # compute the probability of boundary only on the edges detected in x
8:   S = contour2ucm(x)
9:     # hierarchical segmentation
10:  return S
```
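
The paper's postprocessing relies on a fully connected CRF [20, 6] and the hierarchical merging (contour2ucm) of [2]. As one way the CRF step might look in Python, here is a sketch using the `pydensecrf` package; this is my assumption about a reasonable setup, not the authors' pipeline, and the Pb/contour2ucm steps are only indicated by a comment.

```python
# Approximate the CRF smoothing step of Algorithm 2 with pydensecrf (assumption).
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_smooth(image, probs, n_iters=10):
    """image: HxWx3 uint8 array, probs: KxHxW softmax output of U_Enc."""
    k, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, k)
    d.setUnaryEnergy(unary_from_softmax(probs))   # unary term Phi(u)
    d.addPairwiseGaussian(sxy=3, compat=3)        # smoothness part of Psi(u, v)
    d.addPairwiseBilateral(sxy=80, srgb=13,       # appearance part of Psi(u, v)
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = np.array(d.inference(n_iters)).reshape(k, h, w)
    return q.argmax(axis=0)                       # refined label map

# labels = crf_smooth(img, probs)
# A full reproduction would then run Pb + contour2ucm on these labels to obtain
# the hierarchical segmentation S.
```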

## Soft Normalized Cut Loss

$$Ncut_K(V) = \sum_{k=1}^{K} \frac{cut(A_k, V - A_k)}{assoc(A_k, V)} = \sum_{k=1}^{K} \frac{\sum_{u \in A_k,\, v \in V - A_k} w(u, v)}{\sum_{u \in A_k,\, t \in V} w(u, t)}$$

where $A_k$ is the set of pixels in segment $k$, $V$ is the set of all pixels, and $w$ measures the weight between two pixels.

However, since the argmax function is nondifferentiable, it is impossible to calculate the corresponding gradient during backpropagation. Instead, we define a soft version of the N cut loss which is differentiable so that we can update gradients during backpropagation:

-> (Presumably) because the parameters cannot be updated by gradient descent unless the loss is differentiable.

$$J_{\text{soft-Ncut}}(V, K) = \sum_{k=1}^{K} \frac{cut(A_k, V - A_k)}{assoc(A_k, V)}$$

$$= K - \sum_{k=1}^{K} \frac{assoc(A_k, A_k)}{assoc(A_k, V)}$$

$$= K - \sum_{k=1}^{K} \frac{\sum_{u \in V,\, v \in V} w(u, v)\, p(u = A_k)\, p(v = A_k)}{\sum_{u \in V,\, t \in V} w(u, t)\, p(u = A_k)}$$

$$= K - \sum_{k=1}^{K} \frac{\sum_{u \in V} p(u = A_k) \sum_{v \in V} w(u, v)\, p(v = A_k)}{\sum_{u \in V} p(u = A_k) \sum_{t \in V} w(u, t)}$$

where $p(u = A_k)$ measures the probability of node $u$ belonging to class $A_k$, which is directly computed by the encoder.
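
A toy, dense implementation of this soft N-cut loss (my own sketch; the paper restricts $w(u, v)$ to a radius $r$ and combines intensity and position terms, whereas this uses a dense brightness-only weight on a tiny image):

```python
# Toy-scale soft N-cut loss: p has shape (K, N) with per-pixel class probabilities.
import torch

def soft_ncut_loss(features, p, sigma=10.0):
    """features: (N, d) per-pixel features, p: (K, N) soft assignments."""
    k, n = p.shape
    # w(u, v) = exp(-||F(u) - F(v)||^2 / sigma^2)   (dense toy version)
    dist2 = torch.cdist(features, features).pow(2)
    w = torch.exp(-dist2 / sigma**2)
    d = w.sum(dim=1)                                  # sum_t w(u, t) for each pixel u
    assoc_kk = torch.einsum("kn,nm,km->k", p, w, p)   # sum_{u,v} w(u,v) p(u=A_k) p(v=A_k)
    assoc_kv = p @ d                                  # sum_u p(u=A_k) sum_t w(u,t)
    return k - (assoc_kk / assoc_kv).sum()

# Toy usage: a 4x4 gray image, K = 2 segments.
img = torch.rand(4, 4)
feats = img.reshape(-1, 1)                 # N=16 pixels, 1-d brightness feature
logits = torch.randn(2, 16, requires_grad=True)
p = torch.softmax(logits, dim=0)
loss = soft_ncut_loss(feats, p)
loss.backward()                            # differentiable, unlike the hard N-cut
```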

## Reconstruction Loss

$$J_{reconstr} = \left\| X - U_{Dec}(U_{Enc}(X; W_{Enc}); W_{Dec}) \right\|_2^2$$

## Postprocessing

$$E(X) = \sum_{u} \Phi(u) + \sum_{u, v} \Psi(u, v)$$

$$w_{ij} = e^{-\frac{\left\| F(i) - F(j) \right\|_2^2}{\sigma_I^2}}$$

References

Afterword

This is my first post on Qiita.

It is hard to draw the line between Qiita and a CMS, but I plan to post to Qiita articles about evaluating code and anything else I think is worth sharing.

I am sure there are plenty of misinterpretations; if you spot any, I would appreciate it if you could point them out.

(In the interest of rapid growth, I intend to keep posting articles even while they are still at the research stage.)
