
[Approximation of complex nonlinear functions by means of neural networks](https://www.dynardo.de/fileadmin/Material_Dynardo/WOST/Paper/wost2.0/NeuralNetworks.pdf)
Thomas Most
presented at the 2nd Weimar Optimization and Stochastic Days 2005
As of 2017, this paper is already 12 years old; note that newer findings may have superseded some of its content in the meantime.
It applies a multilayer perceptron (also called a feedforward backpropagation network) to function approximation for several problems.
Below are excerpts of keywords that caught my attention.
 1. Introduction
 Moving Least Squares: Lancaster and Salkauskas (1981)
 ANN: artificial neural networks
 A good overview: Hagan et al. (1996)
 RSM
 limited to problems of lower dimension
 Lehký and Novák (2004)
 material parameters of a smeared crack model for concrete cracking were identified
 2. Neural network approximation
 Eq.1: output of a single neuron
 a_i^j
 m, i, j, w_k,i^j
 sigmoid transfer function
 linear output layer with f(x) = x
 Demuth and Beale (2002): a complete list of different transfer functions
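The single-neuron output of Eq.1 (a weighted sum of the previous layer's outputs passed through a sigmoid transfer function, with a linear output layer f(x) = x) can be sketched as follows; the function and variable names are illustrative, not the paper's notation:

```python
import numpy as np

def sigmoid(x):
    # sigmoid transfer function used in the hidden layers
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(a_prev, w, b, transfer=sigmoid):
    """Output of a single neuron: weighted sum of the previous layer's
    outputs plus bias, passed through a transfer function (cf. Eq.1)."""
    return transfer(np.dot(w, a_prev) + b)

def mlp_forward(x, hidden_W, hidden_b, out_W, out_b):
    # one hidden layer with sigmoid transfer, linear output layer f(x) = x
    a = sigmoid(hidden_W @ x + hidden_b)
    return out_W @ a + out_b
```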
 Three points: influence on the approximation quality of NN
 training
 design of the network architecture
 choice of appropriate training sample
 Scaled Conjugate Gradient: Møller (1993)
 Eq.2: number of training samples m should be ... [link](http://qiita.com/7of9/items/c1501c2c3fb58df0f445)
 regularization
 Bayesian training: MacKay (1992)
 very slow for higher dimensions compared to Newton-based training approaches.
 does not avoid overfitting completely
 early stopping
 needs additional data set, the control set
 stops if the control set starts to increase, while the training error decreases
 does not avoid overfitting completely
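A minimal sketch of the early-stopping idea described above, assuming hypothetical `step` and `control_error` callbacks for one training update and the error on the held-out control set:

```python
import numpy as np

def train_with_early_stopping(step, control_error, max_epochs=1000, patience=10):
    """Generic early-stopping loop: stop when the error on a separate
    control (validation) set starts to increase, even while the
    training error still decreases."""
    best = np.inf
    bad_epochs = 0
    for epoch in range(max_epochs):
        step()                    # one training update
        e_ctrl = control_error()  # error on the held-out control set
        if e_ctrl < best:
            best, bad_epochs = e_ctrl, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # control error keeps rising -> stop
                break
    return epoch, best
```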
 Papadrakakis et al. (1996), Hurtado (2002)
 training samples are generated by standard Monte Carlo simulation
 Lehký and Novák (2004)
 **LHS: Latin Hypercube Sampling**
 [link @ lucille development diary](http://lucille.sourceforge.net/blog/archives/000188.html)
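A minimal Latin Hypercube Sampling sketch on the unit hypercube (a generic LHS implementation, not the paper's sampling code):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Minimal Latin Hypercube Sampling on [0, 1)^n_dims:
    each dimension is split into n_samples equal strata, exactly one
    point is drawn per stratum, and strata are shuffled per dimension."""
    rng = np.random.default_rng(rng)
    # one random point inside each of the n_samples strata
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # shuffle the strata independently in every dimension
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    return u
```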
 3. Numerical examples
 3.1. Approximation of nonlinear limit state functions
  nonlinear limit state function
  in Katsuki and Frangopol (1994)
  2D function
  defined by two linear and two quadratic functions
  Eq.3: g1(x), g2(x), g3(x), g4(x)
  x1, x2: Gaussian random variables
  reliability indices
  approximation of limit state g(x) = 0
  **inverse radius**
  zero value for an unbounded direction
  largest values for the points with the largest probability
  support points
  Fig.2
  weighted interpolation
  MLSG: MLS interpolation with exponential weight
  D: scaled influence radius
  failure probability
  to quantify the quality of the approximation
  Fig.3
  neural network approximations give very good results if at least 16 support points are used
  Fig.4
  Eq.4: a quadratic limit state function
  Fig.5
  For this function type an unbounded region exists, so the approximation works sufficiently only when using the inverse radius as output quantity.
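The failure probability used to quantify approximation quality can be estimated by plain Monte Carlo on either the true limit state function or its surrogate (e.g. a trained neural network), and the two estimates compared. The quadratic `g_demo` below is purely hypothetical, since these notes do not reproduce the coefficients of Eq.3/Eq.4:

```python
import numpy as np

def failure_probability(g, n_samples=100_000, dim=2, rng=0):
    """Estimate P_f = P[g(x) <= 0] for standard Gaussian inputs by
    plain Monte Carlo simulation; g may be the true limit state
    function or a surrogate."""
    x = np.random.default_rng(rng).standard_normal((n_samples, dim))
    return np.mean(g(x) <= 0.0)

# hypothetical quadratic limit state (not the paper's Eq.3/Eq.4)
g_demo = lambda x: 3.0 - x[:, 1] + 0.1 * x[:, 0] ** 2
```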
 3.2. Crack growth in a plain concrete beam
  3.2.1. Stochastic model and probabilistic analysis
  Fig.6
  analyzed deterministically
  Carpinteri et al. (1986)
  assuming random material properties
  multiparameter random field
  Most and Bucher (2005)
  with lognormal distribution
  stochastic analysis
  10000 plain Monte Carlo samples
  crack growth algorithm
  Fig.7
  probability density function (PDF)
  3.2.2. Approximation of the maximum relative load
  neural networks
  trained with 50, 200, and 200 wide-spanned LHS samples
  the wide-spanned LHS training is obtained ...
  input
  all 30 independent Gaussian random field variables
  output
  one output value
  Only 2 neurons are used together with the 50 training samples and 6 neurons for the 200 samples in order to fulfill Eq.2.
  Table 1
  maximum error
  mean error
  less than 1% and larger than 99%
  approximation error for the regions with very low probability
  Fig.8
  Fig.9
  compared to a response surface approximation
  using
  a global polynomial
  MLS interpolation
  Table 1
  one order of magnitude larger than those of the neural network approximation
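The maximum and mean errors reported in Table 1 can be computed along these lines; normalizing by the reference mean is an assumption here, as these notes do not record the paper's exact scaling:

```python
import numpy as np

def approximation_errors(y_ref, y_approx):
    """Maximum and mean error of a surrogate's predictions against
    reference Monte Carlo results, normalized by the reference mean
    (an assumed scaling, cf. Table 1)."""
    err = np.abs(np.asarray(y_approx) - np.asarray(y_ref))
    scale = np.abs(np.mean(y_ref))
    return err.max() / scale, err.mean() / scale
```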
  3.2.3. Approximation of the random load displacement curves
  complete nonlinear response
  neural network
  input
  30 random variables
  output
  the load values at ten fixed displacement values
  200 wide-spanned LHS samples
  Fig.10
  10000 MCS samples
  corresponding neural network approximation
  Table 2
  maximum and mean error
  normalized
 3.3. Identification of the parameters of a complex interface material model for concrete
  parameters of a material model
  for cohesive interfaces
  twelve parameters
  Fig.11
  neural network
  input
  eleven regular points on this curve
  output
  three parameters
  tensile strength
  Mode-I fracture energy
  shape parameter for tensile softening
  Fig.12
  comparison with laboratory observations
  Hassanzadeh (1990)
  comparison with an optimization strategy
  Carol et al. (1997)
  Table 3
  150 uniformly distributed LHS samples with 4 hidden neurons
  1000 LHS samples with 29 hidden neurons
  However, very good agreement with the experimental curves could not be achieved.
  A much larger number of neurons and correspondingly more training samples might be necessary.
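The inverse identification workflow of Section 3.3 (sample material parameters, simulate the response curve at eleven points, learn the map from curve back to parameters) can be sketched as below; a linear least-squares map stands in for the neural network, and `simulate_curve` is a hypothetical forward model:

```python
import numpy as np

def identify_parameters(simulate_curve, samples, measured_curve):
    """Sketch of inverse identification: simulate the 11-point response
    curve for each parameter sample, fit an affine map curve -> parameters
    by least squares (standing in for the neural network), and evaluate
    the map at the measured curve."""
    X = np.array([simulate_curve(p) for p in samples])  # (n, 11) curves
    X1 = np.hstack([X, np.ones((len(X), 1))])           # affine features
    coef, *_ = np.linalg.lstsq(X1, np.asarray(samples), rcond=None)
    return np.append(measured_curve, 1.0) @ coef        # identified params
```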
 4. Conclusions