Paper > Most 2005 (dynardo) > Approximation of complex nonlinear functions by means of neural networks

Posted at 2017-06-23

Approximation of complex nonlinear functions by means of neural networks
Thomas Most
presented at the 2nd Weimar Optimization and Stochastic Days 2005

Viewed from 2017, this paper is already 12 years old.
Note that new findings about the topics covered in this paper may have appeared in those 12 years.

For several problems, the paper performs function approximation with a multi-layer perceptron (also called a feed-forward back-propagation network); a minimal sketch of this kind of approximation follows below.
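
As a reminder of what such an approximation looks like in practice, here is a minimal NumPy sketch: one sigmoid hidden layer and a linear output layer, trained by plain gradient descent on a toy 1D function. The network size, learning rate, and target function are my own illustrative choices and not the configurations from the paper (which uses scaled conjugate gradient training).

```python
# Minimal sketch: approximate f(x) = x * sin(x) with a one-hidden-layer MLP
# (sigmoid hidden units, linear output layer), trained by plain gradient descent.
# All sizes and hyperparameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-3.0, 3.0, 40).reshape(-1, 1)   # training inputs, shape (m, 1)
y = x * np.sin(x)                                # training targets, shape (m, 1)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(1, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(20000):
    # forward pass: sigmoid hidden layer, linear output layer f(x) = x
    h = sigmoid(x @ W1 + b1)                     # (m, n_hidden)
    y_hat = h @ W2 + b2                          # (m, 1)
    err = y_hat - y

    # backward pass (chain rule), then a gradient descent step
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)            # derivative of the sigmoid
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final training MSE:", float((err ** 2).mean()))
```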

The following is an excerpt of the keywords that caught my attention.

  • 1. Introduction
    • Moving Least Squares: Lancaster and Salkauskas (1981)
    • ANN: artificial neural networks
    • A good overview: Hagan et al. (1996)
    • RSM (response surface method)
      • limited to problems of lower dimension
    • Lehký and Novák (2004)
      • material parameters of a smeared crack model for concrete cracking were identified
  • 2. Neural network approximation
    • Eq.1: output of a single neuron
      • a_i^j
      • m, i, j, w_{k,i}^j
    • sigmoid transfer function
    • linear output layer with f(x) = x
    • Demuth and Beale (2002): a complete list of different transfer functions
    • Three points that influence the approximation quality of the NN:
      • training
      • design of the network architecture
      • choice of appropriate training sample
    • Scaled Conjugate Gradient: Møller (1993)
    • Eq.2: number of training samples m should be ...
    • regularization
      • Bayesian training: MacKay (1992)
        • very slow for higher dimensions compared to Newton-based training approaches.
        • does not avoid over-fitting completely
    • early stopping (see the minimal sketch after this outline)
      • needs an additional data set, the control set
      • training stops when the error on the control set starts to increase while the training error still decreases
      • does not avoid over-fitting completely
    • Papadrakakis et al. (1996), Hurtado (2002)
      • training samples are generated by standard Monte Carlo simulation
    • Lehký and Novák (2004)
  • 3. Numerical examples
  • 3.1. Approximation of nonlinear limit state functions
    • nonlinear limit state function
      • in Katsuki and Frangopol (1994)
      • 2D function
      • defined by two linear and two quadratic functions
      • Eq.3: g1(x), g2(x), g3(x), g4(x)
      • x1, x2: Gaussian random variables
      • reliability indices
    • approximation of limit state g(x) = 0
    • inverse radius
      • zero value for an unbounded direction
      • largest values for the points with the largest probability
    • support points
      • Fig.2
      • weighted interpolation
      • MLS-G: MLS interpolation with exponential weight
      • D: scaled influence radius
    • failure probability
      • used to quantify the quality of the approximation (see the plain Monte Carlo sketch after this outline)
      • Fig.3
      • neural network approximations give very good results if at least 16 support points are used
      • Fig.4
      • Eq.4: a quadratic limit state function
      • Fig.5
        • For this function type an unbounded region exists, so the approximation works sufficiently well only when the inverse radius is used as the output quantity.
  • 3.2. Crack growth in a plain concrete beam
    • 3.2.1. Stochastic model and probabilistic analysis
      • Fig.6
        • analyzed deterministically in Carpinteri et al. (1986)
        • here, random material properties are assumed
      • multi-parameter random field
        • Most and Bucher (2005)
        • with lognormal distribution
      • stochastic analysis
        • 10000 plain Monte Carlo samples
        • crack growth algorithm
        • Fig.7
        • probability density function (PDF)
    • 3.2.2. Approximation of the maximum relative load
      • neural networks
        • trained with 50 LHS, 200 LHS, and 200 wide-spanned LHS samples (see the LHS sketch after this outline)
        • the wide-spanned LHS training is obtained ...
        • input
          • all 30 independent Gaussian random field variables
        • output
          • one output value
        • Only 2 neurons are used together with the 50 training samples, and 6 neurons for the 200 samples, in order to fulfill Eq.2.
      • Table 1
        • maximum error
        • mean error
        • mean error of the values below the 1% and above the 99% fractiles
          • i.e., the approximation error in the regions with very low probability
      • Fig.8
      • Fig.9
      • compared to a response surface approximation
        • using
          • a global polynomial
          • MLS interpolation
        • Table 1
        • errors are one order of magnitude larger than those of the neural network approximation
    • 3.2.3. Approximation of the random load displacement curves
      • complete nonlinear response
      • neural network
        • input
          • 30 random variables
        • output
          • the load values at ten fixed displacement values
        • 200 wide-spanned LHS samples
      • Fig.10
        • 10000 MCS samples
        • corresponding neural network approximation
      • Table 2
        • maximum and mean error
          • normalized
  • 3.3. Identification of the parameters of a complex interface material model for concrete
    • parameters of a material model
      • for cohesive interfaces
    • twelve parameters
    • Fig.11
    • neural network
      • input
        • eleven regularly spaced points on this curve
      • output
        • three parameters
          • tensile strength
          • Mode-I fracture energy
          • shape parameter for tensile softening
      • Fig.12
        • comparison with laboratory observations
          • Hassanzadeh (1990)
        • comparison with an optimization strategy
          • Carol et al. (1997)
      • Table 3
      • 150 uniformly distributed LHS samples with 4 hidden neurons
      • 1000 LHS samples with 29 hidden neurons
      • However, very good agreement with the experimental curves could not be achieved.
      • A much larger number of neurons, and correspondingly more training samples, might be necessary.
  • 4. Conclusions
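
Below are a few minimal sketches for points referenced in the outline above; they are my own illustrations of the general ideas and not code from the paper.

Early stopping (Sec. 2): hold out a control set, track its error during training, and stop (keeping the best weights so far) once that error starts to rise while the training error keeps decreasing. The model, data split, and patience value are illustrative assumptions.

```python
# Minimal early-stopping sketch: stop training when the error on a held-out
# control set starts to increase while the training error still decreases.
# Model, data split, learning rate and patience are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# noisy samples of sin(x), split into a training set and a control set
x = rng.uniform(-3.0, 3.0, size=60)
y = np.sin(x) + rng.normal(scale=0.15, size=x.shape)
x_tr, y_tr = x[:40], y[:40]
x_ct, y_ct = x[40:], y[40:]

def features(x, degree=9):
    # flexible model: degree-9 polynomial (inputs scaled to roughly [-1, 1])
    return np.vander(x / 3.0, degree + 1)

A_tr, A_ct = features(x_tr), features(x_ct)
w = np.zeros(A_tr.shape[1])

best_w, best_err, patience, worse = w.copy(), np.inf, 50, 0
for step in range(20000):
    err_tr = A_tr @ w - y_tr
    w -= 0.01 * (A_tr.T @ err_tr) / len(y_tr)    # gradient step on the training MSE

    err_ct = np.mean((A_ct @ w - y_ct) ** 2)     # error on the control set
    if err_ct < best_err:
        best_err, best_w, worse = err_ct, w.copy(), 0
    else:
        worse += 1
        if worse >= patience:                    # control error keeps rising: stop
            print(f"early stop at step {step}")
            break

w = best_w                                       # keep the weights with the best control error
print("best control MSE:", round(best_err, 4))
```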
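Failure probability (Sec. 3.1 / 3.2): a minimal sketch of estimating P_f = P(g(x) <= 0) with plain Monte Carlo simulation, which the paper uses to quantify the approximation quality. The limit state function below is a hypothetical quadratic example, not Eq.3 or Eq.4 from the paper; x1 and x2 are standard Gaussian random variables.

```python
# Minimal sketch: plain Monte Carlo estimate of a failure probability
# P_f = P(g(x) <= 0) and the corresponding reliability index.
# The limit state g below is hypothetical, NOT Eq.3 / Eq.4 from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def g(x1, x2):
    # hypothetical quadratic limit state; failure where g(x) <= 0
    return 3.0 - 0.2 * x1**2 - x2

n = 100_000
x1 = rng.standard_normal(n)          # standard Gaussian random variables
x2 = rng.standard_normal(n)

failed = g(x1, x2) <= 0.0
p_f = failed.mean()                  # Monte Carlo estimate of the failure probability
beta = -norm.ppf(p_f)                # corresponding reliability index
cov = np.sqrt((1.0 - p_f) / (p_f * n))   # coefficient of variation of the estimate
print(f"P_f ~ {p_f:.3e}, beta ~ {beta:.2f}, CoV ~ {cov:.1%}")
```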
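Latin Hypercube Sampling (Sec. 3.2.2): a minimal sketch of generating LHS training samples in standard Gaussian space. The paper trains on 50 / 200 / 200 wide-spanned LHS designs over 30 Gaussian random field variables; the code below shows only the basic stratified LHS idea, not the wide-spanned variant.

```python
# Minimal Latin Hypercube Sampling sketch: one sample per equal-probability
# stratum and dimension, mapped to standard Gaussian space.
# Sample count and dimension are illustrative (the paper uses 30 variables).
import numpy as np
from scipy.stats import norm

def lhs_gaussian(n_samples, n_dims, rng):
    u = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        # shuffle the strata independently per dimension and jitter within each stratum
        strata = rng.permutation(n_samples) + rng.uniform(size=n_samples)
        u[:, d] = strata / n_samples                 # uniform design in (0, 1)
    return norm.ppf(u)                               # map to standard Gaussian space

rng = np.random.default_rng(3)
samples = lhs_gaussian(200, 30, rng)                 # 200 samples of 30 Gaussian variables
print(samples.shape)                                 # -> (200, 30)
```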