Research Article  |  Open Access  |  30 Aug 2023

Structural reliability analysis with epistemic and aleatory uncertainties via AK-MCS with a new learning function

Dis Prev Res 2023;2:15.
10.20517/dpr.2023.18 |  © The Author(s) 2023.

Abstract

Aim

Practical engineering usually involves two sources of uncertainty, namely epistemic and aleatory uncertainties. It is therefore becoming increasingly crucial to incorporate both sources of uncertainty into structural reliability analysis in order to ensure the safety of structures.

Methods

In this paper, a two-level Active-learning Kriging (AK) meta-model approach is put forward to address the challenges of imprecise structural reliability analysis, where the epistemic and aleatory uncertainties are characterized by a parameterized probability-box (p-box) model. At the inner loop, a new learning function called the Relative Entropy Function (REF) is proposed to enhance the active learning process. The REF selects informative points efficiently, accelerating the overall AK with Monte Carlo Simulation (AK-MCS) procedure. In that regard, the proposed AK-MCS with REF estimates failure probabilities with high accuracy in a precise probability sense. At the outer loop, another Kriging meta-model is established to relate the distribution parameters within the p-boxes to the conditional failure probabilities. This outer-loop model allows the failure probability bounds to be estimated efficiently via Efficient Global Optimization (EGO).

Results

The efficacy of the proposed method is verified through four numerical examples, which include a finite-element model. A pertinent double-loop MCS is employed to obtain comparative results. Furthermore, the proposed method is applied to structural progressive collapse analysis, serving as a guide for robustness-based design.

Conclusion

The computational results demonstrate that the proposed method is effective in dealing with structural reliability problems involving double sources of uncertainties.

Keywords

Probability-boxes, Kriging meta-model, Active learning, Relative entropy function, Imprecise structural reliability analysis, Structural progressive collapse

INTRODUCTION

Uncertainty is a common occurrence in various aspects of practical engineering, including structural design, manufacturing, operation, and maintenance. While individual uncertainties may have minimal impacts on the structural response, the combination of multiple uncertainties can lead to significant deviations in structural behavior[1]. As a result, it is crucial to perform structural reliability analysis, particularly for critical structures such as nuclear power plants, hospitals, and monuments, where structural failure poses significant safety risks and severe consequences.

In practical engineering, uncertainties can generally be classified into two categories: aleatory and epistemic uncertainties[2-4]. On the one hand, aleatory uncertainty, also known as probability uncertainty, is inherent in nature and beyond our control; it cannot be reduced or eliminated. On the other hand, epistemic uncertainty, also referred to as subjective uncertainty, arises from insufficient knowledge of variables and a lack of comprehensive experimental data. As knowledge and information increase, epistemic uncertainty tends to decrease[5, 6]. In practical engineering problems, it is crucial to recognize that our knowledge is inherently imperfect. As a result, accurately assessing the failure probability of a structure becomes challenging due to the presence of epistemic uncertainty. This means that even the reliability index, which is used to measure the safety and performance of a structure, is itself uncertain.

Traditional reliability analysis methods may not be suitable for handling the problem at hand because they are designed for precise probability problems that assume complete probabilistic information of variables, without accounting for epistemic uncertainty. To address this issue, non-probabilistic models and imprecise probability models have emerged as potential solutions. Non-probabilistic models, such as the interval model[7, 8], convex model[9-11], and fuzzy set theory[12], have been proposed. However, these models may not effectively distinguish between epistemic and aleatory uncertainties, which can result in sub-optimal decision-making. Imprecise probability models, on the other hand, have gained attention as a more effective way to handle uncertainties. These models include probability-boxes (p-boxes)[3, 13, 14], Dempster-Shafer evidence theory[15-17], interval probabilities[18], and fuzzy probabilities[19]. By utilizing imprecise probability models, it is possible to more accurately capture and express the uncertainties associated with the variables. However, using imprecise probability models comes with a significant challenge: computational cost. The large number of Limit State Function (LSF) evaluations, especially for time-consuming finite element analyses, can increase the computational effort significantly. Some other methods, such as nested Monte Carlo Simulation (MCS)[20], random sets[21], and advanced line sampling[22], have been developed to address imprecise problems, but their computational efficiency remains unsatisfactory. To overcome the challenge of computational complexity, active learning-based meta-models, also known as surrogate models, have been introduced. These surrogate models aim to approximate the LSF with reduced computational effort while maintaining an acceptable level of accuracy. By using surrogate models, the computational burden associated with imprecise probability models can be significantly reduced, making them more manageable and efficient in practice.

In recent decades, significant strides have been made in the development and widespread application of various meta-models aimed at enhancing the precision of structural reliability analysis. Notably, the Kriging meta-model [also known as the Gaussian Process Regression (GPR) model] has garnered substantial attention and utilization in this field[23-25]. Other prominent meta-models include response surfaces[26], support vector machines[27, 28], neural networks[29-31], and high-dimensional model representations[32]. Among these, the Kriging meta-model stands out due to its exceptional qualities in precise interpolation and local uncertainty quantification. As a result, the active Kriging meta-model technology has found applications in addressing imprecise probability problems. Noteworthy contributions from researchers involve the fusion of Active-learning Kriging (AK) meta-models with advanced sampling techniques to evaluate failure probabilities[33-38]. Nevertheless, efficiently estimating failure probability bounds for imprecise reliability problems remains a challenging endeavor, particularly when employing the AK model. This paper addresses this challenge by employing parametric p-boxes[14] to accurately characterize the imprecise random variables involved.

This paper introduces a novel methodology to address imprecise reliability problems employing parametric p-boxes. The approach introduces an efficient learning function termed the "Relative Entropy Function (REF)" to enhance the AK meta-model. By optimizing the point selection strategy, the REF significantly accelerates the speed of active learning. The organizational structure of this paper is outlined as follows: In Section 2, the problem formulation is meticulously presented. Moving on to Section 3, a succinct review of the Kriging meta-model is offered, alongside its fusion with the REF, giving rise to an AK meta-model with MCS (AK-MCS) in the precise probability sense. Furthermore, the paper applies the devised AK-MCS with REF to tackle imprecise probability problems, facilitating the estimation of failure probability bounds. Section 4 demonstrates the effectiveness of the proposed method through the exposition of four numerical examples. To delve deeper into the applicability, Section 5 explores a practical implementation of the proposed method. Some concluding remarks are contained in the final section.

PROBLEM STATEMENT

In structural reliability analysis, the LSF of a stochastic system mapping inputs to outputs is typically expressed as:

$$ \begin{equation} Z = G\left( {\bf{X}} \right) \end{equation} $$

where $$ Z $$ signifies the output, while $$ {\bf{X}} = \left[ X_1, X_2, \ldots, X_n \right] $$ forms a vector encompassing $$ n $$ input random variables. Each random variable $$ X_i $$, where $$ i = 1, 2, \ldots, n $$, can generally be characterized by its Probability Density Function (PDF) $$ f_{X_i}(x_i) $$ or Cumulative Distribution Function (CDF) $$ F_{X_i}(x_i) $$. In practice, a combination of epistemic and aleatory uncertainties often comes into play. Parametric p-boxes can effectively capture epistemic uncertainties, while exact PDFs/CDFs are employed to delineate aleatory uncertainties.

In parametric p-boxes, the PDF or CDF of $$ {X_i} $$ can be expressed by distribution function families with interval variables. The joint PDF of $$ \bf{X} $$ can be defined as $$ {f_{\bf{X}}}\left( {\bf{x}} \right) = {f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\rm \mathit{\boldsymbol{\theta}}}} \right) $$, where $$ {\mathit{\boldsymbol{\theta}}} = \left[ {{\theta _1},{\theta _2}, \ldots ,{\theta _m}} \right] $$ is the distribution parameter vector. For simplicity, the interval model is often used to describe the uncertainty of $$ {\mathit{\boldsymbol{\theta}}} $$, i.e., $$ {\mathit{\boldsymbol{\theta}}} \in \left[ {\underline {\mathit{\boldsymbol{\theta}}} ,\overline {\mathit{\boldsymbol{\theta}}} } \right] $$, where $$ \underline {\mathit{\boldsymbol{\theta}}} = \left[ {{{\underline {\mathit{\boldsymbol{\theta}}} }_1},{{\underline {\mathit{\boldsymbol{\theta}}} }_2}, \ldots ,{{\underline {\mathit{\boldsymbol{\theta}}} }_m}} \right] $$ and $$ \overline {\mathit{\boldsymbol{\theta}}} = \left[ {{{\overline {\mathit{\boldsymbol{\theta}}} }_1},{{\overline {\mathit{\boldsymbol{\theta}}} }_2}, \ldots ,{{\overline {\mathit{\boldsymbol{\theta}}} }_m}} \right] $$ are the lower and upper bounds, respectively. For convenience, the interval distribution parameters are assumed to be mutually independent, so that $$ \left[ {{{\underline {\mathit{\boldsymbol{\theta}}} }_1},{{\overline {\mathit{\boldsymbol{\theta}}} }_1}} \right] \times \left[ {{{\underline {\mathit{\boldsymbol{\theta}}} }_2},{{\overline {\mathit{\boldsymbol{\theta}}} }_2}} \right] \times \cdots \times \left[ {{{\underline {\mathit{\boldsymbol{\theta}}} }_m},{{\overline {\mathit{\boldsymbol{\theta}}} }_m}} \right] $$ forms a hyper-rectangle. For example, if $$ {X} \sim N\left( {\left[ {0,1} \right],\left[ {1,{{1.5}^2}} \right]} \right) $$, the corresponding families of PDFs and CDFs are shown in Figure 1.


Figure 1. PDFs and CDFs of the parametric p-boxes.
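To make the two-level sampling structure concrete, the following MATLAB sketch (variable names and sample sizes are illustrative, not from the paper) draws realizations from the p-box above by first picking a distribution parameter vector inside its hyper-rectangle and then sampling conditional on it.

```matlab
% Two-level sampling from the parametric p-box X ~ N([0,1], [1, 1.5^2]):
% the outer level fixes distribution parameters theta = (mu, sigma^2),
% the inner level performs ordinary aleatory sampling of X given theta.
mu_int  = [0, 1];        % interval of the mean
var_int = [1, 1.5^2];    % interval of the variance

mu    = mu_int(1) + diff(mu_int) * rand();       % one theta in the box
sigma = sqrt(var_int(1) + diff(var_int) * rand());

x = normrnd(mu, sigma, [1e4, 1]);  % realizations of X conditional on theta
```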

In the precise reliability sense, the condition $$ Z \le 0 $$ means the structural failure state, and the corresponding failure probability $$ p_f $$ is defined as:

$$ \begin{equation} {p_f} = \Pr \left[ {Z \le 0} \right] = \Pr \left[ {G\left( {{\bf{X}}} \right) \le 0} \right] = \int_\Omega {{f_{{\bf{X}}}}\left( {{\bf{x}}} \right) \, \mathrm{d} {\bf{x}}} \end{equation} $$

where $$ \Omega $$ represents the failure domain of the structure. However, in the context of parametric p-boxes, unlike precise reliability analysis, the joint PDF $$ {f_{{\bf{X}}}}\left( {{\bf{x}}} \right) $$ cannot be precisely determined.

In this case, the range of failure probability $$ {p_f} \in \left[ \underline{p}_f, \overline{p}_f \right] $$, where the bounds are defined as follows:

$$ \begin{equation} {\underline p _f} = \mathop {\min }\limits_{{f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\mathit{\boldsymbol{\theta}}}} \right)} \int_\Omega {{f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\mathit{\boldsymbol{\theta}}}} \right){\rm d}{\bf{x}}} ,\; {\overline p _f} = \mathop {\max }\limits_{{f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\mathit{\boldsymbol{\theta}}}} \right)} \int_\Omega {{f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\mathit{\boldsymbol{\theta}}}} \right){\rm d}{\bf{x}}} \end{equation} $$

where both $$ \mathop {\min }\limits_{{f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\mathit{\boldsymbol{\theta}}}} \right)} $$ and $$ \mathop {\max }\limits_{{f_{\bf{X}}}\left( {\left. {\bf{x}} \right|{\mathit{\boldsymbol{\theta}}}} \right)} $$ indicate that optimization is carried out over all distribution parameters $$ \boldsymbol{\theta} $$ to identify the minimum and maximum values of failure probabilities. The central objective of this paper is to assess the failure probability bounds within the framework of parametric p-boxes. To this end, an adaptive AK meta-model, coupled with a novel learning function, is proposed for achieving this goal.

IMPRECISE STRUCTURAL RELIABILITY ANALYSIS WITH RELATIVE ENTROPY FUNCTION

In this section, the classical Kriging meta-model is briefly revisited. Subsequently, a new learning function named the "Relative Entropy Function (REF)" is introduced. This REF is then combined with the Kriging meta-model to establish a novel active-learning algorithm termed AK-MCS-REF. This algorithm is further extended to deal with problems of imprecise structural reliability analysis[14], where both epistemic and aleatory uncertainties are considered. The objective of this extension is to effectively determine the bounds of failure probability.

Kriging meta-model

The Kriging meta-model, often categorized as a GPR, serves as a regression algorithm designed for spatial modeling and prediction (interpolation) of stochastic processes or random fields, relying on covariance functions[39]. As outlined in[40], the Gaussian process is frequently applied to achieve Kriging predictions. This involves the assumption of a relationship between the real response of the system and the input variables, characterized by:

$$ \begin{equation} g\left( {\bf{x}} \right) = {\bf Y}\left( {{\bf{x}},\mathit{\boldsymbol{\beta}} } \right) + {\mathit{\boldsymbol{\varepsilon}}} \end{equation} $$

where the deterministic function $$ {\bf Y}\left( {{\bf{x}},\mathit{\boldsymbol{\beta}} } \right) $$ serves as an estimate of the model response's mean, and its general regression model is typically expressed as $$ {\bf Y}\left( {{\bf{x}},\mathit{\boldsymbol{\beta}}} \right) = {\bf{y}}{\left( {\bf{x}} \right)^T}{\mathit{\boldsymbol{\beta}}} $$. In this formulation, $$ {\bf{y}}{\left( {\bf{x}} \right)^T} = \left\{ {{y_1}\left( {\bf{x}} \right),{y_2}\left( {\bf{x}} \right), \ldots ,{y_k}\left( {\bf{x}} \right)} \right\} $$ represents the regression function, while $$ {{\mathit{\boldsymbol{\beta}}}^T} = \left\{ {{\beta _1},{\beta _2}, \ldots ,{\beta _k}} \right\} $$ corresponds to the associated regression coefficients. The residual or noise, denoted as $$ {\mathit{\boldsymbol{\varepsilon}}} $$, is assumed to adhere to a zero-mean normal distribution with independent and identical distribution, given by $$ f\left( {\mathit{\boldsymbol{\varepsilon}}} \right) = N\left( {{\mathit{\boldsymbol{\varepsilon}}}\left| {0,\sigma _{\mathit{\boldsymbol{\varepsilon}}}^2R\left( {{\bf{x}},{\bf{w}}} \right)} \right.} \right) $$. Here, $$ {\sigma _{\mathit{\boldsymbol{\varepsilon}}}^2} $$ signifies the variance of the Gaussian process, and $$ {R\left( {{\bf{x}},{\bf{w}}} \right)} $$ represents the correlation function between points $$ \bf{x} $$ and $$ \bf{w} $$ within the spatial domain. Besides, the Squared Exponential Kernel function, which incorporates distinct length scales for each predictor[41, 42], is chosen to model $$ {R\left( {{\bf{x}},{\bf{w}}} \right)} $$ in this paper.

The Kriging meta-model can be conceptualized as a process in which a Gaussian prior model is combined with observations to derive a posterior model. Given a training set $$ { \bf{T}} = \left\{ { {\bf{X}},{ \bf{Z}}} \right\} $$, where $$ {\bf{X}} = {\left\{ {{{\bf{x}}_1},{{\bf{x}}_2}, \ldots ,{{\bf{x}}_p}} \right\}^T} $$ represents the realizations of input variables, and $$ {{\bf{Z}}^T} = \left\{ {{z_1},{z_2}, \ldots ,{z_p}} \right\} $$ corresponds to the associated output values of the random system. The parameters $$ \beta $$ and $$ {\sigma _{\mathit{\boldsymbol{\varepsilon}}}^2} $$ are then estimated using the approach described in[43]:

$$ \begin{equation} \begin{array}{l} \beta = {\left( {{{\bf{1}}^T}{{\bf{R}}^{ - 1}}{\bf{1}}} \right)^{ - 1}}{{\bf{1}}^T}{{\bf{R}}^{ - 1}}{\bf{Z}}\\ \sigma _{\varepsilon}^2 = \frac{1}{p}{\left( {{\bf{Z}} - \beta {\bf{1}}} \right)^T}{{\bf{R}}^{ - 1}}\left( {{\bf{Z}} - \beta {\bf{1}}} \right) \end{array} \end{equation} $$

where $$ {{\bf{R}}_{ij}} = R\left( {{{\bf{x}}_i},{{\bf{x}}_j}} \right) $$ is the correlation matrix between the initial sample points in the training set and $$ \bf{1} $$ is a vector of ones of length $$ p $$. Obviously, the values of $$ \beta $$ and $$ \sigma _{\varepsilon}^2 $$ depend on the correlation parameters of the correlation function $$ \bf{R} $$. Cross-validation or maximum likelihood estimation can be applied when these correlation parameters are unknown[44].

Consider a collection of unobserved points denoted as $$ {{\bf{x}}^*} $$, where the associated system output value is $$ \hat g \left( {{{\bf{x}}^*}} \right) $$. It is noteworthy that $$ \hat g \left( {{{\bf{x}}^*}} \right) $$ adheres to a Gaussian distribution characterized by its mean value and variance:

$$ \begin{equation} \begin{array}{l} \hat g\left( {{{\bf{x}}^ * }} \right) = \beta + {\bf{r}}{\left( {{{\bf{x}}^ * }} \right)^T}{{\bf{R}}^{ - 1}}\left( {{\bf{Z}} - \beta {\bf{1}}} \right)\\ \sigma _{\hat g}^2\left( {{{\bf{x}}^ * }} \right) = \sigma _{\varepsilon}^2\left( {1 + u{{\left( {{{\bf{x}}^ * }} \right)}^T}{{\left( {{{\bf{1}}^T}{{\bf{R}}^{ - 1}}{\bf{1}}} \right)}^{ - 1}}u\left( {{{\bf{x}}^ * }} \right) - {\bf{r}}{{\left( {{{\bf{x}}^ * }} \right)}^T}{{\bf{R}}^{ - 1}}{\bf{r}}\left( {{{\bf{x}}^ * }} \right)} \right) \end{array} \end{equation} $$

where $$ u\left( {{{\bf{x}}^ * }} \right) = {{\bf{1}}^T}{R^{ - 1}}{\bf{r}}\left( {{{\bf{x}}^ * }} \right) - 1 $$ and $$ {\bf{r}}\left( {{{\bf{x}}^ * }} \right) = {\left\{ {R\left( {{{\bf{x}}^ * },{{\bf{x}}_i}} \right)} \right\}_{i = 1,2, \ldots ,p}} $$. In summary, the Kriging meta-model stands as an exact interpolation model. For an experimental design point $$ \bf{x} $$, the attributes $$ \hat g\left( {\bf{x}} \right) = g\left( {\bf{x}} \right) $$ and $$ \sigma _{\hat g}^2\left( {\bf{x}} \right) = 0 $$ hold true, a depiction of which can be observed in Figure 2. The distinctive feature of the Kriging meta-model, namely its ability to quantify local uncertainty, has led to its widespread adoption within the realm of active learning in recent years.


Figure 2. Kriging meta-model prediction.
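The paper constructs its Kriging models with the MATLAB function fitrgp. The sketch below (with made-up training data and a near-zero fixed noise level so the model behaves as an interpolator) illustrates the constant basis, the ARD squared exponential kernel, and the exact-interpolation property just described.

```matlab
% Fit a near-noise-free GPR (Kriging) model on a small DoE and check the
% exact-interpolation property: at design points the prediction matches
% the data and the predictive standard deviation is (numerically) zero.
X = linspace(-3, 3, 7)';          % illustrative design points
Z = X .* sin(X);                  % illustrative responses
gpr = fitrgp(X, Z, 'BasisFunction', 'constant', ...
             'KernelFunction', 'ardsquaredexponential', ...
             'Sigma', 1e-4, 'ConstantSigma', true);

[mu_hat, sd_hat] = predict(gpr, X);
disp(max(abs(mu_hat - Z)))        % ~0: predictions reproduce the DoE
disp(max(sd_hat))                 % ~0: no local uncertainty at the DoE
```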

The proposed learning function REF

In the context of the Kriging meta-model integrated with active learning, the selection of the next best point is significantly influenced by the learning function[38]. An effective learning function has the capacity to accelerate the refinement process of the Kriging meta-model while also ensuring result accuracy. This is particularly advantageous for intricate finite element models that demand substantial computational resources, as it can markedly enhance computational efficiency. Recent years have witnessed the emergence of several learning functions tailored specifically for reliability analysis.

Based on Efficient Global Optimization (EGO)[43], the Expected Feasibility Function (EFF) was proposed for efficient global reliability analysis[45]. The EFF quantifies the degree to which the actual value of the performance function at a specific point $$ \bf x $$ is expected to satisfy the equality constraint $$ g\left( {\bf{x}} \right) = y $$ within the designated domain $$ \left[ {y - \xi ,y + \xi } \right] $$. Furthermore, within the context of Kriging prediction, the output value $$ \hat g\left( {{\bf{x}}} \right) $$ of the system is assumed to follow a Gaussian distribution, i.e., $$ \hat g\left( {\bf{x}} \right) \sim N\left( {{\mu _{{{\hat g}}}},\sigma _{{{\hat g}}}^2} \right) $$.

The general formulation of EFF reads:

$$ \begin{equation} \begin{array}{l} \begin{aligned} EFF\left( { \bf{x} } \right) =& \left( {{\mu _{\hat g}} - y} \right)\left[ {2\varPhi \left( {\frac{{y - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varPhi \left( {\frac{{\left( {y - \xi } \right) - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varPhi \left( {\frac{{\left( {y + \xi } \right) - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right]\\ -& {\sigma _{\hat g}}\left[ {2\varphi \left( {\frac{{y - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varphi \left( {\frac{{\left( {y - \xi } \right) - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varphi \left( {\frac{{\left( {y + \xi } \right) - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right]\\ +& \xi \left[ {\varPhi \left( {\frac{{\left( {y + \xi } \right) - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varPhi \left( {\frac{{\left( {y - \xi } \right) - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right] \end{aligned} \end{array} \end{equation} $$

where $$ \varPhi \left( \cdot \right) $$ and $$ \varphi \left( \cdot \right) $$ denote the CDF and PDF of standard normal distribution, respectively.

In the context of reliability analysis, significant attention is directed toward the limit state of the performance function. As a consequence, the value of $$ y $$ is typically set to 0. Additionally, the parameter $$ \xi $$ is often defined as $$ {2{\sigma _{\hat g}}} $$. In accordance with Equation (7), the EFF value provides insights into both the level of uncertainty at the point $$ \bf x $$ and the relative proximity of point $$ \bf x $$ to the limit state. Points exhibiting maximal EFF values are particularly valuable for constructing a more accurate Kriging meta-model. To facilitate the selection of points in close proximity to the limit state while also displaying high uncertainty, the U learning function has been introduced for reliability analysis[24]:

$$ \begin{equation} U\left( {\bf{x}} \right) = \frac{{\left| {{\mu _{\hat g}}} \right|}}{{{\sigma _{\hat g}}}} \end{equation} $$
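For concreteness, both classical criteria can be evaluated directly from the Kriging mean and standard deviation. The following MATLAB sketch (our own, with $$ y=0 $$ and $$ \xi = 2\sigma_{\hat g} $$ substituted as stated below) is vectorized over a pool of candidate points.

```matlab
% U (Eq. (8)) and EFF (Eq. (7) with y = 0 and xi = 2*sd substituted),
% evaluated from vectors of Kriging means mu and standard deviations sd.
U   = @(mu, sd) abs(mu) ./ sd;
EFF = @(mu, sd) mu .* (2*normcdf(-mu./sd) - normcdf((-2*sd - mu)./sd) ...
                       - normcdf((2*sd - mu)./sd)) ...
    - sd .* (2*normpdf(-mu./sd) - normpdf((-2*sd - mu)./sd) ...
                       - normpdf((2*sd - mu)./sd)) ...
    + 2*sd .* (normcdf((2*sd - mu)./sd) - normcdf((-2*sd - mu)./sd));

% Next-best-point selection over a candidate pool:
% [~, i_star] = max(EFF(mu, sd));   % or: [~, i_star] = min(U(mu, sd));
```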

In recent advancements, a novel learning function rooted in information entropy has been introduced, as shown in[46]. It is worth noting that information entropy serves as a gauge of the uncertainty level of the data: higher entropy values signify greater uncertainty, while lower values indicate reduced uncertainty.

The information entropy is defined as[47]

$$ \begin{equation} H\left( {\hat g\left( {\bf{x}} \right)} \right) = - \int {\ln \left( {f\left( {{{\hat g}}} \right)} \right)f\left( {{{\hat g}}} \right) {\rm d}{{\hat g}}} \end{equation} $$

where $$ f\left( {{{\hat g}}} \right) $$ denotes the PDF of $$ \hat g\left( {\bf{x}} \right) $$. Equation (9) shows that the information entropy $$ H\left( {\hat g\left( {\bf{x}} \right)} \right) $$ can be used to quantify the uncertainty of $$ {\hat g\left( {\bf{x}} \right)} $$[46].

Applied to the AK prediction, the learning function H is defined as[46]:

$$ \begin{equation} \begin{array}{l} \begin{aligned} H\left( {\hat g\left( {\bf{x}} \right)} \right) =& \left| { - \int_{ - 2{\sigma _{{{\hat g}}}}}^{2{\sigma _{{{\hat g}}}}} {\ln \left( {f\left( {{{\hat g}}} \right)} \right)f\left( {{{\hat g}}} \right){\rm d}{{\hat g}}} } \right|\\ =& \left| {\begin{array}{*{20}{l}} {\ln \left( {\sqrt {2\pi } {\sigma _{{{\hat g}}}} + \frac{1}{2}} \right)\left[ {\varPhi \left( {\frac{{2{\sigma _{{{\hat g}}}} - {\mu _{{{\hat g}}}}}}{{{\sigma _{{{\hat g}}}}}}} \right) - \varPhi \left( {\frac{{ - 2{\sigma _{{{\hat g}}}} - {\mu _{{{\hat g}}}}}}{{{\sigma _{{{\hat g}}}}}}} \right)} \right]}\\ { - \left[ {\frac{{2{\sigma _{{{\hat g}}}} - {\mu _{{{\hat g}}}}}}{2}\varphi \left( {\frac{{2{\sigma _{{{\hat g}}}} - {\mu _{{{\hat g}}}}}}{{{\sigma _{{{\hat g}}}}}}} \right) + \frac{{2{\sigma _{{{\hat g}}}} + {\mu _{{{\hat g}}}}}}{2}\varphi \left( {\frac{{ - 2{\sigma _{{{\hat g}}}} - {\mu _{{{\hat g}}}}}}{{{\sigma _{{{\hat g}}}}}}} \right)} \right]} \end{array}} \right| \end{aligned} \end{array} \end{equation} $$

The detailed derivation of Equation (10) can be found in Appendix A.
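As an alternative to coding the closed form of Equation (10), the truncated entropy in Equation (9) can be evaluated directly by numerical quadrature; this MATLAB snippet (a sketch of ours, slower than the closed form) makes the definition concrete.

```matlab
% Learning function H by quadrature of Eq. (9) over [-2*sd, 2*sd];
% mu and sd are the Kriging mean and standard deviation at one point x.
H = @(mu, sd) abs(integral(@(g) -log(normpdf(g, mu, sd)) ...
                                 .* normpdf(g, mu, sd), -2*sd, 2*sd));

h_val = H(0.5, 1.0);   % higher value -> more predictive uncertainty
```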

The learning criteria and stopping criteria of the above three learning functions are compared in Table 1.

Table 1

The learning criterion and stopping condition of EFF, U, and H

Learning function | Learning criterion | Stopping condition
EFF | $$ \max \left( {EFF\left( {\bf{x}} \right)} \right) $$ | $$ \max \left( {EFF\left( {\bf{x}} \right)} \right) \le 0.001 $$
U | $$ \min \left( {U\left( {\bf{x}} \right)} \right) $$ | $$ \min \left( {U\left( {\bf{x}} \right)} \right) \ge 2 $$
H | $$ \max \left( {H\left( {\bf{x}} \right)} \right) $$ | $$ \max \left( {H\left( {\bf{x}} \right)} \right) \le 0.5 $$

As previously mentioned, information entropy quantifies the uncertainty associated with a random variable, rendering it a suitable tool for gauging the confidence in the model response value during Kriging prediction. In the active learning procedure, the realizations about which the current meta-model is most uncertain are added to the design of experiments used to construct the GPR, so that the Kriging prediction approaches the true response of the system, particularly within critical regions such as the vicinity of the limit state. Building on this stability viewpoint, a new learning function rooted in the Relative Entropy (RE) is put forward below.

RE finds its application in quantifying the disparity between two probability distributions. Within active learning algorithms, the probability distribution of the system response $$ \hat g\left( {{\bf{x}}} \right) $$ is subject to ongoing updates. Prior to introducing a new experimental design point, the system response is denoted as $$ {{\hat g}_p}\left( {\bf{x}} \right) $$ with $$ {{\hat g}_p}\left( {\bf{x}} \right)\sim N\left( {{\mu _{{{\hat g}_p}}},\sigma _{{{\hat g}_p}}^2} \right) $$, and its corresponding probability distribution is indicated by $$ f\left( {{{\hat g}_p}} \right) $$. The notation $$ {{\hat g}_q}\left( {\bf{x}} \right) $$ signifies the response after the incorporation of a new experimental design point, characterized by $$ {{\hat g}_q}\left( {\bf{x}} \right)\sim N\left( {{\mu _{{{\hat g}_q}}},\sigma _{{{\hat g}_q}}^2} \right) $$, and its associated probability distribution is denoted as $$ f\left( {{{\hat g}_q}} \right) $$. It is evident that $$ {{\hat g}_q}\left( {\bf{x}} \right) $$ is contingent upon $$ {{\hat g}_p}\left( {\bf{x}} \right) $$. In accordance with the definition of RE, the following relationships hold:

$$ \begin{equation} RE\left( {f\left( {{{\hat g}_p}} \right)\left\| {f\left( {{{\hat g}_q}} \right)} \right.} \right) = \int {f\left( {{{\hat g}_p}} \right)\ln \frac{{f\left( {{{\hat g}_p}} \right)}}{{f\left( {{{\hat g}_q}} \right)}}{\rm{d}}{{\hat g}_p}} \end{equation} $$

The RE serves as a metric for assessing the disparity between $$ {{\hat g}_p}\left( {\bf{x}} \right) $$ and $$ {{\hat g}_q}\left( {\bf{x}} \right) $$. A diminishing RE implies that the constructed Kriging meta-model closely mirrors the characteristics of the original stochastic system. For the purpose of pinpointing optimal points within the stochastic system, the learning function can be defined as follows:

$$ \begin{equation} REF\left( {f\left( {{{\hat g}_p}} \right)\left\| {f\left( {{{\hat g}_q}} \right)} \right.} \right) = \int_{\hat g_p^ - }^{\hat g_p^ + } {f\left( {{{\hat g}_p}} \right)\ln \frac{{f\left( {{{\hat g}_p}} \right)}}{{f\left( {{{\hat g}_q}} \right)}}{\rm{d}}{{\hat g}_p}} \end{equation} $$

where the truncation limits $$ {\hat g_p^ + } $$ and $$ {\hat g_p^ - } $$ are set to $$ {2{\sigma _{{{\hat g}_p}}}} $$ and $$ {-2{\sigma _{{{\hat g}_p}}}} $$, respectively, to ensure accuracy in this paper. The detailed derivation of Equation (12) can be found in Appendix B.

The REF learning function plays a pivotal role in gauging the stability of the estimated system response $$ \hat g\left( {\bf{x}} \right) $$ at a specific point $$ \bf{x} $$ and facilitates the construction of a meta-model that closely aligns with the stochastic characteristics of the system, especially in proximity to the limit state. To strike a balance between computational efficiency and precision, a termination criterion of $$ RE{F_{\max }} \le {10^{ - 3}} $$ is adopted for the learning function in this paper, based on our prior computational experiences.
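Since $$ f(\hat g_p) $$ and $$ f(\hat g_q) $$ are both Gaussian, Equation (12) can likewise be evaluated numerically; the sketch below is a plain quadrature stand-in for the closed form derived in Appendix B, with the integration limits taken as $$ \pm 2\sigma_{\hat g_p} $$ as stated above.

```matlab
% REF (Eq. (12)): truncated relative entropy between the predictive
% densities before (p) and after (q) a new design point is incorporated.
REF = @(mu_p, sd_p, mu_q, sd_q) ...
    integral(@(g) normpdf(g, mu_p, sd_p) ...
                  .* log(normpdf(g, mu_p, sd_p) ./ normpdf(g, mu_q, sd_q)), ...
             -2*sd_p, 2*sd_p);

ref_val = REF(0.2, 1.0, 0.25, 0.8);  % small value -> stable prediction at x
```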

To summarize, the proposed AK-MCS-REF framework entails a comprehensive set of seven steps for conducting structural reliability analysis in a precise probability sense:

Step 1: Generate a sample pool $$ S $$ comprising $$ n_{mc} $$ points using quasi-MCS methods such as Sobol sampling.

Step 2: Initialize the initial design of experiments (DoE). Select the first $$ n_0 $$ points from $$ S $$, evaluate them using the LSF, and establish a preliminary Kriging meta-model utilizing the MATLAB function fitrgp. An empirical value of $$ n_0=12 $$ is suggested based on computational considerations.

Step 3: Compute the predicted value $$ {\hat G_0} $$ of the preliminary Kriging meta-model across the entire sample pool.

Step 4: Define the formal DoE using the first $$ n_1 $$ points from $$ S $$ and subsequently build the formal Kriging meta-model. A recommended value of $$ n_1=13 $$ is provided.

Step 5: Construct the formal Kriging meta-model based on the formal DoE. Compute the predicted value $$ {\hat g} $$ of the formal Kriging meta-model for the entire sample pool, thereby estimating the failure probability $$ {\hat p_f} $$.

Step 6: Implement the learning function $$ REF $$ and assess the convergence criterion. Should the termination threshold be met, proceed to the subsequent step. Otherwise, identify the optimal point using Equation (13):

$$ \begin{equation} {{\bf{x}}^ * } = \mathop {\arg \max }\limits_{{\bf{x}} \in S} \left[ {REF\left( {\bf{x}} \right)} \right] \end{equation} $$

Evaluate $$ g\left( {{{\bf{x}}^ * }} \right) $$, add $$ {\bf{x}}^ * $$ and $$ g\left( {{{\bf{x}}^ * }} \right) $$ to the formal DoE, and then return to Step 5 to construct a new formal Kriging meta-model with the enlarged DoE.

Step 7: Compute the Coefficient Of Variation (C.O.V.) for the failure probability using Equation (14):

$$ \begin{equation} CoV\left( {{{\hat p}_f}} \right) = \sqrt {\frac{{1 - {{\hat p}_f}}}{{{{\hat p}_f}{n_{mc}}}}} \end{equation} $$

If $$ CoV\left( {{{\hat p}_f}} \right) \le {\varepsilon _{{p_f}}} $$, the procedure terminates, and $$ {\hat p}_f $$ is deemed the definitive result for the failure probability. Conversely, if $$ CoV\left( {{{\hat p}_f}} \right) > {\varepsilon _{{p_f}}} $$ upon utilizing all points in the sample pool, $$ {\hat p}_f $$ lacks credibility, necessitating the generation of a new sample pool $$ S $$ with an increased $$ n_{mc} $$. Subsequently, return to Step 2.
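The seven steps above can be condensed into the following MATLAB skeleton. It is a sketch under our own assumptions: the LSF g, the pool size, and the direct quadrature of the REF (evaluated between the previous and current predictions at every pool point, which is clear but slow) are illustrative choices rather than the authors' implementation.

```matlab
% AK-MCS-REF skeleton in the precise probability sense (illustrative).
g    = @(x) 3.5 + 0.1*(x(:,1)-x(:,2)).^2 - (x(:,1)+x(:,2))/sqrt(2);
n_mc = 1e4;                                         % pool size (sketch)
S    = norminv(net(sobolset(2, 'Skip', 1), n_mc));  % Step 1: Sobol pool

fitKrig = @(X, Z) fitrgp(X, Z, 'BasisFunction', 'constant', ...
                         'KernelFunction', 'ardsquaredexponential');

doe = 1:12;                                              % Step 2: n0 = 12
[mu_p, sd_p] = predict(fitKrig(S(doe,:), g(S(doe,:))), S);  % Step 3

doe = 1:13;                                              % Step 4: n1 = 13
while true
    [mu_q, sd_q] = predict(fitKrig(S(doe,:), g(S(doe,:))), S);  % Step 5
    pf_hat = mean(mu_q <= 0);

    % Step 6: REF between previous (p) and current (q) predictions; the
    % Gaussian log-ratio is expanded analytically for numerical safety.
    ref = arrayfun(@(m1,s1,m2,s2) integral(@(t) normpdf(t,m1,s1) .* ...
              (log(s2/s1) + (t-m2).^2/(2*s2^2) - (t-m1).^2/(2*s1^2)), ...
              -2*s1, 2*s1), mu_p, sd_p, mu_q, sd_q);
    [ref_max, i_star] = max(ref);
    if ref_max <= 1e-3                                   % convergence
        break
    end
    doe = [doe, i_star];            % enrich the DoE with the best point
    mu_p = mu_q;  sd_p = sd_q;      % current model becomes the reference
end

cov_pf = sqrt((1 - pf_hat)/(pf_hat*n_mc));  % Step 7: C.O.V. check, Eq. (14)
```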

Imprecise structural reliability analysis for parametric p-boxes

In the context of parametric p-boxes, the distributions of the random vector $$ \bf{X} $$ depend upon their corresponding distribution parameters. Consequently, the determination of the conditional failure probability $$ {p_{f\left| {\mathit{\boldsymbol{\theta}}} \right.}} $$ and the establishment of the failure probability range $$ \left[ {{{\underline p }_f},{{\overline p }_f}} \right] $$ become pivotal components of imprecise structural reliability analysis. The conditional failure probability is formally defined as follows:

$$ \begin{equation} {p_{f\left| {\mathit{\boldsymbol{\theta}}} \right.}} = \Pr \left[ {{G}\left( {{{\bf{X}}_{\mathit{\boldsymbol{\theta}}}} } \right) \le 0} \right] \end{equation} $$

where $$ {{\bf{X}}_{\mathit{\boldsymbol{\theta}}}} $$ represents a random vector generated using $$ {F_{{{\bf{X}}_{\mathit{\boldsymbol{\theta}}} }}} = {F_{\bf{X}}}\left( {{\bf{x}}\left| {\mathit{\boldsymbol{\theta}}} \right.} \right) $$ based on a specific set of distribution parameters $$ {\mathit{\boldsymbol{\theta}}} $$. As a result, the parametric p-boxes exhibit a characteristic hierarchical model structure. The approach of nested algorithms, as outlined in[48], can be employed in imprecise structural reliability analysis to address the complexities posed by parametric p-boxes. Within this hierarchical framework, the parametric p-boxes can be segregated into two distinct components. The AK-MCS with REF learning function, as detailed above, is well-suited for evaluating the conditional failure probability $$ {p_{f\left| {\mathit{\boldsymbol{\theta}}} \right.}} $$. Since the failure probability depends upon the distribution parameters, the EGO algorithm[43] can be utilized to optimize $$ {\mathit{\boldsymbol{\theta}}} $$ and thereby deduce the bounds of the failure probability. This integrated approach enables a comprehensive treatment of the uncertainties associated with parametric p-boxes.

Figure 3 depicts the flowchart, illustrating the process of imprecise structural reliability analysis coupled with the innovative REF learning function, denoted as AK-MCS-REF-EGO. The overall procedure encompasses the following sequential steps:


Figure 3. Flowchart of AK-MCS-REF-EGO.

Step 1: Generate a sample pool for distribution parameters denoted as $$ {{\bf D}_{\mathit{\boldsymbol{\theta}}}} $$, comprising $$ {n_{\mathit{\boldsymbol{\theta}}}} $$ points. This can be accomplished through the utilization of Latin-Hypercube Sampling (LHS), wherein each point corresponds to a distinct set of distribution parameters, i.e., $$ {{\mathit{\boldsymbol{\theta}}}_j} \in {{\bf D}_{\mathit{\boldsymbol{\theta}}}}\left( {j = 1,2, \ldots ,{n_{\mathit{\boldsymbol{\theta}}}}} \right) $$. It is noteworthy that a sample size of $$ {n_{\mathit{\boldsymbol{\theta}}}}=10^6 $$ is typically deemed adequate for this purpose.

Step 2: Define the DoE for the distribution parameters. To achieve this, $$ n_{\mathit{\boldsymbol{\theta}}}^0 $$ samples are selected at random from the sample pool $$ {{\bf D}_{\mathit{\boldsymbol{\theta}}}} $$, and these selected samples are recorded as $$ {\bf D} = \left\{ {{{\mathit{\boldsymbol{\theta}}}^{\left( 1 \right)}}, \ldots ,{{\mathit{\boldsymbol{\theta}}}^{\left( {n_{\mathit{\boldsymbol{\theta}}}^0} \right)}}} \right\} $$. For the purposes of this paper, the value of $$ n_{\mathit{\boldsymbol{\theta}}}^0 $$ is set to 6.

Step 3: Compute the conditional failure probability $$ p_f^{\left( 1 \right)} $$ of $$ {{{\mathit{\boldsymbol{\theta}}}^{\left( 1 \right)}}} $$ by the procedure of AK-MCS-REF.

Step 4: Calculate the conditional failure probabilities $$ {\bf P} = \left\{ {p_f^{(2)}, \ldots ,p_f^{(n_{\mathit{\boldsymbol{\theta}}}^0)}} \right\} $$ for $$ {\bf D} = \left\{ {{{\mathit{\boldsymbol{\theta}}}^{\left( 2 \right)}}, \ldots ,{{\mathit{\boldsymbol{\theta}}}^{\left( {n_{\mathit{\boldsymbol{\theta}}}^0} \right)}}} \right\} $$ using the proposed AK-MCS-REF approach. Beginning with $$ {\mathit{\boldsymbol{\theta}}}^{\left( 2 \right)} $$, the initial DoE needed for constructing the Kriging meta-model does not have to be selected exclusively from the Sobol sample pool associated with $$ {\mathit{\boldsymbol{\theta}}}^{\left( 2 \right)} $$. Instead, it can be composed of the sample points that have already been evaluated in the preceding AK-MCS-REF iterations. In essence, the initial DoE for $$ {\mathit{\boldsymbol{\theta}}}^{\left( i+1 \right)} $$ comprises all the LSF evaluations of $$ {\mathit{\boldsymbol{\theta}}}^{\left( i \right)} $$. This implies that all the previously acquired sample points, along with their corresponding outputs, can be recycled to construct a new Kriging meta-model.

Step 5: Build a second-level Kriging meta-model, utilizing the information from $$ {\bf P} $$ and $$ {\bf D} $$, to approximate the conditional failure probabilities $$ \bf{\hat P} $$ of the random system.

Step 6: Determine the lower and upper bounds of failure probabilities, $$ {\underline p _f} $$ and $$ {\overline p _f} $$, by identifying the current minimum failure probability $$ {\bf {\hat P}}_{\rm min} $$ and maximum failure probability $$ {\bf {\hat P}}_{\rm max} $$ from the approximated $$ \bf{\hat P} $$. To search for optimal distribution parameters, the expected improvement (EI) function is employed[43]. Specifically, for $$ {\underline p _f} $$, the EI function is defined as follows:

$$ \begin{equation} E{I_{\min }}\left( {\mathit{\boldsymbol{\theta}}} \right) = \left( {{{{\bf{\hat P}}}_{\min }} - {\mu _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)} \right)\varPhi \left( {\frac{{{{{\bf{\hat P}}}_{\min }} - {\mu _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)}}{{{\sigma _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)}}} \right) + {\sigma _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)\varphi \left( {\frac{{{{{\bf{\hat P}}}_{\min }} - {\mu _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)}}{{{\sigma _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)}}} \right) \end{equation} $$

For $$ {\overline p _f} $$, there is:

$$ \begin{equation} E{I_{\max }}\left( {\mathit{\boldsymbol{\theta}}} \right) = \left( {{\mu _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right) - {{{\bf{\hat P}}}_{\max }}} \right)\varPhi \left( {\frac{{{\mu _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right) - {{{\bf{\hat P}}}_{\max }}}}{{{\sigma _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)}}} \right) + {\sigma _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)\varphi \left( {\frac{{{\mu _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right) - {{{\bf{\hat P}}}_{\max }}}}{{{\sigma _{\widehat {\bf{P}}}}\left( {\mathit{\boldsymbol{\theta}}} \right)}}} \right) \end{equation} $$

where $$ EI_{\min } $$ and $$ EI_{\max } $$ represent the EIs for minimizing and maximizing the failure probability, respectively.

Step 7: Determine the optimal distribution parameters using $$ EI_{\min} $$ and $$ EI_{\max} $$:

$$ \begin{equation} {\mathit{\boldsymbol{\theta}}}_{\min }^ * = \mathop {\arg \max }\limits_{{\mathit{\boldsymbol{\theta}}} \in {D_{\mathit{\boldsymbol{\theta}}}}} \left[ {E{I_{\min }}\left( {\mathit{\boldsymbol{\theta}}} \right)} \right] \end{equation} $$

$$ \begin{equation} {\mathit{\boldsymbol{\theta}}}_{\max }^ * = \mathop {\arg \max }\limits_{{\mathit{\boldsymbol{\theta}}} \in {D_{\mathit{\boldsymbol{\theta}}}}} \left[ {E{I_{\max }}\left( {\mathit{\boldsymbol{\theta}}} \right)} \right] \end{equation} $$

Step 8: Utilize the convergence criterion proposed in[43] to estimate the bounds of failure probability.

$$ \begin{equation} E{I_{\min }}\left( {{\mathit{\boldsymbol{\theta}}}_{\min }^ * } \right) \le {\varepsilon _{EI}} \end{equation} $$

$$ \begin{equation} E{I_{\max }}\left( {{\mathit{\boldsymbol{\theta}}}_{\max }^ * } \right) \le {\varepsilon _{EI}} \end{equation} $$

where $$ {\varepsilon _{EI}} $$ is the threshold value, set to $$ {\varepsilon _{EI}}=10^{-5} $$ in this paper. If the criterion is satisfied, the EGO algorithm terminates and outputs the minimum ($$ {\underline p_f}=\min \left[ {\hat {\bf P}} \right] $$) and maximum ($$ {\overline p_f}=\max \left[ {\hat {\bf P}} \right] $$) failure probabilities. If not met, proceed to Step 9 for further iterations.

Step 9: Incorporate the optimal next distribution parameters $$ {\mathit{\boldsymbol{\theta}}}_{\min }^ * $$ ($$ {\mathit{\boldsymbol{\theta}}}_{\max }^ * $$) into $$ \bf D $$ and return to Step 4. Repeat this cycle until the convergence criterion specified in Step 8 is met, at which point the optimization algorithm stops.
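A compact MATLAB sketch of this outer loop follows. The helper akmcs_ref, which would return the conditional failure probability for one parameter vector via the inner-loop procedure above, is hypothetical; the pool size and intervals (borrowed from Example 1 below) are illustrative, and the incumbent values fed to the EI criteria follow standard EGO practice.

```matlab
% Outer-loop EGO skeleton (illustrative). akmcs_ref(theta) is a hypothetical
% helper implementing the inner AK-MCS-REF loop for one parameter vector.
n_th = 1e4;                                   % pool size (sketch value)
lb = [-0.1, 0.95];  ub = [0.1, 1.05];         % parameter intervals
D_pool = lb + lhsdesign(n_th, 2) .* (ub - lb);   % Step 1: LHS pool

D = D_pool(randperm(n_th, 6), :);             % Step 2: initial DoE
P = arrayfun(@(j) akmcs_ref(D(j,:)), (1:size(D,1))');  % Steps 3-4

while true
    gpr = fitrgp(D, P, 'BasisFunction', 'constant', ...   % Step 5
                 'KernelFunction', 'ardsquaredexponential');
    [mu_P, sd_P] = predict(gpr, D_pool);

    % Step 6: EI criteria (Eqs. (16)-(17)) with the current best values
    z_min = (min(P) - mu_P) ./ sd_P;
    z_max = (mu_P - max(P)) ./ sd_P;
    [ei_min, j_min] = max((min(P) - mu_P).*normcdf(z_min) + sd_P.*normpdf(z_min));
    [ei_max, j_max] = max((mu_P - max(P)).*normcdf(z_max) + sd_P.*normpdf(z_max));

    if max(ei_min, ei_max) <= 1e-5            % Step 8: convergence
        pf_bounds = [min(P), max(P)];  break
    end
    for j = unique([j_min, j_max])            % Steps 7 and 9: enrichment
        D = [D; D_pool(j,:)];  P = [P; akmcs_ref(D_pool(j,:))];
    end
end
```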

NUMERICAL EXAMPLES

In this section, the efficiency and accuracy of the proposed method are demonstrated through four numerical examples. The double-loop MCS[49] and the proposed AK-MCS-REF approach are implemented using MATLAB. The Kriging meta-models are established using the GPR model, with the explicit basis and kernel function specified as Constant and ardsquaredexponential, respectively. Notably, the ARD kernel proves beneficial for problems whose inputs vary over different scales, as it allows for distinct length scales across input dimensions[50]. A detailed discussion of the effects of using different kernels can be found in[51].

The LHS technique is utilized to generate the distributed parameter sample pool, while the Sobol sampling technique is applied to generate the sample pool of random variables. In the final numerical example, a comparison is made between the proposed learning function and the previously mentioned ones, namely, EFF[45], U[24], and H[46], all at the precise probability level. This comparison serves to further establish the superiority of the proposed learning function in effectively balancing calculation accuracy and efficiency, particularly for computationally intensive finite element models. The accuracy is evaluated using the relative error of the reliability index, given by:

$$ \begin{equation} \epsilon = \left| {\frac{{{\beta_m} - \beta}}{\beta_m}} \right| \times 100\% \end{equation} $$

where $$ \beta_m $$ is the reliability index by MCS, and $$ \beta $$ represents the reliability index obtained from a specific method.

Example 1

A classic series system with four branches is first employed to validate the effectiveness of the proposed method[24]. The LSF is defined as follows:

$$ \begin{equation} Z = {G}\left( {\bf{X}} \right) = \min \left\{ {\begin{array}{*{20}{c}} {3.5 + 0.1{{\left( {{x_1} - {x_2}} \right)}^2} - \frac{{\left( {{x_1} + {x_2}} \right)}}{{\sqrt 2 }}}\\ {3.5 + 0.1{{\left( {{x_1} - {x_2}} \right)}^2} + \frac{{\left( {{x_1} + {x_2}} \right)}}{{\sqrt 2 }}}\\ {{x_1} - {x_2} + 3.5\sqrt 2 }\\ {{x_2} - {x_1} + 3.5\sqrt 2 } \end{array}} \right\} \end{equation} $$
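Reading the four branches as a series system (the structure fails when the smallest branch value is non-positive), the LSF can be transcribed and checked by crude MCS; this is a sketch of ours, not the authors' code.

```matlab
% Series-system LSF: the minimum of the four branches governs failure.
g_series = @(x) min([ ...
    3.5 + 0.1*(x(:,1) - x(:,2)).^2 - (x(:,1) + x(:,2))/sqrt(2), ...
    3.5 + 0.1*(x(:,1) - x(:,2)).^2 + (x(:,1) + x(:,2))/sqrt(2), ...
    x(:,1) - x(:,2) + 3.5*sqrt(2), ...
    x(:,2) - x(:,1) + 3.5*sqrt(2)], [], 2);

% Crude MCS check at the precise probability level (cf. Table 2):
pf = mean(g_series(randn(1e6, 2)) <= 0);   % approx. 7.4e-4
```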

To validate the efficacy of the proposed learning function, a precise probability level analysis is initially conducted, where the variables $$ x_1 $$ and $$ x_2 $$ are assumed to be independent standard normal random variables. The learning process of the proposed REF learning function at the precise probability level is depicted in Figure 4. The optimal points selected by the REF learning function are consistently situated near the limit state surface, and a majority of them satisfy the constraints of the limit state equation. This characteristic facilitates the capability of AK-MCS-REF to accurately predict the failure domain of the LSF. The results presented in Table 2 underscore that the proposed AK-MCS-REF method can offer a highly accurate prediction of the reliability index at the precise probability level while demanding minimal computational effort.


Figure 4. Experimental design in precise probability sense.

Table 2

Results in precise probability sense

Method | $$ P_f $$ | $$ \beta $$ | $$ \epsilon (\%) $$ | $$ N_{call} $$
MCS | $$ 7.42 \times 10^{-4} $$ | 3.1778 | - | $$ 10^6 $$
AK-MCS-REF | $$ 7.29 \times 10^{-4} $$ | 3.1829 | 0.16 | 85
Note: $$ N_{call} $$ denotes the number of LSF evaluations.

In the context of imprecise probability, the variables $$ x_i $$ are all characterized by parametric p-boxes, wherein the distribution parameters are selected as interval variables in this numerical illustration. The specific forms of the parametric p-boxes are presented below:

$$ \begin{equation} {F_{{X_i}}}\left( {{x_i}} \right) = {F_{\cal N}}\left( {{x_i}\mid \mu ,\sigma} \right),\mu \in [-0.1,0.1], \sigma \in [0.95,1.05],i = 1,2 \end{equation} $$

The LHS technique is employed to generate the parameter sample pool within the domains of $$ \mu \in [-0.1, 0.1] $$ and $$ \sigma \in [0.95, 1.05] $$, with a total of $$ 10^6 $$ samples. Among these samples, an initial set of six parameter samples is selected. Subsequently, the failure probabilities associated with these initial parameter sample points are computed using the AK-MCS-REF method, and the results are presented in Table 3. The initial sample points and their associated failure probabilities are employed to establish a second-level AK meta-model to address the imprecise probability scenario. The focus is on establishing the bounds of failure probabilities, achieved through the application of the $$ EI $$ criterion to locate extreme values. The evolution of the search for the reliability index bounds $$ \left[ {\beta ^L, \beta ^U} \right] $$ is depicted in Figure 5, with summarized results provided in Table 4. Notably, the double-loop MCS utilizes $$ 10^5 $$ samples for the outer loop and $$ 10^6 $$ samples for the inner loop, resulting in a combined sample size of $$ 10^5 \times 10^6 $$ for this scenario. The upper and lower bounds of the reliability index are accurately determined by the proposed method, exhibiting a maximum error of just 0.55% when compared to the double-loop MCS results. The number of model evaluations divides into three parts: 88 computations correspond to generating the failure probabilities of the initial parameter sample points, ten model evaluations are conducted to identify the minimum reliability index within the parameter sample pool, and an additional ten evaluations are dedicated to locating the maximum reliability index. This accumulates to a total of $$ N_{call} = 88 + 10 + 10 = 108 $$ calculations. Consequently, the proposed method yields highly accurate results with notably fewer model evaluations than the double-loop MCS approach. Observing Figure 5, the EGO algorithm requires four iterations to locate the maximum reliability index, while only two iterations suffice for the minimum. Interestingly, Table 4 shows that the number of LSF calls remains 10 in both searches; this divergence between iteration counts and LSF calls indicates that the two are not proportional.

Table 3

Initial distribution parameters and their corresponding failure probabilities

Number | Distribution of variables | Failure probability
1 | $$ {x_1} \sim N\left( {0.0985,1.0186^2} \right) $$, $$ {x_2} \sim N\left( {-0.0253,0.9987^2} \right) $$ | $$ 6.70 \times 10^{-4} $$
2 | $$ {x_1} \sim N\left( {0.0354,0.9912^2} \right) $$, $$ {x_2} \sim N\left( {-0.0529,0.9713^2} \right) $$ | $$ 4.85 \times 10^{-4} $$
3 | $$ {x_1} \sim N\left( {-0.0014,0.9678^2} \right) $$, $$ {x_2} \sim N\left( {0.0419,1.0005^2} \right) $$ | $$ 4.75 \times 10^{-4} $$
4 | $$ {x_1} \sim N\left( {-0.0766,1.0038^2} \right) $$, $$ {x_2} \sim N\left( {-0.0932,1.0379^2} \right) $$ | $$ 9.00 \times 10^{-4} $$
5 | $$ {x_1} \sim N\left( {0.0097,1.0397^2} \right) $$, $$ {x_2} \sim N\left( {0.0892,1.0188^2} \right) $$ | $$ 1.09 \times 10^{-3} $$
6 | $$ {x_1} \sim N\left( {-0.0666,0.9516^2} \right) $$, $$ {x_2} \sim N\left( {0.0261,0.9571^2} \right) $$ | $$ 4.12 \times 10^{-4} $$

Figure 5. Evolution of the bounds of reliability index (Example 1).

Table 4

Bounds of reliability index for Example 1

Method | $$ \beta ^L $$ value | $$ \epsilon (\%) $$ | $$ \beta ^U $$ value | $$ \epsilon (\%) $$ | $$ N_{call} $$
Double-loop MCS | 2.9633 | - | 3.4090 | - | $$ 10^5 \times 10^6 $$
Proposed method | 2.9794 | 0.55 | 3.4051 | 0.11 | 88+10+10

Example 2

The second example employs a nonlinear LSF characterized by independent random variables, as detailed in[52]. This LSF involves ten imprecise random variables and is represented as follows:

$$ \begin{equation} Z = {G}\left( {\bf{X}} \right) = 3 + 0.015\sum\limits_{i = 1}^9 {x_i^2} - {x_{10}} \end{equation} $$
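A direct transcription of this LSF, together with one conditional evaluation for a distribution parameter inside the interval of Table 5, reads as follows (a sketch of ours; the chosen value of the standard deviation is arbitrary within its interval).

```matlab
% LSF of this example, vectorized over an n-by-10 sample x.
g2 = @(x) 3 + 0.015*sum(x(:,1:9).^2, 2) - x(:,10);

% Conditional failure probability for one parameter value s in [1, 1.5]:
s  = 1.2;                                  % illustrative standard deviation
pf = mean(g2(s * randn(1e6, 10)) <= 0);    % crude MCS estimate of pf | s
```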

Likewise, the variables $$ x_i $$, where $$ i = 1,2, \ldots ,10 $$, are modeled using parametric p-boxes. The statistical details for these variables are provided in Table 5.

Table 5

Statistical information of random variables in Example 2

Variable | Distribution | Mean | Standard deviation
$$ {x_1}-{x_{10}} $$ | Normal | 0 | $$ \left[ {1,1.5} \right] $$

As depicted in Figure 6, both the maximum and minimum reliability indexes necessitate four iterations. Notably, Table 6 reveals that the maximum reliability index requires only one LSF evaluation, while the minimum reliability index requires 11 LSF evaluations. In this regard, the proposed method achieves rapid convergence to the bounds of the reliability index, requiring a total of 123 LSF evaluations, with a maximum error of merely 0.99%. These results further validate the accuracy and efficiency of the proposed method for conducting structural reliability analysis in the presence of dual sources of uncertainties.


Figure 6. Evolution of the bounds of reliability index (Example 2).

Table 6

Bounds of reliability index for Example 2

Method | $$ \beta ^L $$ value | $$ \epsilon (\%) $$ | $$ \beta ^U $$ value | $$ \epsilon (\%) $$ | $$ N_{call} $$
Double-loop MCS | 2.1139 | - | 3.2437 | - | $$ 10^5 \times 10^6 $$
Proposed method | 2.1274 | 0.64 | 3.2760 | 0.99 | 111+11+1

Example 3

The applicability of the proposed method for non-monotonic functions is demonstrated through the utilization of a highly nonlinear undamped single-degree-of-freedom system [Figure 7], as elaborated in[24]. The LSF for this case is expressed as follows:

$$ \begin{equation} Z={G}\left( {\bf{X}} \right) = 3r - \left| {\frac{{2{F_1}}}{{m\omega _0^2}}\sin \left( {\frac{{{\omega _0}{t_1}}}{2}} \right)} \right| \end{equation} $$


Figure 7. Single-degree-of-freedom system.

The statistical characteristics of the input random variables are presented in Table 7. In this context, the spring stiffnesses ($$ k_1 $$ and $$ k_2 $$), the mass ($$ m $$), and the displacement at which the secondary spring yields ($$ r $$) are modeled with precise probabilities. Conversely, the amplitude of the applied force ($$ F_1 $$) and the duration of the load ($$ t_1 $$) are only imprecisely known and are represented using parametric p-boxes. Notably, the means and standard deviations of these variables are given as interval variables within the parametric p-boxes.

Table 7

Statistical information of random variables in Example 3

Variable | Distribution | Mean | Standard deviation
$$ {k_1} $$ | Normal | 1 | 0.1
$$ {k_2} $$ | Normal | 0.1 | 0.01
$$ m $$ | Normal | 1 | 0.05
$$ r $$ | Normal | 0.5 | 0.05
$$ {t_1} $$ | Normal | $$ \left[ {0.95,1.05} \right] $$ | $$ \left[ {0.095,0.105} \right] $$
$$ {F_1} $$ | Normal | $$ \left[ {1.09,1.11} \right] $$ | $$ \left[ {0.095,0.105} \right] $$
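For reference, in the published version of this benchmark[24] the natural frequency is $$ \omega_0 = \sqrt{(k_1+k_2)/m} $$, so that $$ m\omega_0^2 = k_1 + k_2 $$. Under that assumption, a vectorized transcription of the LSF with the column order of Table 7 reads:

```matlab
% LSF of the nonlinear oscillator; columns of x: [k1, k2, m, r, t1, F1].
% Assumes w0 = sqrt((k1+k2)/m), hence m*w0^2 = k1 + k2.
w0 = @(x) sqrt((x(:,1) + x(:,2)) ./ x(:,3));
g3 = @(x) 3*x(:,4) - abs(2*x(:,6) ./ (x(:,1) + x(:,2)) ...
                         .* sin(w0(x) .* x(:,5) / 2));
```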

As depicted in Figure 8 and summarized in Table 8, the proposed method requires five iterations to converge to the maximum reliability index. In this process, a total of 22 evaluations of the LSF are conducted, yielding an impressively low relative error of only 0.17%. For the minimum reliability index, the proposed method needs two iterations and four LSF evaluations, with a slightly larger error of 1.2%. Remarkably, the total number of LSF evaluations for the proposed method is 60+22+4=86, which is significantly fewer than that of the double-loop MCS, where the outer loop uses $$ 10^5 $$ samples and the inner loop uses $$ 10^7 $$ samples, for a total of $$ 10^5 \times 10^7 $$ evaluations in this example. These results underscore the efficacy of the proposed method in achieving both high efficiency and accuracy in structural reliability analysis, particularly when dealing with scenarios encompassing both epistemic and aleatory uncertainties.


Figure 8. Evolution of the reliability index boundary (Example 3).

Table 8

Bounds of reliability index for Example 3

Method | $$ \beta ^L $$ value | $$ \epsilon (\%) $$ | $$ \beta ^U $$ value | $$ \epsilon (\%) $$ | $$ N_{call} $$
Double-loop MCS | 2.0303 | - | 2.6614 | - | $$ 10^5 \times 10^7 $$
Proposed method | 2.0546 | 1.2 | 2.6569 | 0.17 | 60+4+22

Example 4

A practical validation is performed using a two-bay, four-story spatial concrete frame structure, aimed at demonstrating the practical engineering applicability of the proposed method. The structure takes into account the intricate nonlinear behaviors inherent to both concrete and rebar materials. To accurately capture the behavior of the system, a nonlinear beam-column finite element representation of each member is implemented within the OpenSees software[53]. In accordance with Figure 9, the horizontal displacement at node 8 is designated as the control index, thereby defining the implicit LSF as follows:

$$ \begin{equation} Z={G}\left( {\bf{X}} \right) = {\bar D}-D_8\left( {{f_c},{\epsilon_c},{f_u},{\epsilon_u},{f_y},{E_s},{b},{F_6},{F_8},{F_5},{F_7},{F_{11}},{F_{12}},{F_{19}},{F_{20}}} \right) \end{equation} $$


Figure 9. Two-bay four-story spatial concrete frame.

where $$ D_8 $$ represents the horizontal displacement at node 8, while $$ \bar D $$ signifies the allowable displacement, set at $$ \bar D=50 \rm mm $$ within the context of this specific example. To ascertain the suitability of the proposed learning function in handling time-consuming finite element models, the example is initially employed to conduct a reliability analysis in a precise probability sense. The pertinent physical interpretations and statistical properties of the involved random variables are itemized in Table 9.

Table 9

Statistical information of random variables in Example 4

Variable | Description | Distribution | Mean | Standard deviation
$$ {f_c}(\rm MPa) $$ | Concrete compressive strength | Lognormal | 26.8 | 2.68
$$ \varepsilon_c $$ | Concrete strain at maximum strength | Lognormal | 0.0001 | 0.05
$$ {f_u}(\rm MPa) $$ | Concrete crushing strength | Lognormal | 1 | 0.01
$$ \varepsilon_u $$ | Concrete strain at crushing strength | Lognormal | 0.0035 | 0.000175
$$ {f_y}(\rm MPa) $$ | Yield strength of rebar | Lognormal | 355 | 355
$$ {E_s}(\rm GPa) $$ | Initial elastic modulus of rebar | Lognormal | 200 | 20
$$ b $$ | Strain-hardening ratio of rebar | Lognormal | 0.001 | 0.00005
$$ {F_6}(\rm kN) $$ | External load | Lognormal | 54 | 10.8
$$ {F_8}(\rm kN) $$ | External load | Lognormal | 54 | 10.8
$$ {F_5}(\rm kN) $$ | External load | Lognormal | 42 | 8.4
$$ {F_7}(\rm kN) $$ | External load | Lognormal | 42 | 8.4
$$ {F_{11}}(\rm kN) $$ | External load | Lognormal | 30 | 6
$$ {F_{12}}(\rm kN) $$ | External load | Lognormal | 30 | 6
$$ {F_{19}}(\rm kN) $$ | External load | Lognormal | 18 | 3.6
$$ {F_{20}}(\rm kN) $$ | External load | Lognormal | 18 | 3.6

Table 10 presents the results of the reliability analysis conducted in a precise probability sense. This table includes results from various methods for comparison purposes, such as Importance Sampling (IS) and Subset Simulation (SS). Moreover, the AK-MCS approach is coupled with several well-established learning functions, including U[24], EFF[45], and H[46], to facilitate a comprehensive evaluation. Upon comparison, it becomes evident that the proposed learning function REF enhances computational efficiency without compromising accuracy. Furthermore, the AK-MCS-REF methodology exhibits superior performance compared to classical approaches such as IS and SS, demonstrating improved accuracy and efficiency in the context of reliability analysis in a precise probability sense.

Table 10

Reliability results in precise probability sense

| Method | $$ P_f $$ | $$ \epsilon_{P_f}\ (\%) $$ | $$ \beta $$ | $$ \epsilon_{\beta}\ (\%) $$ | $$ N_{call} $$ |
|---|---|---|---|---|---|
| MCS | $$ 1.21 \times 10^{-3} $$ | - | 3.0334 | - | $$ 10^6 $$ |
| IS | $$ 1.29 \times 10^{-3} $$ | 6.83 | 3.0134 | 0.66 | 1106 |
| SS | $$ 1.43 \times 10^{-3} $$ | 18.28 | 2.9824 | 1.68 | 2800 |
| AK-MCS-U | $$ 1.13 \times 10^{-3} $$ | 6.53 | 3.0538 | 0.67 | 135 |
| AK-MCS-EFF | $$ 1.21 \times 10^{-3} $$ | 1.32 | 3.0294 | 0.13 | 175 |
| AK-MCS-H | $$ 1.37 \times 10^{-3} $$ | 13.40 | 2.9953 | 1.26 | 76 |
| AK-MCS-REF | $$ 1.22 \times 10^{-3} $$ | 1.24 | 3.0297 | 0.12 | 67 |
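For context on how such learning functions drive AK-MCS, the sketch below implements one classical variant, the U function[24], using scikit-learn's Gaussian process as a stand-in for the Kriging model; the function names, kernel, and stopping settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def ak_mcs_u(lsf, x_pool, x_doe, y_doe, n_max=200, u_stop=2.0):
    """AK-MCS with the U learning function: repeatedly evaluate the LSF at
    the Monte Carlo pool point most likely to be misclassified (minimum U),
    re-fit the surrogate, and stop once min U >= u_stop."""
    x_doe, y_doe = np.asarray(x_doe, float), np.asarray(y_doe, float)
    while True:
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(x_doe, y_doe)
        mu, sigma = gp.predict(x_pool, return_std=True)
        u = np.abs(mu) / np.maximum(sigma, 1e-12)
        if u.min() >= u_stop or len(y_doe) >= n_max:
            break
        k = int(np.argmin(u))  # most ambiguous pool point
        x_doe = np.vstack([x_doe, x_pool[k]])
        y_doe = np.append(y_doe, lsf(x_pool[k]))
    return float(np.mean(mu < 0)), x_doe, y_doe  # Pf estimate, enriched DoE
```

The REF criterion proposed in this paper replaces the U-based selection with the relative entropy measure derived in Appendix B, while the surrounding active-learning loop is analogous.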

Next, the proposed method is applied to the imprecise reliability analysis. In this context, the loads are treated as imprecise random variables modeled by parametric p-boxes, with interval-valued standard deviations, as shown in Table 11. Since the nonlinear behaviors of both beams and columns are considered, the corresponding LSF is more intricate. As reported in Table 12, a total of 515 LSF evaluations are needed to compute the conditional failure probabilities for the initial parameter sample pool. For the determination of the reliability index bounds, Figure 10 shows that 32 and 39 iterations are required, and a further 358 and 309 LSF evaluations are needed to estimate the maximum and minimum reliability indexes, respectively. For reference, a double-loop MCS approach is also employed; to alleviate its computational demands, the IS technique is used in the inner loop for reliability analysis in a precise probability sense, while the LHS method is used in the outer loop with a parameter sample pool of $$ 10^4 $$ samples. This combination, referred to as LHS-IS, provides the reference results to validate the proposed method. The results in Table 12 indicate a maximum reliability index error of merely 1.52%. Whereas LHS-IS requires 11,204,350 finite-element analyses of the structure, the proposed method needs only 515 + 358 + 309 = 1182 model evaluations, a substantial reduction in computational burden. These results underscore the capacity of the proposed method to balance computational accuracy and efficiency for practical engineering structures.
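To make the two-level structure concrete, the outer loop can be sketched as follows: conditional failure probabilities are computed for an initial LHS pool of distribution-parameter samples, a second-level Kriging model is fitted to the pairs $$ (\theta, P_f) $$, and expected improvement (EI) picks the next $$ \theta $$ when bounding the failure probability. The sketch below, built on scikit-learn and SciPy as stand-ins for the Kriging and LHS machinery, is illustrative rather than the authors' implementation; `conditional_pf` is a hypothetical stand-in for a full inner-loop AK-MCS-REF run, and the numerical settings are assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def expected_improvement(gp, theta_cand, y_best, maximize):
    """EI for maximizing (upper Pf bound) or minimizing (lower Pf bound)."""
    mu, sigma = gp.predict(theta_cand, return_std=True)
    imp = (mu - y_best) if maximize else (y_best - mu)
    z = imp / np.maximum(sigma, 1e-12)
    return imp * norm.cdf(z) + sigma * norm.pdf(z)


def pf_bound(conditional_pf, bounds, n_init=20, n_iter=40, maximize=True):
    """EGO over the p-box distribution parameters theta (here, the
    interval-valued standard deviations) to bound the failure probability."""
    lo, hi = np.array(bounds, dtype=float).T
    theta = qmc.scale(qmc.LatinHypercube(d=len(bounds)).random(n_init), lo, hi)
    y = np.array([conditional_pf(t) for t in theta])  # inner AK-MCS-REF runs
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(theta, y)
        cand = qmc.scale(qmc.LatinHypercube(d=len(bounds)).random(2048), lo, hi)
        y_best = y.max() if maximize else y.min()
        ei = expected_improvement(gp, cand, y_best, maximize)
        theta = np.vstack([theta, cand[np.argmax(ei)]])
        y = np.append(y, conditional_pf(theta[-1]))
    return y.max() if maximize else y.min()
```

The failure probability bounds then convert to reliability index bounds through $$ \beta = -\varPhi^{-1}(P_f) $$: the maximum $$ P_f $$ gives $$ \beta^L $$ and the minimum $$ P_f $$ gives $$ \beta^U $$.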

Table 11

Statistical information of imprecise random variables

| Variable | Distribution | Mean | Standard deviation |
|---|---|---|---|
| $$ {F_6}\ (\rm kN) $$ | Lognormal | 54 | $$ \left[ 10.8, 11.88 \right] $$ |
| $$ {F_8}\ (\rm kN) $$ | Lognormal | 54 | $$ \left[ 10.8, 11.88 \right] $$ |
| $$ {F_5}\ (\rm kN) $$ | Lognormal | 42 | $$ \left[ 8.4, 9.24 \right] $$ |
| $$ {F_7}\ (\rm kN) $$ | Lognormal | 42 | $$ \left[ 8.4, 9.24 \right] $$ |
| $$ {F_{11}}\ (\rm kN) $$ | Lognormal | 30 | $$ \left[ 6, 6.6 \right] $$ |
| $$ {F_{12}}\ (\rm kN) $$ | Lognormal | 30 | $$ \left[ 6, 6.6 \right] $$ |
| $$ {F_{19}}\ (\rm kN) $$ | Lognormal | 18 | $$ \left[ 3.6, 3.96 \right] $$ |
| $$ {F_{20}}\ (\rm kN) $$ | Lognormal | 18 | $$ \left[ 3.6, 3.96 \right] $$ |

Table 12

Bounds of reliability index for Example 4

| Method | $$ \beta^L $$ | $$ \epsilon\ (\%) $$ | $$ \beta^U $$ | $$ \epsilon\ (\%) $$ | $$ N_{call} $$ |
|---|---|---|---|---|---|
| LHS-IS | 2.8208 | - | 3.0694 | - | 11,204,350 |
| Proposed method | 2.8638 | 1.52 | 3.0240 | 1.48 | 515+309+358 |

Figure 10. Evolution of the bounds of reliability index (Example 4).

ENGINEERING APPLICATION

This section applies the proposed method to the imprecise reliability analysis of progressive collapse for a practical frame structure, showcasing its engineering application. The analysis accounts for both epistemic and aleatory uncertainties, providing a comprehensive assessment of the structural reliability under progressive collapse conditions.

The investigation involves a planar four-bay eight-story reinforced concrete frame structure, as depicted in Figure 11, which shows the elevation view of the structure, the sectional reinforcement details of the beams and columns, and the constitutive laws governing the behavior of concrete and steel bars. Given the complexity of the structural system, conducting a progressive collapse analysis by removing each individual member is impractical. Instead, in line with the provisions of Code DoD2013[54], the analysis focuses on key members: eight distinct columns, denoted as $$ E_1-E_4 $$ and $$ C_1-C_4 $$, are selected for this purpose, as highlighted in Figure 11.


Figure 11. Four-bay eight-story reinforced concrete plane frame.

The nonlinear static pushdown method is utilized for analyzing the progressive collapse of structures that have been damaged due to column removal. The code DoD2013 specifies the combination of load effects to be employed in the pushdown method in the following manner:

$$ \begin{equation} \begin{array}{l} {G_{N1}} = {\Omega _N}\left( {1.2D + 0.5L\; {\rm{or}}\; 0.2S} \right)\\ {G_{N2}} = 1.2D + 0.5L\; {\rm{or}}\; 0.2S \end{array} \end{equation} $$

where $$ G_{N1} $$ denotes the combined load effects on the spans connected to the removed column and on the stories above; $$ G_{N2} $$ denotes the combined load effects on the spans not connected to the removed column, or on the spans connected to the removed column in the stories below; $$ D $$ is the dead load, $$ L $$ is the live load, $$ S $$ is the snow load, and $$ \Omega _N $$ denotes the dynamic amplification factor, whose specific expression is

$$ \begin{equation} {\Omega _N} = 1.04 + \frac{0.45}{{\gamma _{\rm pra}}/{\gamma _{\rm y}} + 0.48} \end{equation} $$

where $$ {\gamma _{\rm pra}} $$ is the plastic angle at the beam end and $$ {\gamma _{\rm y}} $$ is the yield angle of the beam. For safety, $$ {\gamma _{\rm pra}}/{\gamma_{\rm y}} $$ takes its minimum value over all members, so that the dynamic amplification factor is specified as $$ {\Omega _N} = 1.31 $$[55].
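As a quick arithmetic check of this expression (a sketch only; the ratio value is back-calculated from $$ \Omega_N = 1.31 $$, not given in the source):

```python
def dynamic_amplification(ratio):
    """Omega_N = 1.04 + 0.45 / (gamma_pra / gamma_y + 0.48)."""
    return 1.04 + 0.45 / (ratio + 0.48)


# Omega_N = 1.31 corresponds to gamma_pra / gamma_y = 0.45/0.27 - 0.48 ~ 1.19
print(round(dynamic_amplification(0.45 / 0.27 - 0.48), 2))  # -> 1.31
```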

The deformation criterion serves as the basis for assessing the progressive collapse resistance of damaged structures. When reinforcement rupture takes place, the vertical displacement of the failed joint node above the removed column typically ranges between 15% and 20% of the beam span, as documented in previous studies[56-58]. In this study, a value of 18% within this range is adopted. The LSF capturing progressive collapse can be formulated as follows:

$$ \begin{equation} Z=G\left( {\bf{X}} \right) = {\Delta _{\lim }} - \Delta \left( {\bf{X}} \right) \end{equation} $$

In this equation, $$ \Delta \left( {\bf{X}} \right) $$ denotes the vertical displacement of the failed joint node, and $$ {\Delta _{\lim }} $$ is the maximum allowable vertical displacement for that node. For a beam span of $$ 6\ {\rm m} $$, this value is calculated as $$ {\Delta _{\lim }} = 0.18 \times 6000\ {\rm mm} = 1080\ {\rm mm} $$. The analysis encompasses both epistemic and aleatory uncertainties, and the characteristics and probability distributions of the involved random variables are outlined in Table 13.

Table 13

Statistical information of random variables for progressive collapse analysis

| Variable | Description | Distribution | Mean | Coefficient of variation |
|---|---|---|---|---|
| $$ {f_c}\ (\rm MPa) $$ | Peak concrete stress in non-core areas | Lognormal | $$ \left[ 28, 28.8 \right] $$ | 0.172 |
| $$ \epsilon_c $$ | Peak concrete strain in non-core areas | Lognormal | 0.0018 | 0.15 |
| $$ \epsilon_u $$ | Ultimate concrete strain in non-core areas | Lognormal | 0.005 | 0.15 |
| $$ {f_{c,cor}}\ (\rm MPa) $$ | Peak concrete stress in core areas | Lognormal | $$ \left[ 31.5, 32 \right] $$ | 0.172 |
| $$ \epsilon_{c,cor} $$ | Peak concrete strain in core areas | Lognormal | 0.005 | 0.14 |
| $$ {f_{u,cor}}\ (\rm MPa) $$ | Ultimate concrete stress in core areas | Lognormal | 14.7 | 0.15 |
| $$ \epsilon_{u,cor} $$ | Ultimate concrete strain in core areas | Lognormal | 0.02 | 0.15 |
| $$ {f_y}\ (\rm MPa) $$ | Yield strength of rebar | Lognormal | $$ \left[ 445, 455 \right] $$ | 0.072 |
| $$ {E_s}\ (\rm GPa) $$ | Initial elastic modulus of rebar | Lognormal | 200 | 0.034 |
| $$ b $$ | Strain-hardening ratio of rebar | Lognormal | 0.02 | 0.1 |
| $$ {D}\ (\rm kN/{m^2}) $$ | Dead load | Normal | $$ \left[ 5.25, 5.5 \right] $$ | 0.1 |
| $$ {L}\ (\rm kN/{m^2}) $$ | Live load | Normal | $$ \left[ 3, 3.25 \right] $$ | 0.47 |

The proposed method yields reliability index intervals for various working conditions, as presented in Table 14. The method proves efficient: only a few hundred finite-element analyses per working condition are needed to determine the reliability index intervals for the progressive collapse analyses. Figure 12 compares these intervals as histograms. The histograms suggest that the reliability index intervals fluctuate only slightly across different damaged floors on the same axis when column removal takes place. Notably, both the upper and lower bounds of the reliability index for side-column removals are smaller than those for middle-column removals. This observation underscores the importance of reinforcing side columns against progressive collapse from a reliability standpoint.

Table 14

Bounds of reliability index under various working conditions

| Working condition | $$ \beta^L $$ | $$ \beta^U $$ | $$ N_{call} $$ |
|---|---|---|---|
| C1 | 2.0103 | 2.5427 | $$ 189+74+117=380 $$ |
| C2 | 1.9954 | 2.5690 | $$ 209+65+242=516 $$ |
| C3 | 2.0047 | 2.5690 | $$ 350+124+467=941 $$ |
| C4 | 2.0084 | 2.6045 | $$ 287+154+422=863 $$ |
| E1 | 1.8250 | 2.3867 | $$ 194+122+466=782 $$ |
| E2 | 1.8184 | 2.3867 | $$ 190+119+193=502 $$ |
| E3 | 1.8055 | 2.3781 | $$ 195+90+93=378 $$ |
| E4 | 1.7829 | 2.3378 | $$ 190+135+183=508 $$ |

Figure 12. Histogram of reliability index interval under various working conditions.

Typically, robustness refers to the ability of a structure to maintain satisfactory load-bearing capacity despite random perturbations of specific parameters, ensuring that the overall performance meets the required standards. Structural robustness after a progressive collapse event is assessed here by the logarithmic ratio of the failure probabilities before and after local damage. The robustness index is formally defined as follows[55]:

$$ \begin{equation} R = \frac{{\ln {P_{f,d}}}}{{\ln {P_{f,0}}}} \end{equation} $$

where $$ {P_{f,d}} $$ signifies the failure probability of the structure following localized damage, while $$ {P_{f,0}} $$ represents the failure probability of an undamaged structure. Evidently, the robustness index $$ R $$ falls within the range $$ \left[ {0,1} \right] $$. As the likelihood of progressive collapse in the structure rises, or in other words, as $$ {P_{f,d}} $$ approaches 1, the robustness index $$ R $$ tends toward 0. This signifies that the capacity of the structure to withstand progressive collapse weakens, indicating lower post-damage robustness.

For the sake of simplicity, the failure probability of the undamaged structure is assumed to range between 1/3 and 1/5 of the target failure probability for the member, as outlined in[59]. Consequently, $$ {P_{f,0}} = 1.16 \times {10^{ - 3}} $$ is adopted as a specific value. By employing Equation (31), the bounds of the robustness index for the structure under epistemic uncertainty are determined accordingly; the findings are presented in Table 15. A conclusion similar to that above can be drawn: the robustness design of the side columns should be reinforced, since the bounds of the robustness index for side-column removal are distinctly smaller than those for middle-column removal.
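Since $$ P_f = \varPhi(-\beta) $$, the robustness index bounds in Table 15 follow directly from the reliability index bounds in Table 14. A minimal sketch of this conversion (assuming SciPy and the value of $$ P_{f,0} $$ above):

```python
import numpy as np
from scipy.stats import norm

PF0 = 1.16e-3  # assumed failure probability of the undamaged structure


def robustness(beta):
    """R = ln(Pf_d) / ln(Pf_0), with Pf_d = Phi(-beta) for the damaged state."""
    return np.log(norm.cdf(-beta)) / np.log(PF0)


# Working condition C1, Table 14: beta in [2.0103, 2.5427]
print(round(float(robustness(2.0103)), 3))  # -> 0.563 (R^L)
print(round(float(robustness(2.5427)), 3))  # -> 0.770 (R^U)
```

Note that a larger reliability index yields a larger robustness index, so $$ R^U $$ is obtained from $$ \beta^U $$ and $$ R^L $$ from $$ \beta^L $$.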

Table 15

Bounds of robustness index under different working conditions

| Working condition | C1 | C2 | C3 | C4 | E1 | E2 | E3 | E4 |
|---|---|---|---|---|---|---|---|---|
| $$ R^L $$ | 0.563 | 0.558 | 0.561 | 0.563 | 0.500 | 0.498 | 0.494 | 0.487 |
| $$ R^U $$ | 0.770 | 0.781 | 0.781 | 0.796 | 0.705 | 0.705 | 0.702 | 0.686 |

CONCLUDING REMARKS

This paper presents a novel approach for imprecise structural reliability analysis, addressing both epistemic and aleatory uncertainties through a hierarchical model represented by p-boxes. The proposed method introduces a new learning function based on relative entropy within the AK-MCS framework. This learning function, referred to as the REF, enables efficient and accurate evaluation of failure probabilities in the context of precise probability analysis.

The REF learning function effectively guides the selection of optimal sampling points, yielding improved accuracy and efficiency compared to existing learning functions. When tackling the challenge of calculating the failure probabilities associated with the distribution parameters within the p-boxes, the proposed approach reuses the LSF evaluations from previous AK-MCS-REF runs as the initial DoE for subsequent runs. The establishment of a second-level Kriging meta-model, relating the distribution parameters to the corresponding conditional failure probabilities, is then straightforward. An EI function identifies the optimal distribution parameter points for further exploration, and an EGO algorithm ultimately yields the bounds of the failure probability or the reliability index.

The effectiveness and accuracy of the proposed method are demonstrated through four numerical examples. Furthermore, the method is applied to the imprecise reliability analysis of a frame structure under progressive collapse. Key findings include (1) the superiority of the REF learning function over existing alternatives, leading to enhanced accuracy and efficiency in precise probability analysis; (2) the substantial reduction in computational effort compared to traditional double-loop MCS in imprecise reliability analysis; and (3) the utility of the method in providing practical insights for robustness-based design in scenarios such as the progressive collapse of engineering structures.

It is important to acknowledge that the proposed method has certain limitations that warrant consideration and potential future development. First, when the distribution parameters span large intervals or the failure probabilities are very small, the method may require a substantial number of LSF evaluations to reach the desired accuracy, resulting in time-intensive and resource-demanding computations. Second, since the method is built on the Kriging meta-model, the curse of dimensionality may still pose obstacles, particularly once the number of interval and random variables exceeds 20; accurately capturing the intricate relationships between the variables and the LSF becomes more challenging in high-dimensional spaces. To address these limitations and enhance the method's applicability, future research may focus on strategies to mitigate the computational burden associated with large parameter intervals or rare failure events, as well as on advanced techniques for high-dimensional problems, such as dimension reduction methods.

APPENDIX A: Analytical derivation of the learning function H

Since $$ \hat g\left( {\bf{x}} \right) \sim N\left( {{\mu _{{{\hat g}}}},\sigma _{{{\hat g}}}^2} \right) $$, the analytical expression of the learning function H can be derived from Equation (9) as follows

$$ \begin{equation} \begin{array}{l} \begin{aligned} H\left( {\hat g\left( {\bf{x}} \right)} \right) =& \left| { - \int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\ln \left( {f\left( {\hat g} \right)} \right)f\left( {\hat g} \right){\rm{d}}\hat g} } \right|\\ =& \left| { - \int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)\ln \left( {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)} \right){\rm{d}}\hat g} } \right|\\ =& \left| { - \int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)\left[ {\ln \left( {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}} \right){\rm{ - }}\frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right]{\rm{d}}\hat g} } \right|\\ =& \left| {\int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)\left[ {\ln \left( {\sqrt {2\pi } {\sigma _{\hat g}}} \right){\rm{ + }}\frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right]{\rm{d}}\hat g} } \right|\\ =& \left| {\frac{{\ln \left( {\sqrt {2\pi } {\sigma _{\hat g}}} \right)}}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right){\rm{d}}\hat g} + \frac{1}{{2\sqrt {2\pi } \sigma _{\hat g}^3}}\int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right){{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}{\rm{d}}\hat g} } \right|\\ =& \left| {\ln \left( {\sqrt {2\pi } {\sigma _{\hat g}}} \right)\int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right){\rm{d}}\hat g} + \frac{{ - 1}}{{2\sqrt {2\pi } {\sigma _{\hat g}}}}\int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\left( {\hat g - {\mu _{\hat g}}} \right){\rm{d}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)} } \right|\\ =& \left| \begin{array}{l} \ln \left( {\sqrt {2\pi } {\sigma _{\hat g}}} \right)\left[ {\varPhi \left( {\frac{{2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varPhi \left( {\frac{{ - 2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right]\\ + \frac{{ - 1}}{{2\sqrt {2\pi } {\sigma _{\hat g}}}}\left[ {\left. 
{\left( {\hat g - {\mu _{\hat g}}} \right)\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)} \right|_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} - \int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right){\rm{d}}\hat g} } \right] \end{array} \right|\\ =& \left| \begin{array}{l} \ln \left( {\sqrt {2\pi } {\sigma _{\hat g}}} \right)\left[ {\varPhi \left( {\frac{{2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varPhi \left( {\frac{{ - 2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right] - \frac{1}{{2\sqrt {2\pi } {\sigma _{\hat g}}}}\left[ {\left. {\left( {\hat g - {\mu _{\hat g}}} \right)\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right)} \right|_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}}} \right]\\ + \frac{1}{2}\int_{ - 2{\sigma _{\hat g}}}^{2{\sigma _{\hat g}}} {\frac{1}{{\sqrt {2\pi } {\sigma _{\hat g}}}}\exp \left( { - \frac{{{{\left( {\hat g - {\mu _{\hat g}}} \right)}^2}}}{{2\sigma _{\hat g}^2}}} \right){\rm{d}}\hat g} \end{array} \right|\\ =& \left| {\begin{array}{*{20}{l}} {\ln \left( {\sqrt {2\pi } {\sigma _{\hat g}} + \frac{1}{2}} \right)\left[ {\varPhi \left( {\frac{{2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) - \varPhi \left( {\frac{{ - 2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right]}\\ { - \left[ {\frac{{2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{2}\varphi \left( {\frac{{2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right) + \frac{{2{\sigma _{\hat g}} + {\mu _{\hat g}}}}{2}\varphi \left( {\frac{{ - 2{\sigma _{\hat g}} - {\mu _{\hat g}}}}{{{\sigma _{\hat g}}}}} \right)} \right]} \end{array}} \right| \end{aligned} \end{array} \end{equation} $$

APPENDIX B: Analytical derivation of the learning function REF

Equation (12) can be derived as follows

$$ \begin{equation} \begin{aligned} REF =& \, RE\big( f(\hat g_p) \,\big\|\, f(\hat g_q) \big) = \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \ln \frac{f(\hat g_p)}{f(\hat g_q)} \, {\rm d}\hat g_p \\ =& \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \left( \ln f(\hat g_p) - \ln f(\hat g_q) \right) {\rm d}\hat g_p \\ =& \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \left( \ln \left( \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\left( \frac{\hat g_p - \mu_{\hat g_p}}{\sqrt{2}\,\sigma_{\hat g_p}} \right)^2 \right) \right) - \ln \left( \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_q}} \exp \left( -\left( \frac{\hat g_p - \mu_{\hat g_q}}{\sqrt{2}\,\sigma_{\hat g_q}} \right)^2 \right) \right) \right) {\rm d}\hat g_p \\ =& \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \left( -\frac{1}{2}\ln 2\pi - \ln \sigma_{\hat g_p} - \frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} + \frac{1}{2}\ln 2\pi + \ln \sigma_{\hat g_q} + \frac{(\hat g_p - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} \right) {\rm d}\hat g_p \\ =& \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \left( \ln \frac{\sigma_{\hat g_q}}{\sigma_{\hat g_p}} + \frac{(\hat g_p - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} - \frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) {\rm d}\hat g_p \\ =& \ln \frac{\sigma_{\hat g_q}}{\sigma_{\hat g_p}} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) {\rm d}\hat g_p + \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{(\hat g_p - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} f(\hat g_p) \, {\rm d}\hat g_p - \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} f(\hat g_p) \, {\rm d}\hat g_p \end{aligned} \end{equation} $$

Let $$ {\cal T}= \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{(\hat g_p - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} f(\hat g_p) \, {\rm d}\hat g_p $$ and $$ {\cal S}= \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} f(\hat g_p) \, {\rm d}\hat g_p $$; then the REF can be transformed into

$$ \begin{equation} \begin{aligned} REF =& \ln \frac{\sigma_{\hat g_q}}{\sigma_{\hat g_p}} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) {\rm d}\hat g_p + {\cal T} - {\cal S} \\ =& \ln \frac{\sigma_{\hat g_q}}{\sigma_{\hat g_p}} \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] + {\cal T} - {\cal S} \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} {\cal T} =& \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{(\hat g_p - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} f(\hat g_p) \, {\rm d}\hat g_p = \frac{1}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \left( \hat g_p - \mu_{\hat g_p} + \mu_{\hat g_p} - \mu_{\hat g_q} \right)^2 f(\hat g_p) \, {\rm d}\hat g_p \\ =& \frac{1}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} (\hat g_p - \mu_{\hat g_p})^2 f(\hat g_p) \, {\rm d}\hat g_p + \frac{1}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} (\mu_{\hat g_p} - \mu_{\hat g_q})^2 f(\hat g_p) \, {\rm d}\hat g_p \\ &+ \frac{1}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} 2 (\hat g_p - \mu_{\hat g_p})(\mu_{\hat g_p} - \mu_{\hat g_q}) f(\hat g_p) \, {\rm d}\hat g_p \\ =& \frac{1}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} (\hat g_p - \mu_{\hat g_p}) \frac{-\sigma_{\hat g_p}^2}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \, {\rm d}\left( \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) \right) + \frac{(\mu_{\hat g_p} - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \, {\rm d}\hat g_p \\ &+ \frac{\mu_{\hat g_p} - \mu_{\hat g_q}}{\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{-\sigma_{\hat g_p}^2}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \, {\rm d}\left( \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) \right) \\ =& \frac{1}{2\sigma_{\hat g_q}^2} \left( \left. (\hat g_p - \mu_{\hat g_p}) \left( -\sigma_{\hat g_p}^2 \right) \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) \right|_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} - \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{-\sigma_{\hat g_p}^2}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) {\rm d}\hat g_p \right) \\ &+ \frac{(\mu_{\hat g_p} - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} f(\hat g_p) \, {\rm d}\hat g_p + \frac{(\mu_{\hat g_q} - \mu_{\hat g_p}) \sigma_{\hat g_p}^2}{\sigma_{\hat g_q}^2} \left( \left. \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) \right|_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \right) \\ =& -\frac{\sigma_{\hat g_p}^2 \left( 2\sigma_{\hat g_p} - \mu_{\hat g_p} \right)}{2\sigma_{\hat g_q}^2} \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \frac{\sigma_{\hat g_p}^2 \left( 2\sigma_{\hat g_p} + \mu_{\hat g_p} \right)}{2\sigma_{\hat g_q}^2} \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \\ &+ \left( \frac{\sigma_{\hat g_p}^2}{2\sigma_{\hat g_q}^2} + \frac{(\mu_{\hat g_p} - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} \right) \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] \\ &+ \frac{(\mu_{\hat g_q} - \mu_{\hat g_p}) \sigma_{\hat g_p}^2}{\sigma_{\hat g_q}^2} \left( \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right) \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} {\cal S} =& \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} f(\hat g_p) \, {\rm d}\hat g_p = \frac{1}{2\sigma_{\hat g_p}^2} \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} (\hat g_p - \mu_{\hat g_p}) \frac{-\sigma_{\hat g_p}^2}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \, {\rm d}\left( \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) \right) \\ =& \frac{1}{2\sigma_{\hat g_p}^2} \left( \left. (\hat g_p - \mu_{\hat g_p}) \left( -\sigma_{\hat g_p}^2 \right) \frac{1}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) \right|_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} - \int_{-2\sigma_{\hat g_p}}^{2\sigma_{\hat g_p}} \frac{-\sigma_{\hat g_p}^2}{\sqrt{2\pi}\,\sigma_{\hat g_p}} \exp \left( -\frac{(\hat g_p - \mu_{\hat g_p})^2}{2\sigma_{\hat g_p}^2} \right) {\rm d}\hat g_p \right) \\ =& -\frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{2} \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \frac{2\sigma_{\hat g_p} + \mu_{\hat g_p}}{2} \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) + \frac{1}{2} \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] \end{aligned} \end{equation} $$

In conclusion, the analytical expression of the learning function REF is

$$ \begin{equation} \begin{aligned} REF =& \ln \frac{\sigma_{\hat g_q}}{\sigma_{\hat g_p}} \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] - \frac{\sigma_{\hat g_p}^2 \left( 2\sigma_{\hat g_p} - \mu_{\hat g_p} \right)}{2\sigma_{\hat g_q}^2} \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \\ &- \frac{\sigma_{\hat g_p}^2 \left( 2\sigma_{\hat g_p} + \mu_{\hat g_p} \right)}{2\sigma_{\hat g_q}^2} \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) + \left( \frac{\sigma_{\hat g_p}^2}{2\sigma_{\hat g_q}^2} + \frac{(\mu_{\hat g_p} - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} \right) \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] \\ &+ \frac{(\mu_{\hat g_q} - \mu_{\hat g_p}) \sigma_{\hat g_p}^2}{\sigma_{\hat g_q}^2} \left( \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right) + \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{2} \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \\ &+ \frac{2\sigma_{\hat g_p} + \mu_{\hat g_p}}{2} \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \frac{1}{2} \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] \\ =& \left( \ln \frac{\sigma_{\hat g_q}}{\sigma_{\hat g_p}} - \frac{1}{2} + \frac{(\mu_{\hat g_p} - \mu_{\hat g_q})^2}{2\sigma_{\hat g_q}^2} + \frac{\sigma_{\hat g_p}^2}{2\sigma_{\hat g_q}^2} \right) \left[ \varPhi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) - \varPhi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \right] \\ &+ \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{2} - \frac{\left( 2\sigma_{\hat g_p} - \mu_{\hat g_p} \right) \sigma_{\hat g_p}^2}{2\sigma_{\hat g_q}^2} + \frac{(\mu_{\hat g_q} - \mu_{\hat g_p}) \sigma_{\hat g_p}^2}{\sigma_{\hat g_q}^2} \right) \varphi \left( \frac{2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \\ &+ \left( \frac{2\sigma_{\hat g_p} + \mu_{\hat g_p}}{2} - \frac{\left( 2\sigma_{\hat g_p} + \mu_{\hat g_p} \right) \sigma_{\hat g_p}^2}{2\sigma_{\hat g_q}^2} - \frac{(\mu_{\hat g_q} - \mu_{\hat g_p}) \sigma_{\hat g_p}^2}{\sigma_{\hat g_q}^2} \right) \varphi \left( \frac{-2\sigma_{\hat g_p} - \mu_{\hat g_p}}{\sigma_{\hat g_p}} \right) \end{aligned} \end{equation} $$
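When implementing this closed form, it can be cross-checked against direct numerical quadrature of the defining integral in the first line of the derivation above. A minimal sketch in Python, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm


def ref_numerical(mu_p, sig_p, mu_q, sig_q):
    """REF as the truncated relative entropy between two Kriging predictions:
    int_{-2 sig_p}^{2 sig_p} f_p(g) * ln(f_p(g) / f_q(g)) dg."""
    def integrand(g):
        f_p = norm.pdf(g, mu_p, sig_p)
        f_q = norm.pdf(g, mu_q, sig_q)
        return f_p * np.log(f_p / f_q)

    value, _ = quad(integrand, -2.0 * sig_p, 2.0 * sig_p)
    return value
```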

DECLARATIONS

Authors' contributions

Made substantial contributions to the conception and design of the study and performed data analysis and interpretation: Du Y

Performed data acquisition and provided administrative, technical, and material support: Xu J

Availability of data and materials

Some or all data, models, or codes generated or used during the study are available from the corresponding author by request.

Financial support and sponsorship

This research was financially supported by the National Natural Science Foundation of China (Nos. 52278178, 51978253), the Natural Science Foundation of Hunan Province, China (No. 2022JJ20012), the Open Fund of the State Key Laboratory of Disaster Reduction in Civil Engineering, China (No. SLDRCE21-04), and the Fundamental Research Funds for the Central Universities, China (No. 531107040224), which are gratefully acknowledged.

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2023.

REFERENCES

1. Guo J, Du X. Sensitivity analysis with mixture of epistemic and aleatory uncertainties. AIAA J 2007;45:2337-49.

2. Der Kiureghian A, Ditlevsen O. Aleatory or epistemic? Does it matter? Struct Saf 2009;31:105-12.

3. Beer M, Ferson S, Kreinovich V. Imprecise probabilities in engineering analyses. Mech Syst Signal Pr 2013;37:4-29.

4. Xiao M, Zhang J, Gao L. A Kriging-assisted sampling method for reliability analysis of structures with hybrid uncertainties. Reliab Eng Syst Safe 2021;210:107552.

5. Schöbi R, Sudret B. Global sensitivity analysis in the context of imprecise probabilities (p-boxes) using sparse polynomial chaos expansions. Reliab Eng Syst Safe 2019;187:129-41.

6. Yuan X, Faes MG, Liu S, Valdebenito MA, Beer M. Efficient imprecise reliability analysis using the Augmented Space Integral. Reliab Eng Syst Safe 2021;210:107477.

7. Elishakoff I. Essay on uncertainties in elastic and viscoelastic structures: from AM Freudenthal's criticisms to modern convex modeling. Comput Struct 1995;56:871-95.

8. Faes M, Moens D. Recent trends in the modeling and quantification of non-probabilistic uncertainty. Arch Computat Methods Eng 2020;27:633-71.

9. Ben-Haim Y. A non-probabilistic measure of reliability of linear systems based on expansion of convex models. Struct Saf 1995;17:91-109.

10. Ben-Haim Y, Chen G, Soong T. Maximum structural response using convex models. J Eng Mech 1996;122:325-33.

11. Jiang C, Bi R, Lu G, Han X. Structural reliability analysis using non-probabilistic convex model. Comput Method Appl Mech Eng 2013;254:83-98.

12. Möller B, Beer M. Fuzzy randomness: uncertainty in civil engineering and computational mechanics. Springer Science & Business Media; 2004.

13. Ferson S, Kreinovich V, Ginzburg L, Myers D, Sentz K. Constructing probability boxes and Dempster-Shafer structures. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); 2015.

14. Schöbi R, Sudret B. Structural reliability analysis for p-boxes using multi-level meta-models. Probabilist Eng Mech 2017;48:27-38.

15. Helton JC, Johnson J, Oberkampf WL, Storlie CB. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Comput Method Appl Mech Eng 2007;196:3980-98.

16. Limbourg P, De Rocquigny E. Uncertainty analysis using evidence theory-confronting level-1 and level-2 approaches with data availability and computational constraints. Reliab Eng Syst Safe 2010;95:550-64.

17. Yang X, Liu Y, Ma P. Structural reliability analysis under evidence theory using the active learning kriging model. Eng Optimiz 2017;49:1922-38.

18. Xiao NC, Huang HZ, Wang Z, Pang Y, He L. Reliability sensitivity analysis for structural systems in interval probability form. Struct Multidisc Optim 2011;44:691-705.

19. Buckley JJ. Fuzzy probabilities: new approach and applications. vol. 115. Springer Science & Business Media; 2005.

20. Hurtado JE. Assessment of reliability intervals under input distributions with uncertain parameters. Probabilist Eng Mech 2013;32:80-92.

21. Alvarez DA, Hurtado JE. An efficient method for the estimation of structural reliability intervals with random sets, dependence modeling and uncertain inputs. Comput Struct 2014;142:54-63.

22. de Angelis M, Patelli E, Beer M. Advanced line sampling for efficient robust reliability analysis. Struct Saf 2015;52:170-82.

23. Kaymaz I. Application of kriging method to structural reliability problems. Struct Saf 2005;27:133-51.

24. Echard B, Gayton N, Lemaire M. AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Struct Saf 2011;33:145-54.

25. Xiao M, Zhang J, Gao L. A system active learning Kriging method for system reliability-based design optimization with a multiple response model. Reliab Eng Syst Safe 2020;199:106935.

26. Kim SH, Na SW. Response surface method using vector projected sampling points. Struct Saf 1997;19:3-19.

27. Bourinet JM, Deheeger F, Lemaire M. Assessing small failure probabilities by combined subset simulation and support vector machines. Struct Saf 2011;33:343-53.

28. Luo CJ, Keshtegar B, Zhu S, Niu X. EMCS-SVR: Hybrid efficient and accurate enhanced simulation approach coupled with adaptive SVR for structural reliability analysis. Comput Method Appl Mech Eng 2022;400:115499.

29. Papadrakakis M, Lagaros ND. Reliability-based structural optimization using neural networks and Monte Carlo simulation. Comput Method Appl Mech Eng 2002;191:3491-507.

30. Bao Y, Xiang Z, Li H. Adaptive subset searching-based deep neural network method for structural reliability analysis. Reliab Eng Syst Safe 2021;213:107778.

31. Luo CJ, Keshtegar B, Zhu S, Taylan O, Niu XP. Hybrid enhanced Monte Carlo simulation coupled with advanced machine learning approach for accurate and efficient structural reliability analysis. Comput Method Appl Mech Eng 2022;388:114218.

32. Sobol' IM. Theorems and examples on high dimensional model representation. Reliab Eng Syst Safe 2003;79:187-93.

33. Balesdent M, Morio J, Brevault L. Rare event probability estimation in the presence of epistemic uncertainty on input probability distribution parameters. Methodol Comput Appl Probab 2016;18:197-216.

34. Xiong Y, Sampath S. A fast-convergence algorithm for reliability analysis based on the AK-MCS. Reliab Eng Syst Safe 2021;213:107693.

35. Ameryan A, Ghalehnovi M, Rashki M. AK-SESC: a novel reliability procedure based on the integration of active learning kriging and sequential space conversion method. Reliab Eng Syst Safe 2022;217:108036.

36. Huang SY, Zhang SH, Liu LL. A new active learning Kriging metamodel for structural system reliability analysis with multiple failure modes. Reliab Eng Syst Safe 2022;228:108761.

37. Li P, Wang Y. An active learning reliability analysis method using adaptive Bayesian compressive sensing and Monte Carlo simulation (ABCS-MCS). Reliab Eng Syst Safe 2022;221:108377.

38. Zhou J, Li J. IE-AK: A novel adaptive sampling strategy based on information entropy for Kriging in metamodel-based reliability analysis. Reliab Eng Syst Safe 2022;229:108824.

39. Matheron G. The intrinsic random functions and their applications. Adv Appl Probab 1973;5:439-68.

40. Sacks J, Welch WJ, Mitchell TJ, Wynn HP. [Design and analysis of computer experiments]: rejoinder. Statist Sci 1989;4:433-5.

41. Vert JP, Tsuda K, Schölkopf B. A primer on kernel methods. Kernel Methods in Computational Biology 2004;47:35-70.

42. Neal RM. Bayesian learning for neural networks. vol. 118. Springer Science & Business Media; 2012.

43. Jones DR, Schonlau M, Welch WJ. Efficient global optimization of expensive black-box functions. J Global Optim 1998;13:455-92.

44. Bachoc F. Cross validation and maximum likelihood estimations of hyper-parameters of Gaussian processes with model misspecification. Comput Stat Data Anal 2013;66:55-69.

45. Bichon BJ, Eldred MS, Swiler LP, Mahadevan S, McFarland JM. Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J 2008;46:2459-68.

46. Lv Z, Lu Z, Wang P. A new learning function for Kriging and its applications to solve reliability problems in engineering. Comput Math Appl 2015;70:1182-97.

47. Shannon CE. A mathematical theory of communication. SIGMOBILE Mob Comput Commun Rev 2001;5:3-55.

48. Eldred MS, Swiler LP. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I: algorithms and benchmark results. Sandia National Laboratories, SAND2009-5805, Albuquerque, NM; 2009. doi: 10.2172/972887.

49. Hofer E, Kloos M, Krzykacz-Hausmann B, Peschke J, Woltereck M. An approximate epistemic uncertainty analysis approach in the presence of epistemic and aleatory uncertainties. Reliab Eng Syst Safe 2002;77:229-38.

50. Girard A, Rasmussen C, Candela JQ, Murray-Smith R. Gaussian process priors with uncertain inputs: application to multiple-step ahead time series forecasting. NeurIPS 2002;15.

51. Duvenaud D. Automatic model construction with Gaussian processes. PhD thesis, University of Cambridge; 2014.

52. Shayanfar MA, Barkhordari MA, Roudak MA. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method. Commun Nonlinear Sci Numer Simul 2017;47:223-37.

53. Xu J, Du Y, Zhou L. A multi-fidelity integration rule for statistical moments and failure probability evaluations. Struct Multidisc Optim 2021;64:1305-26.

54. UFC 4-023-03. Unified facilities criteria: design of buildings to resist progressive collapse. USA: United States Department of Defense; 2013.

55. Rong G. A Study on progressive collapse and reliability of reinforced concrete frame structures. Hunan University; 2018.

56. Su Y, Tian Y, Song X. Progressive collapse resistance of axially-restrained frame beams. ACI Struct J 2009;106:600-607.

57. Yu J, Tan KH. Experimental and numerical investigation on progressive collapse resistance of reinforced concrete beam column sub-assemblages. Eng Struct 2013;55:90-106.

58. Yu J, Tan KH. Structural behavior of reinforced concrete frames subjected to progressive collapse. ACI Struct J 2017;114:63-74.

59. Moan T. Target levels for reliability-based reassessment of offshore structures. In: Proc. 7th Int. Conf. on Structural Safety and Reliability, Kyoto; 1997.
