Research Papers: Design Automation

Reliability Analysis With Monte Carlo Simulation and Dependent Kriging Predictions

Author and Article Information
Zhifu Zhu

Department of Mechanical
and Aerospace Engineering,
Missouri University of Science and Technology,
258A Toomey Hall,
400 West 13th Street,
Rolla, MO 65409-0500
e-mail: zzgc5@mst.edu

Xiaoping Du

Professor
Department of Mechanical
and Aerospace Engineering,
Missouri University of Science and Technology,
272 Toomey Hall,
400 West 13th Street,
Rolla, MO 65409-0500
e-mail: dux@mst.edu

Corresponding author.

Contributed by the Design Automation Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received November 1, 2015; final manuscript received July 10, 2016; published online September 14, 2016. Assoc. Editor: Nam H. Kim.

J. Mech. Des. 138(12), 121403 (Sep 14, 2016) (11 pages) Paper No: MD-15-1741; doi: 10.1115/1.4034219 History: Received November 01, 2015; Revised July 10, 2016

Reliability analysis is time consuming, and high efficiency can be maintained through the integration of the Kriging method and Monte Carlo simulation (MCS). This Kriging-based MCS reduces the computational cost by building a surrogate model that replaces the original limit-state function in the MCS. The objective of this research is to further improve the efficiency of reliability analysis with a new strategy for building the surrogate model. The major approach used in this research is to refine (update) the surrogate model by accounting for the full information available from the Kriging method; the existing Kriging-based MCS uses only partial information. Higher efficiency is achieved by the following strategies: (1) a new formulation defined by the expectation of the probability of failure at all the MCS sample points, (2) the use of a new learning function to choose training points (TPs), which accounts for dependencies between Kriging predictions at all the MCS samples and thereby results in more effective TPs, and (3) the employment of a new convergence criterion. The new method is suitable for highly nonlinear limit-state functions for which the traditional first- and second-order reliability methods (FORM and SORM) are not accurate. Its performance is compared with that of the existing Kriging-based MCS methods through five examples.


Reliability analysis evaluates the likelihood of failure for components or systems in the presence of randomness [1]. Seeking a good balance between accuracy and efficiency has always been the focus of methodology development for reliability analysis. MCS [2,3] is commonly used for reliability analysis. MCS can produce high accuracy given a sufficiently large sample size, but it is computationally expensive when the required sample size is large. On the other hand, approximation methods, such as the FORM and SORM [2,4–6], are in general much more efficient, but may not be accurate for highly nonlinear limit-state functions. To this end, design of experiments (DoE) based MCS methods have been developed for high accuracy and efficiency.

DoE methods are used to generate initial TPs, a surrogate model is built for the limit-state function, and then MCS is performed based on the surrogate model. The DoE methods for reliability analysis include response surface modeling [7–10], artificial neural networks [7,11], support vector machines [12], polynomial chaos expansions [13], and Kriging [7,14–17]. Most of these methods evaluate the limit-state function at a number of predefined points and then create a surrogate model to replace the limit-state function in the subsequent MCS.

The Kriging-based active learning reliability method creates the surrogate model in a sequential manner [18]. After the initial surrogate model is built with a small number of initial TPs, more TPs are added one-by-one until the surrogate model accurately represents the original limit-state function. A learning function is employed in the model building process to select the best TPs intelligently and refine the surrogate model in the most efficient fashion.

A learning function is a function that defines the criteria for selecting the best TP so that the surrogate model can be refined with improved accuracy. Different Kriging-based reliability methods use different learning functions. Based on efficient global optimization [19], the efficient global reliability analysis (EGRA) [1] uses the expected feasibility function to determine TPs, while the active learning reliability method combining Kriging and Monte Carlo Simulation (AK-MCS) [18] uses the probability of predicting the correct sign of the limit-state function as its learning function. Other learning functions are also available [20,21], and discussions on learning functions can be found in Refs. [22] and [23].

EGRA and AK-MCS have also been applied to system reliability [24,25] and time-dependent reliability analyses [26,27], and other improvements have been made [21,22,28–30]. The Kriging-based reliability methods can be further improved with respect to accuracy and efficiency for the following reason: even though the responses predicted by Kriging are realizations of a Gaussian process and are therefore dependent on one another, the above methods do not account for the dependencies between the responses.

To further improve the efficiency of Kriging-based reliability methods, this work proposes a new Kriging-based reliability method. Its general process is similar to that of AK-MCS. An initial surrogate model is created with a small number of TPs. Then, the surrogate model is refined with more TPs. Once the surrogate model becomes accurate, MCS is used to estimate the probability of failure. The contributions of this research include the following new components that help select the new TPs more efficiently.

  1. The method selects a new TP using the complete Gaussian process output of the surrogate model that is available from the Kriging method. It can therefore fully account for the correlations between output variables at all the MCS sample points.
  2. The new method calculates both the mean and variance of the estimated probability of failure with new formulas that involve the mean and covariance functions of the above Gaussian process. This makes it more effective to select new TPs.
  3. Instead of focusing on the accuracy of the limit-state function, the new method focuses directly on the accuracy of the reliability estimate with a new convergence criterion. This improves the efficiency of the reliability analysis without jeopardizing the accuracy.

With the above new components, the new method is in general more effective than the methods that use independent Kriging predictions.

The remainder of this paper is organized as follows: Section 2 reviews the Kriging method and existing Kriging-based reliability methods. The new method and its implementation procedure are described in Sec. 3, followed by five examples in Sec. 4. Conclusions are made in Sec. 5.

In this section, we review the definition of reliability and Kriging-based reliability methods.

Reliability.

A performance function is defined by

(1) $y = g(x)$

where $x$ is a vector of random input variables, and $y$ is a response. If $y > 0$, no failure occurs; if $y < 0$, a failure occurs. The threshold 0 is a limit state, and in this sense, $g(x) = 0$ is called a limit-state function. Then, the reliability is defined by the following probability:

(2) $R = \Pr\{g(x) > 0\}$

and the probability of failure is defined by

(3) $p_f = \Pr\{g(x) < 0\}$

As we discussed previously, R or pf can be estimated by MCS, surrogate MCS, FORM, and SORM.

Kriging.

Kriging is an interpolation method, which means that the prediction at an existing TP is the exact value of the response at that point. For a performance function $y = g(x)$, Kriging considers $y = g(x)$ to be a realization of a Gaussian process, defined by [15]

(4) $G(x) = f(x)^T \beta + Z(x)$

where $f(x)^T \beta$ is a deterministic term providing the estimate of the mean response, $f(x) = [f_1(x), f_2(x), \ldots, f_p(x)]^T$ is a vector of regression functions, and $\beta = [\beta_1, \beta_2, \ldots, \beta_p]^T$ is a vector of regression coefficients. $Z(\cdot)$ is a stationary Gaussian process with zero mean and covariance

(5) $\mathrm{Cov}[Z(x_i), Z(x_j)] = \sigma_Z^2 R(x_i, x_j)$

in which $\sigma_Z^2$ is the process variance, and $R(\cdot,\cdot)$ is the correlation function. Due to its stochastic nature, Kriging provides not only the prediction at untried points but also the variance of the prediction. The variance indicates the uncertainty of the prediction. At an untried point $x$, the Kriging predictor $\hat{g}(x)$ follows a Gaussian distribution denoted by

(6) $\hat{g}(x) \sim N(\mu_G(x), \sigma_G^2(x))$

where $\mu_G(\cdot)$ and $\sigma_G^2(\cdot)$ are the Kriging prediction and Kriging variance, respectively. More details about the Kriging method can be found in Appendix A and Refs. [14] and [15].

Independent Kriging Methods (IKM).

The output of the surrogate model from the Kriging method follows a Gaussian process. As a result, two output variables predicted by the surrogate model are two realizations of the Gaussian process and are likely dependent. The IKM ignore such dependence. In other words, the output variables are assumed independent. This assumption simplifies the process of building the surrogate model, but may adversely affect the efficiency of the model building process.

The other assumption of IKM is that the surrogate model will produce an accurate reliability estimate if it is accurate at the limit state. Although this assumption is valid, it emphasizes the accuracy of the surrogate model instead of devoting effort directly to improving the accuracy of the reliability estimate. This may also affect the efficiency.

A sequential process is involved as the surrogate model is built iteratively with the help of MCS. After obtaining the accurate surrogate model, MCS is performed on it, and the Kriging predictions of the surrogate model are treated independently. The general idea of IKM is as follows:

  1. Generate a small number of TPs for the input random variables x, denoted by xT, and build an initial surrogate model based on these TPs.
  2. Generate a large number of Monte Carlo sample points for x. These points serve as candidates for the TPs and are called candidate points (CPs), denoted by xC.
  3. If the surrogate model is not accurate, use a learning function to select the best TP to refine the surrogate model in the most efficient fashion.
  4. Add the new TP to the existing TPs and then refine the surrogate model.

Steps (3) and (4) are repeated until convergence is attained. The flowchart of the process is provided in Fig. 1. Note that the size of CPs may change during the process if the error of reliability analysis from MCS is large. The error can be found in Ref. [31].

The methods differ from one another by their learning functions. While AK-MCS [18] uses a probability measure in its learning function, the EGRA method [1] uses the limit-state function value directly. Since the two methods have similar performance [18], we compare the proposed method with only AK-MCS, which is reviewed briefly next.

AK-MCS uses the probability of predicting the correct sign of the limit-state function in its learning function. With the surrogate model $\hat{y} = \hat{g}(x)$, the probability of failure is estimated by

(7) $p_f = \int_{\hat{g}(x)<0} f(x)\,dx = \int I(x) f(x)\,dx = E(I(x))$

where $E(\cdot)$ stands for expectation, $f(x)$ is the joint probability density function (PDF) of $x$, and the indicator function $I(\cdot)$ is defined by

(8) $I(x) = \begin{cases} 1, & \hat{y} = \hat{g}(x) < 0 \\ 0, & \text{otherwise} \end{cases}$

Thus, the accuracy of the reliability analysis depends on the accuracy of the indicator function or the correct sign of $\hat{g}(x)$. As $\hat{g}(\cdot)$ at $x$ is normally distributed, the probability of a wrong sign is

(9) $p_w(x) = \Phi\left(-\dfrac{|\mu(x)|}{\sigma(x)}\right)$

where μ(x) and σ(x) are the mean and standard deviation of the Kriging prediction at x, respectively.

The learning function of AK-MCS is defined by

(10) $U(x) = \dfrac{|\mu(x)|}{\sigma(x)}$

The smaller $U$ is, the higher $p_w$ is. Hence, a new TP is identified as the point with the minimum $U$ among the CPs. When $U$ is sufficiently large, the surrogate model will be accurate at the limit state $g = 0$, and the process will then converge. $U = 2$ is used in AK-MCS [18], which is equivalent to $p_w = 0.0228$.
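
To make the selection rule concrete, the following sketch (in Python, not from the paper) picks a new TP by Eq. (10) and checks the U >= 2 stopping rule, assuming the Kriging means and standard deviations at the CPs are already available:

```python
import numpy as np

def akmcs_select(mu, sigma, u_stop=2.0):
    """AK-MCS learning step, Eq. (10): U = |mu|/sigma at every candidate point.
    mu, sigma: Kriging prediction mean and standard deviation at the CPs."""
    U = np.abs(mu) / np.maximum(sigma, 1e-12)   # guard against zero variance at TPs
    k = int(np.argmin(U))                       # best new TP: smallest U (highest p_w)
    converged = U.min() >= u_stop               # stop when even the worst point has U >= 2
    return k, converged
```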

As mentioned previously, the size of the CPs varies during the analysis process. The coefficient of variation of $p_f$ is used to determine the size and is given by

(11) $\mathrm{cov} = \sqrt{\dfrac{1 - p_f}{N_C\, p_f}}$

where $N_C$ is the size of the CPs. If $\mathrm{cov} > 0.05$, $N_C$ will be increased [18].

Upon convergence, the probability of failure is estimated by

(12) $p_f = \dfrac{1}{N_C}\sum_{i=1}^{N_C} I(x_i)$

The above equation shows that the estimate of pf only uses the sign information of the predictions, and correlations between the predictions are not considered. The surrogate model built with Kriging, however, is a Gaussian process, and its output variables (the Kriging predictions) are dependent. This work develops a new method that accounts for such prediction dependency in order to improve the performance of IKM.

We now discuss the new method that accounts for the dependencies between Kriging predictions. The method is referred to as the dependent Kriging method (DKM) for brevity.

Overview.

The key points of DKM are as follows:

  1. DKM considers correlations between Kriging predictions at all CPs.
  2. As a result, DKM uses the complete statistical characteristics of the Gaussian process, including not only the mean and standard deviation functions of the Gaussian process but also its correlation function.
  3. DKM focuses directly on the accuracy of the estimate of reliability, instead of that of the limit-state function considered by IKM. DKM is therefore probability oriented, instead of function value oriented.

With the above considerations, new components of the proposed method are developed, including a new way to estimate pf, a new learning function, a new convergence criterion, and a new procedure.

Fundamentals.

We now discuss the aforementioned new components.

A New Way to Calculate pf.

Let the surrogate model be $\hat{y} = \hat{g}(x) = \mu(x) + \varepsilon(x)$. This model is the Kriging prediction given in Eq. (4), and $\varepsilon(x)$ is a Gaussian process with distribution $N(0, \sigma^2(x))$ and correlation $R$. Thus,

(13) $p_f = \int_{\mu(x)+\varepsilon(x)<0} f(x)\,dx = \int I(x) f(x)\,dx = E(I(x))$

Then,

(14) $p_f = \dfrac{1}{N_C}\sum_{i=1}^{N_C} I(x_i) = \dfrac{1}{N_C}\sum_{i=1}^{N_C} I_i$

where $I_i = I(x_i)$. The probability of failure at $x_i$ is

(15) $\Pr\{I_i = 1\} = \Pr\{\hat{g}(x_i) < 0\} = \Phi(-\mu_i/\sigma_i) = e_i$

where $\mu_i = \mu(x_i)$ and $\sigma_i = \sigma(x_i)$. And

(16) $\Pr\{I_i = 0\} = 1 - e_i$

The expectation of the indicator at $x_i$ is then

(17) $E(I_i) = 1\cdot\Pr\{I_i = 1\} + 0\cdot\Pr\{I_i = 0\} = \Pr\{I_i = 1\} = e_i$

And its variance is

(18) $\mathrm{Var}(I_i) = E(I_i^2) - (E(I_i))^2 = e_i - e_i^2 = e_i(1 - e_i)$

The indicator function is a random variable because the integration region $\mu(x)+\varepsilon(x)<0$ in Eq. (13) is random; $p_f$ is also a random variable, as suggested in Eq. (14). The randomness comes from the uncertainty in Kriging predictions. Then, we propose to use the expectation of $p_f$ for the estimate of the probability of failure, which is given by

(19) $E(p_f) = \dfrac{1}{N_C}\sum_{i=1}^{N_C} E(I_i) = \dfrac{1}{N_C}\sum_{i=1}^{N_C} e_i$

The error of the estimate can be measured by the variance of $p_f$, which is computed by

(20) $\mathrm{Var}(p_f) = \dfrac{1}{N_C^2}\,\mathrm{Var}\left(\sum_{i=1}^{N_C} I_i\right) = \dfrac{1}{N_C^2}\left[\sum_{i=1}^{N_C}\mathrm{Var}(I_i) + 2\sum_{j=1}^{N_C}\sum_{k>j}^{N_C}\mathrm{Cov}(I_j, I_k)\right]$

The variance accounts for the correlation between Kriging predictions through the term involving the covariance $\mathrm{Cov}(I_j, I_k)$, which is derived as follows:

(21) $\mathrm{Cov}(I_j, I_k) = E(I_j I_k) - E(I_j)E(I_k) = \Pr(I_j = 1, I_k = 1) - E(I_j)E(I_k) = e_{jk} - e_j e_k$

where $e_{jk} = \Pr(\hat{g}(x_j) < 0, \hat{g}(x_k) < 0)$, which is the cumulative distribution function (CDF), evaluated at $(0,0)$, of the bivariate normal distribution defined by means $[\mu_j, \mu_k]$, standard deviations $[\sigma_j, \sigma_k]$, and correlation $r_{jk}$. Therefore,

(22) $\mathrm{Var}(p_f) = \dfrac{1}{N_C^2}\left[\sum_{i=1}^{N_C} e_i(1 - e_i) + 2\sum_{j=1}^{N_C}\sum_{k>j}^{N_C}(e_{jk} - e_j e_k)\right] = \dfrac{1}{N_C^2}\sum_{i=1}^{N_C} c_i$

where

(23) $c_i = e_i(1 - e_i) + \sum_{j=1, j\neq i}^{N_C}(e_{ij} - e_i e_j)$

The standard deviation of $p_f$ is then given by

(24) $\sigma_{p_f} = \dfrac{1}{N_C}\sqrt{\sum_{i=1}^{N_C} c_i}$
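
As a concrete illustration of Eqs. (15)-(24), the following sketch (Python with NumPy/SciPy; not part of the paper) computes $e_i$, $c_i$, $E(p_f)$, and $\sigma_{p_f}$ from the Kriging means, standard deviations, and the prediction correlation matrix at a set of sample points; the quadratic cost in the number of points is what motivates the SCP strategy discussed later:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def pf_mean_std(mu, sigma, corr):
    """E(p_f), sigma_pf, and the contributions c_i of Eqs. (19), (23), and (24).
    mu, sigma: Kriging means/standard deviations at the N sample points.
    corr: N x N correlation matrix of the Kriging predictions (r_jk)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n = len(mu)
    e = norm.cdf(-mu / sigma)                 # e_i, Eq. (15)
    c = e * (1.0 - e)                         # first term of c_i, Eq. (23)
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            cov_ij = corr[i, j] * sigma[i] * sigma[j]
            cov = [[sigma[i] ** 2, cov_ij], [cov_ij, sigma[j] ** 2]]
            # e_ij = Pr{g_hat(x_i) < 0, g_hat(x_j) < 0}: bivariate normal CDF at (0, 0)
            e_ij = multivariate_normal.cdf([0.0, 0.0], mean=[mu[i], mu[j]], cov=cov)
            c[i] += e_ij - e[i] * e[j]
    E_pf = e.mean()                           # Eq. (19)
    sigma_pf = np.sqrt(c.sum()) / n           # Eq. (24)
    return E_pf, sigma_pf, c
```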

A New Learning Function.

The purpose of the learning function is to identify new TPs so that the error or $\mathrm{Var}(p_f)$ can be reduced. As indicated in Eq. (22), each CP contributes to $\mathrm{Var}(p_f)$, and the contribution is different. The sum of the terms involving $x_i$ in $\mathrm{Var}(p_f)$ is $c_i$ in Eq. (23). We therefore define $c_i$ as the contribution to $\mathrm{Var}(p_f)$ and use $c_i$ as the learning function. The learning function represents the contribution of the uncertainty of the Kriging prediction at $x_i$ to the overall uncertainty in the estimate of $p_f$. Note that the contribution also accounts for the correlation of the response at $x_i$ with those at all the other CPs. Then, we select the point that has the highest contribution to $\mathrm{Var}(p_f)$ as a new TP, which is therefore found by

(25) $x_{new} = x_C^{(k)},\quad k = \arg\max_{i=1,2,\ldots,N_C}\{c_i\}$

where $x_C^{(k)}$ is the $k$th point of the CPs $x_C = (x_1, x_2, \ldots, x_{N_C})$.
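
In code, the selection of Eq. (25) is simply an argmax over the contributions returned by the earlier pf_mean_std sketch; the numbers below are synthetic and only illustrate the call pattern:

```python
import numpy as np

# Toy demonstration of Eq. (25): pick the CP with the largest contribution c_i.
# pf_mean_std is the sketch given after Eq. (24); all values here are made up.
x_candidates = np.array([[0.5, 1.0], [1.2, 0.3], [2.0, 2.5]])
mu = np.array([0.8, -0.2, 1.5])          # Kriging means at the CPs
sigma = np.array([0.5, 0.4, 0.6])        # Kriging standard deviations
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])       # prediction correlations
E_pf, sigma_pf, c = pf_mean_std(mu, sigma, corr)
x_new = x_candidates[int(np.argmax(c))]  # new TP: highest contribution to Var(p_f)
```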

After the TP $x_{new} = x_C^{(k)}$ and its response $g(x_{new})$ are added to train the surrogate model, the standard deviation $\sigma_k$ becomes zero [14,15,32], and then

(26) $e_k = \Phi\left(-\dfrac{\mu_k}{\sigma_k}\right) = \begin{cases} 0, & \text{if } \mu_k > 0 \\ 1, & \text{if } \mu_k < 0 \end{cases}$

Thus,

(27) $e_k(1 - e_k) = 0$

(28) $e_{kj} = \Pr\{I_k = 1, I_j = 1\} = \begin{cases} 0, & \text{if } \mu_k > 0 \\ \Pr\{I_j = 1\} = e_j, & \text{if } \mu_k < 0 \end{cases},\quad j = 1, 2, \ldots, N_C$

and

(29) $e_{kj} - e_k e_j = \begin{cases} 0 - 0, & \text{if } \mu_k > 0 \\ e_j - e_j, & \text{if } \mu_k < 0 \end{cases} = 0$

Therefore,

(30) $c_k = 0$

This indicates that the contribution to $\mathrm{Var}(p_f)$ at the newly added TP becomes zero. Recall that this point had the highest contribution to $\mathrm{Var}(p_f)$ before it was added to the TPs. Adding new TPs in this way is therefore the most effective way to reach convergence.

The learning function of DKM uses all the information of the Gaussian process, including its mean, variance, and correlation functions, while IKM uses only the mean and variance functions of the Gaussian process. DKM is also more direct because it focuses on the probability of failure itself, while IKM employs two steps: creating an accurate surrogate model first and then calculating the probability of failure. As a result, the former method will in general be more effective than the latter.

If correlations are neglected, the new learning function of DKM is equivalent to the learning function of IKM that has been discussed in Sec. 2.3. IKM can therefore be regarded as a special case of DKM. The proof is given in Appendix B.

A New Convergence Criterion.

If the error is small enough, the process of adding new TPs terminates, and then the final surrogate model is used to estimate the probability of failure. Let the allowable relative error of the probability of failure be $\varepsilon$. Assume the confidence is $1 - \alpha$; then the confidence interval of the estimate is given by $E(p_f) \pm \Phi^{-1}(\alpha/2)\,\sigma_{p_f}$. Thus, the relative error is computed by

(31) $\dfrac{\left|E(p_f) \pm \Phi^{-1}(\alpha/2)\,\sigma_{p_f} - E(p_f)\right|}{E(p_f)} = \left|\dfrac{\Phi^{-1}(\alpha/2)\,\sigma_{p_f}}{E(p_f)}\right|$

Letting $\left|\Phi^{-1}(\alpha/2)\,\sigma_{p_f}/E(p_f)\right| < \varepsilon$, we obtain the following convergence criterion:

(32) $\dfrac{\sigma_{p_f}}{E(p_f)} \le \left|\dfrac{\varepsilon}{\Phi^{-1}(\alpha/2)}\right|$

The convergence criterion is therefore given by the ratio of σpf to E(pf). This convergence criterion is directly linked to the error of the estimate of the probability of failure, and such a direct link does not exist in IKM.
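
A minimal sketch of the check in Eq. (32), assuming E_pf and sigma_pf come from the earlier pf_mean_std sketch:

```python
from scipy.stats import norm

def dkm_converged(E_pf, sigma_pf, eps=0.02, alpha=0.02):
    """Convergence test of Eq. (32): relative error of pf within eps at
    confidence 1 - alpha (the paper uses eps = 0.02 and alpha = 0.02)."""
    threshold = abs(eps / norm.ppf(alpha / 2.0))   # |eps / Phi^{-1}(alpha/2)|
    return E_pf > 0.0 and sigma_pf / E_pf <= threshold
```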

Implementation.

With the use of the full information of the Gaussian process, DKM will in general be more effective than IKM. Directly using the strategy of the new learning function, however, will be computationally intensive because of the calculation of the bivariate joint probabilities $e_{ij}$ $(i, j = 1, 2, \ldots, N_C;\ i \neq j)$ for all the CPs. The number of such calculations is $N_C(N_C+1)/2$. If the size of the CPs is $10^5$, the number of calculations will be $10^5(10^5+1)/2 \approx 5\times10^9$. (But note that $e_{ij}$ does not require calling the original limit-state function.)

To avoid using all the CPs, we select a small portion of the CPs. The points in this small portion are called selected candidate points (SCPs). SCPs are selected based on two criteria: small errors in the estimate of pf and high potential contributions to Var(pf).

Let the size of the SCPs be $N_S$ and the number of failures estimated by the surrogate model in the domain of the SCPs be $N_F$. Then, the probability of failure using the SCPs is approximately given by

(33) $\tilde{p}_f = \dfrac{N_F}{N_S}$

The error of this estimate is proportional to $\sqrt{(1 - \tilde{p}_f)/(N_S\, \tilde{p}_f)}$, as indicated in Eq. (11). This shows that the higher $\tilde{p}_f$ is, the smaller the error. For the first criterion, or a smaller error, we therefore prefer a larger $N_F$. Then, we add all the CPs in the failure region to the SCPs, which means that the SCPs contain all the CPs in the failure region. In addition, to have a good balance between the two criteria, we make sure that 25–75% of the SCPs are in the failure region, namely,

(34) $25\% \le \dfrac{N_F}{N_S} \le 75\%$

The second criterion requires high potential contributions to $\mathrm{Var}(p_f)$. Recall that the contribution of a CP is given by $c_i = e_i(1 - e_i) + \sum_{j=1, j\neq i}^{N_C}(e_{ij} - e_i e_j)$. To avoid calculating the bivariate probabilities $e_{ij}$ $(j = 1, 2, \ldots, N_C,\ j \neq i)$, we only consider the first term $e_i(1 - e_i)$, which is the variance of the indicator function at $x_i$. As a result, the CPs that have the highest variances of the indicator function will be added to the set of SCPs, and the number of these points is $N_S - N_F$. The SCPs therefore consist of all the points in the failure region and other points with the highest indicator function variances in the safe region.
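
A sketch of this selection rule (Python; the 25-75% balance of Eq. (34) is assumed to be handled by resizing the CP set, as in Fig. 3, and is not shown):

```python
import numpy as np
from scipy.stats import norm

def select_scps(mu, sigma, n_scp=200):
    """Select SCP indices: all CPs predicted to fail (mu < 0) plus the safe CPs
    with the largest indicator-function variance e_i(1 - e_i)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    e = norm.cdf(-mu / np.maximum(sigma, 1e-12))
    fail = np.flatnonzero(mu < 0)                    # predicted failure region
    safe = np.flatnonzero(mu >= 0)
    n_extra = max(n_scp - len(fail), 0)
    ranked = safe[np.argsort(e[safe] * (1.0 - e[safe]))[::-1]]
    return np.concatenate([fail, ranked[:n_extra]])
```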

After the set of SCPs is formed, the learning function at each point of SCPs is calculated, and the SCP with the highest learning function value will be chosen as a new TP. Recall that evaluating the learning function needs to calculate bivariate probabilities. With the use of SCPs, the total number of bivariate probability calculations will be reduced to NS(NS+1)/2. If 200 SCPs are used, the total number of bivariate probability calculations will be 200(200+1)/2=20,100, which is much less than the number when all the CPs are used.

We now discuss how to use the learning function to identify a new TP from the SCPs. The learning function now becomes

(35) $c_i = e_i(1 - e_i) + \sum_{j=1, j\neq i}^{N_S}(e_{ij} - e_i e_j)$

where $j = 1, 2, \ldots, N_S$, and $e_i$ is found using Eq. (17). Denote the SCPs by $x_S$. $e_{ij}$ is the joint probability $\Pr\{\hat{g}(x_{Si}) < 0, \hat{g}(x_{Sj}) < 0\}$, which is the joint CDF of the bivariate normal distribution $N_2(\mu_{ij}, \Sigma_{ij})$ at $(0, 0)$, and $x_{Si}$ and $x_{Sj}$ are the $i$th and $j$th components of $x_S$, respectively. $\mu_{ij}$ is given by

(36) $\mu_{ij} = [\mu(x_{Si}), \mu(x_{Sj})]$

and $\Sigma_{ij}$ is given by

(37) $\Sigma_{ij} = \begin{bmatrix} \sigma^2(x_{Si}) & \mathrm{Cov}(x_{Si}, x_{Sj}) \\ \mathrm{Cov}(x_{Sj}, x_{Si}) & \sigma^2(x_{Sj}) \end{bmatrix}$

where $\mathrm{Cov}(x_{Sj}, x_{Si}) = \mathrm{Cov}(x_{Si}, x_{Sj})$, and $\mathrm{Cov}(x_{Si}, x_{Sj})$ is given by [33]

(38) $\mathrm{Cov}(x_{Si}, x_{Sj}) = \sigma^2\Big\{R(x_{Si}, x_{Sj}) - r^T(x_{Si}) R^{-1} r(x_{Sj}) + \big[f(x_{Si}) - F^T R^{-1} r(x_{Si})\big]^T \big[F^T R^{-1} F\big]^{-1} \big[f(x_{Sj}) - F^T R^{-1} r(x_{Sj})\big]\Big\}$

The symbols in the above equation are the same as those in Appendix A.
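
The following sketch evaluates Eq. (38) with NumPy, assuming the quantities R^(-1), F, sigma^2, theta, and the training inputs are taken from an already fitted Kriging model of the form in Appendix A; the function and argument names are illustrative, not from a specific toolbox:

```python
import numpy as np

def gauss_corr(x1, x2, theta):
    """Gaussian correlation function of Eq. (A1)."""
    return float(np.exp(-np.sum(theta * (np.asarray(x1) - np.asarray(x2)) ** 2)))

def prediction_cov(xi, xj, X_train, F, f_func, theta, sigma2, R_inv):
    """Covariance between Kriging predictions at xi and xj, Eq. (38).
    f_func(x) returns the regression vector f(x) as a NumPy array."""
    r_i = np.array([gauss_corr(xi, xt, theta) for xt in X_train])
    r_j = np.array([gauss_corr(xj, xt, theta) for xt in X_train])
    u_i = f_func(xi) - F.T @ R_inv @ r_i
    u_j = f_func(xj) - F.T @ R_inv @ r_j
    FRF_inv = np.linalg.inv(F.T @ R_inv @ F)
    return sigma2 * (gauss_corr(xi, xj, theta)
                     - r_i @ R_inv @ r_j
                     + u_i @ FRF_inv @ u_j)
```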

Figure 2 shows the domains of the CPs, SCPs, and the failure region, denoted by $x_C$, $x_S$, and $x_F$, respectively. From the figure, we have

(39) $p_f = \Pr(x \in x_F) = \Pr(x \in x_S, x \in x_F) = \Pr(x \in x_F \mid x \in x_S)\,\Pr(x \in x_S)$

where $\Pr(x \in x_F \mid x \in x_S)$ is the probability of failure in the domain of $x_S$. Denoting it by $p_{fS}$ and using Eqs. (19) and (24), we have

(40) $E(p_{fS}) = \dfrac{1}{N_S}\sum_{i=1}^{N_S} e_i$

and

(41) $\sigma_{p_{fS}} = \dfrac{1}{N_S}\sqrt{\sum_{i=1}^{N_S} c_i}$

For a given set of $x_S$, $\Pr(x \in x_S)$ can be estimated by $N_S/N_C$ and can be treated as a constant. Then,

(42) $E(p_f) = E(p_{fS})\,\Pr(x \in x_S)$

(43) $\sigma_{p_f} = \sigma_{p_{fS}}\,\Pr(x \in x_S)$

The convergence criterion in Eq. (32) then becomes

(44) $\dfrac{\sigma_{p_{fS}}}{E(p_{fS})} = \dfrac{\sigma_{p_f}}{E(p_f)} \le \left|\dfrac{\varepsilon}{\Phi^{-1}(\alpha/2)}\right|$

This means that we can just use the SCPs to determine the convergence criterion. After the convergence criterion is satisfied, MCS is performed on the final surrogate model to evaluate the probability of failure. The flowchart of the DKM is shown in Fig. 3.
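
Putting the pieces together, the following sketch outlines the DKM loop of Fig. 3 in Python. It reuses the pf_mean_std, select_scps, and dkm_converged sketches given above; fit_kriging is an assumed user-supplied routine (e.g., a wrapper around a Kriging toolbox) returning an object with predict(X) -> (mu, sigma) and cov(X) -> full prediction covariance matrix. This is an illustration of the logic, not the authors' implementation:

```python
import numpy as np
from scipy.stats import norm

def dkm_reliability(g, x_candidates, fit_kriging, n_initial=12, n_scp=200,
                    eps=0.02, alpha=0.02, seed=0):
    """Sketch of the DKM procedure (Fig. 3) for a limit-state function g."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x_candidates), n_initial, replace=False)
    X_tp = x_candidates[idx]
    y_tp = np.array([g(x) for x in X_tp])                # initial TPs and responses
    while True:
        model = fit_kriging(X_tp, y_tp)                  # refit the surrogate
        mu, sigma = model.predict(x_candidates)
        sigma = np.maximum(sigma, 1e-12)                 # guard zero variance at TPs
        scp = select_scps(mu, sigma, n_scp)              # SCP indices
        cov = model.cov(x_candidates[scp])
        sd = np.maximum(np.sqrt(np.diag(cov)), 1e-12)
        corr = cov / np.outer(sd, sd)
        E_pfS, sigma_pfS, c = pf_mean_std(mu[scp], sigma[scp], corr)
        if dkm_converged(E_pfS, sigma_pfS, eps, alpha):  # Eq. (44)
            break
        x_new = x_candidates[scp[int(np.argmax(c))]]     # Eq. (25) restricted to SCPs
        X_tp = np.vstack([X_tp, x_new])
        y_tp = np.append(y_tp, g(x_new))
    # final estimate, Eq. (19), on all CPs with the converged surrogate
    return norm.cdf(-mu / sigma).mean()
```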

In this section, we present one numerical example and four engineering examples. These problems cover a wide range of applications. While the first example demonstrates the implementation process of DKM, the four other engineering problems evaluate the applicability of DKM for various situations, including vibration in Example 2, structural analysis in Example 3, mechanical component design in Example 4, and mechanism analysis in Example 5. All five examples involve nonlinear limit-state functions. To build the initial surrogate model, we use Latin hypercube sampling [34] to generate the initial TPs, and the sample size is 12 as suggested in Ref. [18]. We also use the number of limit-state function calls (NFC) to measure the efficiency and the following percentage error to measure the accuracy:

(45) $\varepsilon = \dfrac{|p_f - p_f^{MCS}|}{p_f^{MCS}} \times 100\%$

where $p_f^{MCS}$ is from MCS with a large sample size and the original limit-state function; $p_f^{MCS}$ is therefore regarded as the accurate solution for accuracy comparison. $p_f$ is from a non-MCS method, namely, DKM, IKM, or AK-MCS. Since both DKM and IKM are based on random sampling, their results are also random. We therefore run DKM and IKM 20 times independently and then report their average results.

To have a fair comparison between DKM and IKM, ideally, we should use the same convergence criteria. A direct equivalence between the convergence criteria of the two methods, however, does not exist. Thus, we implement the following strategy for the comparison.

  1. For DKM, set the confidence in Eq. (31) to be 98%, or α = 0.02, and the allowable error to be ε = 0.02. The number of SCPs is 200.
  2. Run DKM until convergence and record the number of function calls NFC.
  3. Use the same value of NFC and the same initial TPs to run IKM. This means that IKM terminates once the total number of TPs reaches NFC.

Repeat the above steps 20 times and report the average pf, ε, NFC, and the standard deviation of pf. With the above strategy, the accuracy of the two methods is compared with the same number of TPs or function calls.

As discussed previously, it is not easy to estimate the error of the estimated probability of failure if we use the existing Kriging-based reliability methods. DKM can easily overcome this drawback because the process of model training terminates once the estimated error of the probability of failure is small enough. To show this advantage, we also perform AK-MCS with its original procedure [18] 20 times using the same CPs as those of DKM. The parameters we use for AK-MCS are those in Ref. [18], and they are U=2 and cov=5%. Then, the results from MCS, DKM, IKM, and AK-MCS are put together in a table for an easy comparison.

The process of building the surrogate model actually takes place in the space of independent random variables that follow standard normal distributions. This means that all the random variables are transformed into standard normal variables during the analysis. This transformation makes programming the process more convenient, and it does not affect the performance, such as the accuracy and efficiency, of the reliability analysis.

Example 1.

A highly nonlinear performance function is defined by [1]

(46) $g(x) = \sin\left(\dfrac{5x_1}{2}\right) + 2 - \dfrac{(x_1^2 + 4)(x_2 - 1)}{20}$

where $x_1$ and $x_2$ are independently and normally distributed with $x_1 \sim N(1.5, 1^2)$ and $x_2 \sim N(2.5, 1^2)$.
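
For reference, the following is a direct transcription of Eq. (46) together with a crude-MCS estimate; this is only a sanity check, and the values reported in Table 1 come from the procedures described in the paper:

```python
import numpy as np

def g_example1(x1, x2):
    """Performance function of Eq. (46)."""
    return np.sin(5.0 * x1 / 2.0) + 2.0 - (x1 ** 2 + 4.0) * (x2 - 1.0) / 20.0

rng = np.random.default_rng(0)
n = 10 ** 6
x1 = rng.normal(1.5, 1.0, n)
x2 = rng.normal(2.5, 1.0, n)
print("crude-MCS pf estimate:", np.mean(g_example1(x1, x2) < 0.0))
```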

The contour of the limit-state function is plotted in Fig. 4, which shows the high nonlinearity of the limit-state function. This figure also shows the initial TPs, added TPs, CPs, and SCPs in the last iteration for one of the 20 DKM runs. The procedure is as follows:

  1. Generate 12 initial TPs, indicated by pentagrams in Fig. 4, and use them to build an initial surrogate model.
  2. Generate NC sample points as CPs, denoted by solid dots in Fig. 4. NC is determined by Eq. (34), where NS = 200 remains the same for all the examples. Then, all the new SCPs and TPs, indicated by stars and circles, respectively, in Fig. 4, will be selected from the CPs.
  3. Select SCPs from the CPs based on the state of each CP (either in the safe or failure region) and the variance of the indicator function.
  4. Select the SCP with the highest contribution as the new TP.
  5. Add the new TP and its response to the existing set of TPs; update the surrogate model.

Steps (2)–(5) are repeated until convergence. The contour of the final surrogate model at the limit state is plotted in Fig. 5, which shows that the surrogate model is accurate in the region where the random variables have high probability density and is less accurate in the region where the random variables have low probability density. This feature keeps the number of TPs minimal. Then, Eq. (19) is used to calculate the probability of failure using the final surrogate model and the same CPs.

After DKM is completed, the total number of limit-state function calls NFC, which is also the total number of TPs, is recorded. Then, with the same initial TPs, IKM is performed, and its TPs are added iteratively until the total number of TPs reaches the recorded number NFC. Then, Eq. (12) is used to calculate the probability of failure using the final surrogate model and the same CPs.

For comparison, the initial TPs, added TPs, and CPs of IKM are also plotted in Fig. 6. By comparing Figs. 4 and 6, we see that the patterns of the added TPs of IKM and DKM are similar even though the two methods generate different TPs. The TPs of IKM are generated to minimize the error of a wrong sign of the limit-state function, while the TPs of DKM are generated to minimize the error in the estimate of the probability of failure. As discussed previously, both TP updating strategies help increase the accuracy, and this is the reason that the two patterns are similar; the two strategies also have different foci, and this is the reason that the individual TPs from the two strategies are different. As indicated in the results, the strategy of DKM makes the updating process more efficient.

After DKM and IKM are performed 20 times, the average results are calculated and are shown in the DKM and IKM rows, respectively, of Table 1. With the same average of 26.3 function calls, or the same efficiency, DKM is more accurate than IKM. The results also show that DKM is more robust since its standard deviation of pf is smaller than those of IKM and AK-MCS.

The original AK-MCS is also performed 20 times with the same initial TPs as those of DKM and IKM. The results are given in the last row of Table 1. Both its average error and its number of function calls are larger than those of DKM and IKM. AK-MCS is less accurate for this problem because it requires a sample size smaller than that of DKM. AK-MCS is also less efficient because it requires a minimum value U = 2 (or a minimum probability of wrong sign of 0.0228). This requirement does not have a direct link to the error of the probability of failure, and it is satisfied with more function calls than DKM requires.

Example 2.

Example 2 involves a nonlinear oscillator [18,35–37], as shown in Fig. 7. With six independently and normally distributed random variables, the performance function reads as

(47) $g(x) = 3r - \left|\dfrac{2F_1}{m\,\omega_0^2}\sin\left(\dfrac{\omega_0 t_1}{2}\right)\right|$

where $x = (m, c_1, c_2, r, F_1, t_1)$ and $\omega_0 = \sqrt{(c_1 + c_2)/m}$. The distributions are given in Table 2.
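
A direct transcription of Eq. (47), suitable as the limit-state call in MCS or in a Kriging-based loop (the distribution parameters themselves are those of Table 2):

```python
import numpy as np

def g_oscillator(m, c1, c2, r, F1, t1):
    """Nonlinear oscillator performance function, Eq. (47)."""
    w0 = np.sqrt((c1 + c2) / m)
    return 3.0 * r - np.abs(2.0 * F1 / (m * w0 ** 2) * np.sin(w0 * t1 / 2.0))
```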

The results are shown in Table 3; they indicate that DKM has higher accuracy than IKM, and that both DKM and IKM outperform AK-MCS.

Example 3.

A roof truss structure [38,39] is shown in Fig. 8. Assume the truss bars bear a uniformly distributed load $q$, which can be transformed into the nodal load $P = ql/4$. The perpendicular deflection of the truss peak node C is calculated by

(48) $\Delta_C = \dfrac{ql^2}{2}\left(\dfrac{3.81}{A_c E_c} + \dfrac{1.13}{A_s E_s}\right)$

where $A_c$ and $A_s$ are the cross-sectional areas of the reinforced concrete and steel bars, respectively; $E_c$ and $E_s$ are their corresponding elastic moduli; and $l$ is the length of the truss.

A failure event occurs when the perpendicular deflection $\Delta_C$ exceeds 3 cm. The performance function is then defined by

(49) $g(x) = 0.03 - \dfrac{ql^2}{2}\left(\dfrac{3.81}{A_c E_c} + \dfrac{1.13}{A_s E_s}\right)$

where x=(q,l,As,Ac,Es,Ec). All the random variables are independent, and their distributions are given in Table 4.
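
A transcription of Eq. (49) as a callable limit-state function (units are assumed consistent with the 0.03 m threshold; the random-variable parameters are those of Table 4):

```python
def g_roof_truss(q, l, As, Ac, Es, Ec):
    """Roof truss performance function, Eq. (49): 3 cm allowable deflection
    minus the peak-node deflection of Eq. (48)."""
    return 0.03 - (q * l ** 2 / 2.0) * (3.81 / (Ac * Ec) + 1.13 / (As * Es))
```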

Table 5 shows the average results from 20 runs, which indicate that DKM is more accurate than IKM and is also more accurate and efficient than AK-MCS.

Example 4.

The cantilever tube [40] shown in Fig. 9 is subjected to external forces $F_1$, $F_2$, $P$, and torsion $T$. The performance function is defined as the difference between the yield strength $S_y$ and the maximum stress $\sigma_{max}$

(50) $g = S_y - \sigma_{max}$

where $\sigma_{max}$ is the maximum von Mises stress on the top surface of the tube at the origin and is given by

(51) $\sigma_{max} = \sqrt{\sigma_x^2 + 3\tau_{zx}^2}$

The normal stress $\sigma_x$ is calculated by

(52) $\sigma_x = \dfrac{P + F_1\sin\theta_1 + F_2\sin\theta_2}{A} + \dfrac{Md}{2I}$

where the first term is the normal stress due to the axial forces, and the second term is the normal stress due to the bending moment $M$, which is given by

(53) $M = F_1 L_1 \cos\theta_1 + F_2 L_2 \cos\theta_2$

The cross-sectional area of the tube is $A = (\pi/4)[d^2 - (d - 2t)^2]$, and the moment of inertia of the tube is $I = (\pi/64)[d^4 - (d - 2t)^4]$. The torsional stress $\tau_{zx}$ at the origin is $\tau_{zx} = Td/(2J)$, where $J = 2I$. The distributions of the independent random variables are given in Table 6.
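
The stress calculation of Eqs. (50)-(53) transcribed as a single limit-state call (all arguments in consistent units; which of them are random, and their distributions, are given in Table 6):

```python
import numpy as np

def g_cantilever_tube(Sy, P, F1, F2, T, d, t, theta1, theta2, L1, L2):
    """Cantilever tube performance function, Eqs. (50)-(53)."""
    A = np.pi / 4.0 * (d ** 2 - (d - 2.0 * t) ** 2)          # cross-sectional area
    I = np.pi / 64.0 * (d ** 4 - (d - 2.0 * t) ** 4)         # moment of inertia
    J = 2.0 * I
    M = F1 * L1 * np.cos(theta1) + F2 * L2 * np.cos(theta2)  # bending moment, Eq. (53)
    sigma_x = (P + F1 * np.sin(theta1) + F2 * np.sin(theta2)) / A + M * d / (2.0 * I)
    tau_zx = T * d / (2.0 * J)                               # torsional stress
    sigma_max = np.sqrt(sigma_x ** 2 + 3.0 * tau_zx ** 2)    # von Mises stress, Eq. (51)
    return Sy - sigma_max                                    # Eq. (50)
```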

The results from Table 7 also show the higher accuracy of DKM.

Example 5.

This example involves a slider–crank mechanism [27], as shown in Fig. 10. The crank is a disk with a radius of $x_1$. The angular velocity is $\omega = 1$ rad/s, and the length of the coupler is $x_2$. The motion output is the displacement of the slider, which is given by

(54) $S = x_1\cos\theta + \sqrt{x_2^2 - (x_1\sin\theta)^2}$

where $x_1$ and $x_2$ are independently and normally distributed with $x_1 \sim N(100, 0.1^2)$ mm and $x_2 \sim N(150, 0.1^2)$ mm. $\theta$ is the motion input defined by $\theta = \omega t$. The required motion output is the nominal displacement of the slider, given by

(55) $S_R = \mu_1\cos\theta + \sqrt{\mu_2^2 - (\mu_1\sin\theta)^2}$

where $\mu_1$ and $\mu_2$ are the mean values of $x_1$ and $x_2$, respectively. Thus, the motion error is

(56) $\Delta S = |S_R - S|$

The motion error should not be greater than the allowable motion error of 0.55 mm. The system is required to produce accurate motion output within a full motion cycle $\theta \in [0, 2\pi]$. Thus, the maximum motion error on $[0, 2\pi]$ should not exceed 0.55 mm, and the performance function is defined by

(57) $g(x) = \varepsilon - \Delta S_{max} = 0.55 - \max_{\theta \in [0, 2\pi]}\left\{\left|(\mu_1 - x_1)\cos\theta + \sqrt{\mu_2^2 - (\mu_1\sin\theta)^2} - \sqrt{x_2^2 - (x_1\sin\theta)^2}\right|\right\}$
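
A transcription of Eq. (57) that approximates the maximum over theta on a uniform grid; the grid size n_theta is an assumption made for the illustration:

```python
import numpy as np

def g_slider_crank(x1, x2, mu1=100.0, mu2=150.0, n_theta=721):
    """Slider-crank performance function, Eq. (57), with the maximum motion
    error over a full cycle approximated on a theta grid (units: mm)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    err = np.abs((mu1 - x1) * np.cos(theta)
                 + np.sqrt(mu2 ** 2 - (mu1 * np.sin(theta)) ** 2)
                 - np.sqrt(x2 ** 2 - (x1 * np.sin(theta)) ** 2))
    return 0.55 - err.max()
```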

The maximum motion error ΔSmax is shown in Fig. 11, and the failure region is shown in Fig. 12. The two figures indicate the irregularity and nonlinearity of the performance function and the failure region, for which traditional reliability methods, such as the FORM and SORM, will not be accurate. The DKM works quite well for this problem, as indicated by the results in Table 8. The results show that DKM is more accurate than IKM.

The efficiency of reliability analysis is critical because the analysis calls the associated limit-state function repeatedly. Kriging-based reliability methods are computationally efficient. As a result, they have increasingly been researched and applied, especially for highly nonlinear limit-state functions, for which the traditional FORM and SORM are not applicable. This study clearly demonstrates that the efficiency can be further improved by accounting for the dependencies between Kriging predictions.

The new DKM in this work improves the efficiency with its three new components. The first component is the new formula of calculating the probability of failure. The formula uses the average probability of failure at all the Monte Carlo samples, as well as both means and standard deviations of the Kriging predictions. The second component is the new learning function for selecting TPs. For a single sample point, the learning function considers not only the contribution of the point itself to the error of the probability of failure but also those of the dependencies from all the other points. The third component is the new stopping criterion that guarantees a good balance between accuracy and efficiency. The five examples indicate that DKM is more accurate than the Kriging-based methods that use only independent Kriging predictions.

The future work includes the following directions: (1) improve the performance of DKM for problems with an extremely low probability of failure, (2) extend DKM to system reliability analysis with at least two limit-state functions, (3) incorporate DKM in reliability-based design, and (4) develop a new DKM for time-dependent reliability analysis.

This material is based upon the work supported by the National Science Foundation through Grant Nos. CMMI 1234855 and CMMI 1300870. The support from the Intelligent Systems Center (ISC) at the Missouri University of Science and Technology is also acknowledged.

Appendix A: Kriging Method

There are several models for the correlation function. In this work, we use the commonly used Gaussian correlation function defined by [14,15,32]

(A1) $R(x_i, x_j) = \exp\left[-\sum_{k=1}^{d}\theta_k (x_{ik} - x_{jk})^2\right]$

where $x_{ik}$ and $x_{jk}$ are the $k$th coordinates of points $x_i$ and $x_j$, respectively; $d$ is the dimensionality of $x$; and $\theta_k$ indicates the correlation between the points in dimension $k$. The Kriging prediction and Kriging variance are computed by [14]

(A2) $\mu_G(x) = f(x)^T\hat{\beta} + r(x)^T R^{-1}(y - F\hat{\beta})$

and

(A3) $\sigma_G^2(x) = \hat{\sigma}_Z^2\left\{1 - r(x)^T R^{-1} r(x) + \left[F^T R^{-1} r(x) - f(x)\right]^T \left(F^T R^{-1} F\right)^{-1}\left[F^T R^{-1} r(x) - f(x)\right]\right\}$

where $y$ is a vector of responses at the TPs, $F$ is an $m \times p$ matrix with rows $f(x_i)^T$, $m$ is the number of sample points, and $r(\cdot)$ is the correlation vector containing the correlations between $x$ and each of the $m$ TPs

(A4) $r(x) = [R(x, x_1), R(x, x_2), \ldots, R(x, x_m)]^T$

$R$ is the correlation matrix, which is composed of the correlation function evaluated at each possible combination of the $m$ sample points

(A5) $R = [R(x_i, x_j)],\quad 1 \le i \le m;\ 1 \le j \le m$

$\hat{\beta}$ is the generalized least squares estimate of $\beta$ given by [14,15]

(A6) $\hat{\beta} = (F^T R^{-1} F)^{-1} F^T R^{-1} y$

and the maximum likelihood estimate of the process variance is

(A7) $\hat{\sigma}_Z^2 = \dfrac{1}{m}(y - F\hat{\beta})^T R^{-1}(y - F\hat{\beta})$
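
A compact sketch of Eqs. (A1)-(A7) in Python/NumPy. The regression basis f_func is supplied by the user and must return a NumPy array (for ordinary Kriging, f_func = lambda x: np.array([1.0])); the small diagonal nugget and the function names are illustrative choices, not part of the DACE toolbox used in the paper:

```python
import numpy as np

def gauss_corr_matrix(X1, X2, theta):
    """Gaussian correlation of Eq. (A1) between two point sets (rows = points)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

def kriging_predict(x, X_train, y, theta, f_func):
    """Kriging prediction mu_G(x) and variance sigma_G^2(x), Eqs. (A2)-(A7)."""
    X_train = np.asarray(X_train, float)
    y = np.asarray(y, float).ravel()
    m = len(y)
    F = np.array([f_func(xt) for xt in X_train])                        # m x p regression matrix
    R = gauss_corr_matrix(X_train, X_train, theta) + 1e-10 * np.eye(m)  # Eq. (A5) plus nugget
    R_inv = np.linalg.inv(R)
    beta = np.linalg.solve(F.T @ R_inv @ F, F.T @ R_inv @ y)            # Eq. (A6)
    sigma2 = (y - F @ beta) @ R_inv @ (y - F @ beta) / m                # Eq. (A7)
    r = gauss_corr_matrix(np.atleast_2d(x), X_train, theta).ravel()     # Eq. (A4)
    mu = f_func(x) @ beta + r @ R_inv @ (y - F @ beta)                  # Eq. (A2)
    u = F.T @ R_inv @ r - f_func(x)
    var = sigma2 * (1.0 - r @ R_inv @ r
                    + u @ np.linalg.solve(F.T @ R_inv @ F, u))          # Eq. (A3)
    return mu, max(var, 0.0)
```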

Appendix B: IKM as a Special Case of DKM

We herein show that the learning function of IKM (AK-MCS) is a special case of that of DKM. The learning function of DKM is $c_i = e_i(1 - e_i) + \sum_{j=1, j\neq i}^{N}(e_{ij} - e_i e_j)$, which is maximized when searching for a new TP. If no correlations are considered, the learning function reduces to

(B1) $c_i = e_i(1 - e_i)$

IKM (AK-MCS) uses the probability of a wrong sign, or $U_i = |\mu_i|/\sigma_i$, as its learning function. The learning function is minimized when searching for a new TP. Recall that $e_i = \Phi(-\mu_i/\sigma_i)$, and then

(B2) $c_i = \begin{cases} \Phi(-U_i)\left[1 - \Phi(-U_i)\right], & \text{if } \mu_i \ge 0 \\ \Phi(U_i)\left[1 - \Phi(U_i)\right], & \text{otherwise} \end{cases}$

Since $\Phi(-U_i) = 1 - \Phi(U_i)$ and $\Phi(-U_i)[1 - \Phi(-U_i)] = [1 - \Phi(U_i)]\Phi(U_i)$, we have

(B3) $c_i = \left[1 - \Phi(U_i)\right]\Phi(U_i)$

ci is monotonic with respect to Ui as shown in Fig. 13. This indicates that maximizing ci in DKM without considering correlations is equivalent to minimizing Ui in IKM. The learning function of IKM is therefore a special case of that of DKM.

Copyright © 2016 by ASME

References

Bichon, B. J. , Eldred, M. S. , Swiler, L. P. , Mahadevan, S. , and McFarland, J. M. , 2007, “ Multimodal Reliability Assessment for Complex Engineering Applications Using Efficient Global Optimization,” AIAA Paper No. 2007-1946.
Ditlevsen, O. , and Madsen, H. O. , 1996, Structural Reliability Methods, Wiley, New York, NY.
Mourelatos, Z. P. , Kuczera, R. , and Latcha, M. , 2006, “ An Efficient Monte Carlo Reliability Analysis Using Global and Local Metamodels,” AIAA Paper No. 2006-7087.
Du, X. , and Hu, Z. , 2012, “ First Order Reliability Method With Truncated Random Variables,” ASME J. Mech. Des., 134(9), p. 091005. [CrossRef]
Zhu, Z. , Hu, Z. , and Du, X. , 2015, “ Reliability Analysis for Multidisciplinary Systems Involving Stationary Stochastic Processes,” ASME Paper No. DETC2015-46168.
Lee, I. , Choi, K. K. , and Gorsich, D. , 2010, “ Sensitivity Analyses of FORM-Based and DRM-Based Performance Measure Approach (PMA) for Reliability-Based Design Optimization (RBDO),” Int. J. Numer. Methods Eng., 82(1), pp. 26–46.
Simpson, T. W. , Peplinski, J. D. , Koch, P. N. , and Allen, J. K. , 2001, “ Metamodels for Computer-Based Engineering Design: Survey and Recommendations,” Eng. Comput., 17(2), pp. 129–150. [CrossRef]
Gomes, H. M. , and Awruch, A. M. , 2004, “ Comparison of Response Surface and Neural Network With Other Methods for Structural Reliability Analysis,” Struct. Saf., 26(1), pp. 49–67. [CrossRef]
Zou, T. , Mourelatos, Z. P. , Mahadevan, S. , and Tu, J. , 2008, “ An Indicator Response Surface Method for Simulation-Based Reliability Analysis,” ASME J. Mech. Des., 130(7), p. 071401. [CrossRef]
Youn, B. D. , and Choi, K. K. , 2004, “ A New Response Surface Methodology for Reliability-Based Design Optimization,” Comput. Struct., 82(2), pp. 241–256. [CrossRef]
Viana, F. A. C. , Simpson, T. W. , Balabanov, V. , and Toropov, V. , 2014, “ Metamodeling in Multidisciplinary Design Optimization: How Far Have We Really Come?,” AIAA J., 52(4), pp. 670–690. [CrossRef]
Smola, A. J. , and Schölkopf, B. , 2004, “ A Tutorial on Support Vector Regression,” Stat. Comput., 14(3), pp. 199–222. [CrossRef]
Xiu, D. , and Karniadakis, G. E. , 2003, “ Modeling Uncertainty in Flow Simulations Via Generalized Polynomial Chaos,” J. Comput. Phys., 187(1), pp. 137–167. [CrossRef]
Lophaven, S. N. , Nielsen, H. B. , and Sondergaard, J. , 2002, “ DACE—A Matlab Kriging Toolbox, Version 2.0,” Technical University of Denmark, Lyngby, Denmark.
Sacks, J. , Welch, W. J. , Toby, J. M. , and Wynn, H. P. , 1989, “ Design and Analysis of Computer Experiments,” Stat. Sci., 4(4), pp. 409–423. [CrossRef]
Zhao, L. , Choi, K. , and Lee, I. , 2011, “ Metamodeling Method Using Dynamic Kriging for Design Optimization,” AIAA J., 49(9), pp. 2034–2046. [CrossRef]
Joseph, V. R. , Hung, Y. , and Sudjianto, A. , 2008, “ Blind Kriging: A New Method for Developing Metamodels,” ASME J. Mech. Des., 130(3), p. 031102. [CrossRef]
Echard, B. , Gayton, N. , and Lemaire, M. , 2011, “ AK-MCS: An Active Learning Reliability Method Combining Kriging and Monte Carlo Simulation,” Struct. Saf., 33(2), pp. 145–154. [CrossRef]
Jones, D. , Schonlau, M. , and Welch, W. , 1998, “ Efficient Global Optimization of Expensive Black-Box Functions,” J. Glob. Optim., 13(4), pp. 455–492. [CrossRef]
Lv, Z. , Lu, Z. , and Wang, P. , 2015, “ A New Learning Function for Kriging and Its Applications to Solve Reliability Problems in Engineering,” Comput. Math. Appl., 70(5), pp. 1182–1197. [CrossRef]
Wang, Z. , and Wang, P. , 2013, “ A Maximum Confidence Enhancement Based Sequential Sampling Scheme for Simulation-Based Design,” ASME J. Mech. Des., 136(2), p. 021006. [CrossRef]
Dubourg, V. , Sudret, B. , and Deheeger, F. , 2013, “ Metamodel-Based Importance Sampling for Structural Reliability Analysis,” Probab. Eng. Mech., 33, pp. 47–57. [CrossRef]
Bect, J. , Ginsbourger, D. , Li, L. , Picheny, V. , and Vazquez, E. , 2012, “ Sequential Design of Computer Experiments for the Estimation of a Probability of Failure,” Stat. Comput., 22(3), pp. 773–793. [CrossRef]
Fauriat, W. , and Gayton, N. , 2014, “ AK-SYS: An Adaptation of the AK-MCS Method for System Reliability,” Reliab. Eng. Syst. Saf., 123, pp. 137–144. [CrossRef]
Bichon, B. J. , McFarland, J. M. , and Mahadevan, S. , 2011, “ Efficient Surrogate Models for Reliability Analysis of Systems With Multiple Failure Modes,” Reliab. Eng. Syst. Saf., 96(10), pp. 1386–1395. [CrossRef]
Hu, Z. , and Du, X. , 2015, “ Mixed Efficient Global Optimization for Time-Dependent Reliability Analysis,” ASME J. Mech. Des., 137(5), p. 051401. [CrossRef]
Zhu, Z. , and Du, X. , 2015, “ Extreme Value Metamodeling for System Reliability With Time-Dependent Functions,” ASME Paper No. DETC2015-46162.
Viana, F. A. C. , Haftka, R. T. , and Watson, L. T. , 2013, “ Efficient Global Optimization Algorithm Assisted by Multiple Surrogate Techniques,” J. Glob. Optim., 56(2), pp. 669–689. [CrossRef]
Chaudhuri, A. , and Haftka, R. T. , 2014, “ Efficient Global Optimization With Adaptive Target Setting,” AIAA J., 52(7), pp. 1573–1577. [CrossRef]
Wang, Z. , and Wang, P. , 2015, “ An Integrated Performance Measure Approach for System Reliability Analysis,” ASME J. Mech. Des., 137(2), p. 021406. [CrossRef]
Koehler, E. , Brown, E. , and Haneuse, S. J. P. A. , 2009, “ On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses,” Am. Stat., 63(2), pp. 155–162. [CrossRef] [PubMed]
Martin, J. D. , and Simpson, T. W. , 2005, “ Use of Kriging Models to Approximate Deterministic Computer Models,” AIAA J., 43(4), pp. 853–863. [CrossRef]
Arendt, P. D. , Apley, D. W. , and Chen, W. , 2012, “ Quantification of Model Uncertainty: Calibration, Model Discrepancy, and Identifiability,” ASME J. Mech. Des., 134(10), p. 100908. [CrossRef]
Viana, F. A. C. , Venter, G. , and Balabanov, V. , 2010, “ An Algorithm for Fast Optimal Latin Hypercube Design of Experiments,” Int. J. Numer. Methods Eng., 82(2), pp. 135–156.
Schueremans, L. , and Van Gemert, D. , 2005, “ Benefit of Splines and Neural Networks in Simulation Based Structural Reliability Analysis,” Struct. Saf., 27(3), pp. 246–261. [CrossRef]
Balesdent, M. , Morio, J. , and Marzat, J. , 2013, “ Kriging-Based Adaptive Importance Sampling Algorithms for Rare Event Estimation,” Struct. Saf., 44, pp. 1–10. [CrossRef]
Gayton, N. , Bourinet, J. M. , and Lemaire, M. , 2003, “ CQ2RS: A New Statistical Approach to the Response Surface Method for Reliability Analysis,” Struct. Saf., 25(1), pp. 99–121. [CrossRef]
Zhao, H. , Yue, Z. , Liu, Y. , Gao, Z. , and Zhang, Y. , 2015, “ An Efficient Reliability Method Combining Adaptive Importance Sampling and Kriging Metamodel,” Appl. Math. Modell., 39(7), pp. 1853–1866. [CrossRef]
Song, S. , Lu, Z. , and Qiao, H. , 2009, “ Subset Simulation for Structural Reliability Sensitivity Analysis,” Reliab. Eng. Syst. Saf., 94(2), pp. 658–665. [CrossRef]
Du, X. , 2008, “ Unified Uncertainty Analysis by the First Order Reliability Method,” ASME J. Mech. Des., 130(9), p. 091401. [CrossRef]

Figures

Fig. 2  Domains of xC, xS, and xF

Fig. 4  Sample points of DKM

Fig. 5  Final surrogate model

Fig. 6  Sample points of IKM

Fig. 7  A nonlinear oscillator

Fig. 8  A roof truss structure

Fig. 10  A slider–crank mechanism

Fig. 11  Maximum motion error over [0, 2π] s

Fig. 12  The failure region

Fig. 13  The curve of ci with respect to Ui

Tables

Table 1  Average results from 20 runs

Table 2  Random variables of Example 2

Table 3  Average results from 20 runs

Table 4  Random variables of Example 3

Table 5  Average results from 20 runs

Table 6  Random variables of Example 4

Table 7  Average results from 20 runs

Table 8  Average results from 20 runs
