Abstract
Metal additive manufacturing (AM) has recently attracted attention due to its potential for batch/mass production of metal parts. The process, however, currently suffers from low productivity, inconsistency in the properties of printed parts, and defects such as lack of fusion and keyholing. Finite element (FE) models cannot capture the metal AM process with full accuracy and carry a high computational cost, while empirical models based on experiments are time-consuming and expensive. This paper enhances a previously developed framework that takes advantage of both empirical and FE models. The validity and accuracy of the metamodel developed in the earlier framework depend on the initial assumption of parameter uncertainties, which causes a problem when the assumed uncertainties are far from the actual values. The proposed framework introduces an iterative calibration process to overcome this limitation. After comparing several calibration metrics, the second-order statistical moment-based metric (SMM) was chosen as the calibration metric in the improved framework. The framework is then applied to a four-variable porosity modeling problem. The resulting model is more accurate than those obtained with alternative approaches, using only ten experimental data points for calibration and validation.
1 Introduction
Additive manufacturing (AM) has attracted attention due to its capability of producing complex parts and structures layer by layer without wasting material. Among AM techniques, metal AM reduces lead time, produces nearly fully dense final parts, and has enabled designers to create novel structural materials and tailor material properties to meet performance requirements that traditional manufacturing techniques could not satisfy [1–3]. The physics underlying metal AM is sophisticated, and the metal undergoes very complex thermal phenomena during the process. As a result, final parts can exhibit defects such as pinholes, lack of fusion, and keyholing [4]. To manufacture a defect-free part, the process should be modeled and the process parameters that lead to defect-free parts should be determined. Modeling and simulation help quantify the influence of process parameters on final part properties [5].
It has been claimed that more than 130 parameters affect final part properties and quality in metal AM [6]. Many of these parameters exhibit temporal fluctuations during fabrication, including layer thickness (LT), laser power (LP), powder size, scan speed (SS), laser spot diameter, and powder absorptivity [7,8]. For example, the laser power has been found to drop 20% below its set value during the process [9]. These uncertainties and fluctuations prevent printed metallic parts from reaching the desired quality even when the same process parameters and machine are used [10]. The uncertainties associated with the process should therefore be quantified during modeling (uncertainty quantification), since their effect on the final quality of the product is important [11].
Two commonly used model types for the AM process are empirical (experimental) and physics-based (computational) models. Using design of experiments (DOE) approaches, one can perform experiments and model the final part property. The problem with empirical models is that they are not accurate enough unless many experiments are performed [12]; moreover, they cannot be transferred from one machine to another. On the other hand, due to the complex thermal phenomena of the process, physics-based models must consider all three modes of heat transfer, i.e., radiation from the heat source, convection across the surface of the material, and conduction through the metallic part. Accounting for all of these results in a computationally costly model [13]. Furthermore, multi-physics models include many assumptions that may affect the final model's accuracy. To improve the computational efficiency of physics-based approaches, metamodeling approaches have recently been applied [14–16].
As discussed earlier, both empirical and physics-based methods have their own limitations. A modeling framework that yields a model with high accuracy and low computational cost from a limited number of experiments, and that can be easily transferred from machine to machine, would resolve the problems of both methods. Moreover, such a model should be able to accommodate parameter uncertainties. Olleak and Xi [8] recently proposed a calibration and validation framework for metal AM that takes advantage of both physics-based models and metamodels (a multi-fidelity model). Their framework can be applied with a limited number of experiments; in their case study, they used 14 data points for both calibration and validation. The framework starts by developing a metamodel that predicts the desired objective (e.g., melt pool size, porosity, etc.). Input data for training the metamodel were acquired from finite element analysis (FEA). They assumed that metamodel inaccuracy can be caused by two factors: a bias from experimental data and randomness of uncontrollable parameters during the fabrication process. Uncertainties for uncontrollable parameters were assumed, and the bias was calibrated as a function of process parameters using metamodeling approaches; the training data for bias calibration were a portion of the experimental data. To calibrate the model, they minimized an area metric called the u-pooling metric [17], calculated using the training experimental data. Finally, a hypothesis test was performed to check whether more experiments are needed or the metamodel can be accepted. This work is an important advancement in the field and could help develop highly efficient and accurate models. Their framework, however, suffers from the following problems:
The framework just assumes uncertainties for uncontrollable parameters. In reality, uncertainties can be observed for many controllable parameters as discussed earlier.
If the assumed uncertainty values are far from the actual value, the metamodel accuracy will decrease significantly and the framework would erroneously conclude that more experiments are needed. In other words, the validity of the metamodel depends on the assumed uncertainties of the parameters before calibrating the metamodel.
The framework used the u-pooling metric for calibration. Given the fact that the model assumes the mean and standard deviation for the parameters, which correspond to the first two statistical moments, the statistical moment-based metric (SMM) may be a better metric than the u-pooling metric for calibration.
In this work, an improved framework is developed to model part properties (e.g., porosity) with a limited number of experiments based on multi-fidelity models. Uncertainty is assumed for both uncontrollable and controllable parameters and is calibrated at the end of the framework. A loop in the framework ensures that accepting or rejecting the metamodel does not depend on the accuracy of the initial guess for parameter uncertainties. Different calibration metrics are tested and compared, and the one that leads to the most accurate metamodel is chosen. Moreover, a suggestion on the percentage of experiments to be used for validating the metamodel is provided.
The remainder of the paper is organized as follows. Section 2 explains the framework developed by Olleak and Xi [8]. Section 3 describes the new improved framework and the changes with respect to the original one. In Sec. 4, the framework is applied to a case study and the results are presented. The results are further discussed in Sec. 5 and, finally, the conclusions are drawn in Sec. 6.
2 Validation and Calibration Framework Developed by Olleak and Xi
Olleak and Xi [8] developed a framework that uses multi-fidelity models to predict the final properties of a component using a limited number of experiments. In this framework, the parameters that can affect the final part property are categorized into two groups: controllable and uncontrollable. Controllable parameters are those that can be set in the process (e.g., laser power and scan speed); it is hypothesized that their uncertainties can be ignored. Uncontrollable parameters are those that cannot be set during the process and whose exact values cannot be measured (e.g., powder absorptivity and laser spot diameter). These parameters are assumed to contain inherent randomness, which is calibrated in the framework. The flowchart of their framework is shown in Fig. 1. The steps of the framework are as follows:
Developing a physics-based model that can predict the property of the printed part using FEA.
Property predictions at inputs generated from DOEs.
Building a metamodel based on the predictions from the physics-based model.
Assuming uncertainties for uncontrollable parameters and calibrating a bias between experimental data and metamodel predictions using training experimental data points.
Calibrating the statistical moments of uncontrollable variables using training experimental data points.
Checking the validity of the corrected metamodel with experiment validation data.
Performing new experiments if needed and repeating the process from step 4.
In the following subsections, each of the steps of the framework will be explained in detail.
2.1 Developing a Physics-Based Model.
In this step, a physics-based FEA model is built from the thermo-mechanical physics of the metal AM process to predict the final property of the printed part.
2.2 Property Predictions at Design of Experiment Points.
After defining each parameter, a range for it is set. DOE is adopted and the physics-based model is sampled at the DOE points. To obtain an accurate metamodel, a sufficient number of samples should be used.
2.3 Building a Metamodel.
The results of the earlier step are used as a training data set to build a metamodel that can predict the printed part property. Different types of models such as Gaussian process (GP) regression, response surface methodology (RSM), radial basis function (RBF), kriging, and high-dimensional model representation can be employed to build the metamodel. Olleak and Xi used GP regression [8].
2.4 Calibrating the Bias.
In Eq. (2), E(.) is the expected value of the function under the assumed uncertainty of the uncontrollable parameters θ. Using Eq. (2), one can compute a value at each printing configuration for training the bias function. Again, Olleak and Xi used GP regression to calibrate the bias with the training set obtained from Eq. (2).
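Since Eq. (2) is not reproduced here, a plausible form consistent with this description, assuming the bias is defined as the discrepancy between the experimental observation and the expected metamodel prediction, is

$$\delta(\mathbf{x}) = y_{\mathrm{exp}}(\mathbf{x}) - E_{\boldsymbol{\theta}}\!\left[\hat{y}(\mathbf{x}, \boldsymbol{\theta})\right]$$

where $\mathbf{x}$ denotes the controllable process parameters, $\hat{y}$ the metamodel prediction, and $\delta$ the bias value at each training configuration; the symbols here are ours and not necessarily those of Ref. [8].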
2.5 Calibrating the Statistical Moments of Process Parameters.
In Eq. (3), U[.,.] is the u-pooling metric value to be minimized. The u-pooling metric takes values between 0 and 0.5; a value of zero means the experimental data and the model are in perfect agreement.
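For reference, the u-pooling metric [17] pools each experimental observation $y_i^e$ through the model's predicted CDF at its configuration, $u_i = F_i(y_i^e)$; if the model were perfect, the pooled $u_i$ would follow a standard uniform distribution, so the metric is the area between the empirical CDF of the $u_i$ and the uniform CDF:

$$U = \int_0^1 \left| F_{\mathrm{emp}}(u) - u \right| \, du$$

This area is bounded by 0.5, which explains the range quoted above.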
2.6 Validation Hypothesis Test.
After training the metamodel, the u-pooling metric between the corrected metamodel and the experimental validation data is calculated. If it is less than the critical value of the u-pooling metric, the corrected metamodel is valid and the framework terminates; if it is higher, new experiments should be performed to improve the accuracy of the metamodel and the process should be repeated from step 4. This hypothesis test for the validity check was introduced in Ref. [19]. The critical value of the u-pooling metric for a given number of experiments can be calculated using the following procedure:
Step 1: Take n values from a given distribution (n is the number of experiment data for validation).
Step 2: Calculate the u-pooling metric after calculating the empirical CDF of the n samples.
Step 3: Steps 1 and 2 should be repeated a sufficient number of times, e.g., 1 × 10⁶, to obtain the distribution of the u-pooling metric for the given n samples.
Step 4: Choose a one-sided confidence level, e.g., 95%, and select the critical value from the distribution of the u-pooling metric.
If the aforementioned process is repeated for different numbers of experimental points, Fig. 2 is generated. A minimal Monte Carlo sketch of this procedure is given below.
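The sketch below assumes the u-pooling metric is the area between the empirical CDF of the pooled u-values and the uniform CDF, as described above; the function names are ours, and the repetition count is reduced from 10⁶ for speed.

```python
import numpy as np

def u_pooling(u_values):
    """Area between the empirical CDF of the pooled u-values and the
    CDF of the standard uniform distribution (assumed metric form)."""
    u = np.sort(np.asarray(u_values))
    n = len(u)
    grid = np.concatenate(([0.0], u, [1.0]))
    area = 0.0
    for i in range(len(grid) - 1):
        a, b = grid[i], grid[i + 1]
        f = i / n  # empirical CDF value on [a, b)
        if f <= a or f >= b:
            # |f - t| keeps its sign over [a, b]
            area += abs(f - (a + b) / 2) * (b - a)
        else:
            # |f - t| changes sign at t = f
            area += ((f - a) ** 2 + (b - f) ** 2) / 2
    return area

def critical_u_pooling(n, n_rep=100_000, confidence=0.95, seed=0):
    """Steps 1-4: sample the metric's distribution under perfect
    agreement (uniform u-values), then take the one-sided critical value."""
    rng = np.random.default_rng(seed)
    metrics = [u_pooling(rng.uniform(size=n)) for _ in range(n_rep)]
    return float(np.quantile(metrics, confidence))

# e.g., critical value for n = 4 validation points at 95% confidence
print(critical_u_pooling(4))
```

Repeating the call for several values of n traces out a curve like the one in Fig. 2.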
Although the earlier framework has shown promising results, it suffers from the drawbacks explained in the previous section. Section 3 describes the proposed framework, which overcomes these problems.
3 The Improved Framework
A new framework based on the earlier framework is defined, as shown in Fig. 3. The overall framework is similar to what was proposed by Olleak and Xi [8], but it has several changes and improvements that are explained in this section.
The first two steps of the earlier framework are repeated here. After developing an FEA model that can simulate the process and output the desired final part property, DOE is adopted and the property is predicted at the DOE points. Based on the data gathered from the FEA results, different metamodels are developed. The chosen metamodeling techniques are noisy GP regression, RSM, RBF, and noise-free GP regression (kriging). Of the simulation data, 90% is used for training the metamodels and 10% is held out as testing data to compare them; the metamodel with the highest R-squared value on the testing points is chosen. Randomization over different training subsets of the simulation data is adopted to increase the accuracy of the metamodel.

One more difference from the earlier framework is that the improved framework considers uncertainty for all the parameters. For uncontrollable parameters, the uncertainty assumption is the same as before and the first two statistical moments (mean and standard deviation) of these parameters are calibrated. For the controllable parameters, it is assumed that they are uncertain but unbiased, with expected values equal to the values set in the process; in other words, the framework calibrates only their standard deviations. If θ contains l parameters and x contains m parameters, the number of variables to be calibrated is 2l + m. These 2l + m variables are called uncertainty calibration variables (UCVs): the means and standard deviations of the uncontrollable parameters and the standard deviations of the controllable parameters. As in the earlier framework, an initial assumption for the UCVs is made and the bias is calibrated using the experimental training data.
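A sketch of this model-selection step is given below, assuming scikit-learn-style estimators; the data loading is a placeholder, RSM is approximated by quadratic polynomial regression, and an RBF interpolant would be wrapped analogously.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Placeholder for the 240 FEA results: X = [LP, SS, HS, LT], y = solid ratio
X, y = np.random.rand(240, 4), np.random.rand(240)

candidates = {
    "noisy GP": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                         normalize_y=True),
    "kriging (noise-free GP)": GaussianProcessRegressor(kernel=RBF(),
                                                        normalize_y=True),
    "RSM (quadratic)": make_pipeline(StandardScaler(),
                                     PolynomialFeatures(degree=2),
                                     LinearRegression()),
    # an RBF interpolant (e.g., scipy.interpolate.RBFInterpolator)
    # would be wrapped similarly
}

# 90/10 split, randomized over several seeds; keep the best mean R^2
scores = {name: [] for name in candidates}
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                              random_state=seed)
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        scores[name].append(r2_score(y_te, model.predict(X_te)))

best = max(scores, key=lambda k: np.mean(scores[k]))
print(best, round(float(np.mean(scores[best])), 3))
```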
Any one of the mentioned metrics, or any combination of them, can be used to calibrate the UCVs. It is worth mentioning that the improved framework still uses u-pooling as the validation metric, but the calibration metric should be the one that most increases the agreement between the model and the experimental data. In the case study of the next section, different metrics are compared and the one that leads to the lowest u-pooling value on the validation data points is chosen.
In the earlier framework, the parameters with calibrated randomness, together with the experimental validation data, were used in a hypothesis test to determine whether the corrected metamodel is valid. However, if the uncertainty assumed before the bias calibration is far from reality, the metamodel is rejected and more experiments are called for. This rejected metamodel could actually be valid if the assumed UCVs were closer to their actual values, since different assumed UCVs change the metamodel predictions. To overcome this problem, when the hypothesis test concludes that the metamodel is not valid, the framework uses the calibrated UCVs as the initial guess for another round of metamodel calibration; the new calibrated UCVs, together with the validation data points, are then used to repeat the hypothesis test. This process terminates when the relative change in the u-pooling metric value in two consecutive iterations, i.e., the convergence criterion $|(U_i - U_{i-1})/U_{i-1}| < \varepsilon$, is satisfied. If the convergence criterion is met and the framework has not produced a valid metamodel, the framework cannot improve the metamodel accuracy further and more experiments should be performed. With this strategy, it will be shown in the next section that the validity and accuracy of the final metamodel do not depend on the assumed UCVs.
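The iterative loop can be summarized in sketch form; the callables below are hypothetical stand-ins for the framework's bias calibration, UCV calibration, and validation steps.

```python
def iterative_framework(ucv0, calibrate_bias, calibrate_ucvs,
                        u_pool_validation, u_critical, eps=0.01,
                        max_iter=50):
    """Hedged sketch of the improved framework's loop: re-calibrate from
    the last UCVs until the hypothesis test passes or the relative change
    in the u-pooling metric drops below eps."""
    ucvs, u_prev = ucv0, None
    for _ in range(max_iter):
        bias = calibrate_bias(ucvs)        # bias GP from training data
        ucvs = calibrate_ucvs(ucvs, bias)  # minimize calibration metric
        u_val = u_pool_validation(ucvs, bias)
        if u_val < u_critical:
            return ucvs, bias, "accepted"
        if u_prev is not None and abs((u_val - u_prev) / u_prev) < eps:
            return ucvs, bias, "converged but invalid: more experiments"
        u_prev = u_val
    return ucvs, bias, "max iterations reached"
```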
In Sec. 4, the improved framework is applied to a case study, which will help to describe the proposed strategy in detail.
4 Application of the Improved Framework to a Case Study
Porosity is one of the properties of greatest concern for additively manufactured parts [21]. Efficient, accurate models help the operators of metal AM machines choose the set of process parameters for which the final printed part has no porosity, or the least possible amount [22]. This paper is focused on introducing an uncertainty calibration framework and is not meant to be a research study on porosity prediction and formation; porosity modeling is used as a case study to verify the developed modeling strategy. The improved framework was used to predict the porosity of additively printed parts made of Stainless Steel 316L in a laser powder bed fusion (LPBF) process. The model built here is purely empirical and blind to the physics of the process: the physical laws governing porosity formation are not encoded, and the model simply maps input variables to output variables.

ANSYS Additive Print (ANSYS Additive 2020 R2) was used as the FEA software to predict the solid ratio of 2 × 2 × 2 mm cubic parts. The experimental data used for calibrating and validating the metamodel were taken from Ref. [23]. Table 1 shows the values of the process parameters during the experiments and the final solid ratio of the fabricated parts. The total number of experiments is ten: six data points were used for calibrating the model and four for validation. LP, SS, and hatch spacing (HS) are the parameters varied in the experiments. Since the layer thickness has been reported to be uncertain during the LPBF process even when it is set as constant [7], this parameter was taken as the uncertain parameter to be calibrated. In other words, LP, SS, and HS were the controllable parameters (m = 3) and layer thickness was the uncontrollable parameter (l = 1). Full factorial sampling was used, and a total of 240 FEA simulations were run at different levels of the four process parameters. The values for each process parameter are listed in Table 2. The results of all 240 FEA simulations were tabulated and are available for download.
Table 1 Values of process parameters during the experiments and the final solid ratio of the fabricated parts [23]

| Data ID | Laser power (W) | Scan speed (mm/s) | Hatch spacing (mm) | Solid ratio |
|---|---|---|---|---|
| 1 | 150 | 1250 | 0.080 | 0.966 |
| 2 | 200 | 1667 | 0.080 | 0.974 |
| 3 | 150 | 714 | 0.140 | 0.975 |
| 4 | 200 | 952 | 0.140 | 0.974 |
| 5 | 150 | 750 | 0.120 | 0.987 |
| 6 | 175 | 750 | 0.120 | 0.997 |
| 7 | 150 | 781 | 0.080 | 0.999 |
| 8 | 200 | 1042 | 0.080 | 0.997 |
| 9 | 150 | 446 | 0.140 | 0.998 |
| 10 | 200 | 595 | 0.140 | 0.993 |
Table 2 Levels of the process parameters used in the full factorial DOE

| Parameter | No. of levels | Parameter values |
|---|---|---|
| Power (W) | 4 | 140, 160, 180, and 200 |
| Scan speed (mm/s) | 5 | 500, 800, 1100, 1400, and 1700 |
| Hatch spacing (mm) | 4 | 0.08, 0.1, 0.12, and 0.14 |
| Layer thickness (mm) | 3 | 0.02, 0.03, and 0.04 |
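The full factorial design over the levels in Table 2 can be reproduced directly; the variable names below are illustrative.

```python
from itertools import product

# Full-factorial DOE over the levels in Table 2 (4 x 5 x 4 x 3 = 240 runs)
power = [140, 160, 180, 200]               # W
scan_speed = [500, 800, 1100, 1400, 1700]  # mm/s
hatch_spacing = [0.08, 0.10, 0.12, 0.14]   # mm
layer_thickness = [0.02, 0.03, 0.04]       # mm

doe = list(product(power, scan_speed, hatch_spacing, layer_thickness))
assert len(doe) == 240  # one FEA (ANSYS Additive) run per combination
```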
The bias was calibrated using the six experimental training data points, with GP regression as the metamodeling technique. To calibrate the parameter uncertainties, three different metrics (u-pooling, first-order SMM, and second-order SMM), or combinations of them, could be used as the objective function of the optimization. For this case study, seven cases were compared (a hedged sketch of the SMM computation follows the list):
Using u-pooling as the objective function.
Using first-order SMM (I1) as the objective function.
Using second-order SMM (I2) as the objective function.
Using u-pooling and I1 as the objective functions and performing a multi-objective optimization (MOO). Select the UCVs with the highest I1 and the lowest u-pooling from the Pareto front.
Using u-pooling and I1 as the objective functions and performing a MOO. Select the UCVs with the lowest I1 and the highest u-pooling from the Pareto front.
Using u-pooling and I2 as the objective functions and performing a MOO. Select the UCVs with the highest I2 and the lowest u-pooling from the Pareto front.
Using u-pooling and I2 as the objective functions and performing a MOO. Select the UCVs with the lowest I2 and the highest u-pooling from the Pareto front.
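The exact definitions of I1 and I2 follow Ref. [20] and are not reproduced here; the sketch below is only a hypothetical form consistent with the later description that I2 compares the first two statistical moments, and Ref. [20] may define the metrics differently.

```python
import numpy as np

def smm(model_samples, exp_samples, order=2):
    """Hypothetical statistical moment-based metric: normalized
    discrepancy in the first `order` moments (mean, then std).
    This mirrors the paper's description of I1/I2 only; the exact
    definition in Ref. [20] may differ."""
    terms = []
    mu_m, mu_e = np.mean(model_samples), np.mean(exp_samples)
    terms.append(((mu_m - mu_e) / mu_e) ** 2)       # first moment
    if order >= 2:
        s_m = np.std(model_samples, ddof=1)
        s_e = np.std(exp_samples, ddof=1)
        terms.append(((s_m - s_e) / s_e) ** 2)      # second moment
    return float(np.sqrt(sum(terms)))

# I1 and I2 on dummy samples
rng = np.random.default_rng(0)
m = rng.normal(0.98, 0.01, 1000)   # model predictions under uncertainty
e = rng.normal(0.985, 0.012, 10)   # experimental observations
print(smm(m, e, order=1), smm(m, e, order=2))
```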
The case with the lowest u-pooling metric value on the validation experimental points was chosen as the metric to be used in the framework. The results of this step are presented in Table 3. Each row shows the results for one initial guess for the UCVs, specified in the first column. Columns 2–8 are the u-pooling values calculated on the validation points with the calibrated UCVs obtained by optimizing each of the objective functions described above. Column 9 is the u-pooling metric value before any optimization. In Table 3, except for one case, the u-pooling value is lowest when I2 is used as the objective function, meaning the metamodel agrees best with the validation experimental points when I2 is used. The last row shows the average u-pooling metric value for each objective function. The average when the second-order SMM is used (u-pooling = 0.1687) is the lowest of all the objective functions; hence, I2 is adopted as the metric to be optimized in the framework.
Table 3 U-pooling metric values on the validation points for each objective function and initial UCV guess

| Assumed UCVs | U-pooling | I1 | I2 | U-pooling + I1 (highest I1) | U-pooling + I1 (lowest I1) | U-pooling + I2 (highest I2) | U-pooling + I2 (lowest I2) | Before optimization |
|---|---|---|---|---|---|---|---|---|
| (0.024, 0.002, 0.006, 60, 3) | 0.1834 | 0.1053 | 0.1010 | 0.1692 | 0.1333 | 0.1723 | 0.1206 | 0.1365 |
| (0.026, 0.003, 0.003, 30, 4.5) | 0.1394 | 0.1367 | 0.1224 | 0.1845 | 0.1232 | 0.2344 | 0.1656 | 0.1981 |
| (0.028, 0.002, 0.006, 48, 6) | 0.2767 | 0.1301 | 0.1336 | 0.2757 | 0.1979 | 0.2787 | 0.1775 | 0.2179 |
| (0.03, 0.001, 0.003, 24, 3) | 0.1570 | 0.2127 | 0.1403 | 0.3082 | 0.2167 | 0.2423 | 0.2115 | 0.2960 |
| (0.032, 0.003, 0.009, 24, 6) | 0.3005 | 0.2950 | 0.1293 | 0.3067 | 0.2531 | 0.3060 | 0.2012 | 0.2496 |
| (0.034, 0.001, 0.003, 24, 6) | 0.3120 | 0.2696 | 0.1998 | 0.3112 | 0.2706 | 0.3115 | 0.2567 | 0.2970 |
| (0.036, 0.015, 0.0045, 42, 4.8) | 0.3120 | 0.2809 | 0.1502 | 0.3120 | 0.2709 | 0.3120 | 0.2534 | 0.2950 |
| (0.038, 0.001, 0.0045, 72, 4.2) | 0.3107 | 0.2548 | 0.1887 | 0.3120 | 0.2418 | 0.3110 | 0.2375 | 0.2784 |
| (0.04, 0.002, 0.0054, 60, 3) | 0.2975 | 0.2524 | 0.3097 | 0.3115 | 0.2523 | 0.1889 | 0.2258 | 0.2623 |
| (0.039, 0.002, 0.054, 60, 3) | 0.2345 | 0.2572 | 0.2124 | 0.3057 | 0.2629 | 0.3112 | 0.2506 | 0.2658 |
| Average u-pooling metric value | 0.2524 | 0.2195 | 0.1687 | 0.2797 | 0.2222 | 0.2668 | 0.2100 | 0.2497 |
Table 3 shows that using the u-pooling metric value as the objective function does not improve the metamodel in this case study, as can be seen by comparing the last column with the second. In some cases (e.g., the second and fourth rows), the u-pooling value is higher after optimization when the u-pooling metric itself is the objective function. Although undesirable, this should not come as a surprise: the optimization is done using the calibration data points, while the u-pooling is calculated using the validation points. The conclusion is that the earlier framework could not calibrate the uncertain parameters properly, and optimizing u-pooling did not improve the metamodel. To compare the improved metamodel with the earlier one, compare columns 2 and 4 of Table 3: except for one case, the u-pooling metric value obtained with the improved framework (column 4) is less than that obtained with the previous framework (column 2). This shows that the framework is improved and the agreement between the metamodel and the experimental data is increased.
To understand the benefits of considering uncertainty for all the parameters, a comparative study was conducted between two cases: (1) uncertainty is assumed for all the parameters, and (2) uncertainty is assumed only for the uncontrollable parameters, as in the earlier framework. In case (1), the number of UCVs is five, as defined before. In case (2), the number of UCVs is two: the mean and standard deviation of LT. The u-pooling values on the validation data points were calculated for different initial uncertainty assumptions used to start the calibration. For a fair comparison, the two cases shared the same initial assumptions, and the initial standard deviations of the controllable parameters, which exist only in case (1), were set to zero at every starting point. Figure 4 compares the two cases: solid bars show the u-pooling metric values when uncertainty is assumed for all the parameters (the improved framework, case (1)), and dotted lines show the metric when uncertainty is assumed only for the uncontrollable parameters (case (2)). For every initial starting point, case (1) achieves a lower u-pooling metric value than case (2), meaning that the model developed by the improved framework agrees better with the experimental data points.
Another question yet to be addressed is what percentage of the data should be used for training and for validation. In Fig. 6, the u-pooling metric value on the validation points is plotted against the percentage of data used for validation. Although using as much data as possible for training is preferable, Fig. 6 shows that the percentage used for validation should ensure that the calculated u-pooling value (solid line) falls below the critical value (dashed line). When only 10% of the data are used for validation, the u-pooling metric value is above the critical value, indicating that the number of data points used for calculating the metric is insufficient. On the other hand, when more than 60% of the data are used for validation, the training data are insufficient and the metamodel is not accurate enough, which in turn leads to rejecting it. In conclusion, the number of experimental validation data points should not be less than four, as u-pooling is sensitive to the empirical CDF of the validation points. However, if the number of experimental data points exceeds 40, using 10% of the data as the validation set satisfies this sensitivity requirement. In the present case study, since the number of experiments is ten, four data points (40% of the data) are used for validation (square bold points in Fig. 6).
5 Discussion
Several observations can be made from the case study. First, the effect of the initial UCVs on the validity of the metamodel is significant: Table 3 shows that different initial UCVs result in different u-pooling metric values. This becomes critical when the framework starts from initial UCVs far from their actual values; with the earlier framework, this would lead to rejecting the metamodel and performing more experiments. The iterative process devised in the improved framework resolves this issue and finds a valid metamodel if one exists (Fig. 5).
In the presented case study, the second-order SMM is selected as the calibration metric since it leads to the lowest u-pooling metric value on the validation points (Table 3). The reason lies in the definition of I2: the second-order SMM considers the discrepancies in the first two statistical moments (mean and standard deviation) between the metamodel and the experimental data [20]. This matches the framework better than other SMMs, as the presented framework assumes only the first two statistical moments and neglects higher moments such as skewness or kurtosis. It was also found that involving the u-pooling metric value as the calibration objective is neither helpful nor necessary. Moreover, assuming uncertainty for all the parameters rather than only the uncontrollable ones is not only more compatible with reality but also increases the agreement between the model and the experimental data points. The reason is that when uncertainty is assumed for all the parameters, a five-variable optimization problem is solved, which yields better optima than the case where uncertainty is considered only for the uncontrollable parameters and a two-variable optimization problem is solved.
Although the proposed framework assumes Gaussian distributions for the uncertain variables, it should work when the variables follow other distributions. In that case, however, a higher-order statistical moment-based metric may lead to a lower u-pooling metric value than the second-order SMM. For example, if the distribution of the parameters is assumed to be skewed, optimizing the third-order statistical moment-based metric is expected to lead to a lower u-pooling metric value than using the second- or first-order metric.
The optimized parameters found during calibration can be far from reality, as the optimization problem has many optima. To further increase the accuracy of the final metamodel and find more exact UCVs, performing more experiments is highly recommended. In general, more experiments lead to a more accurate final metamodel and more exact uncertainty distributions. Moreover, nothing prevents this framework from being applied to high-dimensional problems; however, higher-dimensional problems require more experimental data for both the bias and uncertainty calibration steps.
Although the proposed framework addresses some of the problems of the earlier framework, it still has limitations. For example, the user should be careful when choosing ε. In the described case study, if ε is set to 0.05, the framework decides to perform more experiments after the first iteration, whereas with a proper value (ε = 0.01) it finds a valid metamodel after three iterations (Fig. 5). The other limitation is that the efficiency of the framework depends strongly on the assumed initial UCVs: different UCVs lead to different metamodels, which affects the number of iterations and the efficiency of the framework.
The metamodel is valid only over the range covered by the experimental data. For example, in the case study, the laser power range covered is 150–200 W; hence, the developed model applies only when the power is set between 150 and 200 W. If one wants to use the model over a wider range, data should be taken from that wider range, which means more experimental data are needed to achieve similar model accuracy. If the framework is started with an insufficient number of experiments over a wide range, the chance of finding a valid metamodel decreases and the framework may call for more experiments.
It is also worth noting that at least four data points should be used for validation, as the u-pooling metric is sensitive to the empirical CDF of the experimental validation data points. Nevertheless, if the number of experimental data points exceeds 40, using 10% of them for validation satisfies this sensitivity requirement; the remaining points can be used for calibration.
6 Conclusion
This study proposes an iterative calibration and validation framework for modeling the metal AM process that considers uncertainties of both controllable and uncontrollable parameters with limited experimental data. The validity of the metamodel generated in this iterative framework does not depend on the initial assumption about the uncertain variables.
The framework was used to develop a metamodel that predicts the porosity of metallic parts manufactured by the LPBF process. Several calibration metrics were compared, and the second-order SMM (I2) was chosen because it matches the metamodel assumptions better than the alternatives and led to the highest accuracy. The metamodel was accurate and efficient even though it was trained and validated with only ten experimental data points. The developed framework can help operators and designers model the final properties of parts fabricated by metal AM machines with limited experimental data and optimize the process parameters to print parts with improved final properties. It can also calibrate the statistical moments of the parameters involved in the process. Future work will examine the applicability of the proposed method to different AM machines and technologies.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.