## Abstract

Feasibility robust optimization techniques solve optimization problems with uncertain parameters that appear only in their constraint functions. Solving such problems requires finding an optimal solution that is feasible for all realizations of the uncertain parameters. This paper presents a new feasibility robust optimization approach involving uncertain parameters defined on continuous domains. The proposed approach is based on an integration of two techniques: (i) a sampling-based scenario generation scheme and (ii) a local robust optimization approach. An analysis of the computational cost of this integrated approach is performed to provide worst-case bounds on its computational cost. The proposed approach is applied to several non-convex engineering test problems and compared against two existing robust optimization approaches. The results show that the proposed approach can efficiently find a robust optimal solution across the test problems, even when existing methods for non-convex robust optimization are unable to find a robust optimal solution. A scalable test problem is solved by the approach, demonstrating that its computational cost scales with problem size as predicted by an analysis of the worst-case computational cost bounds.

## 1 Introduction

The goal of feasibility robust optimization is to find the best solution to a problem that is feasible under all possible values that any uncertain parameters present in that problem can take. Problem 1, as shown in Eq. (1), provides a general formulation for a feasibility robust optimization problem with no uncertainty in its objective function *f*(*x*), based on the formulation given in Ref. [1], where *f*(*x*), *d _{l}*(*x*), *g _{i}*(*x*, *u*), and *q _{j}*(*u*) are all assumed to be continuously differentiable with respect to both *x* and *u*, which are assumed to be continuous:

Typical methods for solving feasibility robust optimization problems represent uncertainty using sets of scenarios (sets of possible values for the uncertain parameters) [1–3], randomly sampled scenarios [4], or worst-case analysis [5]. However, many of these methods can become impractical for engineering design problems because they may struggle with highly non-convex constraints, scale poorly with the number of uncertain parameters, or require too much computational effort to obtain a robust optimal solution.

Most existing methods [1] for solving non-convex robust optimization problems require finding a finite set of scenarios $\bar{U} = \{u_1, \ldots, u_K\}$, $\bar{U} \subseteq U$, that can be used in place of $U$ in Problem 1. The resulting formulation is referred to as a scenario-based feasibility robust optimization problem, given as Problem 2 in Eq. (2).

An optimal solution to Problem 2 is a robust optimal solution (an optimal solution to Problem 1) if, for each constraint *g _{i}*(*x*, *u*) containing uncertainty, *g _{i}*(*x*, *u _{k}*) ≤ 0 for all scenarios $u_k \in \bar{U}$ implies that *g _{i}*(*x*, *u*) ≤ 0 for any possible scenario *u* ∈ $U$.
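As a concrete (hypothetical) illustration of this implication, consider a single constraint that is monotone in its uncertain parameters over an interval uncertainty set: checking the corner scenarios of the box then certifies feasibility for every scenario in the set. The constraint, the uncertainty box [0, 1]^2, and the candidate designs below are invented for illustration and are not part of the paper's formulation.

```python
import itertools

def feasible_for_scenarios(g_list, x, scenarios, tol=0.0):
    """Check g_i(x, u) <= tol for every constraint g_i and scenario u."""
    return all(g(x, u) <= tol for g in g_list for u in scenarios)

# hypothetical constraint g(x, u) = u1*x1 + u2*x2 - 1 with u in [0, 1]^2;
# g is nondecreasing in u for x >= 0, so feasibility at the corner
# scenarios of the box implies feasibility for every u in the box
g = lambda x, u: u[0] * x[0] + u[1] * x[1] - 1.0
corners = list(itertools.product([0.0, 1.0], repeat=2))

print(feasible_for_scenarios([g], (0.4, 0.5), corners))  # True
print(feasible_for_scenarios([g], (0.7, 0.6), corners))  # False: violated at u = (1, 1)
```

For non-monotone constraints no such finite corner set certifies robustness a priori, which is why the scenario set must instead be discovered, as described below.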

In this paper, a new feasibility robust optimization approach is developed for solving robust optimization problems (in the form of Problem 1) by finding a finite set of scenarios $\bar{U} \subseteq U$ so that the solution to Problem 2 will be the same as the solution to Problem 1 (the robust optimal solution). By using a combination of random sampling, optimization-based scenario generation, and worst-case analysis, the new approach is able to efficiently find $\bar{U}$ and solve Problem 2 (and thus obtain a robust optimal solution), even in problems where implementations of existing robust optimization techniques fail to efficiently find a robust optimal solution.

The rest of this paper is organized as follows. Section 2 summarizes current methods for solving non-convex robust optimization problems. Section 3 discusses the formulation for scenario robust optimization used by the proposed new approach. Section 4 details the proposed new approach. Section 5 demonstrates the new approach on five different examples and compares its performance against existing robust optimization approaches. Section 6 summarizes the conclusions of this paper. Appendix C discusses the theoretical computational performance of the proposed new approach relative to existing robust optimization approaches.

## 2 Related Work

The most basic approach for constructing the set $\bar{U}$ is to assume that $\bar{U}$ consists of a single “worst-case” scenario *u _{w}* and that *g _{i}*(*x*, *u _{w}*) ≤ 0 implies that *g _{i}*(*x*, *u*) ≤ 0 for any possible scenario *u* ∈ $U$, commonly referred to as a “worst-case analysis” [6]. Bertsimas et al. [7] use a gradient ascent approach for a worst-case analysis while simultaneously solving an optimization problem. Bertsimas and Nohadani [8] develop a simulated annealing approach which extends the approach of Bertsimas et al. [7] to perform a global search for a robust optimal solution. Li et al. [9,10] develop a measure of robustness around a nominal scenario and use a genetic algorithm for determining the worst-case scenario. Zhou et al. [5,11] develop a sequential quadratic programming robust optimization algorithm, where the worst-case scenario for each constraint and the objective function are found via maximization at each sequential quadratic programming iteration. Cheng and Li [12] extend the approach of Zhou et al. [5] to solve problems for global optimality by using a differential evolution method as an outer optimizer. Similar forms of worst-case analysis are used in the context of reliability-based design optimization (RBDO, also called probabilistic or stochastic optimization) by Du and Chen [13], where the most probable point (MPP) is found via an inner optimization problem and used to ensure that constraints are satisfied at a predetermined reliability level. Liang et al. [14] develop a single loop algorithm for RBDO which avoids using the MPP by converting the RBDO problem into a deterministic problem via the first-order Karush–Kuhn–Tucker optimality conditions of the inner optimization problem. In practice, methods which rely on a worst-case or reliability analysis require assumptions that may not hold for non-convex robust optimization, where there can exist multiple “local” worst-case scenarios for a single constraint (see Example 1 in Sec. 5) and where no probability distribution exists for the uncertain parameters.

An alternative to approaches that search for worst cases is to instead construct the set $\bar{U}$ using randomly sampled scenarios, an idea first applied by Calafiore and Campi [4] to the problem of robust control design. Chamanbaz et al. [15] and Calafiore [16,17] developed sequential optimization approaches which alternate between checking the feasibility of a candidate solution by sampling further scenarios and finding a new candidate solution when scenarios are found under which the candidate solution is infeasible. Rudnick-Cohen et al. [18] proposed an approach that generates additional scenarios from randomly sampled scenarios through a best and worst-case analysis. Rudnick-Cohen et al. [18] also proposed a method for performing scenario reduction, which can limit the number of scenarios used in a scenario robust optimization problem. Margellos et al. [19] discuss the tractability and expected number of samples needed for this class of methods. Ramponi [20] introduces the property of “essential robustness” to refer to the conditions under which such methods will asymptotically converge to a robust optimal solution. Because sampling-based approaches converge asymptotically, it is difficult for them to maintain a pre-specified constraint tolerance for feasibility under uncertainty. Note that randomly sampling scenarios is an inherently global search (every scenario is equally likely to be sampled). Thus, the robust optimal solution found by these methods can be considered to be feasible under uncertainty in a global sense, without needing to make any assumption about worst-case scenarios. Sampling-based approaches (excluding Ref. [18]) make no attempt to minimize the size of $\bar{U}$; this can lead to a much larger optimization problem than would be considered in worst-case analysis-based approaches.
While sampling-based approaches asymptotically converge to the robust optimal solution when given infinite samples, in practice they must be run with a finite number of samples or iterations. This means that in practice, sampling-based approaches can incorrectly report a non-robust optimal solution with very low worst-case constraint violations as being the robust optimal solution.
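This limitation can be made concrete with a small probability sketch (illustrative numbers, not taken from the paper's experiments): if the scenarios revealing infeasibility occupy a fraction *w* of a uniformly sampled uncertainty domain, the chance that *N* independent samples all miss them is (1 − *w*)^{N}.

```python
def miss_probability(n_samples, width):
    """Probability that n_samples i.i.d. uniform samples all miss an
    infeasible region occupying a fraction `width` of the domain."""
    return (1.0 - width) ** n_samples

# with 100 samples, a violating region covering 0.1% of the domain is
# missed over 90% of the time, so a finite sampling run would report the
# corresponding non-robust design as the robust optimal solution
print(miss_probability(100, 1e-3))
```

This is exactly the failure mode the local robust optimization step of the proposed approach is designed to mitigate.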

This paper presents a new feasibility robust optimization approach that can efficiently solve non-convex robust optimization problems via local search in the design variables space, while maintaining a global search over the uncertain parameters space. This method has two key components. The first component is a new scenario generation method that can both generate scenarios via sampling and refine them with a local optimization method in order to quickly reach a feasibility robust optimal solution. The second component is a new scenario-based local robust optimization method, which refines the final solution to ensure that it satisfies the desired constraint tolerance. Computational experiments demonstrate that using the proposed techniques together requires less overall computational effort in some cases than existing robust optimization approaches and that the proposed new method can solve robust optimization problems that cannot be solved with locally robust optimal techniques.

The new feasibility robust optimization approach presented is based on the framework for sampling-based robust optimization presented in Ref. [18], with five key differences: (i) the new approach uses a single improved method for scenario generation over the two methods proposed in Ref. [18], (ii) the new approach contains a new local robust optimization step for ensuring the feasibility of the final solution found, (iii) the new approach uses a formulation that is more efficient than Problem 2 because it can contain fewer constraints, (iv) the new approach avoids solving scenario robust optimization problems twice per iteration as done in Ref. [18], and (v) the new approach does not use scenario reduction.

## 3 Problem Formulation

Many sampling-based approaches, such as Refs. [4,15–17], use a reduced form of Problem 2, where scenarios only impose the constraints which they found a design to violate, rather than imposing all constraints containing uncertainty. This reduced scenario robust optimization (RSRO) formulation is given in Problem 3 (Eq. (3)), where $\bar{U}$ is a finite set of scenarios under which the constraints need to be imposed and where *R*(*u _{k}*) is the set of the indices of the constraints *g _{i}*(*x*, *u _{k}*) ≤ 0 that should be imposed under scenario $u_k \in \bar{U}$.

Problem 3 can consist of fewer constraints than Problem 2 would for the same set of scenarios $\bar{U}$. Thus, the approaches using Problem 3 (Refs. [4,15–17] and this paper) should perform better for problems with larger numbers of constraints than those using Problem 2 (such as Ref. [18]), since Problem 3 does not need to impose every constraint *g _{i}*(*x*, *u*) under every scenario $u_k \in \bar{U}$. By using Problem 3 within the robust optimization framework presented in Ref. [18] and incorporating several other improvements, an efficient and scalable robust optimization approach can be developed.
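Under the assumption that *R* is stored as a mapping from each stored scenario to the indices of the constraints it must impose, the reduction from Problem 2 to Problem 3 can be sketched as follows; the constraint functions and scenarios are hypothetical, and the helper name `rsro_constraints` is not from the paper.

```python
def rsro_constraints(g_list, R):
    """Build the reduced constraint set of Problem 3: for each stored
    scenario u_k, impose only the constraints whose indices are in R(u_k),
    instead of all constraints under every scenario as in Problem 2."""
    constraints = []
    for u_k, indices in R.items():
        for i in indices:
            # freeze g_i and u_k so each entry is a function of x alone
            constraints.append(lambda x, g=g_list[i], u=u_k: g(x, u))
    return constraints

# hypothetical data: two constraints and two scenarios, where each scenario
# was only found to violate one constraint for some earlier candidate design
g1 = lambda x, u: x[0] + u[0] - 2.0
g2 = lambda x, u: x[1] - u[1]
R = {(1.0, 0.5): [0], (0.0, 1.5): [1]}  # R(u_k) stored as index lists

cons = rsro_constraints([g1, g2], R)
print(len(cons))  # 2 reduced constraints, versus 4 in the Problem 2 form
```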

Note that Problem 1 contains an infinite number of constraints, unlike Problems 2 and 3, making it impossible to solve directly as a non-convex optimization problem. All robust optimization approaches that use either Problem 2 or Problem 3 in place of Problem 1 assume that there exists a finite set of scenarios $\bar{U}$ such that the feasible region of Problem 2 or 3 under the scenarios in $\bar{U}$ is the same as the feasible region of Problem 1. When this assumption does not hold, an “infinite” number of scenarios may be necessary to solve a robust optimization problem. This paper considers problems where this assumption holds.

## 4 Proposed Approach: Scenario Generation With Local Robust Optimization

The proposed approach, called scenario generation with local robust optimization (SGLRO), solves Problem 1 and consists of two components: a scenario generation method (Sec. 4.1) and a local robust optimization method (Sec. 4.2). SGLRO starts off using a sampling-based robust optimization approach (see Fig. 1), using scenario generation in a similar manner to the approach in Ref. [18] (subsequently referred to as SGR^{2}O). Each time a scenario is generated, it is added to $\bar{U}$ and *R*, which are then used to re-solve Problem 3. This process continues for a finite number of iterations, after which SGLRO uses a local robust optimization method to obtain its final solution.

Normally, a sampling-based robust optimization approach can return a non-robust optimal solution with very low worst-case constraint violations after being run for a finite number of iterations. However, a solution with very low worst-case constraint violations should be near the boundaries of the feasible region of Problem 1. Thus, locally searching for worst-case scenarios for that solution should yield additional scenarios that could be added to the set $\bar{U}$ in Problems 2 and 3 so that their feasible regions become the same as Problem 1's. These additional scenarios will enable Problem 2 or 3 to find the robust optimal solution. From a practical standpoint, this local worst-case search is largely the same as running a robust optimization approach that searches for worst-case scenarios [5].

A simple strategy for mitigating the asymptotic convergence of sampling-based methods is thus to use their final solution as the initial conditions for a local “worst-case” based robust optimization approach. SGLRO implements this strategy once it finishes randomly sampling scenarios, by transitioning to a local “worst-case” based robust optimization approach that makes use of both the design and the scenarios generated during random sampling (the “Solve Local Worst Case Robust Optimization using $\bar{U}$” step in Fig. 1). Thus, SGLRO is able to maintain the same global search over the uncertain parameters as a random sampling-based approach to robust optimization, without the limitations of asymptotic convergence.

An implementation of the SGLRO algorithm is shown in Table 1. In Table 1, the set $\bar{U}$ is a set of scenarios, which is initially empty (cf. line 1). “Solve RSRO” corresponds to solving Problem 3, which should return *x _{new}*. The function “Sample Possible Scenario” samples a random scenario from $U$ and returns it. As shown in Table 1, SGLRO first solves Problem 3 with no scenarios to generate a candidate design *x _{B}* (cf. lines 1–2). Then, it randomly samples scenarios until a scenario is found where the candidate design is infeasible (cf. line 7). It then generates additional scenarios using scenario generation and adds all these scenarios (including the original randomly sampled one) to $\bar{U}$, the current set of scenarios (cf. lines 14–17). SGLRO repeats these steps for a fixed number of iterations (cf. lines 4–5) and then switches to a local robust optimization method (cf. line 19). The number of iterations should be chosen to be sufficiently large that SGLRO will sample enough scenarios to find the robust optimal solution. When *x _{B}* is near (or at) the robust optimal solution after these iterations, the local robust optimization method (cf. line 19) refines *x _{B}* to ensure it is the robust optimal solution. The local robust optimization method is initialized with the set of scenarios $\bar{U}$; it attempts to find new worst-case scenarios which are not in $\bar{U}$ and updates *x _{B}* to ensure feasibility under these new worst-case scenarios. When the local robust optimization is done, SGLRO returns its current solution as the robust optimal solution.
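The loop structure just described can be sketched as follows; each argument is a placeholder for the corresponding subproblem solver (“Solve RSRO”, “Sample Possible Scenario”, scenario generation, and the local robust optimization step), and none of the names are taken from the paper's implementation.

```python
def sglro(solve_rsro, sample_scenario, violated, generate_scenarios,
          local_robust_opt, n_iterations):
    """Sketch of the SGLRO outer loop (cf. Table 1), under the simplifying
    assumption that each helper plays the role described in the text."""
    U_bar = []                       # scenario set, initially empty
    x_B = solve_rsro(U_bar)          # candidate design with no scenarios
    for _ in range(n_iterations):
        s = sample_scenario()        # "Sample Possible Scenario"
        V = violated(x_B, s)         # constraints x_B violates under s
        if V:
            # add the sampled scenario plus locally generated worst cases
            U_bar.extend([s] + generate_scenarios(x_B, s, V))
            x_B = solve_rsro(U_bar)  # re-solve Problem 3 ("Solve RSRO")
    # refine with local robust optimization to meet the constraint tolerance
    return local_robust_opt(x_B, U_bar)
```

For instance, on a one-dimensional toy problem with the single constraint *u* − *x* ≤ 0, `solve_rsro` can simply return the largest stored scenario, while the local step enforces the worst case *u* = 1.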

### 4.1 Sampling-Based Scenario Generation.

Maximizing constraint violations for a candidate design can be used to find a worst-case scenario, which is more likely to be one of the scenarios in $\bar{U}$ than a randomly sampled scenario. Let *V* be the set of constraints violated by design *x _{B}* under a randomly sampled scenario *s* (*i* ∈ *V* if and only if *g _{i}*(*x _{B}*, *s*) ≥ *ɛ*, cf. lines 9–12 in Table 1). Problem 4, shown in Eq. (4), gives the formulation from Ref. [18] for finding a new scenario *u* that maximizes the sum of the violated constraints in *V*, where *ɛ* is a small positive constraint violation tolerance.

After solving Problem 4, additional constraints may now be violated for *x _{B}*, which Problem 4 did not attempt to maximize. Additional scenarios can be generated by solving Problem 4 again for these new violated constraints until solving Problem 4 does not violate any constraint for *x _{B}* that has not already been used in the current iteration of scenario generation. Algorithm 2 (Table 2) describes this scenario generation process, where “Solve Worst Case Search” refers to solving Problem 4 from initial point *u*. Algorithm 2 works by repeatedly solving Problem 4 for a set of violated constraints (*V _{new}*) from a given scenario (*u _{gen}*), and then adding Problem 4's solution as a new scenario (cf. lines 7–10, Table 2). The first scenario and set of constraints considered are *V* and *u _{q}* (cf. lines 1–3, Table 2), which are the randomly sampled scenario and its violated constraints from Algorithm 1 (cf. line 14, Table 1). The next scenario to be considered is the newly generated scenario found by solving Problem 4 (cf. line 7, Table 2), with *V _{new}* being determined from the set of constraints that have yet to be violated by a scenario being generated (cf. lines 11–15, Table 2). Algorithm 2 stops when there are no constraints left that are violated (cf. line 5, Table 2).
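The scenario generation loop of Algorithm 2 can be sketched as follows; `maximize_violation` is a placeholder for the worst-case search of Problem 4, the variable names mirror the text's *V _{new}*, *u _{gen}*, and *u _{q}* as interpreted here, and any constraint functions used with it are hypothetical.

```python
def generate_scenarios(x_B, u_q, V, g_list, maximize_violation, tol=1e-6):
    """Sketch of the scenario generation loop (cf. Algorithm 2, Table 2):
    repeatedly run a worst-case search (Problem 4) on the currently
    violated constraints, then look for constraints that the generated
    scenario violates but that have not yet been handled this iteration."""
    generated = []
    handled = set()               # indices of constraints already maximized
    V_new, u_gen = list(V), u_q   # start from the randomly sampled scenario
    while V_new:
        handled.update(V_new)
        # worst-case search: maximize the summed violation of V_new from u_gen
        u_gen = maximize_violation(x_B, V_new, u_gen)
        generated.append(u_gen)
        # constraints newly violated at u_gen that were not yet handled
        V_new = [i for i, g in enumerate(g_list)
                 if i not in handled and g(x_B, u_gen) > tol]
    return generated
```

Because each constraint index enters `handled` at most once, the loop terminates after at most as many worst-case searches as there are constraints.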

### 4.2 Scenario-Based Local Robust Optimization.

SGLRO uses a simple scenario-based local robust optimization method that iteratively performs a local search to find the worst-case scenario for each constraint present. The implementation of the local robust optimization method is given in Appendix A, Table 7. In each iteration, the local robust optimization method solves Problem 4 to find the worst-case scenario (cf. line 5, Appendix A, Table 7) for each constraint. Any worst-case scenarios that do violate constraints are added to $\bar{U}$ (cf. lines 6–9, Appendix A). If new scenarios have been added to $\bar{U}$, the scenario robust optimization problem is solved to obtain a new candidate robust optimal solution (cf. lines 10–11, Appendix A, Table 7). This process repeats until no new scenarios are added to $\bar{U}$ (cf. lines 2 and 10, Appendix A, Table 7), after which the local robust optimization method stops and returns its current solution as the robust optimal solution.
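This alternation can be sketched as follows, with `worst_case` standing in for the per-constraint worst-case search (Problem 4) and `solve_rsro` for the scenario robust optimization problem; both are placeholders rather than the paper's implementation.

```python
def local_robust_opt(x_B, U_bar, g_list, worst_case, solve_rsro, tol=1e-6):
    """Sketch of the scenario-based local robust optimization step
    (cf. Appendix A, Table 7): alternate a local worst-case search for
    each constraint with re-solving the scenario robust optimization
    problem, stopping once no new violating scenarios are found."""
    while True:
        new_scenarios = []
        for i, g in enumerate(g_list):
            u_w = worst_case(x_B, i)        # local worst case for g_i
            if g(x_B, u_w) > tol and u_w not in U_bar:
                new_scenarios.append(u_w)   # violating scenario: keep it
        if not new_scenarios:
            return x_B                      # no new scenarios: converged
        U_bar.extend(new_scenarios)
        x_B = solve_rsro(U_bar)             # new candidate robust solution
```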

## 5 Examples

SGLRO's performance was compared against a deterministic double loop robust optimization method (see Appendix B, Table 8) and SGR^{2}O [18] across five different examples of non-convex robust optimization problems. The fifth example problem was a scalable test problem, which was run for increasing numbers of design variables, uncertain parameters, and constraints. All examples considered only interval uncertainty.

Because sampling-based robust optimization methods (e.g., SGR^{2}O and SGLRO) are inherently similar to the sampling done in Monte Carlo simulation, Monte Carlo simulation could not be used to verify the robust feasibility of the solutions found. However, in three of the five examples (1, 3, and 4), the set of worst-case scenarios for constraints at the robust optimal solution is known to consist of scenarios with uncertain parameters at combinations of their maximum and minimum values. The set of all such scenarios was used to determine the worst-case constraint violations of the approaches compared in these examples. In the remaining two examples, the worst-case constraint violations were determined through alternate analyses (graphically in Example 2 and analytically in Example 5).

All examples used the lower bounds for the design variables as the initial conditions for the approaches compared, except where noted otherwise. In all examples, the objective function was treated as being unaffected by uncertainty. In all five examples, SGR^{2}O used *N _{s}* = 12 scenarios, *N _{R}* = 10, *N _{F}* = 1 (number of scenarios sampled per iteration), and *ɛ* = 10^{−6}, which is the same as the constraint feasibility tolerance used by the optimization solver. The nominal scenario *u _{nom}* used by SGLRO was the midpoint of the range for each uncertain parameter. All methods randomly sampled scenarios from a uniform distribution between the lower and upper bounds for each uncertain parameter.

The number of iterations used for each problem was set based on the specific features of the problem. As SGR^{2}O and SGLRO are non-deterministic, they were run 100 times for Examples 1, 2, 3, and 4. Because Example 5 has a single worst-case scenario, SGLRO's performance was deterministic; thus, it was run once for each problem size. However, SGR^{2}O was non-deterministic for Example 5, so it was run 10 times for each problem size considered. SGR^{2}O was not run 100 times in Example 5 due to the high computational cost associated with very large problem sizes.

All optimization problems used by SGR^{2}O and SGLRO were solved using matlab's fmincon solver with the sequential quadratic programming option [21]. However, the deterministic double loop approach used the interior point option instead, as it could not find the robust optimal solution in Example 3 when using sequential quadratic programming. When SGR^{2}O solved the scenario reduction refinement problem detailed in Ref. [18], fmincon's “OptimalityTolerance” and “StepTolerance” settings were set to 10^{−3}; additionally, the “M” parameter from Ref. [18] was set to 10^{6}. When any method compared solved Problem 3, fmincon's “MaxIterations” setting ($N_{\alpha}$) was set to 1000 and its “MaxFunctionEvaluations” setting was set to 10^{6}. All other formulations were solved using fmincon's default parameters. Gradient information was not supplied to fmincon for any of the examples.

### 5.1 Example 1: Basic Circle Problem.

While Eq. (5) is an extremely simple optimization problem, its feasible region requires the constraint (*x* − *u*_{1})^{2} + (*y* − *u*_{2})^{2} − 5 ≤ 0 to be imposed for four different combinations of values for *u*_{1} and *u*_{2} (see Fig. 2). There are four locally optimal solutions to Eq. (5), (± 1, 0) and (0, ± 1), which all share the same globally optimal cost of −1.
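Assuming the uncertain parameters range over *u*_{1}, *u*_{2} ∈ [−1, 1] (an assumption consistent with the midpoint nominal scenario used in the experiments), the four corner scenarios can be checked directly; at the robust optimum (1, 0), the farthest corners make the constraint active:

```python
import itertools

# constraint from Eq. (5); the uncertainty box u1, u2 in [-1, 1] is assumed
def g(x, y, u1, u2):
    return (x - u1) ** 2 + (y - u2) ** 2 - 5.0

corners = list(itertools.product([-1.0, 1.0], repeat=2))

# at (1, 0) the corners (-1, +/-1) give g = 2**2 + 1**2 - 5 = 0, so the
# candidate sits exactly on the boundary of the robust feasible region
print(max(g(1.0, 0.0, u1, u2) for u1, u2 in corners))  # 0.0
```

Moving any farther from the origin, e.g., to (1.1, 0), makes the worst corner violation positive, which is why the four corner scenarios jointly define the feasible region in Fig. 2.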

All methods compared in Example 1 used *x* = 0.5 and *y* = 0 as their initial conditions. SGR^{2}O and SGLRO were run with *N _{I}* = 100 iterations. Figure 3 shows a graphical example of how SGLRO solves Example 1. Note that because SGLRO uses local optimization, it does not need to find all four scenarios which define the feasible region depicted in Fig. 2.

Table 3 lists the results of the three methods used to solve Example 1. SGLRO reliably converged to the robust optimal solution of Example 1. SGR^{2}O found the robust optimal solution in 99 of its 100 runs. The one run where SGR^{2}O did not converge was caused by sampling a scenario (*u* = [−0.94, 0.86]) that was extremely close to one of the four scenarios defining the feasible region in Fig. 2. This scenario reduced the probability of sampling a scenario which showed SGR^{2}O's current solution to be infeasible, causing it to run out of iterations before finding such a scenario. The deterministic double loop approach did not converge in Example 1, becoming trapped in an infinite loop cycling between the scenarios shown in Fig. 2.

Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
---|---|---|---|---|---|
SGR^{2}O (mean) | 137.3 | 655.4 | 0.0056 | −1.0015 | 5.97 |
SGR^{2}O (standard deviation) | 6.338 | 39.65 | 0.0557 | 0.0151 | 0.30 |
SGLRO (mean) | 56.9 | 314.6 | 0 | −1 | 4.68 |
SGLRO (standard deviation) | 22.2 | 115.8 | 0 | 1.6521 × 10^{−15} | 1.21 |
Deterministic double loop | ∞ | ∞ | N/A | N/A | N/A |


### 5.2 Example 2: Local Maxima Example.

Example 2 demonstrates the benefit of SGR^{2}O and SGLRO's global searches relative to a local method like the deterministic double loop approach.

There is only one robust feasible solution to Example 2, which is *x* = 0; all other values of *x* are infeasible for at least one value of *u*. SGR^{2}O and SGLRO were run with *N _{I}* = 100 iterations for Example 2. All approaches in Example 2 used the initial point *x _{IC}* = 0.1. Figure 4 shows a plot of the constraint in Example 2 as a function of *x* and *u*, along with the solutions found by the approaches run on Example 2.

Table 4 lists the results of the three approaches compared in Example 2. SGLRO and SGR^{2}O reliably found the robust optimal solution; however, the deterministic double loop approach found an infeasible solution (see Fig. 4). This occurred because the deterministic double loop approach uses a local search to find worst-case scenarios, which caused it to find a scenario that locally maximizes the value of the constraint (*u* = 0.215) instead of the global maximum (*u* = 1). SGLRO was significantly faster than the other two approaches compared in Example 2. The deterministic double loop approach would be fastest if it used sequential quadratic programming as its solver, but it would still find the same infeasible solution shown in Fig. 4. SGR^{2}O used scenario reduction in 15 of its 100 runs. These 15 runs required many more constraint function calls than the other 85 runs, which caused the large standard deviation in SGR^{2}O's number of constraint function calls.

Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
---|---|---|---|---|---|
SGR^{2}O (mean) | 178 | 1865 | 1.2824 × 10^{−12} | −2.3544 × 10^{−12} | 6.34 |
SGR^{2}O (standard deviation) | 214 | 3144 | 8.98711 × 10^{−12} | 1.8123 × 10^{−11} | 2.78 |
SGLRO (mean) | 14.4 | 143.5 | 2.4689 × 10^{−14} | −4.5330 × 10^{−14} | 2.18 |
SGLRO (standard deviation) | 4.79 | 17.24 | 2.2012 × 10^{−13} | 4.0414 × 10^{−13} | 0.58 |
Deterministic double loop | 118 | 188 | 0.4659 | −0.719 | 1 |


### 5.3 Example 3: Robust Welded Beam.

Example 3 is a robust optimization variant of the well-known welded beam problem considered by Ragsdell and Phillips [22], taken from Refs. [18,23]. The eight uncertain parameters considered were deviations in the values of the problem's four design variables (dimensions of the weld and of the beam) and the length, load, and failure stresses of the beam. The objective function is to minimize the cost of the beam without considering uncertainty, accounting for the material cost of the beam and the cost of the weld. Example 3 has six constraints: two require that the beam does not fail under shear and bending stress, and the other four limit the deflection of the beam, ensure that the beam does not buckle, require that the weld's thickness is not larger than the beam's width, and limit the weld's thickness. SGR^{2}O and SGLRO were run with *N _{I}* = 100 iterations.

Table 5 lists the results for all three approaches in Example 3. SGLRO and the deterministic double loop approach reliably converged to the robust optimal solution, but SGR^{2}O found the robust optimal solution in only 99 of its 100 runs. Both SGR^{2}O and SGLRO reached the robust optimal solution after performing scenario generation twice. Note that the number of scenarios sampled by SGR^{2}O was approximately a tenth of the number used for this problem in Ref. [18]; with more iterations, all 100 runs of SGR^{2}O would have converged, as they did in Ref. [18]. The deterministic double loop approach was the fastest approach in Example 3.

Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
---|---|---|---|---|---|
SGR^{2}O (mean) | 536.9 | 14,509 | 0.001 | 2.7859 | 4.17 |
SGR^{2}O (standard deviation) | 34.75 | 1245.5 | 0.001 | 3.9414 × 10^{−4} | 0.40 |
SGLRO (mean) | 278.2 | 5349.4 | 8.0312 × 10^{−9} | 2.7859 | 4.2 |
SGLRO (standard deviation) | 19.35 | 452.02 | 3.1742 × 10^{−8} | 1.97 × 10^{−8} | 0.402 |
Deterministic double loop | 320 | 4679 | 6.6404 × 10^{−6} | 2.7859 | 6 |


### 5.4 Example 4: Enhanced Robust Speed Reducer.

A new constraint *g _{13}* constrains the allowable variation of the distance between the two shafts in the speed reducer. This new constraint relaxes constraints *g _{5}* and *g _{6}*, which allows a wider range of designs than the original problem did in Ref. [24]. The upper and lower bounds for the design variables in the problem have been changed to allow a larger feasible region. The objective function is to minimize the sum of the normal stresses present in the two gears (*m _{2}* and *m _{3}*). The volume of the speed reducer (*m _{1}*) is constrained by constraint *g _{10}*. Objective robustness (considered in Ref. [5]) is not considered. The initial conditions used are the same as the ones used in Ref. [5] ([*x _{1}*, *x _{2}*, *x _{3}*, *x _{4}*, *x _{5}*, *x _{6}*, *x _{7}*] = [3.58, 0.71, 18, 8, 8, 3.5, 5.3]). The uncertain deviations of the design variables used in this example were [*u _{1}*, *u _{2}*, *u _{3}*, *u _{4}*, *u _{5}*, *u _{6}*, *u _{7}*] = [Δ*x _{1}*, Δ*x _{2}*, Δ*x _{3}*, Δ*x _{4}*, Δ*x _{5}*, Δ*x _{6}*, Δ*x _{7}*]. Unlike the original problem, in Example 4, some constraints have multiple worst-case scenarios, which makes solving the robust optimization problem more challenging. SGR^{2}O and SGLRO were run for *N _{I}* = 100 iterations. Table 6 provides the results for the approaches tested.

_{I}Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
---|---|---|---|---|---|

SGR^{2}O (mean) | 914.3 | 57,350 | 0.1668 | 1.885 | 9.3 |

SGR^{2}O (standard deviation) | 95.72 | 12,500 | 4.4807 × 10^{−4} | 5.0658 × 10^{−4} | 1.51 |

SGLRO (mean) | 731.1 | 11,310 | 5.3529 × 10^{−9} | 1.886 | 9.11 |

SGLRO (standard deviation) | 85.99 | 1564 | 2.822 × 10^{−8} | 3.9391 × 10^{−6} | 1.39 |

Deterministic double loop | _{∞} | _{∞} | N/A | N/A | N/A |


Only SGLRO found the robust optimal solution every time. SGR^{2}O reliably found an infeasible solution that is extremely close to the robust optimal solution but is not robust because small worst-case constraint violations are present in constraints *g*_{11}, *g*_{12}, and *g*_{13}. Note that SGR^{2}O did not perform scenario reduction in Example 4. Like Example 1, Example 4 required multiple worst-case scenarios for one of its constraints (*g*_{13}), which caused the deterministic double loop approach to enter an infinite loop.

### 5.5 Example 5: Robust DTLZ9.

SGR^{2}O [18], SGLRO, and the deterministic double loop approach were run for varying sizes of Example 5, ranging from *n* = 10 design variables to *n* = 250 design variables. The parameter *M* in the DTLZ9 test problem was chosen to be half the number of design variables (*M* = *n*/2). Example 5 has a single worst-case scenario, in which every deviation equals −0.09, so the robust optimal solution assigns a value of 0.1 to all but the last two design variables, which instead equal 0.5527. All three approaches found the robust optimal solution to Example 5 for all problem sizes considered. From the computational complexity analysis described in Appendix C, it should be noted that the number of constraints and uncertain parameters in the DTLZ9 problem [25] increases linearly with the number of design variables, so SGR^{2}O [18], SGLRO, and the deterministic double loop approach should all have *O*(*n*^{2}) constraint calls relative to the number of design variables (*n*). Because SGR^{2}O's behavior was not deterministic in Example 5, it was run 10 times for each problem size and the medians of the number of function calls were used for comparison. This non-deterministic behavior was caused by the scenario generation method used by SGR^{2}O, which generates some scenarios by minimizing constraint violations.

As shown in Fig. 5, the number of objective function calls made by SGR^{2}O, SGLRO, and the deterministic double loop approach increased linearly with problem size. This is expected: the number of solver iterations required was roughly the same across problem sizes, but the number of function calls needed to compute the gradient of the objective function grew linearly as more design variables were added. SGLRO always found the worst-case scenario on the first iteration, so it needed fewer objective function calls than the deterministic double loop approach.

As shown in Fig. 6, the number of constraint function calls that SGR^{2}O, SGLRO, and the deterministic double loop approach made increased quadratically as the size of the problem increased (the *R*^{2} values for fitting a quadratic curve are 0.96 for SGR^{2}O, 1 for SGLRO, and 0.99 for the deterministic double loop approach). This result numerically demonstrates that all three approaches have comparable scalability. This relationship also confirms the predicted *O*(*n*^{2}) computational cost and demonstrates the correctness of the computational complexity results presented in Appendix C. SGLRO used fewer constraint function calls than the deterministic double loop approach only when the deterministic double loop approach used matlab's interior point solver. When it used sequential quadratic programming as the solver, the deterministic double loop approach required fewer constraint function calls in Example 5 than the other approaches.
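The quadratic-fit check described above is straightforward to reproduce. The sketch below fits a quadratic to constraint call counts with numpy and reports the *R*^{2} of the fit; the data here are synthetic and purely illustrative, not the measured counts from Fig. 6:

```python
import numpy as np

def quadratic_r2(n_values, call_counts):
    """Fit call_counts ~ a*n^2 + b*n + c and return the R^2 of the fit."""
    coeffs = np.polyfit(n_values, call_counts, deg=2)
    predicted = np.polyval(coeffs, n_values)
    ss_res = np.sum((call_counts - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((call_counts - np.mean(call_counts)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic counts with quadratic growth plus noise (illustrative only).
n = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0])
calls = 3.0 * n**2 + 40.0 * n + np.array([120.0, -300.0, 500.0, -200.0, 350.0, -150.0])
print(quadratic_r2(n, calls))  # close to 1 for near-quadratic data
```

An *R*^{2} near 1, as reported for all three approaches, indicates that a quadratic curve explains nearly all of the variation in the call counts.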

### 5.6 Discussion of Results.

For the five examples considered, SGLRO was the only approach that reliably found a robust optimal solution. The deterministic double loop approach found robust optimal solutions for Examples 3 and 5, but it could not do so for Examples 1, 2, and 4. In Example 2, the local maxima present in the constraints prevented the deterministic double loop approach from finding the true worst-case scenarios for the constraints. In Examples 1 and 4, however, the deterministic double loop approach failed because both problems have some constraints with multiple worst-case scenarios. This violates the assumption that each constraint has a single worst-case scenario, which the deterministic double loop approach and other worst-case-based approaches to robust optimization [5] require. SGLRO does not require this assumption, which allows it to find all of the scenarios that are needed in order to find the robust optimal solutions to Examples 1 and 4. Additionally, SGLRO uses random sampling when initially searching for worst-case scenarios, which allows it to avoid issues with local maxima such as those present in Example 2.

Although SGR^{2}O almost always found the robust optimal solution in Examples 1 and 3, it reliably failed to find the robust optimal solution in Example 4. The occasional failures in Examples 1 and 3 occurred because SGR^{2}O's number of iterations was set too low. In Example 4, however, increasing the number of iterations would not improve SGR^{2}O's performance. SGR^{2}O failed to find a robust optimal solution in Example 4 because it found an infeasible solution for which the probability of sampling a scenario in which that solution was infeasible was extremely low. Robust optimization methods that rely solely on random sampling, such as SGR^{2}O, are unable to distinguish between this type of solution and a robust optimal solution. SGLRO avoided this problem because its local robust optimization step can easily find a scenario where this solution is infeasible, allowing it to find the robust optimal solution to Example 4. This step also ensured that SGLRO reliably found the robust optimal solution to Examples 1 and 3 using fewer iterations than SGR^{2}O needed.

Curiously, although Example 2 is a very small problem, using scenario reduction actually increased SGR^{2}O's computational cost in Example 2. This occurred because the scenario reduction method proposed in Ref. [18] attempts to remove as many scenarios as possible. This causes SGR^{2}O to remove some scenarios that it needs to find the robust optimal solution, so additional function calls are needed to find these scenarios a second time. Thus, SGLRO was faster than SGR^{2}O when finding the robust optimal solution to Example 2.

## 6 Conclusions

This paper presented SGLRO, a new approach for solving non-convex robust optimization problems. SGLRO extends past work [18] on sampling-based approaches for robust optimization with a new scheme for scenario generation and a new local robust optimization method for refining SGLRO's final solution. The local robust optimization method makes SGLRO more reliable at finding the robust optimal solution than sampling-based approaches and worst-case approaches. It was also demonstrated experimentally that SGLRO was capable of finding the robust optimal solution to several different example problems, even when existing robust optimization methods could not reliably do so.

The results presented demonstrate that SGLRO can efficiently solve complex non-convex robust optimization problems with large amounts of uncertainty. However, the results also indicate several areas of potential improvement for SGLRO. SGLRO's performance could potentially be improved by fully integrating the local robust optimization method into the process of sampling scenarios, rather than running it after all scenarios are sampled. A non-uniform scenario sampling approach could make use of existing infeasible scenarios to find new infeasible scenarios more quickly when near the robust optimal solution, speeding up the rate of convergence. Alternate strategies for scenario generation could more quickly obtain useful scenarios, providing a similar benefit. Developing an approach for scenario reduction which avoids the issues observed with the method presented in Ref. [18] could also potentially provide an improvement in performance.

All of the approaches discussed require that there exists a finite set $\bar{U}$ that can be used to find the robust optimal solution. It is possible to have a robust optimization problem where Problem 2 requires an infinite number of scenarios (such as a line or other continuous curve of scenarios) in order to reach the robust optimal solution. It may be possible to extend the framework of SGLRO to use robust feasibility cuts, such as in Ref. [26], or surrogate modeling-based techniques, such as in Ref. [27], to handle such problems.

While this paper has only discussed feasibility robust optimization (uncertainty only appearing in the constraints), uncertainty in an objective function (*f*(*x*,*u*) instead of *f*(*x*)) can be dealt with by moving the objective function into the constraints (see Sec. 2.1 of Ref. [1]). This concept has also been extended to multi-objective robust optimization (MORO) [28]. As presented, the proposed approach cannot be used for solving MORO problems, as MORO requires accounting for a set of designs (which trade-off between objectives) rather than just one design. Future work will explore methods for using scenario-based approaches to solve MORO problems.
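The transformation mentioned above is the standard epigraph reformulation: introduce an auxiliary design variable *t*, minimize *t*, and move the uncertain objective into the constraint set, so that the resulting problem again has a certain objective. A minimal sketch in the notation of Problem 1 (see Ref. [1] for details):

```latex
\begin{aligned}
\min_{x,\, t} \quad & t \\
\text{s.t.} \quad & f(x, u) - t \le 0 \quad \forall u \in U, \\
& g_i(x, u) \le 0 \quad \forall u \in U, \quad i = 1, \dots, I, \\
& d_l(x) \le 0, \quad l = 1, \dots, L
\end{aligned}
```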

## Acknowledgment

The work presented here was supported in part by the Naval Air Warfare Center (Funder ID: 10.13039/100010217) under cooperative agreement N00421132M006. Such support does not constitute an endorsement by the funding agency of the opinions expressed in the paper.

Implementations of the SGLRO algorithm, SGR^{2}O, the deterministic double loop approach used for comparison, and all numerical examples used in this paper are available online.^{1}


## Nomenclature

### Notation

*u* = vector of all uncertain parameters present in the optimization problem

*x* = vector of all design variables present in the optimization problem

*D* = number of design variables

*P* = number of uncertain parameters

*V* = set of violated constraints

$\bar{U}$ = set of scenarios used to solve the scenario robust optimization problem

$U$ = set of all possible combinations of uncertain parameters (domain of uncertain parameters)

*x*_{B} = current best solution for design variables

$N_\alpha$ = maximum number of optimization solver iterations

*N*_{I} = number of iterations to run SGLRO algorithm

*N*_{Q} = number of iterations used by a local robust optimization method

*d*_{l}(*x*) = *l*th constraint without uncertainty

*f*(*x*) = objective function

*g*_{i}(*x*,*u*) = *i*th constraint subject to uncertainty

*q*_{j}(*u*) = *j*th constraint defining the domain of uncertain parameters

*I*, *J*, *L* = number of constraints on design containing uncertainty, on the domain of uncertain parameters, and on design not containing uncertainty, respectively

*R*(*u*) = the set of the indices of the constraints which *u* should impose in a reduced scenario robust optimization problem

*ɛ* = user-specified constraint tolerance

### Definitions

- Scenario = a scenario assigns a value to all uncertain parameters present in a problem

- SGR^{2}O = scenario generation and reduction robust optimization, the robust optimization approach of Rudnick-Cohen et al. [18]

### Appendix A: Implementation of the Local Robust Optimization Method

### Appendix B: Deterministic Double Loop Robust Optimization Method

### Appendix C: Computational Complexity of SGLRO and Other Robust Optimization Approaches

Table 9 details the computational complexity of each of the steps within SGLRO in terms of the total number of constraint function calls (the total number of times that any of the constraints *d*_{l}(*x*), *g*_{i}(*x*,*u*), and *q*_{j}(*u*) are evaluated). It is assumed that all optimization solvers use a central difference method to estimate derivatives and that the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm [29] is used to estimate the Hessian of *f*(*x*) for both the local robust optimization method and any optimization solvers which make use of Hessians (e.g., matlab's fmincon [21]), as it is more efficient than numerically computing the Hessian at every iteration via finite differences. Each entry in Table 9 is computed assuming that the step in question occurs on every iteration of SGLRO, which is why all steps except "Local Robust Optimization" are multiplied by *N*_{I}. "Reduced Scenario Robust Optimization" requires at most $N_\alpha \times D \times (I \times N_I + L)$ function calls, as Problem 3 has at most *I* × *N*_{I} + *L* constraints (if a scenario is generated on every single iteration), each of which requires *D* function calls to evaluate its gradient, for a maximum of $N_\alpha$ solver iterations. A similar expression exists for "Worst Case Search," except using *P* instead of *D* and *J* instead of *L*; however, the maximum number of constraints used during "Worst Case Search" will never increase. The cost of "Local Robust Optimization" is the sum of the costs of "Worst Case Search" and "Reduced Scenario Robust Optimization," except that *N*_{Q} (the number of iterations "Local Robust Optimization" needs to converge) replaces *N*_{I}.
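As a sanity check on the $N_\alpha \times D \times (I \times N_I + L)$ bound for the reduced scenario robust optimization step, the bookkeeping can be sketched as a simple counting function. This only illustrates the counting argument under the stated central-difference assumption (two calls per partial derivative); it is not an implementation of SGLRO, and the parameter values in the usage line are hypothetical:

```python
def reduced_scenario_ro_call_bound(n_alpha, d, i_constraints, n_i, l_constraints):
    """Upper bound on constraint calls for one reduced scenario RO solve.

    Assumes at most i_constraints * n_i scenario constraints plus
    l_constraints deterministic constraints, each needing a central
    difference gradient (2 calls per design variable, d variables),
    for up to n_alpha solver iterations.
    """
    num_constraints = i_constraints * n_i + l_constraints
    calls_per_gradient = 2 * d  # central difference: g(x + h) and g(x - h)
    return n_alpha * calls_per_gradient * num_constraints

# Hypothetical sizes loosely modeled on a 7-variable, 13-constraint problem:
print(reduced_scenario_ro_call_bound(n_alpha=50, d=7, i_constraints=13,
                                     n_i=100, l_constraints=0))  # 910000
```

Dropping the constant factor of 2 recovers the $O(N_\alpha \times D \times (I \times N_I + L))$ expression used in Table 9.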

Step | Upper bound on number of function calls |
---|---|
Reduced scenario robust optimization | $O(N_I^2 \times N_\alpha \times D \times I + N_I \times N_\alpha \times D \times L)$ |
Worst-case search | $O(N_I \times N_\alpha \times P \times I \times J \times N_S)$ |
Feasibility checking (lines 8–13 of Algorithm 1) | $O(N_I \times I)$ |
Local robust optimization | $O(N_Q \times N_\alpha \times P \times I \times J + N_Q^2 \times N_\alpha \times D \times I + N_Q \times N_\alpha \times D \times L)$ |


The term $N_\Omega = N_I + N_Q$ can be used to represent the total number of iterations used by SGLRO, which simplifies its worst-case computational cost to the expression given in Table 10, which is the sum of the terms in Table 9. Table 10 also provides a comparison of SGLRO's computational cost against a basic deterministic double loop approach (see Appendix B, Table 8 for implementation) and SGR^{2}O [18]. *N*_{S} is the maximum limit on the number of scenarios used by SGR^{2}O.

Approach | Theoretical worst-case computational cost |
---|---|
SGR^{2}O | $O(N_\Omega \times N_\alpha \times (N_S \times I + L) \times D + N_\Omega \times N_\alpha \times (N_S \times I + J) \times P)$ |
Deterministic double loop | $O(N_\Omega \times N_\alpha \times P \times I \times J + N_\Omega \times N_\alpha \times D \times I + N_\Omega \times N_\alpha \times D \times L)$ |
SGLRO | $O(N_\Omega^2 \times N_\alpha \times P \times I \times J + N_\Omega^2 \times N_\alpha \times D \times I + N_\Omega \times N_\alpha \times D \times L)$ |


From a theoretical standpoint, both SGR^{2}O [18] and a deterministic double loop approach should be faster than SGLRO, as SGLRO's bound contains $N_\Omega^2$ terms. SGR^{2}O appears faster because it uses scenario reduction to limit the maximum number of scenarios in use, which changes the cost of solving Problem 2 or Problem 3 to $O(N_I \times N_S \times N_\alpha \times D \times I + N_I \times N_\alpha \times D \times L)$. The deterministic double loop optimization approach only considers one scenario per constraint, which provides a similar benefit. However, there exist robust optimization problems where the robust optimal solution cannot be found by considering only one scenario per constraint. Additionally, the use of scenario reduction may require additional scenarios to be generated, which can result in SGR^{2}O requiring more constraint function calls than SGLRO.
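To make this comparison concrete, the dominant terms of the Table 10 bounds can be evaluated numerically with the big-O constants dropped. The parameter values below are hypothetical and only illustrate how the $N_\Omega^2$ terms cause SGLRO's bound to outgrow the deterministic double loop bound:

```python
def sglro_bound(n_omega, n_alpha, p, i, j, d, l):
    # Dominant terms of SGLRO's worst-case bound (Table 10, constants dropped).
    return (n_omega**2 * n_alpha * p * i * j
            + n_omega**2 * n_alpha * d * i
            + n_omega * n_alpha * d * l)

def double_loop_bound(n_omega, n_alpha, p, i, j, d, l):
    # Deterministic double loop bound: linear in n_omega in every term.
    return (n_omega * n_alpha * p * i * j
            + n_omega * n_alpha * d * i
            + n_omega * n_alpha * d * l)

# Hypothetical sizes: SGLRO's quadratic terms dominate for large n_omega.
args = dict(n_omega=100, n_alpha=50, p=7, i=13, j=7, d=7, l=3)
print(sglro_bound(**args) > double_loop_bound(**args))  # True
```

The worst-case bound being larger does not imply SGLRO is slower in practice; as the experiments show, the other approaches may fail to converge at all, or may spend extra calls regenerating reduced scenarios.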