Research Papers: Design Automation

Diagonal Quadratic Approximation for Parallelization of Analytical Target Cascading

Author and Article Information
Yanjing Li

Department of Electrical Engineering and Computer Science, Stanford University, Stanford, CA 94305; yanjingl@stanford.edu

Zhaosong Lu

Department of Mathematics, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada; zhaosong@sfu.ca

Jeremy J. Michalek

Department of Mechanical Engineering and Department of Engineering and Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213; jmichalek@cmu.edu

A quasi-separable problem is separable except for a small number of linking variables that appear in multiple subsystems; we define this rigorously later.

Note that the penalty weights are squared in the definition of multipliers in this case because they are squared in the definition of the quadratic penalty term.
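As a hedged illustration of this footnote (the symbols v, w, t, and r are assumed here following common augmented-Lagrangian ATC notation and are not taken from this page), the relaxation term and the corresponding method-of-multipliers update can be sketched as:

$$\phi(t - r) = v^{\top}(t - r) + \left\lVert w \circ (t - r) \right\rVert_2^2,$$
$$v^{k+1} = v^k + 2\, w \circ w \circ \left(t^k - r^k\right),$$

where $\circ$ denotes the elementwise product. The weights $w$ appear squared in the multiplier update precisely because they are squared in the quadratic penalty term.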

Three convergence criteria are widely used in implementations of optimization methods: the gradient of the Lagrangian function is close to zero, the objective function value stops changing, or the solution point stops changing. We use the third criterion here; for practical purposes, a small nonzero tolerance determines when the solution point has stopped changing.
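A minimal sketch of the third criterion, assuming a Euclidean norm on the step between successive iterates (the function name and tolerance value are illustrative, not from the paper):

```python
import math

def has_converged(x_prev, x_curr, tol=1e-6):
    """Return True when the solution point has stopped changing,
    measured by the Euclidean norm of the step between iterates."""
    step = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_prev, x_curr)))
    return step < tol

# A step of 1e-9 is below the tolerance; a step of 0.5 is not.
print(has_converged([1.0, 2.0], [1.0, 2.0 + 1e-9]))  # True
print(has_converged([1.0, 2.0], [1.5, 2.0]))         # False
```

Other norms (e.g., the maximum componentwise change) are equally common; the essential point is the small nonzero tolerance.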

We did not use the number of redesigns (i.e., the number of times each subproblem must be solved) as a metric because it can be misleading. Early iterations may require many function evaluations per subproblem, whereas later iterations are very fast because the starting point is close to the solution. Thus, the redesign count is neither an accurate nor an easy-to-interpret measure of computational cost.

Application: MATLAB Version 7.0 with TOMLAB NPSOL Solver Version 5.3; OS: SUSE Linux; CPU: Intel(R) Xeon(TM) 2.80GHz. For the test results, we measured execution latency as the time required to complete the longest-running subproblem in each iteration, capturing the effect of imperfect load balancing. We did not explicitly capture the communication overhead of multiprocessing, but in these examples it is very small compared to the computation time of each iteration.

J. Mech. Des 130(5), 051402 (Mar 25, 2008) (11 pages) doi:10.1115/1.2838334 History: Received October 13, 2006; Revised June 26, 2007; Published March 25, 2008

Analytical target cascading (ATC) is an effective decomposition approach for engineering design optimization problems with hierarchical structure. ATC splits the overall system into subsystems, which are solved separately and coordinated via target/response consistency constraints. As parallel computing becomes more common, it is desirable for ATC subproblems to be separable so that they can be solved concurrently, increasing computational throughput. In this paper, we first examine existing ATC methods and provide an alternative to existing nested coordination schemes using the block coordinate descent (BCD) method. We then apply diagonal quadratic approximation (DQA), linearizing the cross term of the augmented Lagrangian function to create separable subproblems. Local and global convergence proofs are described for this method. To further reduce overall computational cost, we introduce the truncated DQA (TDQA) method, which limits the number of inner-loop iterations of DQA. These two new methods are compared empirically to existing methods on test problems from the literature. Results show that BCD reduces the computational cost of the nested-loop methods and that the truncated method generally achieves lower overall computational cost than both the nested-loop methods and the best previously reported results.
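The core DQA idea described in the abstract can be sketched on a toy two-subproblem consistency problem. This hypothetical example (not one of the paper's test problems) minimizes f1(t) = (t - 2)^2 and f2(r) = (r + 1)^2 subject to the consistency constraint t = r. Expanding the quadratic penalty w^2 (t - r)^2 produces a cross term in t and r; DQA restores separability by fixing the other subproblem's variable at its last inner-loop value, so the two subproblems are independent and could run in parallel:

```python
def solve_t(v, w, r_k):
    # argmin_t (t - 2)^2 + v*t + w^2*(t - r_k)^2, in closed form
    return (4 - v + 2 * w**2 * r_k) / (2 + 2 * w**2)

def solve_r(v, w, t_k):
    # argmin_r (r + 1)^2 - v*r + w^2*(t_k - r)^2, in closed form
    return (v - 2 + 2 * w**2 * t_k) / (2 + 2 * w**2)

def dqa(w=1.0, outer_tol=1e-8, inner_tol=1e-12):
    t, r, v = 0.0, 0.0, 0.0
    for _ in range(200):                       # outer loop: multiplier updates
        for _ in range(200):                   # inner loop: separable subproblems
            t_prev, r_prev = t, r
            t = solve_t(v, w, r_prev)          # these two solves are independent
            r = solve_r(v, w, t_prev)          # and could execute concurrently
            if max(abs(t - t_prev), abs(r - r_prev)) < inner_tol:
                break
        if abs(t - r) < outer_tol:             # consistency achieved
            break
        v += 2 * w**2 * (t - r)                # method-of-multipliers update
    return t, r, v

t, r, v = dqa()
print(t, r, v)  # t and r approach 0.5; the multiplier approaches 3
```

TDQA, as described in the abstract, would cap the inner loop at a small fixed number of iterations instead of running it to the inner tolerance, trading inner-loop accuracy for fewer subproblem solves per outer iteration.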

Copyright © 2008 by American Society of Mechanical Engineers



Figure 1

(a) Hierarchical problem structure and variable allocation for ATC; (b) variable allocation for ATC after introducing target copies

Figure 2

Example problem structures

Figure 3

Example 1: Computational cost and latency versus solution accuracy

Figure 4

Example 2: Computational cost and latency versus solution accuracy

Figure 5

Example 3: Computational cost and execution latency versus solution accuracy

Figure 6

Example 4: Computational cost and latency versus solution accuracy

Figure 7

Flow charts of methods



