Research Papers: Design Automation

J. Mech. Des. 2017;140(2):021401-021401-13. doi:10.1115/1.4038333.

Multidisciplinary systems with transient behavior under time-varying inputs and coupling variables pose significant computational challenges in reliability analysis. Surrogate models of individual disciplinary analyses could be used to mitigate the computational effort; however, the accuracy of the surrogate models is of concern, since the errors introduced by the surrogate models accumulate at each time-step of the simulation. This paper develops a framework for adaptive surrogate-based multidisciplinary analysis (MDA) of reliability over time (A-SMART). The proposed framework consists of three modules, namely, initialization, uncertainty propagation, and three-level global sensitivity analysis (GSA). The first two modules check the quality of the surrogate models and determine when and where we should refine the surrogate models from the reliability analysis perspective. Approaches are proposed to estimate the potential error of the failure probability estimate and to determine the locations of new training points. The three-level GSA method identifies the individual surrogate model for refinement. The combination of the three modules facilitates adaptive and efficient allocation of computational resources, and enables high accuracy in the reliability analysis result. The proposed framework is illustrated with two numerical examples.
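The adaptive idea described in this abstract can be sketched in a few lines: refine a cheap surrogate only where its error most affects the failure-probability estimate. The one-dimensional limit state, the polynomial surrogate, and all numbers below are hypothetical stand-ins, not the paper's A-SMART modules:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_true(x):
    # Hypothetical "expensive" limit-state function; failure when g < 0.
    return 3.0 - x**2

def fit_surrogate(xs):
    # Cheap quadratic polynomial surrogate of the expensive function.
    return np.poly1d(np.polyfit(xs, g_true(xs), 2))

# Initialization: a small design of experiments.
train = np.array([-3.0, 0.0, 3.0])
# Uncertainty propagation: Monte Carlo samples of the random input.
x_mc = rng.normal(0.0, 1.0, 100_000)

for _ in range(3):  # adaptive refinement loop
    g_hat = fit_surrogate(train)
    vals = g_hat(x_mc)
    pf = float(np.mean(vals < 0.0))  # failure-probability estimate
    # Refine where the surrogate is closest to the limit state g = 0,
    # i.e., where surrogate error most affects the estimate.
    train = np.append(train, x_mc[np.argmin(np.abs(vals))])

pf_true = float(np.mean(g_true(x_mc) < 0.0))
```

Because new training points are placed near the limit state rather than spread uniformly, the surrogate-based estimate converges to the direct Monte Carlo result with few expensive evaluations.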

Commentary by Dr. Valentin Fuster
J. Mech. Des. 2017;140(2):021402-021402-13. doi:10.1115/1.4038005.

Recent advances in simulation and computation capabilities have enabled designers to model increasingly complex engineering problems, taking into account many dimensions, or objectives, in the problem formulation. Increasing the dimensionality often results in a large trade space, where decision-makers (DMs) must identify and negotiate conflicting objectives to select the best designs. Trade space exploration often involves the projection of nondominated solutions, that is, the Pareto front, onto two-objective trade spaces to help identify and negotiate tradeoffs between conflicting objectives. However, as the number of objectives increases, an exhaustive exploration of all of the two-dimensional (2D) Pareto fronts can be inefficient due to a combinatorial increase in objective pairs. Recently, an index was introduced to quantify the shape of a Pareto front without having to visualize the solution set. In this paper, a formal derivation of the Pareto shape index is presented and used to support multi-objective trade space exploration. Two approaches for trade space exploration are presented and their advantages are discussed, specifically: (1) using the Pareto shape index for weighting objectives and (2) using the Pareto shape index to rank objective pairs for visualization. By applying the two approaches to two multi-objective problems, the efficiency of using the Pareto shape index for weighting objectives to identify solutions is demonstrated. We also show that using the index to rank objective pairs provides DMs with the flexibility to form preferences throughout the process without closely investigating all objective pairs. The limitations and future work are also discussed.
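The nondominated filtering and objective-pair ranking that underlie this kind of trade space exploration can be sketched as follows. The conflict score below is a simple correlation-based proxy for ranking objective pairs, not the Pareto shape index derived in the paper:

```python
import numpy as np

def pareto_front(points):
    """Return the nondominated subset (minimization in all objectives)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

def conflict_score(front, i, j):
    # Stand-in for a shape index: strong negative correlation between two
    # objectives over the front signals a strong tradeoff worth visualizing.
    return -np.corrcoef(front[:, i], front[:, j])[0, 1]

rng = np.random.default_rng(1)
pts = rng.random((200, 3))           # hypothetical 3-objective design set
front = pareto_front(pts)

# Rank objective pairs by conflict, most conflicting first, so a DM can
# inspect the most informative 2D projections without viewing all pairs.
pairs = [(0, 1), (0, 2), (1, 2)]
ranking = sorted(pairs, key=lambda ij: conflict_score(front, *ij), reverse=True)
```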

Topics: Shapes, Tradeoffs, Design
J. Mech. Des. 2017;140(2):021403-021403-11. doi:10.1115/1.4038596.

Simulation models are widely used to describe processes that would otherwise be arduous to analyze. However, many of these models merely provide an estimated response of the real systems, as their input parameters are exposed to uncertainty or are partially excluded from the model due to the complexity of, or a lack of understanding of, the problem's physics. Accordingly, the prediction accuracy can be improved by integrating physical observations into low-fidelity models, a process known as model calibration or model fusion. Typical model fusion techniques are essentially concerned with how to allocate information-rich data points to improve the model accuracy. However, methods for extracting more information from data points that are already available have received scant attention. In this paper, we therefore acknowledge the dependence between the prior estimates of the input parameters and the actual input parameters. Accordingly, the proposed framework extracts the information contained in this relation to update the estimated input parameters and utilizes it in a model updating scheme to accurately approximate the real system outputs that are affected by all real input parameters (RIPs) of the problem. The proposed approach can effectively use limited experimental samples while maintaining prediction accuracy. It adjusts model parameters to update the computer simulation model so that it matches a specific set of experimental results. The significance and applicability of the proposed method are illustrated through comparison with a conventional model calibration scheme using two engineering examples.
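A minimal sketch of model calibration in this spirit: a low-fidelity model with one uncertain input parameter is updated from sparse physical observations. The linear model and all data are hypothetical, and the closed-form least-squares update stands in for the paper's updating scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

def low_fidelity(x, theta):
    # Hypothetical simulation model with one uncertain input parameter theta.
    return theta * x

# Sparse physical observations of the real system (true parameter = 2.5).
x_obs = np.linspace(1.0, 5.0, 8)
y_obs = 2.5 * x_obs + rng.normal(0.0, 0.05, x_obs.size)

# Prior estimate of the input parameter, before calibration.
theta_prior = 1.0

# Calibration: least-squares update of theta from the observed data
# (closed form because the model is linear in theta).
theta_post = float(np.sum(x_obs * y_obs) / np.sum(x_obs * x_obs))

# The calibrated model tracks the real system far better than the prior one.
err_prior = np.max(np.abs(low_fidelity(x_obs, theta_prior) - y_obs))
err_post = np.max(np.abs(low_fidelity(x_obs, theta_post) - y_obs))
```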

J. Mech. Des. 2017;140(2):021404-021404-9. doi:10.1115/1.4038212.

A general methodology is presented for time-dependent reliability and random vibrations of nonlinear vibratory systems with random parameters excited by non-Gaussian loads. The approach is based on polynomial chaos expansion (PCE), Karhunen–Loeve (KL) expansion, and quasi Monte Carlo (QMC). The latter is used to estimate multidimensional integrals efficiently. The input random processes are first characterized using their first four moments (mean, standard deviation, skewness, and kurtosis coefficients) and a correlation structure in order to generate sample realizations (trajectories). Characterization means the development of a stochastic metamodel. The input random variables and processes are expressed in terms of independent standard normal variables in N dimensions. The N-dimensional input space is space filled with M points. The system differential equations of motion (EOM) are time integrated for each of the M points, and QMC estimates the four moments and correlation structure of the output efficiently. The proposed PCE–KL–QMC approach is then used to characterize the output process. Finally, classical MC simulation estimates the time-dependent probability of failure using the developed stochastic metamodel of the output process. The proposed methodology is demonstrated with a Duffing oscillator example under non-Gaussian load.
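The QMC moment-estimation step can be sketched as below: a Halton sequence space-fills the unit square, a Box-Muller map turns it into standard normal samples, and the first four moments of an output are estimated from them. The lognormal output is a hypothetical stand-in for the time-integrated EOM response:

```python
import math
import numpy as np

def halton(n, base):
    # Deterministic low-discrepancy sequence in (0, 1).
    seq = np.empty(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

n = 4096
u1, u2 = halton(n, 2), halton(n, 3)
# Map the space-filling points to independent standard normal samples
# via the Box-Muller transform.
z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * math.pi * u2)

# Hypothetical non-Gaussian output: a lognormal response of the normal input.
y = np.exp(0.5 * z)

# First four moments estimated from the QMC samples.
mean, std = y.mean(), y.std()
skew = np.mean((y - mean) ** 3) / std**3
kurt = np.mean((y - mean) ** 4) / std**4
```

For this output the exact mean is exp(0.125) ≈ 1.133, so the quality of the QMC estimate is easy to check; in the paper's setting the same estimates feed the PCE-KL characterization of the output process.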


Research Papers: Design for Manufacture and the Life Cycle

J. Mech. Des. 2017;140(2):021701-021701-15. doi:10.1115/1.4038069.

Manufacturing systems need to be designed to cope with product variety and frequent changes in market requirements. Switching between product families in different production periods often requires reconfiguration of the manufacturing system, with associated additional cost and interruption of production. A mixed integer linear programming (MILP) model is proposed to synthesize manufacturing systems based on the co-platforming methodology, taking into consideration machine-level changes, including addition or removal of machine axes and changing setup, as well as system-level changes such as addition or removal of machines. The objective is to minimize the cost of change needed for transition between product families and production periods. An illustrative numerical example and an industrial case study from a tier 1 automotive supplier are used for verification. Finally, the effect of maintaining a common core of machines in the manufacturing system on the total capital and change cost is investigated. It is demonstrated that synthesizing manufacturing systems designed using the co-platforming strategy reduces the total investment cost, including the initial cost of machines and the cost of reconfiguration.
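As a toy stand-in for the MILP, the sketch below enumerates machine configurations for two production periods and minimizes capital plus reconfiguration cost. Machine names, capabilities, and costs are all hypothetical:

```python
from itertools import chain, combinations

# Hypothetical machine capital costs and the capabilities each provides.
capital = {"A": 10, "B": 6, "C": 4}
provides = {"A": {"mill", "drill"}, "B": {"mill"}, "C": {"drill"}}

# Capabilities required by the product family of each production period.
required = [{"mill"}, {"mill", "drill"}]
CHANGE_COST = 3  # cost of adding or removing one machine between periods

def subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

best_cost, best_plan = float("inf"), None
for s1 in subsets(capital):
    for s2 in subsets(capital):
        cfg1, cfg2 = set(s1), set(s2)
        # Feasibility: each configuration must cover its period's needs.
        cov1 = set().union(*(provides[m] for m in cfg1)) if cfg1 else set()
        cov2 = set().union(*(provides[m] for m in cfg2)) if cfg2 else set()
        if not (required[0] <= cov1 and required[1] <= cov2):
            continue
        cost = (sum(capital[m] for m in cfg1 | cfg2)      # capital cost
                + CHANGE_COST * len(cfg1 ^ cfg2))         # reconfiguration cost
        if cost < best_cost:
            best_cost, best_plan = cost, (cfg1, cfg2)
```

Even in this tiny instance the optimum keeps a common core (machine "A" in both periods) to avoid reconfiguration, echoing the abstract's conclusion; a real model would use an MILP solver rather than exhaustive enumeration.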


Research Papers: Design of Mechanisms and Robotic Systems

J. Mech. Des. 2017;140(2):022301-022301-12. doi:10.1115/1.4038071.

Accurate modeling of static load distribution of balls is very useful for proper design and sizing of ball screw mechanisms (BSMs); it is also a starting point in modeling the dynamics, e.g., friction behavior, of BSMs. Often, it is preferable to determine load distribution using low order models, as opposed to computationally unwieldy high order finite element (FE) models. However, existing low order static load distribution models for BSMs are inaccurate because they ignore the lateral (bending) deformations of screw/nut and do not adequately consider geometric errors, both of which significantly influence load distribution. This paper presents a low order static load distribution model for BSMs that incorporates lateral deformation and geometric error effects. The ball and groove surfaces of BSMs, including geometric errors, are described mathematically and used to establish a ball-to-groove contact model based on Hertzian contact theory. Effects of axial, torsional, and lateral deformations are incorporated into the contact model by representing the nut as a rigid body and the screw as beam FEs connected by a newly derived ball stiffness matrix which considers geometric errors. Benchmarked against a high order FE model in case studies, the proposed model is shown to be accurate in predicting static load distribution, while requiring much less computational time. Its ease-of-use and versatility for evaluating effects of sundry geometric errors, e.g., pitch errors and ball diameter variation, on static load distribution are also demonstrated. It is thus suitable for parametric studies and optimal design of BSMs.
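The Hertzian load-sharing idea at the core of such models can be sketched for a single row of balls: each ball carries a force K·δ^(3/2), geometric errors shift each ball's interference, and the nut displacement follows from load balance. All parameters are hypothetical, and the lateral-deformation coupling of the paper's model is omitted:

```python
import numpy as np

K = 4.0e5          # Hertzian contact coefficient, N/mm^1.5 (hypothetical)
F_TOTAL = 1.0e4    # applied axial load, N
# Geometric errors shift each ball's interference (mm); zero-mean here.
errors = np.array([0.0, 0.002, -0.001, 0.001, -0.002])

def ball_forces(delta):
    # Hertzian point contact: F_i = K * max(delta - e_i, 0)^(3/2)
    d = np.maximum(delta - errors, 0.0)
    return K * d**1.5

def solve_displacement(lo=0.0, hi=1.0, tol=1e-12):
    # Bisection on load balance: sum of ball forces equals the applied load.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ball_forces(mid).sum() < F_TOTAL:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

delta = solve_displacement()
loads = ball_forces(delta)   # static load distribution over the balls
```

Even this simplified model shows how a few micrometers of geometric error produce markedly unequal load sharing, which is why the paper's model carries error terms through the ball stiffness matrix.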

J. Mech. Des. 2017;140(2):022302-022302-19. doi:10.1115/1.4038300.

Additive manufacturing allows direct fabrication of any sophisticated mechanism when the clearance of each joint is sufficiently large to compensate for fabrication error, freeing designers from cumbersome assembly work. Clearance design for assembled mechanisms whose parts are fabricated by subtractive manufacturing is well established. However, the corresponding standard for parts fabricated by additive manufacturing is still under exploration due to fabrication error and the diversity of printing materials. To save time and materials in a design process, a designer may fabricate a series of small mechanisms to examine their functionality before the final fabrication of a large mechanism. As a mechanism is scaled, its joint clearances may be reduced, which affects the kinematics of the mechanism. Maintaining certain clearances for the joints during the scaling process, especially for gear mechanisms, is an intricate problem involving the analysis of nonlinear systems. In this paper, we focus on the parametric design problem for the major types of joints, which allows mechanisms to be scaled to an arbitrary level while maintaining their kinematics. Simulation and experimental results are presented to validate our designs.
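The clearance-scaling issue can be illustrated in a few lines: uniform scaling shrinks a joint clearance below the printable minimum, while a parametric design clamps it. The numbers are hypothetical, and the paper's actual parametric rules for each joint type are more involved:

```python
# Hypothetical printer constraint: minimum printable joint clearance (mm).
MIN_CLEARANCE = 0.4

def naive_scale(pin_radius, clearance, s):
    # Uniform scaling shrinks the clearance with the part; below the
    # printer's resolution the joint fuses solid.
    return pin_radius * s, clearance * s

def parametric_scale(pin_radius, clearance, s):
    # Parametric design: scale the geometry but clamp the clearance so the
    # joint stays functional at any scale.
    return pin_radius * s, max(clearance * s, MIN_CLEARANCE)

r0, c0 = 5.0, 0.5                      # original pin radius and clearance, mm
r_small, c_naive = naive_scale(r0, c0, 0.5)
_, c_param = parametric_scale(r0, c0, 0.5)
```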


Research Papers: Design of Direct Contact Systems

J. Mech. Des. 2017;140(2):023301-023301-13. doi:10.1115/1.4038301.

A new finite element model for stress analysis of gear drives is proposed. Tie-surface constraints are applied at each tooth of the gear model to obtain meshes that can be independently defined: a finer mesh at the contact surfaces and fillet and a coarser mesh in the remaining part of the tooth. Tie-surface constraints are also applied for the connection of several teeth in the model. The model is validated by application of Hertz's theory to a spiral bevel gear drive with localized bearing contact and by observation of the convergence of contact and bending stresses. Maximum contact pressure, maximum von Mises stress, maximum Tresca stress, maximum major principal stress, and loaded transmission errors are evaluated along two cycles of meshing. The effects on the above-mentioned variables of the boundary conditions provided by models comprising three, five, seven, and all of the teeth of the gear drive are discussed. Several numerical examples are presented.
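The analytical benchmark mentioned above, Hertz point contact, can be written out directly for the simplest case of a sphere on a flat half-space. The material data below are generic steel-on-steel values, not the paper's gear geometry:

```python
import math

def hertz_sphere_contact(force, radius, e1, nu1, e2, nu2):
    """Hertz point contact of a sphere (radius R) on a flat half-space.

    Returns the contact radius a and the maximum contact pressure p_max.
    """
    # Effective contact modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
    e_star = 1.0 / ((1.0 - nu1**2) / e1 + (1.0 - nu2**2) / e2)
    # Contact radius and peak pressure for sphere-on-plane contact.
    a = (3.0 * force * radius / (4.0 * e_star)) ** (1.0 / 3.0)
    p_max = 3.0 * force / (2.0 * math.pi * a**2)
    return a, p_max

# Steel sphere on a steel plane: E in MPa, lengths in mm, force in N.
a, p_max = hertz_sphere_contact(force=1000.0, radius=10.0,
                                e1=210e3, nu1=0.3, e2=210e3, nu2=0.3)
```

Comparing the FE model's peak contact pressure against such closed-form values is a standard convergence check before trusting the stresses in the full gear drive.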

J. Mech. Des. 2017;140(2):023302-023302-9. doi:10.1115/1.4037762.

The application of a Gleason Coniflex cutter (plane-cutter) on a modern Phoenix bevel gear machine tool to face gear manufacturing has the advantage of using a universal cutter or grinder on an available existing machine, so the method is worth investigating. First, the principle of applying the plane-cutter to face gear manufacturing is presented. Second, the geometry of the cutter is defined, and an abstract model of the face gear generated by this method is established. Third, a method that synthesizes the motion parameters of the plane-cutter from a predesigned contact path is proposed; controllable transmission errors are considered in this process. Fourth, based on the principle of equivalence of position and direction, the computer numerical control (CNC) motion rules of all spindles of the machine are determined, and the surface generated by the machine is presented. Finally, numerical simulation of an example demonstrates that although the surface generated by the plane-cutter deviates to a certain extent from the theoretical surface generated by the traditional method, in meshing with the standard involute surface of the pinion it presents good geometric meshing performance based on tooth contact analysis (TCA), except for a shortened contact ellipse.


Technical Brief

J. Mech. Des. 2017;140(2):024501-024501-6. doi:10.1115/1.4038563.

This paper proposes to apply the convolution integral method to the novel second-order reliability method (SORM) to further improve its computational efficiency. The novel SORM showed better accuracy in estimating the probability of failure than conventional SORMs by utilizing a linear combination of noncentral or general chi-squared random variables. However, the novel SORM requires significant computational time when integrating the linear combination to calculate the probability of failure. In particular, when the dimension of the performance functions is higher than three, the computational time for full integration increases exponentially. To reduce this computational burden for the novel SORM, we propose to obtain the distribution of the linear combination using convolution and to use this distribution for the probability of failure estimation. Since it converts an N-dimensional full integration into a one-dimensional integration, the proposed method is computationally very efficient. A numerical study illustrates that the accuracy of the proposed method is almost the same as that of the full integration method and Monte Carlo simulation (MCS), with much improved efficiency.
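The core idea, replacing an N-dimensional integral with a one-dimensional one via convolution, can be sketched numerically. The central chi-squared components below are a simplified stand-in for the noncentral chi-squared terms of the novel SORM; their sum has a known chi2(4) distribution, which makes the sketch easy to check:

```python
import math
import numpy as np

def chi2_pdf(x, k):
    # Chi-squared probability density with k degrees of freedom (x > 0).
    return x ** (k / 2 - 1) * np.exp(-x / 2) / (2 ** (k / 2) * math.gamma(k / 2))

# Midpoint grid for the numerical convolution.
dx = 0.01
x = np.arange(dx / 2, 40.0, dx)

# Two independent components of the linear combination: chi2(2) + chi2(2).
f1 = chi2_pdf(x, 2)
f2 = chi2_pdf(x, 2)

# One convolution replaces the two-dimensional integral over (x1, x2).
f_sum = np.convolve(f1, f2)[: x.size] * dx

# One-dimensional tail integration then gives the probability of failure
# for a threshold on the combined variable (9.488 is the chi2(4) 95% point).
threshold = 9.488
pf = float(f_sum[x > threshold].sum() * dx)
```

The convolved density integrates to one and reproduces the known 5% tail probability, while the cost stays linear in the grid size regardless of how many components are combined (repeated pairwise convolutions handle N > 2).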

