The optimal solution of a design optimization problem depends on the predictive models used to evaluate the objective and constraints. Because different models give different predictions and can therefore lead to different design decisions, the choice of model used to represent the objectives and constraints becomes important whenever more than one model is available. This paper addresses model selection among physics-based models during the prediction stage, in contrast to model selection during the calibration and validation stages, and therefore affects design under uncertainty. Model selection during calibration seeks the model that is likely to generalize best from the calibration data over the entire domain. Model selection during validation examines the validity of a calibrated model by comparing its predictions against the validation data. This paper presents an approach for model selection during the prediction stage, which selects the “best” model at each individual prediction point. The proposed approach is based on estimating the model prediction error under stationary or nonstationary uncertainty. By selecting the best model at each prediction point, the approach partitions the input domain of the models into nonoverlapping regions. The effects of measurement noise, sparseness of validation data, and model prediction uncertainty are included in deriving a probabilistic criterion for model selection, and the effects of these uncertainties on the classification errors are analyzed. The proposed approach is demonstrated on two problems: selecting between two parametric models for energy dissipation in a mechanical lap joint under dynamic loading, and selecting among fatigue crack growth models.
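The pointwise selection idea can be sketched as follows. This is a minimal, hypothetical illustration rather than the paper's actual criterion: assume each model's prediction error at a point x is Gaussian with a known bias and standard deviation (in the paper, such error statistics would be estimated from sparse, noisy validation data), and select the model with the highest estimated probability of having the smallest absolute prediction error at that point. All model names and error functions below are invented for illustration.

```python
import random


def select_model(x, error_stats, n_samples=5000, seed=0):
    """Pointwise model selection (illustrative sketch only).

    error_stats: one (bias_fn, std_fn) pair per model, giving the assumed
    Gaussian prediction-error distribution of that model at input x.
    Returns the index of the winning model and the Monte Carlo estimate of
    each model's probability of having the smallest absolute error at x.
    """
    rng = random.Random(seed)
    wins = [0] * len(error_stats)
    for _ in range(n_samples):
        # Draw one error realization per model, credit the smallest |error|.
        errs = [abs(rng.gauss(bias(x), std(x))) for bias, std in error_stats]
        wins[min(range(len(errs)), key=lambda i: errs[i])] += 1
    best = max(range(len(wins)), key=lambda i: wins[i])
    return best, [w / n_samples for w in wins]


# Hypothetical example: model 0 is accurate for small x, model 1 for large x,
# so pointwise selection partitions the input domain into two regions.
error_stats = [
    (lambda x: 0.1 * x, lambda x: 0.05),        # bias grows with x
    (lambda x: 0.5 - 0.1 * x, lambda x: 0.05),  # bias shrinks with x
]
```

With these error statistics the crossover lies near x = 2.5: `select_model(1.0, error_stats)` favors model 0, while `select_model(4.0, error_stats)` favors model 1, illustrating how the selection rule carves the domain into nonoverlapping regions, one per model.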