Research Papers: Design Automation

Improving Design Preference Prediction Accuracy Using Feature Learning

Author and Article Information
Alex Burnap

Design Science,
University of Michigan,
Ann Arbor, MI 48109
e-mail: aburnap@umich.edu

Yanxin Pan

Design Science,
University of Michigan,
Ann Arbor, MI 48109
e-mail: yanxinp@umich.edu

Ye Liu

Computer Science and Engineering,
University of Michigan,
Ann Arbor, MI 48109
e-mail: yeliu@umich.edu

Yi Ren

Mechanical Engineering,
Arizona State University,
Tempe, AZ 85287
e-mail: yiren@asu.edu

Honglak Lee

Computer Science and Engineering,
University of Michigan,
Ann Arbor, MI 48109
e-mail: honglak@eecs.umich.edu

Richard Gonzalez

Psychology,
University of Michigan,
Ann Arbor, MI 48109
e-mail: gonzo@umich.edu

Panos Y. Papalambros

Mechanical Engineering,
University of Michigan,
Ann Arbor, MI 48109
e-mail: pyp@umich.edu

A. Burnap and Y. Pan contributed equally to this work.

Contributed by the Design Automation Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received October 15, 2015; final manuscript received April 15, 2016; published online May 18, 2016. Assoc. Editor: Carolyn Seepersad.

J. Mech. Des. 138(7), 071404 (May 18, 2016) (12 pages) Paper No: MD-15-1708; doi: 10.1115/1.4033427. History: Received October 15, 2015; Revised April 15, 2016

Quantitative preference models are used to predict customer choices among design alternatives by collecting prior purchase data or survey answers. This paper examines how to improve the prediction accuracy of such models without collecting more data or changing the model. We propose using features as an intermediary between the original customer-linked design variables and the preference model, transforming the original variables into a feature representation that more effectively captures the underlying design preference task. We apply this idea to automobile purchase decisions using three feature learning methods (principal component analysis (PCA), low rank and sparse matrix decomposition (LSD), and exponential family sparse restricted Boltzmann machine (RBM)), and show that the use of features improves prediction accuracy on a data set of over one million real passenger vehicle purchases. We then show how interpretation and visualization of these feature representations can help augment data-driven design decisions.
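
To make the pipeline concrete, the following is a minimal Python sketch of the feature-learning idea: learn a feature representation of the customer-linked design variables and fit the preference model on the features rather than on the variables directly. It uses synthetic data and a logistic regression choice model as stand-ins for the purchase records and preference model in the paper, with PCA as one of the three feature learning methods; it illustrates only the modeling pipeline, not the reported accuracy gains.

    # Sketch: preference prediction on original variables vs. learned features.
    # Data, model choices, and dimensions here are illustrative assumptions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 40))          # customer-linked design variables (hypothetical)
    y = (X @ rng.normal(size=40) + 0.5 * rng.normal(size=5000) > 0).astype(int)  # chosen / not chosen

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Baseline: preference model fit directly on the original variables.
    baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Feature learning: map the variables to a feature representation, then fit
    # the same preference model on the features.
    pca = PCA(n_components=15).fit(X_tr)
    featured = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)

    print("accuracy, original variables:", baseline.score(X_te, y_te))
    print("accuracy, learned features:  ", featured.score(pca.transform(X_te), y_te))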

Copyright © 2016 by ASME

Figures

Fig. 1

The concept of feature learning as an intermediate mapping between variables and a preference model. The diagram on top depicts conventional design preference modeling (e.g., conjoint analysis) where an inferred preference model discriminates between alternative design choices for a given customer. The diagram on bottom depicts the use of features as an intermediate modeling task.

Fig. 2

The concept of principal component analysis, shown using an example in which a data point represented by three original variables x is projected onto a two-dimensional subspace spanned by w to obtain features h.
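
A small numerical illustration of this projection, with made-up numbers: a point x described by three original variables is projected onto a two-dimensional subspace spanned by the columns of a basis matrix w, giving the features h = wᵀx.

    # Sketch of the Fig. 2 projection; the point and basis are hypothetical.
    import numpy as np

    x = np.array([2.0, -1.0, 0.5])                                      # data point in original variable space
    w, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 2)))   # orthonormal basis of a 2D subspace

    h = w.T @ x        # features: coordinates of x in the subspace
    x_hat = w @ h      # reconstruction of x from its features

    print("features h:", h)
    print("reconstruction error:", np.linalg.norm(x - x_hat))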

Fig. 3

The concept of LSD using an example “part-worth coefficients” matrix of size 10 × 10 decomposed into two 10 × 10 matrices with low rank or sparse structure. Lighter colors represent larger values of elements in each decomposed matrix.
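
The decomposition in Fig. 3 can be sketched with a simplified alternating scheme that is not the exact formulation used in the paper: singular-value thresholding recovers a low-rank component, and elementwise soft thresholding recovers a sparse component. The matrix, thresholds, and iteration count below are illustrative assumptions.

    # Sketch: low rank plus sparse decomposition of a 10 x 10 matrix.
    import numpy as np

    def svt(A, tau):
        """Singular-value thresholding: proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft(A, lam):
        """Elementwise soft thresholding: proximal operator of the L1 norm."""
        return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

    rng = np.random.default_rng(2)
    M = rng.normal(size=(10, 2)) @ rng.normal(size=(2, 10))   # low-rank "part-worth" matrix (hypothetical)
    M[rng.random(M.shape) < 0.1] += 5.0                       # a few sparse corruptions

    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(100):                                      # alternating minimization
        L = svt(M - S, tau=1.0)                               # update low-rank part
        S = soft(M - L, lam=0.5)                              # update sparse part

    print("rank of L:", np.linalg.matrix_rank(L, tol=1e-6))
    print("nonzeros in S:", int(np.count_nonzero(np.abs(S) > 1e-6)))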

Fig. 4

The concept of the exponential family sparse RBM. The original data are represented by the nodes [x1,x2] in the visible layer, while the feature representation of the same data is given by the nodes [h1,h2,h3,h4] in the hidden layer. Undirected edges are restricted to being only between the visible layer and the hidden layer, thus enforcing conditional independence among nodes within the same layer.
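
As a simplified stand-in for the exponential family sparse RBM, the sketch below trains a small binary RBM with one-step contrastive divergence on hypothetical data; it illustrates the bipartite structure and the conditional independence within each layer, not the paper's exact model or training objective.

    # Sketch: binary RBM with the 2-visible, 4-hidden layout of Fig. 4, trained with CD-1.
    import numpy as np

    rng = np.random.default_rng(3)
    n_visible, n_hidden = 2, 4
    W = 0.1 * rng.normal(size=(n_visible, n_hidden))   # weights on visible-hidden edges
    b_v = np.zeros(n_visible)                          # visible biases
    b_h = np.zeros(n_hidden)                           # hidden biases

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = (rng.random((500, n_visible)) < 0.5).astype(float)   # hypothetical binary data

    lr = 0.05
    for _ in range(200):
        # Positive phase: hidden activations given the data (conditionally independent).
        ph = sigmoid(X @ W + b_h)
        h = (rng.random(ph.shape) < ph).astype(float)
        # Negative phase: reconstruct the visible layer, then the hidden layer again.
        pv = sigmoid(h @ W.T + b_v)
        ph2 = sigmoid(pv @ W + b_h)
        # CD-1 gradient step on weights and biases.
        W += lr * (X.T @ ph - pv.T @ ph2) / len(X)
        b_v += lr * (X - pv).mean(axis=0)
        b_h += lr * (ph - ph2).mean(axis=0)

    features = sigmoid(X @ W + b_h)   # feature representation of the data
    print("feature matrix shape:", features.shape)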

Fig. 5

Data processing, training, validation, and testing flow
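
A minimal sketch of such a flow on synthetic data: split into training, validation, and test sets, select the feature dimensionality on the validation set, and report prediction accuracy on the held-out test set. The splits, candidate dimensionalities, and models below are illustrative assumptions, not the paper's settings.

    # Sketch: train / validate / test flow for feature-based preference prediction.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(3000, 30))
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=3000) > 0).astype(int)

    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
    X_va, X_te, y_va, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    best_k, best_acc = None, -1.0
    for k in (5, 10, 20):                                   # candidate feature dimensionalities
        pca = PCA(n_components=k).fit(X_tr)
        clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)
        acc = clf.score(pca.transform(X_va), y_va)          # validate
        if acc > best_acc:
            best_k, best_acc, best_pca, best_clf = k, acc, pca, clf

    print("selected feature dimension:", best_k)
    print("test accuracy:", best_clf.score(best_pca.transform(X_te), y_te))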

Fig. 6

Optimal vehicle distribution visualization. Every point represents the optimal vehicle for one customer. In the left column, the optimal vehicle is inferred using the utility model with the original variables; in the right column, LSD features are used. In the first row, the optimal vehicles for SCI-XA customers are highlighted as large red points; the subsequent rows similarly highlight the optimal vehicles for MAZDA6, ACURA-TL, and INFINM35 customers, respectively.
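
A schematic version of this kind of plot can be produced as below, using made-up two-dimensional coordinates in place of the inferred optimal vehicles; one hypothetical customer group is highlighted as large red points, as in each row of the figure.

    # Sketch: highlight one customer group within a cloud of inferred optimal vehicles.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(5)
    optimal = rng.normal(size=(2000, 2))      # hypothetical 2D embedding of each customer's optimal vehicle
    group = rng.random(2000) < 0.05           # hypothetical customer group (e.g., buyers of one model)

    plt.scatter(optimal[~group, 0], optimal[~group, 1], s=4, color="lightgray", label="all customers")
    plt.scatter(optimal[group, 0], optimal[group, 1], s=40, color="red", label="one model's customers")
    plt.legend()
    plt.title("Optimal vehicle per customer (schematic)")
    plt.show()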
