Research Papers: D3 Methods

A Convolutional Neural Network Model for Predicting a Product's Function, Given Its Form

Author and Article Information
Matthew L. Dering

Computer Science and Engineering,
Penn State University,
University Park, PA 16802
e-mail: mld284@psu.edu

Conrad S. Tucker

Engineering Design and Industrial Engineering,
Penn State University,
University Park, PA 16802
e-mail: ctucker4@psu.edu

Corresponding author.

Contributed by the Design Automation Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received February 24, 2017; final manuscript received June 28, 2017; published online October 2, 2017. Assoc. Editor: Charlie C. L. Wang.

J. Mech. Des 139(11), 111408 (Oct 02, 2017) (14 pages) Paper No: MD-17-1178; doi: 10.1115/1.4037309 History: Received February 24, 2017; Revised June 28, 2017

Quantifying the ability of a digital design concept to perform a function currently requires the use of costly and intensive solutions such as computational fluid dynamics. To mitigate these challenges, the authors of this work propose a deep learning approach based on three-dimensional (3D) convolutions that predicts functional quantities of digital design concepts. This work defines the term functional quantity to mean a quantitative measure of an artifact's ability to perform a function. Several research questions are derived from this work: (i) Are learned 3D convolutions able to accurately calculate these quantities, as measured by rank, magnitude, and accuracy? (ii) What do the latent features (that is, internal values in the model) discovered by this network mean? (iii) Does this work perform better than other deep learning approaches at calculating functional quantities? In the case study, a proposed network design is tested for its ability to predict several functions (sitting, storing liquid, emitting sound, displaying images, and providing conveyance) based on test form classes distinct from the training classes. This study evaluates several approaches to this problem based on a common architecture, with the best approach achieving F scores of >0.9 in three of the five functions identified. Testing trained models on novel input also yields accuracy as high as 98% for estimating the rank of these functional quantities. This method is also employed to differentiate between decorative and functional headwear, which yields 84.4% accuracy and 0.786 precision.
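The abstract refers to voxelized 3D input to the network. As a hedged illustration of that input representation (not the authors' pipeline; the `voxelize` helper, the point-cloud input, and the grid resolution are assumptions made here for illustration), a 3D form sampled as points could be rasterized into a binary occupancy grid like so:

```python
import numpy as np

def voxelize(points, resolution=32):
    """Map a set of 3D points into a binary occupancy grid of
    shape (resolution, resolution, resolution).

    Illustrative sketch only: real voxelization pipelines typically
    rasterize mesh surfaces or interiors, not bare point samples.
    """
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # Scale each axis so the bounding box fills the grid;
    # guard against zero extent on a degenerate axis.
    scale = (resolution - 1) / np.maximum(maxs - mins, 1e-9)
    idx = ((pts - mins) * scale).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Three sample points become three occupied voxels in an 8^3 grid.
grid = voxelize([[0, 0, 0], [1, 1, 1], [0.5, 0.5, 0.5]], resolution=8)
```

The resulting dense grid is the kind of fixed-size tensor a 3D convolutional network can consume directly.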

Copyright © 2017 by ASME



Fig. 1

Cross sections of selected layers of a 3D CNN. These show the activations of certain kernels of the layers in the trained network on the voxelized input shown at left.

Fig. 2

An artificial neuron. The inputs are represented by xi, which are multiplied by the weights wi, summed with a bias term b, and activated by a function f to produce an output y. Each layer type principally defines how the inputs from the previous layer are mapped, along with which activation function is employed. The rest of the terms are learned.
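The computation in Fig. 2 is the standard artificial neuron, y = f(Σᵢ wᵢxᵢ + b). A minimal sketch of that formula (the `neuron` helper and the tanh activation are illustrative choices, not taken from the paper):

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """Artificial neuron: y = f(sum_i w_i * x_i + b).

    x -- input vector, w -- learned weights, b -- learned bias,
    f -- activation function (tanh chosen here for illustration).
    """
    return f(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
b = 0.05
y = neuron(x, w, b)
```

With an identity activation (f(z) = z), the same call reduces to the weighted sum plus bias, which makes the role of each term in the figure explicit.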

Fig. 3

Selected 3 × 3 × 3 learned kernels

Fig. 4

The convolution and pooling operation
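Fig. 4 illustrates the convolution and pooling operation. A naive, loop-based sketch of a single-channel 3D convolution (cross-correlation, as in CNNs) followed by stride-2 max pooling; the helper names and implementation details are assumptions for illustration, not the authors' code:

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid (no-padding) single-channel 3D cross-correlation."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Element-wise product of the kernel with the
                # current sub-volume, summed to one output value.
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

def max_pool3d(volume, s=2):
    """Non-overlapping s x s x s max pooling (edges truncated)."""
    d, h, w = (dim // s * s for dim in volume.shape)
    v = volume[:d, :h, :w].reshape(d // s, s, h // s, s, w // s, s)
    return v.max(axis=(1, 3, 5))

# A 3x3x3 kernel over a 4^3 volume yields a 2^3 feature map,
# which stride-2 pooling reduces to a single value.
feat = conv3d(np.ones((4, 4, 4)), np.ones((3, 3, 3)))
pooled = max_pool3d(feat)
```

Production networks use optimized library kernels (e.g., `torch.nn.Conv3d`) rather than explicit loops, but the arithmetic is the same.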

Fig. 5

The scaling scheme of the functional quantity targets

Fig. 6

Confusion matrix for the binary (qualitative) variation of this network

Fig. 7

Confusion matrix for the softmax regression variation of this network

Fig. 8

Confusion matrix for the absolute regression variation of this network

Fig. 9

Confusion matrix for the aggregated left-out classes

Fig. 10

Performance on this task by the VoxNet network [34]

Fig. 11

Activations of the left-out networks on well-performing novel inputs

Fig. 12

Headwear from the dataset [33]: the left helmet is functional, while the center one is purely decorative. Right: the confusion matrix of this experiment.



