Research Papers: Design Theory and Methodology

Applied Tests of Design Skills—Part III: Abstract Reasoning

[+] Author and Article Information
Maryam Khorshidi, Jami J. Shah

Mechanical & Aerospace Engineering,
Arizona State University,
Tempe, AZ 85287

Jay Woodward

Department of Educational Psychology,
Texas A&M University,
College Station, TX 77843

Most applications of deductive or inductive reasoning in logic or mathematics require numeric data; in early design, however, numeric data are usually available for only a few variables. Because reasoning in early design therefore takes a largely qualitative form, this paper uses the terms qualitative deductive reasoning and qualitative inductive reasoning in place of the conventional deductive and inductive reasoning.

The first domain is generally known as the source and the second one as the target.

Test problems were designed so that no irrelevant domain knowledge is required to solve them. The alpha tests and the protocol study were then used to re-examine the problems for any unanticipated correlations with irrelevant domain knowledge.

This criterion serves as a measure of the quality of an analogy. An example clarifies it: ravens are very advanced creatures in incorporating tools, so an analogy between ravens' skill in using tools to do tasks (e.g., using stones to break nuts) and early humans' is more valid than an analogy between humans' tool use and apes', even though apes and human beings are very similar in appearance, body structure, etc. (underlying structure vs. superficial form and appearance).

Semantic distance measures how close or far the source and target of an analogy are. Various methods have been proposed for quantifying it; one example can be found in Ref. [36].

Semantic distance can be considered a measure of the novelty of an analogy: the farther apart the two domains are, the harder it is to come up with the analogy. For instance, in the analogy proposed above, apes and humans, but not ravens, belong to the same class of animals (mammals); although it is easier to draw an analogy between apes and humans than between ravens and humans, the former does not transfer the point about advanced tool use and is therefore not valid.
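The footnotes do not prescribe a particular computation for semantic distance (Ref. [36] gives one method). As a hedged illustration only, the following sketch represents each concept as a hand-made binary feature vector and uses cosine distance; the feature names and values are invented for this example, not taken from the paper.

```python
from math import sqrt

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical binary features: [mammal, bipedal, uses_tools, flies]
human = [1, 1, 1, 0]
ape   = [1, 0, 1, 0]
raven = [0, 1, 1, 1]

# A semantically distant source (raven) yields a larger distance from
# the target (human) than a near source (ape), matching the intuition
# that the raven-human analogy is more novel.
d_ape = cosine_distance(human, ape)
d_raven = cosine_distance(human, raven)
assert d_raven > d_ape
```

Any real quantification would rest on a richer semantic model than these toy vectors; the sketch only shows how a distance measure can rank the two analogies.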

As defined by the National Association of the Directors of Educational Research (T. B. Rogers, 1995, p. 25), test validity is how well a test measures what it purports to measure [60].

Reliability of a test, on the other hand, indicates the trustworthiness of its results; it is based on the consistency of the test and the precision of the measurement process performed by its problems.

Part I of this paper was published in J. Mech. Des. 134(2), 021005 (Feb. 3, 2012).

Part II of this paper was published in J. Mech. Des. 135(7), 071004 (May 24, 2013).

Contributed by the Design Theory and Methodology Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received October 23, 2013; final manuscript received June 24, 2014; published online July 31, 2014. Assoc. Editor: Janis Terpenny.

J. Mech. Des. 136(10), 101101 (Jul. 31, 2014) (11 pages) Paper No: MD-13-1479; doi: 10.1115/1.4027986 History: Received October 23, 2013; Revised June 24, 2014

Past studies have identified the following cognitive skills relevant to conceptual design: divergent thinking, spatial reasoning, visual thinking, abstract reasoning, and problem formulation (PF). Standardized tests are being developed to assess these skills. The tests on divergent thinking and visual thinking are fully developed and validated; this paper focuses on the development of a test of abstract reasoning in the context of engineering design. Like the two previous papers, this paper reports on the theoretical and empirical basis for skill identification and test development. Cognitive studies of human problem solving and design thinking revealed four indicators of abstract reasoning: qualitative deductive reasoning (DR), qualitative inductive reasoning (IR), analogical reasoning (AnR), and abductive reasoning (AbR). Each of these is characterized in terms of measurable indicators. The paper presents test construction procedures, trial runs, data collection, norming studies, and test refinement. Initial versions of the test were given to approximately 250 subjects to determine the clarity of the test problems and time allocation, and to gauge the difficulty level. A protocol study was also conducted to assess test content validity. The beta version was given to approximately 100 students, and the data collected were used for norming studies and test validation. Analysis of test results suggested high internal consistency; factor analysis revealed four eigenvalues above 1.0, indicating that the test assesses four different subskills (as initially proposed by the four indicators). The composite Cronbach's alpha for all of the factors together was found to be 0.579. Future research will be conducted on criterion validity.
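The internal-consistency figure quoted in the abstract is a Cronbach's alpha. As a minimal sketch of how that statistic is computed, the following uses the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ_total²) on made-up item scores; the scores are invented for illustration and are not the paper's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a test of k items.

    items: list of k lists, each holding one item's scores
           across the same n subjects (sample variances used).
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per subject = sum of that subject's item scores.
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1.0 - item_var_sum / variance(totals))

# Made-up scores: 4 items x 5 subjects (illustration only).
scores = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 4, 5],
    [3, 5, 2, 4, 4],
    [2, 3, 3, 5, 4],
]
alpha = cronbach_alpha(scores)
assert 0.0 < alpha <= 1.0
```

Higher alpha indicates that items covary strongly and measure a common construct; a moderate composite value across heterogeneous factors, as reported above, is consistent with the test assessing several distinct subskills rather than one.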

Copyright © 2014 by ASME