Trickett, S. B., & Trafton, J. G. (2007). “What if…”: The Use of Conceptual Simulations in Scientific Reasoning. Cognitive Science, 31(5), 843–875. doi:10.1080/03640210701530771

Summary

In this paper, Trickett & Trafton experimentally explore the use of conceptual simulations by expert scientists reasoning about problems in their domain of expertise. They have two main hypotheses: that conceptual simulations are a core strategy in scientific reasoning, and that they are used in particular to reason about situations with high levels of uncertainty (e.g. partial knowledge or violation of expectation). They define conceptual simulation as:

…a three-step process that consists of first, visualizing some situation; second, carrying out one or more operations on it; and third, seeing what happens. The third part of the process—seeing what happens—is crucial. It distinguishes “what if” thinking from purely imagining because during this third phase causal reasoning occurs to the results of the manipulation(s) of the second phase.

In their experiments, Trickett & Trafton found that scientists do spontaneously use conceptual simulation and that they use it in cases where their expectations are violated (i.e. they have more uncertainty):

The research shows how conceptual simulation helps resolve uncertainty: conceptual simulation facilitates reasoning about hypotheses by generating an altered representation under the purported conditions expressed in the hypothesis and providing a source of comparison with the actual data, in the process of alignment by similarity detection. (pg. 866)

Additionally, the results of these experiments, combined with other results from the literature, suggest that conceptual simulations are used in situations where the answer truly is unknown. In other cases, people can rely on background knowledge, existing models, etc.:

Frequently, studies of experts employ problems that are well-understood for an expert and that can be solved by recalling either this very problem (i.e., by model-based search) or another that shares the same deep structure (i.e., by analogy; cf. Chi et al., 1981). In contrast, our studies show experts reasoning about problems for which neither they nor anyone else knows the answer. (pg. 867)

That is, conceptual simulation is a type of model construction (pg. 866).

Methods

In Experiment 1, Trickett & Trafton performed an in vivo study of scientists across several different domains of science. The scientists were filmed while they analyzed their own data, either individually (with a verbal protocol) or collaboratively. The scientists’ utterances were coded for instances of conceptual simulation, for hypotheses, and for other scientific reasoning strategies (data focus, empirical test, consult a colleague, tie-in with theory and domain knowledge, analogy, or alignment). They found that data focus was the most commonly used strategy. The next most frequently used strategies were tie-in with theory, alignment, and conceptual simulation, which were used at approximately the same frequency.
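
To make this kind of protocol analysis concrete, here is a minimal sketch (not the authors’ actual pipeline) of tallying coded utterances into strategy frequencies; the utterance data is invented, though the strategy labels follow the paper’s coding categories.

```python
from collections import Counter

# Hypothetical coded protocol: one label per utterance, using the
# strategy categories from Trickett & Trafton's coding scheme.
# The sequence itself is made up for illustration.
coded_utterances = [
    "data focus", "conceptual simulation", "alignment", "data focus",
    "tie-in with theory", "data focus", "hypothesis",
    "conceptual simulation", "alignment",
]

# Frequency of each coded strategy across the protocol.
strategy_counts = Counter(coded_utterances)
for strategy, count in strategy_counts.most_common():
    print(f"{strategy}: {count}")
```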

In analyzing the relationship between strategies, they found that conceptual simulations were almost always followed by a process of alignment, which was then usually either the end of the chain of reasoning, or which was followed by a return to data focus. Trickett & Trafton hypothesized that this sequence of conceptual simulation followed by alignment was used “to link the internal (result of the conceptual simulation) and external (phenomena in the data) representations” (pg. 858).
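
The same kind of coded sequence can be used to ask what immediately follows each conceptual simulation. Below is a minimal sketch of such a first-order transition count, again with invented data; it is an illustration of the idea, not the authors’ actual analysis.

```python
from collections import Counter

# Hypothetical sequence of coded strategies for one analysis session.
sequence = [
    "data focus", "conceptual simulation", "alignment", "data focus",
    "conceptual simulation", "alignment", "tie-in with theory",
]

# Count which strategy immediately follows each conceptual simulation.
followers = Counter(
    nxt for cur, nxt in zip(sequence, sequence[1:])
    if cur == "conceptual simulation"
)
print(followers)  # e.g. Counter({'alignment': 2})
```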

They additionally coded hypotheses according to whether they were generated from evidence that violated the scientists’ expectations or from evidence consistent with those expectations. They found that conceptual simulation followed violation-of-expectation hypotheses more frequently than hypotheses without a violation of expectation, suggesting that the scientists used conceptual simulation in situations where they were more uncertain.

To causally test the previous hypothesis (that conceptual simulations are used in situations with higher uncertainty), Trickett & Trafton ran a second experiment. In Experiment 2, they recruited expert cognitive psychologists and presented them with scenarios and results involving phenomena with which they were familiar. The results were either consistent with the given scenario (Expectation Confirmation, EC) or inconsistent with it (Expectation Violation, EV). The scientists were instructed to explain the data, and again were recorded while doing so. Consistent with the results of Experiment 1, conceptual simulations were used more frequently in the EV condition than in the EC condition, at a rate of approximately 2:1.
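
As a toy illustration of the kind of between-condition comparison reported here (not the authors’ analysis), one might compute a per-condition rate of conceptual simulation and take the ratio; the counts below are invented, chosen only so that the ratio comes out near the reported 2:1.

```python
# Hypothetical counts of conceptual simulations per condition
# (numbers are invented for illustration, not taken from the paper).
simulations = {"EV": 18, "EC": 9}      # Expectation Violation vs. Confirmation
participants = {"EV": 8, "EC": 8}

# Rate of conceptual simulation per participant in each condition.
rate = {cond: simulations[cond] / participants[cond] for cond in simulations}
print(rate)                      # e.g. {'EV': 2.25, 'EC': 1.125}
print(rate["EV"] / rate["EC"])   # roughly the 2:1 ratio reported in the paper
```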

Algorithm

n/a

Takeaways

These types of “conceptual simulations”, like the thought experiments described by Gendler, are fascinating in that they seem to be a qualitatively different sort of simulation than, for example, motor simulation or even certain types of mental imagery (such as that used in language understanding). A relevant question is: do such conceptual simulations draw on the same simulation processes that serve lower levels of reasoning? I would expect the answer to be “sometimes”, but I don’t have a good intuition for why certain low-level simulations would be available for high-level conceptual reasoning (e.g. imagery) while others wouldn’t (e.g. accurate simulation of physics via the motor system).