Gordon, R. M. (1992). The simulation theory: Objections and misconceptions. Mind & Language, 7(1–2), 11–34. doi:10.1111/j.1468-0017.1992.tb00195.x

Summary

In this paper, Gordon lays out his own version of simulation theory (ST) and contrasts it with theory theory (TT) as well as with other construals of ST (“putting oneself in the other’s shoes”, the Model Model).

First, Gordon explains why ST is not “putting oneself in the other’s place”. People do not deliberately put themselves in others’ places most of the time; typically they do so only when told to. And being told to put yourself in the other’s place amounts to being told that “you shouldn’t just project your own situation and psychology on the other”. The instruction presupposes that, by default, you are already projecting your own situation and psychology onto the other, and this default projection is the core of what ST is about.

Gordon goes on to explain what he means by projection, and specifically total projection: projecting your own situation and psychology onto someone else without making any adjustment (spatial or otherwise). Total projection is the default mode of simulation. In projecting, we search for the conditions that would have caused us to behave in the same way the other person did. We can also adjust these projections to better explain other people’s actions, for example by imagining ourselves in their spatial location:

In general terms, what you are doing is shifting the locations and vectors of environmental features on your egocentric map—that is, the mental map in which things and events are represented in relation to yourself, here, and now—so as properly to engage your location-specific or vector-specific tendencies to action or emotion.

Alternatively, you can “prep” yourself with the attitudes or beliefs needed to make sense of the other’s behavior. For example, to understand why someone likes the music of an artist you do not particularly care for, you find an artist you enjoy to a similar degree and use them as a stand-in when simulating the other person’s intentions (e.g. the intention to go see that artist in concert). Importantly, because the projection can be adjusted in these ways, we can simulate counterfactuals, and that capacity is what lets us generate appropriate explanations.
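To make this concrete, here is a minimal sketch of projection as a generate-and-test loop, under my own assumptions rather than anything Gordon formalizes: start from total projection of your own state, then try candidate adjustments (a spatial relocation, a “prepped” preference) until the simulated behavior matches the observed one. The names `MentalState`, `simulate_behavior`, and `explain` are hypothetical.

```python
from dataclasses import dataclass, replace
from itertools import product

@dataclass(frozen=True)
class MentalState:
    """A toy stand-in for one's own situation and psychology."""
    location: str       # where "I" am on my egocentric map
    liked_artist: str   # a preference that can be swapped during adjustment

def simulate_behavior(state: MentalState) -> str:
    """Hypothetical forward model: what would *I* do in this state?
    On Gordon's picture this is one's own decision-making run off-line,
    not a consulted theory."""
    if state.liked_artist == "artist_x":
        return "buys_concert_ticket"
    return "stays_home"

def explain(observed_behavior: str, my_state: MentalState) -> MentalState | None:
    """Generate-and-test over adjustments to the projection. Total
    projection (no adjustment at all) is tried first, matching Gordon's
    claim that it is the default mode of simulation."""
    candidate_locations = [my_state.location, "their_location"]
    candidate_artists = [my_state.liked_artist, "artist_x"]  # a "prepped" attitude
    for loc, artist in product(candidate_locations, candidate_artists):
        candidate = replace(my_state, location=loc, liked_artist=artist)
        if simulate_behavior(candidate) == observed_behavior:
            return candidate  # an adjusted projection that "clicks"
    return None  # no adjustment found; the explanation attempt fails

# Example: I prefer artist_y, but I observe someone buy a ticket for artist_x.
me = MentalState(location="home", liked_artist="artist_y")
print(explain("buys_concert_ticket", me))
# -> MentalState(location='home', liked_artist='artist_x')
```

Note that the candidate adjustments are hand-picked here; as the Takeaways below point out, Gordon’s account leaves open how such candidates would be generated and how many dimensions the search should range over.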

Gordon contrasts ST with the idea that we might simply use generalizations or laws to explain other people’s behavior. He argues that this cannot be the case: generalizations and laws on their own are too brittle, because we need to know when they are relevant in order to apply them appropriately. Knowing when to apply them requires drawing on our own knowledge of how the world works, which ends up being just another way of using projection.

He also contrasts ST with the Model Model, the idea that simulation amounts to running simulations on a model of the other person (much as one runs simulations on a model of an airplane). Gordon argues that the Model Model would still need something like a theory to interpret the model’s outputs, since the model is essentially a black box. Under the projection hypothesis this is unnecessary: you already understand the workings of the model, because you are the model.

Takeaways

From a computational standpoint, it is really difficult to see how Gordon’s version of ST would work. The core of his argument seems to be that because we are projecting ourselves, we get the ability to generate explanations of behavior for free. I don’t think this follows; in particular, it still gives us no insight into how those explanations are generated (it feels a bit like saying, “we generate explanations of other people’s behavior by using ourselves to generate explanations of other people’s behavior”). I also think he underestimates the difficulty of knowing which aspects of the projection to change. He implies it is as simple as trying out various options until one clicks, but for a given scenario there may be many things you could try to change. How do you know which dimension requires modification? How do you know the right modification to make? What does it mean for an explanation to “click”? What counts as a “good enough” explanation? All of these questions seem to require some form of metacognition or higher-level, structured knowledge: in other words, a theory.