

From the Subject's Point of View:

The Experiment 

as Conversation and Collaboration

Between Investigator and Subject

(Honoring Martin T. Orne, 1927-2000)


John F. Kihlstrom

Yale University


Note: An edited version of this paper was subsequently published as Kihlstrom, J.F.  (2002).  Demand characteristics in the laboratory and the clinic: Conversations and collaborations with subjects and patients.  Prevention & Treatment [Special issue honoring Martin T. Orne], 5, Article 36c.  Link to published version: http://socrates.berkeley.edu/~kihlstrm/PDFfiles/OrneCommentary.pdf.


After 15 years of teaching the introductory psychology course, I have gradually come to the conclusion that an awful lot of psychological science could be presented as sustained, empirical and theoretical meditations on a relatively few pithy sentences. Some of these are real gems, and deserve to be inscribed on wallet-sized cards, bumper stickers, and inspirational posters of the sort you see advertised in airline magazines. For example:

Cogito ergo sum,

I think, therefore I am, from René Descartes's Meditations (1641). All of psychology begins here: it turns out that the only thing we cannot deny, out of all that might be true in the whole universe, including quarks and punctuated equilibrium and postmodern literary theory, is that we exist; and the reason we cannot deny it is that we are aware that we think. Setting aside whether this argument works as a bulwark against radical skepticism, or the impossibility of knowing anything for sure, in this passage Descartes establishes epistemology as the primary concern of philosophy, and conscious experience, the self, and the relations between mind and body as the central topics for the scientific psychology which would follow in due course, about 200 years later.

William James had the same sort of thing in mind when he stated, in The Principles of Psychology (1890), that:

The universal conscious fact is not "feelings and thoughts exist"

but "I think" and "I feel".

When you unpack a sentence like this, at least if you do it the way James did, you're led immediately to ask certain fundamental questions about the nature of consciousness, whether it makes sense to talk about unconscious mental life, if so what the relations between conscious and unconscious mental life are, what the nature of the self is, and how it plays a role in consciousness.

But is thinking all there is to psychology? In fact, for both Descartes and James, thinking is an umbrella term for consciousness. As Descartes put it:

What is a thing that thinks? It is a thing which doubts, understands, conceives, affirms, denies, wills, refuses, which also imagines and feels.

So what does mental life consist of? About 150 years after Descartes, another philosopher, Immanuel Kant, summarized what quickly became the prevailing view in both philosophy and psychology:

There are three absolutely irreducible faculties of mind, namely, 

knowledge, feeling, and desire.

This comes from the Critique of Pure Reason (1781), and there's an awful lot to get out of this one. Does it make sense to talk about faculties at all? Implied in this question is the continuing tension in psychology between general systems for learning or information-processing, and mental modules; for the neuroscientifically inclined, this is the enduring conflict between specialization and holism. Are there different mental faculties, or systems, and if so what are their relations to neural systems? And if there are separate mental faculties, what are they, how do they relate to each other, and to what extent are they cognitively penetrable? If we take Kant's answer as a reasonable approximation, to what extent are our emotional and motivational states under cognitive control; to what extent do our feelings and desires color our thoughts?

Now let's move up another 150 years, from Kant to Sir Frederic Bartlett, and this statement from Remembering (1932):

The psychologist, of all people, 

must not stand in awe of the stimulus.

The entire history of 20th century psychology is wrapped up in this one, because it=s not so much about poor old Ebbinghaus and his nonsense syllables and his law of repetition, as it is about associationism and its evil twin, behaviorism, which -- at least in Bartlett's view -- tended to "overstress the determining character of the stimulus or of the situation" (1932 p. 6). In a very real sense, this sentence is the first shot in the cognitive revolution in psychology, because Bartlett is arguing that people don't behave in response to objective stimulus conditions; rather, their interactions with the environment, and with each other, reflect effort after meaning -- an effort which results in a mental representation of the situation and a plan for acting in accordance with this representation. So now we have to know all about mental representations of the world outside the mind, the relation between representations constructed through perception and those reconstructed through memory, the relations between so-called "lower" mental processes like perception, attention, learning, and memory, and so-called "higher" mental processes of thinking, reasoning, problem-solving, and language.

A couple of decades later, as the cognitive revolution began to build up steam, Jerome Bruner (1957) picked up on Bartlett's central theme when he wrote that:

Every act of perception is an act of categorization.

Here Bruner expresses his emphasis on going beyond the information given by linking the current stimulus situation with what is already known from prior experiences. For Bruner, perception is not complete until we know not just the form, location, and motion of an object, but also what sort of thing it is. Categorization allows us to make inferences about unseen properties of objects, and their past and future behavior, so that we know how to deal with them. So now we have to know all about the organization of concepts, proper sets and fuzzy sets, prototypes and exemplars, how concepts are acquired, and the relation of cognitive categories to the natural divisions in the world outside the mind.

Then too, Bruner (1957) also noted that:

The purpose of perception is action.

We want to know, we need to know, so that we know what to do. Cognitive psychology often leaves this part out: I checked three best-selling cognitive psychology texts, and found that neither action nor behavior appeared in the index of any of them. Mind in action seems more to be the province of social psychology, which has always been concerned with the relations between things like people's beliefs and attitudes on the one hand, and their interpersonal behavior on the other. How do beliefs translate into behavior, behavior which in turn creates reality? How does behavior flow from attitudes, and how do attitudes emerge from our behavior?

While we're at it, here's one for personality psychologists, from the Characters of Theophrastus, Aristotle's successor at the Peripatetic School:

I have often marveled... why it has come about that, 

while the whole of Greece lies in the same clime 

and all Greeks have a like upbringing, 

we have not the same constitution of character.

This sentence unpacks itself: how do individuals differ from each other in mind and behavior? Do these differences reside in the people observed or the eye of the beholder? If the former, are the differences best construed in terms of discrete types, continuous traits, or some other notion? What are their origins in heredity and environment?

As you can tell, many of the sentences psychologists meditate on are quite high-minded. Others, though, are somewhat more mundane. Consider this one, from Noam Chomsky (1957):

Colorless green ideas sleep furiously,

which has so much wrapped up in it that one hardly knows where to begin unwrapping: the difference between syntax and semantics, and between phrase structure and surface structure, the notion of mental processes operating according to rules, whether language acquisition is different from classical and instrumental conditioning. By the time we've figured out how novel utterances can be generated and understood, we've dealt with the tension between nativism and empiricism, arguments about the modularity of mind (again), and the nature of human creativity. Not bad for a sentence which doesn't mean anything.

Here's a sentence that does mean something:

The hippie touched the debutante.

Like Chomsky, John Anderson and Gordon Bower (1973) got a whole book out of this one, because it's all about propositional representations of knowledge, spreading activation and priming effects, episodic and semantic memory, and raises the question of whether knowledge is stored in nonpropositional, analog or imagistic, form.

And finally, the sentence that will serve as the basis for the rest of my talk:

Could you pass the salt?

Children who reply to this question with a "yes" get dirty looks from their parents, and are immediately branded smart-alecks by their teachers, because this isn't a question about the listener's physical abilities; it's an indirect request to pass the salt. It harkens back to Bartlett's effort after meaning, as the listener tries to resolve the inherent ambiguity in the sentence. Syntax and semantics aren't enough for that purpose, it turns out, and so we also need a set of pragmatic principles which go beyond the information given by syntax and semantics, and which govern how people communicate with each other. In the final analysis, a sentence like this reminds us that language isn't just a tool for individual thought; it is also a tool for interpersonal communication -- or, as Herb Clark has put it, language doesn't have so much to do with words and what they mean as it does with people and what they mean. So, in addition to investigating the cognitive bases of language, we have to understand its social foundations as well; once again, social psychology addresses the use to which cognitive structures and processes are put (for reviews of the social psychology of language use, see Brown, 1986; Clark, 1985).

So, for example, from analyzing how sentences like this are understood, we learn that in order for the speaker and listener to communicate they have to establish common ground -- which Clark defines as the knowledge, beliefs, and suppositions that speaker and listener share in common. Each must have some sense of what the other person knows, believes, and supposes to be true, and each must use this knowledge in structuring his or her communication. If speaker and listener are not on common ground, they will not understand each other and their interactions cannot go very far.

In order to achieve this mutual understanding, people have to manage their conversations according to what the linguist Paul Grice (1975) has called the cooperative principle:

Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.

This principle, in turn, is unpacked in terms of four conversational maxims, and their submaxims:

The maxim of quantity: Make your contribution as informative as is required (for current purposes), and do not make your contribution more informative than is required.
The maxim of quality: Try to make your contribution one that is true; do not say what you believe to be false, and do not say that for which you lack adequate evidence.
The maxim of relation: Be relevant.
The maxim of manner: Be brief, and orderly, avoiding obscurity and ambiguity of expression.

Grice and others interested in sociolinguistics (including Herb Clark, Tory Higgins, Arie Kruglanski, and Norbert Schwarz) have shown how listeners assume that speakers are following these maxims, and how lots of mischief can result when this assumption proves false. What interests me is the possibility, which has also been raised by Norbert Schwarz, that the psychological experiment itself can be construed as a conversation, with the experimenter in the role of speaker and the subject in the role of listener. It turns out that experimenters violate conversational rules pretty often, in a way that makes things difficult for subjects, who are trying to figure out what the experimenter is up to; these violations also make things difficult for experimenters -- who, when they do not realize that they are being misunderstood, proceed to misinterpret the results of their own experiments.

Let me give you a simple example, taken from the Centennial Exhibition, "Understanding Ourselves, Understanding Each Other", sponsored by the American Psychological Association. At some point in the exhibit, you come upon a corridor paved in a black and white checkerboard pattern, with a sign -- multilingual, yet -- warning you to step only on the black squares. Everybody does this, of course, and when you get to the other end you are given a little lecture on mindless conformity. Except, of course, there's nothing mindless about it. More than likely, the average visitor went beyond the information given in the sign, inferred that there must be a good reason for the injunction, and decided to behave in a cooperative manner. To use a real-world analogy, we hardly ever demand to know why we cannot cross police lines; mostly we figure that there must be a good reason for the barrier (perhaps there's a hole in the sidewalk; perhaps there's evidence that needs to be protected). Neither case is fairly construed as mindless obedience to authority; they might well reflect reasoned, cooperative behavior, the equivalent of simply passing the salt if asked whether you could.

Of course, you don't need to be a Gricean sociolinguist to think about experiments that way. Martin Orne had the same kind of idea in his arguments, which he began to voice in the late 1950s and early 1960s, about demand characteristics and the ecological validity of psychological experiments. Over the years, both notions -- demand characteristics and ecological validity -- have been somewhat controversial, and I thought I would take this opportunity to remind people what the argument was all about, and reflect on its connection to Gricean sociolinguistics, and on the meaning of both for what we do as psychological scientists.

From Orne's point of view, the purpose of laboratory research is to understand the real world: to make the problem simple so that it can be studied effectively; and to control relevant variables so that important relations, especially causal relations, can be revealed. Unfortunately, generalization from the lab to the real world requires an inferential leap: its legitimacy depends on the degree of similarity between the conditions which obtain in the laboratory and those found in the real world. In the natural sciences, perhaps, it is safe to assume that the lab is in important respects like life. But in psychology, Orne argued that this assumption is not safe.

The situation is bad enough in animal research -- for example, Martha McClintock's studies of sexual behavior in rats show how theory can be greatly misled when the experimental situation is not naturalistic. But it is even worse in human research, for the simple reason that human subjects are not reagents in test tubes, passive responders to the experimenter's manipulations. They are sentient, curious creatures, constantly thinking about what is happening to them, evaluating the proceedings, figuring out what they are supposed to do, and planning their responses. These normal human cognitive activities may interact with experimental procedures in unknown ways. At best, they can obscure the effects of a manipulation, by adding noise to the system; at worst, they can render an entire experiment invalid.

For Orne, the experiment is a unique form of social encounter, with roles and rules that are not found anywhere else (except perhaps in doctors' offices). This uniqueness may preclude generalizations from lab to life -- which is what Orne means by threats to ecological validity. In the first place, the human subject is a volunteer who, in addition to his or her desire for remuneration (whether in the form of cash or research participation points) has an emotional investment in research which stems from three goals: to further scientific knowledge; to find out something about him- or herself; and to present him- or herself in a positive light. Second, the experimenter and the subject enter into an implicit contractual arrangement with specifically defined roles: the subject agrees to tolerate deception, discomfort, and boredom for the good of science; and the experimenter guarantees the safety and well-being of the subject (this guarantee legitimizes any requests that the experimenter might make).

As an aside, let me point out that in terms of this analysis, the injunction of the 4th edition of the APA Publication Manual, that we substitute "participants" for "subjects" when talking about human beings, may be politically correct but it is not psychologically correct. Both the experimenter and the subject are participants in the social encounter known as the experiment, and each has his or her own unique role to play in that encounter. The experimenter is the experimenter, and the subject is the subject.

Third, the experiment is by its very nature episodic: in important respects it is divorced from the rest of the subject's life experiences; and, in any event, it is expressly time-limited, and should have no permanent consequences for the subject. Fourth, subjects perceive the experiment in the context of their entire experience. They are engaged in "effort after meaning", trying to discern what the experiment is all about and deciding what to do. In so doing, they pick up on what Orne called the demand characteristics of the experiment, by which Orne meant the totality of cues available in the experimental situation which communicate the experimenter's design, hypotheses, and predictions. Some of these cues are explicitly present in the experimenter's instructions to the subject, but many of them are implicit in the solicitation materials, campus scuttlebutt, incidental remarks made by the research assistants, and hints communicated by the procedures. The important thing to understand is that demand characteristics aren't just communicated by the experimenter, and unlike experimenter bias, they can't be controlled by keeping the experimenter blind to the experimental hypothesis. Rather, some demand characteristics are brought into the experiment by the subject, while others arise as the experiment proceeds; in either case, they're everywhere. In the final analysis, they are internal to the subject, they can't be predicted in advance by someone external to the experiment, and in principle they cannot be controlled; they can only be evaluated.

The point is that regardless of the experimenter's intentions and instructions, the subject's behavior in the experiment is determined by his or her perceptions of the experimental situation, perceptions which are formed as the subject goes beyond the information given by the experimenter. For this reason, the subject's perceptions may be at variance with the experimenter's intentions. If this occurs, the experimenter and the subject are literally participating in two different experiments, and ecological validity is lost.

An anecdote from the early days of sleep research: One night a subject arrived for an experiment which was, truthfully, advertised as recording physiological responses while subjects were asleep. The subject dutifully submitted to having electrodes attached to his head and other body parts and crawled into bed; the experimenter told him to go to sleep and turned out the light. Half an hour later, the subject was still awake. The experimenter asked if everything was all right, the subject replied yes, the experimenter reminded the subject to go to sleep, and the subject agreed. Half an hour later the subject was still awake, the scene was repeated, and again half an hour after that, and again half an hour after that. Finally, the experimenter burst into the bedroom in a fit of exasperation and demanded to know why the subject was refusing to go to sleep. To which the subject replied, "You mean the mouse in my bed isn't supposed to be there?"  The story may be apocryphal, but the experience is ubiquitous. At the very first group hypnosis session I ran at Wisconsin, with Ernie Mross and Paula Niedenthal, both now PhD psychologists, as novice undergraduate research assistants, a subject became slightly nauseated just as the tape-recorded induction procedure began -- not from the hypnosis, mind you, but from a particularly potent combination of beef stroganoff and Old Milwaukee beer which he had just eaten at the student union. We escorted the subject out of the room under the watchful gaze of about 125 other subjects, and the rest of the session went smoothly, but we had a devil of a time convincing many subjects that the whole thing hadn't been staged to see how their responses would be affected. Paula, Ernie, and I were just trying to assess hypnotizability; these subjects thought we were doing something else entirely.

This happens all the time in personality research, where subjects can't believe that all you want them to do is fill out a questionnaire. Once they've read about the bystander intervention studies, they keep expecting smoke to pour in through the ventilation ducts, or for a research assistant to fall and break his foot.

So in order to make sense of experimental outcomes, the experimenter must attempt to understand the subject's behavior from the subject's point of view. Unfortunately, this understanding is impeded by what Orne called the pact of ignorance implied by the experimental contract. Both parties want the experiment to work; therefore, the subject agrees not to tell the experimenter that he or she has figured out the experiment, while the experimenter agrees not to force the subject to admit that he or she possesses this forbidden information. The situation is caricatured by a scenario in which the experimenter, who has already mentioned to the subject that the experiment is part of her dissertation research, pulls the subject's payment out of her purse, puts it on the table in front of the subject, debriefs the subject as to the actual purpose of the experiment, and then asks one last question: Did the subject catch on to any of this? The subject dutifully replies "no" (else he would have just wasted both the experimenter's time and his own), the experimenter breathes a sigh of relief, hands over the money, and they both go on their respective ways.

In order to break the pact of ignorance, Orne argued, the experimenter and subject must alter their usual roles, concluding the experimental episode and transforming what once was a subject into a genuine co-investigator, who feels it legitimate to reflect truthfully and dispassionately on what has gone on before. That's what Orne's real-simulator design was all about. Simulators aren't subjects in the usual sense, because they are only pretending to be in an experiment. They're not controlling for demand characteristics, or indeed for any other experimental variable; they are collaborators of the experimenter, helping to evaluate the experimental design.

Orne was famous for applying the real-simulator design to evaluating the demand characteristics of hypnosis experiments, and two of his studies, both performed with Fred Evans, nicely illustrate the point about the pragmatics of experimenter-subject conversations.

First, an experiment on whether antisocial and self-injurious behavior can be induced by hypnosis. This question goes back more than 200 years, to the French Royal Commission's investigation of Franz Anton Mesmer, and more recently was even asked by the Central Intelligence Agency. And it's a legitimate one: hypnotized subjects are highly responsive to suggestions, especially for perceptual distortions, and perhaps this suggestibility gives the hypnotist a special power to coerce antisocial and self-injurious behavior.

In 1939, Rowland had reported an experiment in which subjects were hypnotized, placed in front of a large, active, diamondback rattlesnake, told it was a coil of rope, and asked to pick it up. One of two subjects who received this request immediately complied, at which point he struck his hand on a pane of invisible glass which had been interposed between him and the reptile. By contrast, 41/42 control subjects, asked if they would do the same thing, refused to go anywhere near the snake. In 1952, P.C. Young, who had been one of Hull's students at Yale, replicated Rowland's finding: 7 out of 8 subjects attempted to pick up the snake and threw a vial of nitric acid at a research assistant (who was also protected by invisible glass).

Orne and Evans were deeply suspicious of both experiments, because their procedures appeared to violate the basic contract between experimenter and subject: the subject agrees to do what the experimenter wants, and the experimenter agrees to protect the subject from harm. The request to pick up the snake or throw the acid has some of the character of "Could you pass the salt?": the subjects go beyond what the words mean to infer what the experimenter means. It's not enough that 41/42 unhypnotized controls said they wouldn't pick up the snake: they might behave quite differently in the actual experimental situation.

Anyway, Orne and Evans contrived an experiment in which a group of highly hypnotizable subjects were hypnotized and asked to reach into a wooden box for all sorts of things: a 2-shilling coin (this experiment was done in Australia), a piece of chalk, a harmless gecko lizard, and a harmless green tree snake. All the subjects did everything, except one, who fainted at the sight of the snake. The surviving subjects were then asked to pick up a new snake. This was a red-bellied black snake, otherwise known as the Australian two-step, because that's how far you get after it's bitten you. All five of the subjects attempted to pick up the snake, which of course was shielded behind a pane of invisible glass. They also were willing to remove a partially dissolved 2-shilling coin from a beaker of fuming nitric acid (this is perfectly safe if you do it right, but don't try it at home), and finally to throw the acid at poor Fred Evans, who was also protected by invisible glass. All of the subjects eagerly complied. But so did a group of insusceptible subjects who had been instructed to simulate hypnosis; and, for that matter, so did a group of unselected subjects run in the normal waking state. All of them, interviewed later, reported that they felt perfectly safe in the experiment, secure in the knowledge that appropriate safeguards were in place -- as in fact they were. Their safety had been clearly communicated by the demand characteristics of the experiment.

Reading about such an experiment makes you re-evaluate the procedures used in Milgram's classic studies of obedience to authority. As you'll remember, Milgram ostensibly recruited subjects in pairs, and, ostensibly, randomly assigned one to be the teacher and the other the learner in an experiment on punishment and learning. To make a long story short, the learner was a confederate of the experimenter, making errors according to a prearranged script, and a surprisingly large number of subjects were willing to administer intense punishment in response to the learner's apparent mistakes. To an external observer, the level of obedience is chillingly compelling. But then you have to ask yourself some questions from the subject's point of view -- chiefly, "What am I doing here? If my only job is to administer punishment, why can't the experimenter do it himself, and run both of us as learners? If he's interested in the effects of punishment on learning, why is he in here watching me?" The totality of cues present in the situation -- even when it's conducted in a run-down building in Bridgeport rather than the hallowed spires of Yale -- is enough to lead the teacher to conclude that things are not what they appear to be, and to generate the hypothesis that he, not the learner, is the actual subject of the experiment. If so, the deception has failed, the experimenter and the subject are in different experiments, and all bets are off.

I know that the Milgram experiment is a classic (in fact, that's what the black-and-white checkerboard is all about at the APA exhibit), and I'm not saying that it was nothing but demand characteristics. Milgram might (in fact, he did) say that the power of the experimental situation was an illustration of his points about the power of situations in general. Perhaps; but, as Orne noted at the outset, generalization from the lab to life depends on the experimenter and the subject being in the same experiment. This dispute won't get resolved anytime soon, so let me be clear that I'm just using the Milgram experiment as a familiar example of how, in order to understand experimental outcomes, we have to understand experimental procedures from the subject's point of view.

The argument from demand characteristics got a reputation as a spoiler of experiments, but that's because most people who made the argument failed to appreciate what it was all about. You don't use demand characteristics to discount experimental outcomes; you evaluate them to make your experiments better, more ecologically valid, more convincing.

A nice example of this principle comes from another Orne and Evans gem, "The Case of the Disappearing Hypnotist". Hypnosis is commonly construed as something that the hypnotist does to the subject. This isn't right, but it is a common impression, and it occasionally leads to questions from subjects like "What happens if you drop dead while I'm hypnotized?". Roy Dorcus, a hypnosis researcher at UCLA, contrived an experiment to find out. While in the middle of a hypnosis session, he arranged to be interrupted by his secretary, who loudly reminded him that he had an appointment downtown, at which point Dorcus got up and left the room, without a word to the subject, never to return. Hypnotized subjects remained in the room for an average of 28 minutes, compared to 23 minutes for a relaxation control group. The difference wasn't significant, but in the final analysis it's not clear what conclusions one should draw from the results. This is because the experimenter's unannounced departure violates the experimental contract, and may have been perceived to be part of the experiment.

Orne and Evans performed an elaborate conceptual replication of Dorcus's study, with hypnotic subjects and simulators. The subjects were recruited for a study of the physiology of hypnosis, and when they arrived at the laboratory they were connected to a polygraph, hypnotized by a tape-recorded procedure, and administered a series of suggestions while a research assistant observed their responses. In the middle of one of these suggestions -- while the subject sat with his eyes shut, one arm stuck to the chair, tapping his foot in time to hallucinated music -- another assistant threw a silent switch: the desk light went out, the tape slurred to a halt, and the research assistant muttered "The damn fuse", opened the door, ran noisily down the hall, and never returned. Actually, she tiptoed back into an adjacent room, where she observed the subject through a one-way screen, and reappeared in the experimental room 30 minutes later. The results were striking: on each of four dependent variables, the simulators outlasted the reals. They kept their eyes closed for an average of 25 minutes, continued tapping for 17, and kept their arms fixed to the chair for 23; and when the assistant returned to the room, five out of six appeared to be deeply hypnotized. A postexperimental interview clarified what had happened. The room was outfitted with a conspicuous mirror, which was in fact a one-way screen; and while the desk lamp and tape went off, the polygraph and a ceiling light stayed on (some light was required for observing the subject, and the experimenters assumed that subjects would realize that different outlets in the same room might be on different circuits). The simulators clearly smelled a rat, and so they behaved the way they supposed a hypnotized subject would actually behave.

In a later study, Evans and Orne used this information to improve their experimental procedures. This time, when the switch was thrown, everything went off: desk light, overhead light, tape recorder, polygraph, everything (in fact, the polygraph leads were fed out the back of the machine, through the wall, and into a second machine in the observation room). A portable blackboard covered the one-way screen, and the experiment was conducted at night, so that the room was plunged into complete darkness. Well, almost complete: Evans and Orne had arranged for a light fixture to be installed in the parking lot outside, so that enough light filtered through the draperies to permit the subject to be observed through a peephole drilled through the wall, and covered with a translucent silkscreen painting. Under these circumstances, the simulators stopped their performance almost as soon as the assistant left the room, and resumed their hypnotic posture as she re-entered 40 minutes later. This time, the behavior of the simulators clearly indicated that the deception had been successful: the subjects accepted the power failure as real.

While the behavior of the simulators differed dramatically across the two experiments, that of the hypnotic subjects did not, and that is of some theoretical interest. But for present purposes I want to focus on the methodological implications: all experiments have demand characteristics, subjects can be counted on to pick up on them, and because they threaten the ecological validity of our experiments, we ignore them at our peril as scientists.

"Oh, what a tangled web we weave

When first we practise to deceive." 

This was the epigram, from Sir Walter Scott's Marmion (1808), to Orne's critique of the Milgram experiment. Orne was very suspicious of experimental deceptions and cover stories, and his critique has become conventional wisdom for many social psychologists, who often use deception in their experiments. (This reliance on deception is why social psychology has traditionally been discussed at the end, but not the beginning, of the introductory course: by the time students learn about experimental deceptions, the social psychologists have already gathered their data.)

By contrast, cognitive psychologists haven't worried about this problem all that much. Reid Hastie, asked what made him a social psychologist and not a cognitive psychologist, replied that he lied to his subjects. But it's not true that cognitive psychologists don't deceive their subjects. Consider the levels-of-processing paradigm in memory research, in which subjects are told that the experiment is about how people make judgments about words, when in fact they are going to be surprised with a memory test. Consider research on explicit and implicit memory, in which experimenters go to great lengths to convince their subjects that the stem-completion test (for example) has nothing to do with the wordlist that the subject studied only moments before. In the final analysis, the problem of demand characteristics isn't confined to deception experiments. Because every subject is engaged in figuring out the meaning of every experimental situation, demand characteristics are a problem for every experiment.

And because experimenters and their subjects are always engaged in a conversation, the logic of conversation is also an enduring problem. Consider work on judgment and decision making, in which many of the problems posed to subjects violate Grice's conversational rules. When we ask subjects in a consumer preference survey to indicate which pair of stockings they prefer, they have every right, given the context, to believe that the question is answerable -- in this case, that the items are in fact different; and when asked to justify their choices, they have every reason to reject as unresponsive (if not also impolite) the fact that the chosen pair was on the right rather than the left side of the display. When we describe a person as someone who is uninterested in politics and social issues and likes woodworking and mathematical puzzles, and then ask subjects to predict whether that person is an engineer or a lawyer, the subject has every right to believe that this individuating personality description is somehow relevant to the task, and to use the information somehow. And when they do so, we have no right to conclude that people are irrational or don't understand normative rules of inference. Maybe they do and maybe they don't, but in their conversation with the experimenter they are only doing what comes naturally: assuming that the experimenter is following the cooperative principle and its four associated maxims, they seek common ground and generate their responses from that stance. To assume otherwise, in the absence of an understanding of the experiment from the subject's point of view, is to risk serious misunderstanding of how mind and behavior work.

Orne was concerned with ecological validity, and with the peculiar character of the experimental situation. To a great extent, he thought that demand characteristics were a problem because of motives -- to help the experimenter, to learn about themselves, and to look good -- that were peculiar to research participants. Grice reminds us, though, that there is another motive which subjects display both in the lab and elsewhere in life. Subjects aren't just motivated to guess and confirm the experimenter's hypothesis; as listeners -- that is to say, as people -- they are primarily motivated to make sense of any communicative situation in which they find themselves. In that respect, at least, Orne needn't have worried, for what happens in the laboratory is entirely representative of what goes on in the real world. Because the laboratory is just like the real world after all, it follows that, in our experimental conversations with our subjects, as we establish common ground and collaborate with them in learning about the mind and behavior, we must be careful to follow Grice's maxims:

be cooperative,

be informative,

be true,

be relevant, and

be clear.


Author Notes

Keynote address presented at the 7th annual convention of the American Psychological Society, New York, June 1995. The point of view represented herein is based in part on research supported by Grant #MH-35856 from the National Institute of Mental Health. I thank Marilyn Brewer and Mahzarin Banaji for the invitation to present this material.  A similar argument has been made by Bless, Strack, & Schwarz (1993) in a paper that came to my attention after I had given my talk.

The reading of this paper was dedicated to Martin T. Orne, my principal advisor in graduate school, who died on February 11, 2000.



1Apparently this sentence first appeared in papers by Gordon and Lakoff (19xx) and Searle (1975); according to Herbert Clark (personal communication, June 26, 1995), there also exists a satirical paper entitled "Can you pass the salt?" or somesuch. The earliest psychological reference to this problem is in Clark and Lucy (19xx) and Clark (1979).

The Roots of Orne's Ideas

From time to time I have received inquiries about the origins of Orne's concepts of ecological validity and demand characteristics.

Orne's concept of "demand characteristics" is based on Lewin's (1926) concept of affordance character (my translation of the German Aufforderungscharakter; plural Aufforderungscharaktere), in a sense not dissimilar to J.J. Gibson's concept of affordances -- that is, what the perceiver can do with a stimulus (actually, I think that Gibson got his concept of affordances from Lewin as well).

And Orne's concept of "ecological validity" is based on an idea from Egon Brunswik, that perceptual cues are associated with particular attributes of objects. 

Orne also related demand characteristics to the Hawthorne effect, but there are a couple of important differences.

The Hawthorne effect is generally presented as a subject's response to being observed.  It's a "nuisance effect" that contaminates experiments, and which can be controlled by employing unobtrusive measures.  And it's also controversial.  Jones (American Journal of Sociology, 1992) found little evidence for it in the original study, and Levitt (of Freakonomics fame) and List (NBER Working Paper No. 15016, 2009) were also critical of the original documentation.

But, as I argue in the P&T article, the concept of demand characteristics is deeper.  For Orne, subjects don't just react to things.  Rather, they are trying to make sense of things, and in this "effort after meaning" (the phrase is F.C. Bartlett's), they draw not only on the experimenter's (or the hypnotist's) explicit instructions and suggestions, but also on clues that they can glean from the experimental situation as a whole, including knowledge, expectations, and beliefs that they brought with them into the experimental (or hypnotic) setting.  To use a phrase from Bruner, subjects "go beyond the information given" by the experimenter.  Giving meaning to the situation is an essential human cognitive activity.  For this reason, demand characteristics can't really be controlled; they can only be evaluated.

Demand characteristics are causal variables, because they contribute to the perception of the situation which, ultimately, controls behavior in that situation.  But, strictly speaking, they're not independent variables, because they can't be controlled by the experimenter. 

And they're not exactly confounds either, so they can't be controlled the way social desirability can (as in the Edwards Personal Preference Schedule).

Demand characteristics can't be controlled because at least some of them are brought by the subject into the experimental setting. This is why, for example, Martin insisted on calling simulators quasi-controls.  Not only are simulators different from reals in many respects, but they don't really control for anything: they only permit demand characteristics to be evaluated.  For a good illustration of the argument, see Orne's chapter in the original Fromm & Shor volume (1972), and the classic work by Orne and Evans on antisocial behavior or "the disappearing hypnotist".

Orne's concept of demand characteristics is not just important methodologically.  It's also important historically.  Martin's 1962 paper was originally delivered at an APA symposium on "The Social Psychology of the Psychological Experiment", along with Rosenthal's pioneering paper on experimenter bias -- which is really just an example of a broader class of expectancy-confirmation effects (think of R.K. Merton's self-fulfilling prophecy).  Taken together, I believe that these two papers mark the beginning of the "cognitive revolution" in social psychology -- which is predicated on the idea that subjects respond to the perceived situation -- a perception that is influenced by "higher" cognitive processes such as thinking and reasoning.



Bless, H., Strack, F., & Schwarz, N. (1993). The informative functions of research procedures: Bias and the logic of conversation. European Journal of Social Psychology, 23, 149-165.

Clark, H.H. (1979). Responding to indirect speech acts. Cognitive Psychology, 11, 430-477.

Clark, H.H. (1985). Language use and language users. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology, 3rd ed. Vol. 2, Special fields and applications (pp. 179-231). New York: Random House.

Evans, F.J., & Orne, M.T. (1971). The disappearing hypnotist: The use of simulating subjects to evaluate how subjects perceive experimental procedures. International Journal of Clinical & Experimental Hypnosis, 19, 277-296.

Grice, H.P. (1975). Logic and conversation. In P. Cole & J.L. Morgan (Eds.), Syntax and semantics 3: Speech acts (pp. 41-58). New York: Academic.

Grice, H.P. (1978). Some further notes on logic and conversation. In P. Cole (Ed.), Syntax and semantics 9: Pragmatics (pp. 113-128). New York: Academic.

Orne, M.T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783.

Orne, M.T. (1970). Hypnosis, motivation, and the ecological validity of the psychological experiment. In W.J. Arnold & M.M. Page (Eds.), Nebraska Symposium on Motivation (pp. 187-265). Lincoln, NE: University of Nebraska Press.

Orne, M.T. (1971). The simulation of hypnosis: Why, how, and what it means. International Journal of Clinical & Experimental Hypnosis, 19, 183-210.

Orne, M.T. (1972). On the simulating subject as a quasi-control group in hypnosis research: What, why and how. In E. Fromm & R.E. Shor (Eds.), Hypnosis: Research developments and perspectives (pp. 399-443). Chicago: Aldine-Atherton.

Orne, M.T. (1973). Communication by the total experimental situation: Why it is important, how it is evaluated, and its significance for the ecological validity of findings. In P. Pliner, L. Krames, & T. Alloway (Eds.), Communication and affect: Language and thought (pp. 157-191). New York: Academic.

Orne, M.T., & Evans, F.J. (1965). Social control in the psychological experiment: Antisocial behavior and hypnosis. Journal of Personality & Social Psychology, 1, 189-200.

Orne, M.T., & Evans, F.J. (1966). Inadvertent termination of hypnosis with hypnotized and simulating subjects. International Journal of Clinical & Experimental Hypnosis, 14, 61-78.

Orne, M.T., & Holland, C.H. (1968). On the ecological validity of laboratory deceptions. International Journal of Psychiatry, 6, 282-293.

Schwarz, N. (1994). Judgment in a social context: Biases, shortcomings, and the logic of conversation. In M. Zanna (Ed.), Advances in Experimental Social Psychology, Vol. 26 (pp. 123-162). San Diego, CA: Academic.

