Why Believe That for Which There Is No Good Evidence?
Robyn M. Dawes*
ABSTRACT: Many people believe in the existence of widespread
"repressed" child sexual abuse and organized satanic
cults. Such beliefs persist despite a lack of supporting evidence, sustained instead by reliance on authorities and social
consensus. In addition, people fail to understand the fallibility
of retrospective memory, erroneously assume that high confidence in a
memory means that it is accurate, and mistakenly believe that more
information necessarily implies a better grasp of reality.
Compounding this problem is the diminution in the scientific training of
licensed therapists. When therapists themselves have not been
inoculated with scientific skepticism, they will not inoculate their
clients and will instead contribute to the epidemic of irrational
beliefs.
I would like to begin this paper with a confession: I believe in the
reality of global warming. But why do I believe that?
First, I cannot assess the phenomenon directly myself. In fact,
the only subjectively compelling evidence I've personally encountered
for the phenomenon consists of the unusually warm winters in Pittsburgh
from 1987 on, which is probably about the lousiest bit of evidence
available. My knowledge of better evidence is weak; I've not taken
a chemistry class beyond high school, and while I've taken many graduate
courses in mathematics and statistics (originally to compensate for
linguistic incompetence and a language requirement), I have never
checked out the details of the models themselves and even if I
could, I would have no way of assessing the validity of the data on
which they are based.
What I have done is to trust authority. I read articles
summarizing the models in such sources as Science, Scientific American, and American Scientist. When the authors of these articles assure me that the models are coherent and that recent evidence is confirming them,
I believe them, even though I have no direct knowledge of their data,
their qualifications, or their veracity. I trust.
In addition, all my close personal friends, including the woman with
whom I live, believe in the reality of global warming. None, for
example, has anything but anger and contempt for President Bush's
half-hearted denial that it is real. In my everyday interactions
with these people when the topic of global warming comes up, none has
ever suggested that I retain an "open mind, questioning all
authority" and check out the evidence for myself. In short,
two critical factors in my acceptance that the evidence is
"good" are authority and social consensus.
Now I would like to consider a belief that I don't hold: The reality
of an after-life. I regard the religious authorities who attest to
its existence as, frankly, "silly," and very few of the people
with whom I interact, including the woman with whom I live, believe in
its reality. None has ever suggested that I adopt an "open
mind, questioning all authority" and examine the evidence for an
after-life on my own.
Moreover, just as with global warming (Pittsburgh is getting hotter in the winter), I find ersatz evidence from my own experience
compelling. People to whom I have been close who suffer severe
neurological impairment from such conditions as Alzheimer's appear to me
to be partially non-existent, as people; the inference that when their
brain totally shuts down their personal qualities will be totally
non-existent is compelling. Further, my own "near death"
experience (or was it just an ether dream?) was nothing like the ones
reported by Kübler-Ross. Every feeling of "losing"
something valuable I have ever experienced before or after in my life
was "rolled into one," although I didn't know what I was
losing. The universe itself was simultaneously disintegrating and
contracting to a single point. I kept screaming "There's such
a thing as life," although I didn't know what the words
meant. A committed theologian would dismiss both my
interpretations of what has happened to others and my own experience as
totally irrelevant to the reality of being resurrected from the dead and
judged by God.
But 80% of the people in the United States believe in an after-life
(Gallup & Castelli, 1989), perhaps even some members of this
audience. Why? Two important factors are the same as those
that I cited for belief in global warming: trust in authority and social consensus. People who believe in the after-life are
reassured of its existence periodically by learned ministers, and
interact (at least in churches) with others sharing their belief.
Moreover, they may readily interpret ersatz "evidence" as
supportive.
My first point is to suggest that belief in a high rate of
"repressed" child sexual abuse and the widespread existence of
satanic cults practicing sexual abuse on children is influenced by the
same factors. Authorities have attested to their existence.
Moreover, especially when people attend self-help groups or go into
therapy with authority figures who believe this nonsense, they encounter
a great deal of social consensus concerning such reality. Finally,
they can interpret non-diagnostic experience as compatible
"evidence."
And why shouldn't we be trusting of authority and influenced by consensus (at least in deciding what we don't need to investigate on our own)? As Stich and Nisbett (1980) point out, a total rejection of authority figures in our lives would leave us totally unable to function. We could not buy either a
prescription or a nonprescription pill at a pharmacy if we had no trust
in the authorities of the FDA, or for that matter eat food that was not
grown in our own gardens. (Even raising our own animals involves
some trust in the people who are monitoring what the animals themselves
are fed, as the recent popular book and movie Bitter Harvest pointed out, a trust that is not always warranted.)
Or how could we be here today without trusting a remarkable number of
authorities, beginning with the people who program the computers used by our reservation agents, continuing through those who design the airplanes or monitor their safety, and including the architect of this hotel? (Again, this trust is occasionally misplaced, as when
one discovers that a confirmed reservation has been "erased,"
or discovers that the left engine is on fire and hopes that the pilot
will have enough expertise to realize that it is the left one.)
The completely open mind that questions all authorities would reside in
a body that is a blithering mess.
(We might do well to take quite seriously the finding that people
answering authoritarianism scales are inconsistently authoritarian but
consistently anti-authoritarian, rather than to ascribe the result to a
"response bias." [For a critique see Rorer, 1965].
Perhaps authoritarianism comes naturally to people, while the
"syndrome" indicated by consistency of aberrant responding
is anti-authoritarianism; "eternal vigilance" is, after all,
akin to paranoia.)
Moreover, consider what life would be like if we did not accept
consensus belief as a quite valid cue to reality. An enormous
amount of what we "learn" about the world is provided by the
people around us (Simon, 1990), and to be oblivious to their beliefs
would trap us in the position of the hypothetical child raised by an
altruistic wolf.
Now trust in authority and consensual validation are not
"rational" bases for belief. What I am suggesting,
however, is that they are the bases for most of what we believe most of the time, in fact for almost everything we believe almost all of the time. Is it any wonder, then, that a contagion of beliefs can occur even though these beliefs do not withstand critical scrutiny (i.e., ours)? We have certainly observed contagion of belief in the past concerning beliefs that we and others have since debunked. And we
will in the future.
But why these particular beliefs in widespread repressed child sexual
abuse and satanic cults? I wish to hypothesize a major factor later. Now, however, I would like to emphasize the difficulty of understanding, and more specifically of believing, the evidence that these beliefs are erroneous.
1. The first evidence concerns the fallibility of retrospective
memory. That has been discussed elsewhere (Loftus, Korf &
Schooler, 1989; Pearson, Ross & Dawes, 1992), and I don't wish to
reiterate all the evidence concerning this fallibility. The point
I wish to make is how difficult it is to believe that what one remembers
is not true. The present is fleeting, the future does not yet
exist; consequently, virtually all our conscious knowledge is based on
our memory. Moreover, we have good reason to believe that our
memory is generally correct (if it weren't, we'd be dead), even memories
involved in "motor programs" such as driving your car or
playing a piano. To ask people to question their own memory is
tantamount to asking them to question their own interpretation of
reality, which at the extreme is close to asking them to consider the
possibility that they are schizophrenic. Just as people couldn't
function with a "perfectly open mind," they couldn't function
if they were constantly to doubt that what they recalled as true was in
fact true.
Moreover, it is perfectly reasonable (Dawes, 1989, 1990) to
"project" on others our beliefs in the validity of our own
memories. Even though we may have been "influenced" to
believe one thing or another, that does not make the memory less real to
us hence less of a basis for making decisions in our life.
So why should we believe that such factors account for the memories of
others?
2. Another important bit of evidence is that confidence is not very
highly correlated with accuracy. But how can I change my confidence, e.g., in a particular judgment or memory, on the basis of knowing
that confidence is not a good cue to validity, when in fact confidence
is confidence about how correct I am?
An anecdote: A close colleague has written a brilliant paper about
"outside" versus "inside" judgment (Kahneman &
Lovallo, in press). The exemplar involved asking academics how
long it will take for a particular committee to achieve its goal.
These same academics are then asked to think of similar committees they
have been on with similar goals and how long it took these committees to
achieve their goals. The second set of time estimates is an order of magnitude greater than the first. And it is more accurate.
When I was talking to this same colleague about the upcoming
presidential elections, he stated that he was "absolutely
sure" that Clinton could not win, and he gave me a "subjective
probability" close to .98. I couldn't resist asking him how
often in general his political judgments were correct when he was that
certain, and he snapped back "63% of the time." We both
laughed, but we also both agreed that it is extraordinarily difficult to
be uncertain about things when one is certain. Moreover, a
knowledge of the "calibration" literature concerning
confidence (see Fischhoff, 1988) doesn't help much. Think of
individuals "recalibrating" themselves on the basis of this
knowledge: "I believe that on judgments about which I am this sure,
I am correct 95% of the time; therefore, I believe that on judgments
about which I am this confident, I will be correct 79% of the
time."
And once again, when we ourselves treat confidence as an excellent
cue to validity, it is perfectly reasonable to "project" that
it is an excellent cue for others as well. Why should we doubt
someone who claims to be absolutely confident concerning an event (judgment perhaps being a slightly different matter)? If we were
to adopt as a general principle that high confidence about recalled
events is not a very good reason to believe that these events actually
occurred, we'd be faced with a real dilemma about how to run our own
lives. "I am very confident that I know exactly what
happened, but it probably didn't happen that way."
3. Even though we believe that "you can't
teach old dogs new tricks" and that "the generals are always
fighting the last war," we ourselves know that we get wiser as we
grow older (although we may admit senescence in some particular
areas). Why? I propose that in addition to all the
self-serving reasons, there is a compelling cognitive basis for this
belief. When we are older we have all the information we had when
we were younger plus more. It is quite natural to believe that the
more information one has, the better one's grasp of reality. It is
particularly compelling to believe that if information set A subsumes
information set B, then a belief or judgment based on A must be superior
to one based on B. Ellsberg1
pointed this assumption out with respect to the debates on the Vietnam
War between people who did versus those who did not have access to
classified or secret information. Those who had access knew that
they had knowledge others didn't while also having access to the
knowledge that others did have. Hence, their judgment was
naturally superior, only it wasn't.
In fact, additional knowledge can simply cloud judgment, although not
confidence (Oskamp, 1965). But that's hard to believe. Now,
we critics are claiming that when people believed earlier in their lives
that they were not sexually abused they were correct, while they are
incorrect in their later beliefs. How can that be, given they have
access to the same information they had earlier plus more, including the
judgment of the authorities? Certainly, we do not go around
proclaiming that "now that I know more I know less" or
believing that our knowledge is enhanced by literally throwing away
information, and once again we would have little reason not to
"project" that principle on others in general.
In short, asking people to doubt the conclusions concerning
widespread childhood sexual abuse and satanic cults is asking them not
only to reject the usual bases of authority and consensus for
establishing reality, but in addition to accept principles that violate
foundations of everyday functioning.
Now in point of fact we do ask people to accept such principles, and
they do. Few people, for example, believe that the world is flat,
even though it appears to be, or believe that cigarettes and alcohol are
good for them, even though both may have very pleasant effects. We
return once more to the efficacy of authority. People who have no
direct experience of the curvature of the earth believe that it is not
flat, and even the greatest devotees of tobacco and alcohol believe that
these drugs are harming them. We accept what we have been told by
"reputable authorities." (We even accept what has been
communicated by very minor authority figures, such as the person who
draws a map that shows the Suez Canal to be longer than the Panama
Canal.)
Here we have a very serious influence on the current epidemic of belief. Specifically, the last 20 years have produced a veritable
explosion of credentialed psychotherapists, with a simultaneous
diminution in the scientific training that would lead them to know what
they are talking about. I don't want to go into all the
statistical details here, but two statistics are relevant. First,
in the paper presented by Wakefield and Underwager (1992), the (admittedly non-random) sample yields the conclusion that 33% of the
therapists supporting the validity of the repressed memory syndrome were
psychologists and 8% psychiatrists. While the field of clinical
psychology has been doubling about once every 10 years and that of psychiatry only about once every 20 years (for comparison, the number of lawyers has been doubling once every 12 years),
there are still only about 75% or so more practicing clinical
psychologists than psychiatrists. Why, then, the 4 to 1 ratio?
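To see why the ratio is puzzling, the arithmetic can be made explicit. The following back-of-the-envelope Python sketch uses only the figures quoted above:

```python
# Back-of-the-envelope sketch using only the figures quoted in the text.
pct_psychologists = 33.0   # % of sampled proponents who were psychologists
pct_psychiatrists = 8.0    # % of sampled proponents who were psychiatrists

observed_ratio = pct_psychologists / pct_psychiatrists   # roughly 4.1 to 1

# With about 75% more practicing clinical psychologists than psychiatrists,
# head-count alone would predict a ratio of only about 1.75 to 1.
expected_ratio = 1.75

print(f"observed {observed_ratio:.1f}:1 versus head-count {expected_ratio:.2f}:1")
```

Head-count alone, then, leaves most of the disproportion unexplained.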
The answer I propose is that while the training of psychiatrists
still involves required undergraduate courses in science and the first
two years of medical school in the science of medicine, the scientific
training of clinical psychologists has, I'm afraid, gone to
hell. Fewer and fewer (now somewhere between 13% and 18%) are
being trained at the top 200 graduate institutions, while more and more
(almost 40%) are being graduated from professional schools. (The
former figure was close to 40% in 1970 and the latter was 0.)
It is, of course, not true that a person obtaining an advanced degree
from an institution of lesser status necessarily has had poorer
scientific training than a person graduating from a place of higher
status, or understands the scientific basis of psychology less
well. There is a great deal of overlap in actual training and
understanding. In general, however, as Lee Sechrest claims,
"we are graduating people with only a peripheral knowledge of
psychology" (Hayes, 1989). (All the statistics supporting
this conclusion are presented in the first chapter of my forthcoming
book: Psychology and Psychotherapy: The Myth of Professional
Expertise [in press].)
Without training in scientific psychology, it is little wonder that
so many practitioners do not accept or even know about
principles that violate our naive assumptions concerning everyday
functioning. Given that they don't know, the "authoritative" basis for halting or diminishing this epidemic is lost. When they
themselves have not been "inoculated" with scientific
skepticism, there is little reason to expect them to inoculate their
clients.
In summary, this epidemic of belief is consistent with our ways of
functioning in the everyday world, and there is little to stop the
epidemic. We may understand the "irrationality" of much
of our everyday functioning, but often only when it leads to a
conclusion that we believe to be untrue. We will have great
difficulty teaching people in general about such irrationality, because
our colleagues themselves do not understand it.
Appendix
I came across the following article in the most recent issue of Science
(May 22, 1992). It was entitled "Open Season on Depression" (Vol. 256, p. 1137).
Mental illness may be taking another step out of the closet with a
project recently launched by a Harvard psychiatrist: The first
nationwide free screening program for depression. Douglas Jacobs, who
practices at McLean Hospital in Belmont, Massachusetts, says
depression is a major public health problem, afflicting an estimated
10 million Americans in a given 6-month period, but only one-third of
sufferers get diagnosed or treated for it.
So in 1990, Jacobs started a pilot program at McLean, which was
expanded last year to 90 facilities in 44 states. About 5,000
people attended lectures and discussions on depression last year at
health facilities; 3,000 of them filled out self-report forms and had
short meetings with mental health professionals who told them about
treatment options. Jacobs says half of the people who attended
the screening had never had any treatment for depression, but the
self-report forms indicated that half were probably clinically
depressed. Plans are to expand greatly the next screening,
scheduled for October. With the support of the National
Depressive and Manic Depressive Association, a patient advocacy group,
the organizers hope to reach up to 400 locations (including non-health facilities like shopping malls and libraries) and screen up to 20,000
people.
"It's time that psychiatry not be behind closed doors,"
says Jacobs. He's already thinking about the next advance: a "national substance-abuse screening day."
That is allegedly science, but how is it really structurally
different from the pseudoscience we have been discussing today? I invite
the reader to think of similarities and differences.
I would also like the reader to consider the following hypothetical
experiment. I am going to screen 5,000 people to determine whether they
suffer from the newly discovered mental illness syndromes of
aslantophelia and aslantophobia. I will ask such questions as the
following:
Do you ever think about the possibility that either you or your
spouse (partner/significant other/best friend) might die before the other
does?
Has it ever occurred to you that even though you are doing your job
well, the organization in which you work may be in trouble?
Have you ever wondered whether your personal life might be different if Lee Harvey
Oswald or whoever it was who shot President Kennedy had missed?
When people answer "yes" to such questions, phrased
positively as above, they will be diagnosed as suffering from
aslantophelia, a feeling of great uncertainty and lack of stability in
life. When they answer "no," they will be diagnosed as
suffering from aslantophobia, a denial of the degree to which life is
uncertain and unstable. (In the actual questionnaires, of course,
positive and negative wordings will be balanced.) Both syndromes will
follow from their parents', particularly their mothers', inability to provide them as young infants with the necessary sense of security. The
primary recommendation for people suffering from either of these
syndromes will be prolonged therapy to help them "get in
touch" with those feelings they experienced as an infant so that
they can "work them through." How many takers would I have?
Could I set up support groups of people suffering from these syndromes?
If so, could there be an epidemic?
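To make the trap in this design concrete, here is a simplified sketch (hypothetical code; the per-question scoring is collapsed into a single rule) showing that every possible pattern of answers yields a diagnosis, so the screen cannot return a verdict of "healthy":

```python
# Simplified sketch of the hypothetical screening: under its scoring rule,
# every answer pattern maps onto one of the two syndromes.
import random
from collections import Counter

def diagnose(answers):
    # "yes" to any positively phrased item -> aslantophelia;
    # "no" to all of them -> aslantophobia.  No healthy outcome exists.
    return "aslantophelia" if any(answers) else "aslantophobia"

random.seed(0)
respondents = [[random.random() < 0.5 for _ in range(3)] for _ in range(5000)]
counts = Counter(diagnose(answers) for answers in respondents)

print(counts)                # both "syndromes" are duly discovered
print(sum(counts.values()))  # 5000: every respondent is diagnosed
```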
References
Dawes, R. M. (1989). Statistical criteria for establishing a truly
false consensus effect. Journal of Experimental Social
Psychology, 25, 1-17.
Dawes, R. M. (1990). The potential non-falsity of the false consensus effect. In R. M. Hogarth (Ed.), Insights in Decision Making: A Tribute to Hillel J. Einhorn (pp. 97-110). Chicago: University of Chicago Press.
Dawes, R. M. (in press). Psychology and Psychotherapy: The Myth of Professional Expertise. New York: The Free Press.
Fischhoff, B. (1988). Judgment and decision making. In R. J. Sternberg & E. E. Smith (Eds.), The Psychology of Human Thought (pp. 153-187). New York: Cambridge University Press.
Gallup, G., Jr., & Castelli, J. (1989). The People's Religion. New York: Macmillan.
Hayes, S. C. (1989). An interview with Lee Sechrest: The courage to say "We do not know how." APS Observer, 2(4), 8-10.
Kahneman, D., & Lovallo, D. (in press). Timid decisions and bold
forecasts: A cognitive perspective on risk taking. Management
Science.
Loftus, E. F., Korf, N. L., & Schooler, J. W. (1989). Misguided memories: Sincere distortions of reality. In J. C. Yuille (Ed.), Credibility Assessment (pp. 261-274). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Oskamp, S. A. (1965). Overconfidence in case-study judgments. Journal of Consulting
Psychology, 29, 261-265.
Pearson, R. W., Ross, M., & Dawes, R. M. (1992). Personal recall and the limits of retrospective questions in surveys. In J. M. Tanur (Ed.), Questions About Survey Questions: Meaning, Memory, Expression, and Social Interactions in Surveys. New York: Russell Sage.
Rorer, L. G. (1965). The great response-style myth. Psychological Bulletin,
63, 129-156.
Simon, H. A. (1990). A mechanism for social selection and successful
altruism. Science, 250, 1665-1668.
Stich, S. P., & Nisbett, R. E. (1980). Justification and the
psychology of human reasoning. Philosophy of Science, 47,
188-202.
Wakefield, H., & Underwager, R. (1992, June 20). Recovered
memories of alleged sexual abuse: Lawsuits against parents. Paper
presented at the Fourth Annual Convention of the American Psychological Society, San Diego, California. (A revised version is in press in
Behavioral Sciences &
the Law.)
* Robyn M. Dawes is a University Professor of Psychology at Carnegie Mellon University, Pittsburgh, Pennsylvania. This paper was presented at the Fourth Annual Convention of the American Psychological Society, San Diego, June 20, 1992.
1 "You
have the information, they don't; they don't even have the
wisdom to know that they don't know; therefore they have no
legitimate role. You will become unable to learn from
anyone who does not have these clearances." Testimony
to a Joint Senate Committee. Quoted in T. Fain, K. C. Plant, and
R. Milloy (1977), The Intelligence Community: History,
Organization, and Issues (). New York: R. P. Bowker, 501-514.
[Back] |