Friday, December 6, 2013

Gilman: Is Racism a Psychopathology?


Sander Gilman, Is Racism a Psychopathology?
In the third and final CMBC lunch talk of the 2013 fall semester, Dr. Sander Gilman (Graduate Institute of Liberal Arts, Emory) treated participants to an engaging presentation on the interconnected history of racism and mental illness in Europe and America during the nineteenth and twentieth centuries. The topic of the talk grew out of a CMBC-sponsored undergraduate course and graduate seminar offered by Dr. Gilman last fall, titled “Race, Brain, and Psychoanalysis.”
Gilman opened by citing a 2012 study conducted by an interdisciplinary team of scientists at Oxford. In clinical experiments, white subjects given doses of the beta-blocker propranolol showed reduced indicators of implicit racial bias. The authors of the paper wrote that their research “raises the tantalizing possibility that our unconscious racial attitudes could be modulated using drugs.” Time Magazine soon thereafter ran a story under the headline “Is Racism Becoming a Mental Illness?” Dismissing these claims as unscientific, Gilman instead posed a different set of questions: at what point, historically, does racism come to be classified as a form of mental illness? Why? And what are the implications of such a “diagnosis”?
The strange marriage of racism and mental illness traces back to the development of the so-called “science of man” in nineteenth-century Europe. At this time, notably in Germany, the nascent discipline of psychiatry was attempting to win status as a legitimate science. Psychologists turned their attention to the issue of race and sought to clarify the connection between race and morals on the one hand, and mental illness on the other. Across the ocean, the assumption in America was that African Americans’ desire to escape the bondage of slavery was symptomatic of an underlying insanity. In Europe, by contrast, the intellectual discussion about race and mental illness concerned the population of European Jews, who, unlike African American slaves, were able to participate in public intellectual life. The pressing question facing European scientists concerned the reportedly high rates of mental illness among Jews. Scholars debated whether these rates were due to inbreeding or were a consequence of domestication and self-isolation. More interesting, however, is the fact that even Jewish scholars accepted the basic supposition that Jews displayed high rates of insanity. Indeed, some concluded that such illness could only be explained as the result of 2,000 years of persecution. The main point about this period of history, then, is that minority groups such as European Jews and African American slaves were thought to suffer from a universal form of mental illness, and this theory was now supported by a distinctly “scientific” diagnosis.
A major shift then occurred toward the end of the nineteenth century, as scholars began to focus on the oppressor. If mental illness among Jews was indeed caused by a long history of persecution, then what explains racism itself? The question was especially relevant to Jewish thinkers who were trying to understand the factors that might prompt Jews to leave Europe and found a Jewish state of their own. Thus, in a proto-Zionist pamphlet written in 1882 and titled “Auto-Emancipation,” the Russian Jewish physician Leon Pinsker coined the term “Judeophobia” to designate the mental illness not of the oppressed minority, but of the persecutor. Pinsker claimed that Judeophobia was not unique to any one race, but was instead the common inheritance of all peoples who had ever interacted with the Jews—in other words, nearly everyone. He presented Judeophobia as a disease, a hereditary and incurable psychic aberration. This new model of race and mental illness therefore inverted the prevailing view: it was no longer the victim, but the racist, who was crazy.
Whereas the apparent madness of the Jews and African Americans was attributed to their status as races or biological entities, the larger global population of Judeophobes could not be defined in biological terms. To classify the madness of this diverse collective entity, psychologists at the end of the nineteenth century invented the notion of the “crowd.” A parallel was thereby established between race and the (German) crowd. In the early part of the twentieth century, Freud adopted the idea of the crowd into his theory of group psychology and claimed that racism was a prime example of such crowd madness. Moreover, he argued that it was universal and fueled by the tendency of the crowd to see itself as biologically different from other crowds. The idea of racism—and anti-Semitism in particular—as a form of psychopathology became a common view by the 1920’s and 30’s.
But the pendulum swung back yet again as scholars in central Europe (many of whom were Jewish) expressed renewed interest in the status of the victim. Scientists speculated that, as a consequence of racial discrimination, there must be a residual aspect of self-hatred among the oppressed. Not only was racism itself a sign of psychopathology, but the response to racism also came to be viewed as a form of mental illness. Anna Freud wrote about the tendency of the victim to identify with the aggressor. For example, in studying Jewish children who had escaped to America after World War II, she noticed that when they played the game of “Nazis and Jews”—the German equivalent of “Cowboys and Indians”—all the Jewish kids wanted to be Nazis. This tendency, scholars argued, pointed to underlying psychological damage.
To summarize the story to this point: we started with the notion that certain oppressed, biological minorities are by definition mad; the view then shifted to the idea that the racist perpetrators were the ones who were mentally ill; and it ended with the suggestion that the perpetrators’ own psychopathology (i.e., racism) was in fact the cause of the victims’ madness.
Meanwhile, in the United States, these questions moved to the center of debate. Social psychologists, in particular, were among the first to pick up the notion of the victim’s self-hatred as the result of exposure to negative race patterns. Two leading figures in this regard were the American Jewish researchers Eugene and Ruth Horowitz, whose work on black racial identity was adopted by the African American psychologists Kenneth and Mamie Clark in the 1940’s and early 50’s. The Clarks are perhaps most famous for their “doll studies,” in which children were presented with plastic diaper-clad dolls identical in appearance except for color. The researchers were interested in who selected which color doll and found that black children in segregated schools in the South consistently chose white dolls. They concluded from these experiments that prejudice and segregation cause black children to develop a universal sense of inferiority and self-hatred.
Gilman noted that while the doll studies are problematic in a number of respects, the important point about this line of psychological research is that it moved the discussion in the United States from the politics of prejudice to the psychology of prejudice—what Gilman referred to as the “medicalization of prejudice.” The movement initiated by the Clarks had several important ramifications. On the one hand, it played a positive role in the push to end segregation. On the other hand, the NAACP and other civil rights organizations began to build their legal arguments on the doll studies, invoking the universal psychological damage caused by segregation and racism. Such damage, for instance, was behind the reasoning in the famous Brown v. Board of Education decision. In addition, by focusing on the mental injury suffered by the victim, the madness of the perpetrator was forgotten. In short, as psychological evidence became the primary means of influencing jurisprudence in America, the victory of ending segregation came at the cost of defining all African American children raised under segregation as psychologically damaged.
With his closing remarks, Gilman mentioned a powerful counterargument to the “racism as mental illness” theory, first advanced by the political theorist Hannah Arendt in the 1950’s. Arendt made the simple claim that racists are actually normal people—to be sure, they are bad people, but normal people nonetheless. In agreement with this view, Gilman argued that the medicalization of social phenomena and the intense focus on the damage done to the victim are unhelpful because they tacitly exculpate the racists themselves.
During a stimulating discussion following the talk, Gilman went on to highlight the dangers inherent in the claim that victims of racism suffer from universal mental illness. First, the claim is too general; it defines ab initio all members of a group as damaged in exactly the same way, when in reality not all of them suffer from psychopathology. Second, the African American researchers who conducted the experiments present an obvious exception to their own rule of universal damage. Lastly, with respect to the doll studies, the theory cannot explain those white children who chose black dolls. In the end, Gilman made a forceful case against psychologically based arguments against racism that invoke the notion of universal damage: if everyone is “damaged,” the category ceases to be useful.

Wednesday, November 20, 2013

Kozhevnikov: Is There a Place for Cognitive Style?


Maria Kozhevnikov, Is There a Place for Cognitive Style in Contemporary Psychology and Neuroscience? Issues in Definition and Conceptualization
The second CMBC lunch talk of the semester featured a presentation by Dr. Maria Kozhevnikov (Radiology, Harvard Medical School; Psychology, National University of Singapore), who has been a visiting scholar of the CMBC this fall. Dr. Kozhevnikov offered a critical perspective on the current state of cognitive style research within different research traditions, such as cognitive neuroscience, education, and business management. Kozhevnikov opted to treat the lunch talk as an opportunity for group discussion, which promoted a lively dialogue among the participants.
Traditional research on cognitive style began in the early 1950’s and focused on perception and categorization. During this time, numerous experimental studies attempted to identify individual differences in visual cognition and their potential relation to personality differences. The cognitive style concept was thereafter used in the 50’s and 60’s to describe patterns of mental processing that help an individual cope with his or her environment. According to this understanding, cognitive style referred to an individual’s ability to adapt to the requirements of the external world, given his or her basic capacities. Researchers tended to discuss cognitive style in terms of bipolarity—the idea that each style dimension has two value-equal poles. A host of binary dimensions was proposed in the literature, such as impulsivity/reflexivity, holist/serialist, verbalizer/visualizer, and so on. No attempt was made, however, to integrate these competing style dimensions into a coherent framework. By the late 1970’s, a standard definition referred to cognitive style as “a psychological dimension representing individual differences in cognition,” or “an individual’s manner of cognitive functioning, particularly with respect to acquiring and processing information” (Ausburn & Ausburn, 1978).
Kozhevnikov questioned the usefulness of such definitions and pointed to a general lack of clarity with regard to how the term “cognitive style” was employed in early research. Moreover, several participants at the lunch talk noted further problems with the idea of cognitive style. For instance, if “style” is a distinct cognitive category, how does it differ from basic abilities or strategies? What does the concept of style add in this respect? While it is obvious that individual differences in cognition exist, it proved notoriously difficult to determine exactly how styles differed from intellectual and personality abilities. On account of these conceptual difficulties, among others, cognitive style research fell out of favor and virtually disappeared after the late 1970’s, to the point where even mentioning the term in psychology and neuroscience settings has become taboo.
The concept of cognitive style lived on, however, in the field of education, where it quickly became associated with the idea of learning styles. Kolb (1974) defined learning styles as “adaptive learning modes,” each of which offers a patterned way of resolving problems in learning situations. The idea of individual learning styles in turn gave rise to the so-called “matching hypothesis”—the suggestion that students learn better when their learning style is aligned with the style of instruction. Although the hypothesis appears reasonable, it has not found empirical support; studies have not been able to establish that aligning teaching with student styles confers a discernible benefit. It is worth noting, however, that this observation does not rule out the existence of learning styles altogether. Kozhevnikov asked us to consider the martial artist Bruce Lee, who, when asked which fighting style is best, responded that the best fighting style is no style. The point is that it pays to be flexible, that is, to be able to use different styles—in either fighting or learning—in different situations. Learning style instruments have nonetheless become popular in education and tend to combine different style dimensions in ways that can be quite complex.
The business world has also adopted the idea of cognitive style, in the form of professional decision-making styles. In management, researchers have focused intensely on the “right brain/left brain” idea, which is frequently invoked in style categorization. The most popular bipolarity, for example, is that of analytic/intuitive (thought to correspond to the left and right hemispheres, respectively). Kozhevnikov was quick to point out, however, that this theory has no basis in neuroscience. Lastly, in parallel to the learning style instruments used in education, business has incorporated its own instruments for identifying personal styles, the most famous of which is the Myers-Briggs Type Indicator (MBTI).
More recently, beginning in the late 1990’s and early 2000’s, studies in cross-cultural psychology and neuroscience have demonstrated that culture-specific experiences can shape distinct patterns of information processing. Kozhevnikov reported that these “culture-sensitive individual differences in cognition” have been identified at the cognitive, neural, and perceptual levels. Several studies, for example, have explored these transcultural differences in East Asian and Western populations. Researchers identified greater tendencies among East Asian individuals to engage in context-dependent cognitive processes, as well as to favor intuitive understanding through direct perception rather than an analytic approach involving abstract principles. Moreover, these individual differences appear to be independent of general intelligence. At least one participant expressed reservations about such research, remarking that talk of an East-West binary tends to postulate artificial groups (e.g., what exactly is “Eastern culture”?).
Nevertheless, the finding that cognitive style can be represented by specific patterns of neural activity—independent of differences in cognitive ability measures—lends support to the validity of the cognitive style concept. According to this picture, then, Kozhevnikov redefines cognitive style as “culture-sensitive patterns of cognitive processing that can operate at different levels of information processing.”
Assuming this research is on the right track, the next question becomes: how many cognitive styles are there? As we have seen, early studies produced a proliferation of styles and dimensions, which multiplied further with the introduction of learning and decision-making styles. A unitary structure, such as the analytic/intuitive binary common in business circles, fails to capture this complexity. More recent theories have therefore proposed multilevel hierarchical models, which include both a horizontal dimension (e.g., analytical/holistic) and a vertical dimension reflecting different stages of information processing (e.g., perception, thought, memory). On this view, different cognitive styles can operate at different stages of processing.
Building upon this theoretical modeling, Kozhevnikov proposed a model of cognitive style families with orthogonal dimensions. According to this proposal, it would be possible to map all the different proposed styles onto a 4x4 matrix. The horizontal axis comprises four style dimensions: context dependency/independency; rule-based/intuitive processing; internal/external locus of processing; and integration/compartmentalization. The vertical axis corresponds to levels of cognitive processing: perception; concept formation; higher-order cognitive processing; and metacognitive processing. Kozhevnikov suggested that this framework offers a means of categorizing and unifying the array of style types and dimensions—from traditional styles to learning and decision-making styles—into a single matrix whose cells capture both the relevant horizontal and vertical dimensions. A minimal sketch of such a matrix appears below.
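To make the structure of the proposal concrete, here is a minimal illustrative sketch in Python (not presented at the talk). The axis labels are taken from the summary above; the two example placements of familiar style constructs into cells are hypothetical and serve only to show how the matrix is meant to organize previously proposed styles.

# Illustrative sketch of the proposed matrix of cognitive style families.
# Axis labels follow the talk summary; the placements below are hypothetical
# examples, not claims from the talk.

HORIZONTAL_DIMENSIONS = [
    "context dependency/independency",
    "rule-based/intuitive processing",
    "internal/external locus of processing",
    "integration/compartmentalization",
]

VERTICAL_LEVELS = [
    "perception",
    "concept formation",
    "higher-order cognitive processing",
    "metacognitive processing",
]

# Each cell of the 4x4 matrix collects the style constructs mapped onto it.
style_matrix = {
    (level, dimension): []
    for level in VERTICAL_LEVELS
    for dimension in HORIZONTAL_DIMENSIONS
}

def place_style(name, level, dimension):
    """Assign a previously proposed style construct to one cell of the matrix."""
    style_matrix[(level, dimension)].append(name)

# Hypothetical placements, for illustration only.
place_style("field dependence/independence", "perception",
            "context dependency/independency")
place_style("analytic/intuitive decision-making style",
            "higher-order cognitive processing",
            "rule-based/intuitive processing")

# Print the non-empty cells.
for (level, dimension), styles in style_matrix.items():
    if styles:
        print(f"{level} | {dimension}: {', '.join(styles)}")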

References
Ausburn, L. J., and Ausburn, F. B. 1978. Cognitive Styles: Some Information and Implications for Instructional Design. Educational Communication and Technology 26: 337-54. 
Kolb, D. A. 1974. On Management and the Learning Process. In Organizational Psychology, ed. D. A. Kolb, I. M. Rubin, and J. M. McIntyre, 239-52. Englewood Cliffs, NJ: Prentice Hall.

Tuesday, October 15, 2013

Otis & Rochat: Unsavory Emotions and Their Developmental Roots


Laura Otis and Philippe Rochat, Unsavory Emotions and Their Developmental Roots

In the first CMBC lunch of the 2013 Fall semester, Laura Otis (English, Emory University) and Philippe Rochat (Psychology, Emory University) discussed some of our “unsavory” emotions from literary, physiological, developmental, and evolutionary perspectives. Dr. Rochat focused on the early origins and expression of certain emotions in childhood development, while Dr. Otis examined the descriptions and metaphors used to reflect these emotions in literature.

Babies, Shame, and Reputation

As a developmental psychologist, Rochat is interested in studying the origins of “self-conscious emotions” in children. One difficulty with such an undertaking, however, is the preponderance of competing definitions of “emotion” (hundreds, according to some counts). Taking his cue from the Latin verb emovere (“to move out”), Rochat suggested an understanding of emotions as the early public display of mental states. According to this notion, which emphasizes the external, or public, aspect of emotions, it is significant that such affective states are present very early in gestation. For example, ultrasound images show fetuses wearing either smiles or frowns on their faces.

The public display of emotions becomes especially salient when one considers the emotion of shame. Consider three fundamental ideas about human development. First, humans are hyper-dependent upon others (notably upon adults for care and feeding during infancy). Second, humans are a self-conscious species. That is, we are capable of contemplating ourselves as an object of reflection and evaluation. Third, and most importantly, we care about reputation. Indeed, we are fearful of the gaze of others, a fact about our social psychology that fuels the emotion of shame. This last feature of human psychology was captured perhaps most succinctly by Darwin in The Expression of the Emotions in Man and Animals: to be human is to care about reputation.

One question then arises: how does one determine when children develop a sense of reputation? Rochat suggested that the minimal prerequisite for evaluative self-consciousness and a sense of reputation begins with an awareness of the self/world distinction. Although this observation may seem obvious enough, earlier psychologists such as William James and Jean Piaget believed that babies were born into a state of fusion with the environment (what James famously called a “blooming, buzzing confusion”); they were not thought to have a clear distinction between the self and the world. More recent research, however, suggests that infants do in fact have an implicit sense of a unified and differentiated self at birth, distinct from the world around them. In particular, Rochat’s research indicates that babies are born feeling differentiated from and situated within the world; they are active agents who are bounded (they feel a body envelope), substantial (they occupy space), and dialogical (they have an interpersonal sense of self).

Rochat presented evidence for the progression of the explicit sense of self at different stages of development. For example, at three and four months of age, children begin to test the limits of their own bodies. They grasp for objects within reach (but not those out of reach) and systematically explore the actions of their own legs, hands, and even voices. This reflects an implicit sense of self whereby children situate their own body in relation to differentiated objects in their environment. By eleven months of age, children begin to identify their own sense of self in others. For example, babies prefer to orient their attention toward an experimenter who is imitating their actions, rather than toward a second experimenter who is not. By twenty-one months, children pass the familiar mirror self-recognition task by noticing and attempting to remove a mark placed on their faces after viewing themselves in the mirror. The relevance of this finding is not simply that children notice an abnormality on their body, but that they display a sense of embarrassment—they feel stigmatized by this realization. However, in settings where all experimenters don similar stickers (thereby creating a social norm), the tendency to reach for the mark drops significantly. The final stage of development that Rochat explored dealt with the masking of emotions. An interesting shift occurs between three and five years of age, when children learn not only to express their unsavory emotions, such as guilt or shame, but also to mask or conceal them.

“Banned Emotions” in Fiction

Otis first shared a few words about an undergraduate course that she offered in the Fall of 2012, titled “Cognitive Science and Fiction.” Sponsored by the CMBC, this course juxtaposed innovative literature and cutting-edge scientific studies. The goal was twofold: to analyze the ways that scientific observations about the brain can enhance our understanding of how good literature works, and to use literary insights about the mind to think of new scientific experiments to try. Across the semester, a number of distinguished guest speakers—both fiction writers and scientists—visited the class to discuss their work.

While Rochat examined the projection of emotions outward, Otis focused on how the experience of different unsavory emotions is expressed from the inside. One way to address this question is by turning to the world’s rich literary tradition. Otis’ presentation sketched the blueprint for a new project on what she calls “banned emotions.” These include self-pity, prolonged crying, repressed and enduring rage, envy, personal hatred, grudge bearing, refusal to forgive, and refusal to “let go.” Sianne Ngai analyzed some of these emotions in her recent book, Ugly Feelings. What unites these banned emotions is the fact that people indulge in them, much to the disapprobation of a given society. Like Lakoff and Johnson, Otis is interested in how these emotions are represented in language and how language can shed light on human experience. For instance, how does an author’s choice of words reflect both the physical experience of certain emotions and the cultural assumptions about them? How do the two interact to create an emotional experience? A satisfactory account should consider the complex mixture of cultural and physiological factors. To that end, Otis shared some research questions that will guide her inquiry. For example, are these banned emotions simply negative, or is there potential for them to be viewed as empowering under certain circumstances? Similarly, whose interests does it serve to discourage these emotions? On this last point, there may be significant gender disparities worthy of consideration. Thus, one must be sensitive to the politics of emotions and the strong social pressure that compels individuals to feel certain emotions and not others. Whom do social expectations harm, and whom do they help?

The research strategy Otis plans to pursue for this project is straightforward: read lots of literature! Included on the preliminary list are writings by Dante, Dickens, Dostoevsky, Kipling, Kafka, and Virginia Woolf, but the list will include newspaper and scientific articles as well. Moreover, Otis intends to use the tools of the digital humanities to mine a wide array of texts, with an eye toward discovering which words are most commonly paired with so-called banned emotions. The goal of this extensive analysis is to attend to the specific word choices and larger metaphors used to describe unsavory emotions.

A long (western) religious tradition of banning disagreeable emotions is epitomized by the proverbial Seven Deadly Sins. These emotions often are associated with a sense of disempowerment. For example, Dante’s Inferno depicts the plight of the wrathful in Hell, who are doomed to writhe in the muddy River Styx for eternity. If we turn to literary characters, interesting candidates for emotion analysis include Dickens’ nasty old Miss Havisham and Kipling’s haunting Mrs. Wessington. In addition to great literature, film offers a repository of emotional narratives. Otis mentioned the lead female character in the movie G.I. Jane, who succeeds precisely because she refuses to feel self-pity. More recently, the central message of the comedy Bridesmaids is “stop feeling sorry for yourself!”

Otis closed her talk by highlighting a handful of recurring metaphors that are employed in connection with banned emotions. One is the notion of immobility or impeded action—being “bogged down” or “chained up” (think of Dante’s poor souls in Hell). This metaphor is frequently accompanied by slime, mud, and filth, all of which hinder one’s motion. Another related example is the common trope of “holding on” or “not letting go.” The social stigma associated with this metaphor is pervasive in self-help magazines today. Other literary descriptions include a sense of confinement, isolation, lack of air, and darkness. According to Otis, certain assumptions underlie this pattern of metaphors. Specifically, the implied psycho-social model of emotions is one that stresses an ethic of individual responsibility—“if you’re suffering, it’s your fault.” Accordingly, an individual is expected to exercise proper control over his or her emotions; failing to do so means disappointing others by failing to act as a responsible member of society. Otis intends to explore further the question of whose interests are served (at the expense of others) by perpetuating this model of emotions.

Many thanks go to Philippe Rochat and Laura Otis for their presentations, which were both stimulating and (if I’m allowed a pun) quite savory.

Tuesday, April 2, 2013

Fivush & Ozawa-de Silva: Narratives, Self-Transformation, and Healing


Robyn Fivush & Chikako Ozawa-de Silva
Narratives, Self-Transformation, and Healing

In the third CMBC lunch in three weeks, Robyn Fivush (Psychology, Emory University) and Chikako Ozawa-de Silva (Anthropology, Emory University) offered differing, albeit complementary, perspectives on the role that telling one’s story can play in healing psychological wounds.

Naikan and Narrative

Ozawa-de Silva spoke about the transformation that patients undergo in Naikan, a Japanese therapeutic practice, which employs a method of eliciting narrative reconstruction of past events.  Naikan stands out from other forms of narrative-based therapy in that it does not involve any active social interaction.  It proceeds entirely between patient and practitioner.  “Naikan” means inner looking or introspection.  A therapeutic session takes place over the course of a week, during which time the patient undertakes an examination of past acts and deeds from the third-person perspective.

The shift in perspective from the first- to the third-person is the crucial component of Naikan therapy.  The shift is initiated by a series of structured questions designed to help patients gain a better understanding of themselves, their relationships, and the fundamental nature of human existence.  The client is asked to choose a significant person in their life to focus on (typically starting with the mother) and then to record answers to the following three questions: (1) what did you receive from this person? (2) what did you give in return? and (3) what trouble did you cause this person?  Note that there is not a fourth question about how this person caused trouble for the client.  That is because focusing on how we have been troubled comes easily to most of us, and runs counter to the shift away from the first-person perspective that Naikan intends to facilitate.

In order to foster deep introspection, during the treatment period the client is not allowed to talk to anyone, read, or watch television.  One signal of transformation over the course of the week is the client’s increasing use of the passive voice.  Instead of saying “I love…,” for example, the client will begin to say “I am loved.”  This shift indicates that the client has become more reflective and begun to think from the other person’s perspective.  Although the shift in perspective does nothing to alter external circumstances, it does alter how the client views those circumstances, and can make them less painful.  Ozawa-de Silva noted that for this reason Naikan is described as a “cure without curing.”

Ozawa-de Silva identified three positive changes typically undergone by the time Naikan therapy terminates.  (1) Increased sense of connectedness with others.  Through their reflections on the roles that other people have played in their lives, clients come to see that they are not as “self-made” as previously thought.  (2) Increased sense of self-acceptance.  Many clients start Naikan therapy suffering from low self-esteem.  Naikan helps them to gain a more balanced perspective on themselves.  People start to accept their imperfections more, because they feel relief that they have friends and relatives who accept them for who they are.  (3) Greater sense of gratitude.  This point is connected to the first two.  Naikan reflections highlight the ways in which other people have played important parts in the client’s life, leading to a sense of appreciation of others’ roles.  According to Ozawa-de Silva, these upshots of Naikan bear a close relation to the three main categories of positive mental health: emotional well-being; psychological well-being; and social well-being.  The positive effects of Naikan are measurable even months after the end of the therapeutic session.

Is Storytelling Always Therapeutic?

Narrative clearly plays a central role in the therapeutic successes of Naikan.  In American culture, too, it is widely held that talking through negative experiences can have therapeutic import.  Fivush’s presentation sought to interrogate this widely held view and to highlight some potential limitations of storytelling for therapy.  She agreed that narratives are instrumental in the formation of identity, but insisted that it should not be taken for granted that storytelling is always therapeutic, particularly when the subject’s experiences were especially traumatic or stressful.

Fivush began by looking at some of the evidence in favor of the therapeutic benefits of storytelling.  In the 1970s, David Spiegel (Stanford medical school) designed a study to investigate the potential benefits of support groups for terminally ill women.  At the time it was believed that talking about their symptoms and pain would merely create stress for patients and their families.  On the other hand, there was a growing interest in how best to care for the “whole patient,” including emotional and psycho-social well-being.  In order to test his hypothesis that it would be helpful for terminally ill patients to talk to and provide support to one another, Spiegel assigned 50 or so women to support groups of 10-12 women each, with 30 or so left out to serve as a control.  Initially, no differences in diagnosis or symptoms were detected.  After a year, however, the women in support groups reported doing better on quality-of-life measures (such as stress and anxiety levels), even though there was no difference in their physical health compared with the control group.  More strikingly, after 10 years, it was possible to conclude that on average the women in support groups had survived a year and a half longer than those in the control group.  This suggested that the support groups worked to increase longevity.

Unfortunately, these results were a fluke.  Hundreds of similar studies have since been carried out, and none has generated Spiegel’s results.  This is a case, Fivush explained, where a sensational finding that has failed to replicate has become entrenched in the public mind anyway.  Even though there is no good evidence for the claim, it is widely believed that support groups enhance survival rates.  Even if they have no effect on survival, however, it is true that support groups improve quality of life. 

What is it about being in a support group that improves quality of life?  According to Spiegel, there is something critical about sharing your story.  The social psychologist James Pennebaker knew about Spiegel’s study and undertook to investigate further the therapeutic power of storytelling.  In particular, Pennebaker was interested in the effects that “expressive writing” might have on well-being.  He randomly assigned a group of college freshmen to an expressive writing assignment in which they were asked to write about their deepest thoughts and feelings for twenty minutes a day for three days in a row.  Another randomly assigned group of students was given a non-emotional writing assignment, and a third group wrote nothing.  At the end of the semester, Pennebaker took measures of psychological and physical health.  Freshmen involved in expressive writing were found not only to report higher levels of psychological well-being (less stress, anxiety, etc.) but also to have higher GPAs.  Since Pennebaker’s study, the effects of expressive writing have been widely studied, and the general findings are robust.  Even improvements in physical health, such as T-cell functioning, have been correlated with expressive writing.  Such studies provide strong evidence that talking about our experiences is good for us.

Fivush cautioned, however, that while storytelling is therapeutic on average, there are some individuals who actually get worse when they share their story.  For many people, storytelling contributes to cognitive processing of negative experiences.  It is not that such individuals learn to see their negative experience as a good thing; rather, as in Naikan, narrating enables them to take a different perspective on their life and provides a sense of growth.  But Fivush stressed that narrative does not facilitate cognitive processing in everyone.  She gave the example of 9/11 first responders, who were required, in order to keep their jobs, to talk about their experiences as a way of dealing with the trauma they had undergone.  For many of these individuals, sharing their stories did not help them at all; on the contrary, it served merely to re-open their wounds.  Certain individuals do not show signs of cognitive processing when reconstructing the story of a traumatic event.  Rather, they exhibit what is called “rumination.”  Instead of processing what happened to them in a productive way, some people are led by narration to dwell on the negative experience in a counter-productive way.  They wonder, for instance, why it had to happen, or even come to the conviction that life is awful and can never be the same again.

Although there is solid evidence that storytelling helps most people in most scenarios deal with negative experiences, Fivush’s main point was that this should not be taken for granted in all cases.  For some people, sharing their stories may even have a detrimental effect on their well-being.  It is not clear why some tend to engage in rumination rather than cognitive processing.  It may be that those with worse childhoods tend to be more ruminative.  Whatever the reason, such counter-examples to the benefits of narration need to be taken into consideration.  They indicate that there is more to learn about exactly who benefits from narrative therapy and why. 

On this question of who benefits and why, an interesting discussion ensued in the Q&A on gender differences and the benefits of narrative.  Fivush pointed out that expressive writing has been shown to be more effective in improving the well-being of males.  The reason for this is interesting: females naturally tend to be much more emotionally expressive than males, so the exercise in expressive writing does not have as sharp an effect on them as it does on males, who tend to lack outlets for emotional expression.  In our culture, more value is placed on talking about past experiences if you are female.  By adolescence, girls begin to co-ruminate with girlfriends.  This can actually lead to negative consequences for health, however, since rumination, as indicated above, is a risk factor for depression.

Conclusion

Ozawa-de Silva and Fivush addressed the question of the role of narrative in therapy from very different perspectives, and emphasized contrasting points.  On one hand, Ozawa-de Silva talked about the successes of a very specific form of narrative therapy – Naikan.  On the other, Fivush raised questions about the benefits of narrative in therapy from a general standpoint.  Nevertheless, there was no disagreement about the fact that, by and large, storytelling is a powerful therapeutic technique.  It is just that, for some people, reconstructing the story of a traumatic event can lead to unhealthy rumination rather than to healthy cognitive processing.  Ozawa-de Silva suggested that Naikan’s specific focus on fostering cognitive processing may therefore be an important factor in its success.  The need to be sensitive to this distinction between situations where narrative leads to unhealthy rumination and situations where it leads to healthy cognitive processing was perhaps, then, the chief takeaway from the lunch.

 

Tuesday, March 5, 2013

Luhrmann: Hearing Voices in California, Chennai, and Accra


Tanya Luhrmann
Hearing Voices in California, Chennai, and Accra

The second CMBC lunch of Spring 2013 was a bit unusual in that it featured only one speaker.  Tanya Luhrmann (Anthropology, Stanford University), known for her work on modern-day witches, evangelical Christians, and psychiatrists, shared some of the findings of her recent research on the auditory hallucinations of schizophrenics.  Her guiding question was: does the experience of hearing voices shift across cultural boundaries?

The presentation began with an audio clip designed to approximate the experience of someone hearing distressing voices.  The eerie clip consisted of a background of ambiguous murmurs and whispers which resolved into discernible words and phrases, such as “Don’t touch me”; “Stop it”; “I came for you.”  A majority of schizophrenic people hear voices, but is the experience the same across cultures? 

To answer this question, Luhrmann conducted a comparative study of three groups of twenty schizophrenic people from three culturally distinct places: San Mateo, California; Accra, Ghana; and Chennai, India.  Results were drawn from participants’ responses to standardized questions about the nature of their auditory hallucinations.  Participants were asked questions like: How many voices do you hear?  How real are the voices?  Do you recognize the voices?  Do you have control over the voices?  What causes the voices?  And so on.  On the basis of these interviews, Luhrmann identified some clear differences in the experience of auditory hallucinations across cultures, which she described in her presentation.

Americans with schizophrenia, in general, explained Luhrmann, see themselves as suffering from mental illness, are comfortable with the identification as “schizophrenic,” and have a sophisticated understanding of diagnostic criteria.  The voices they hear are most often violent, issuing commands like “cut off their head” or “drink their blood.”  Positive experiences of the voices are seldom reported.  The voices tend to be unknown and unwelcome, and most American participants reported little interaction with them.

The study suggested that Ghanaians, for their part, rarely talk about hearing voices in terms of schizophrenia, but rather tend to give the voices a spiritual interpretation.  Many Ghanaians were reluctant to talk about their mean voices, which they often regarded in demonic terms, because another voice, identified as that of God, had warned them against paying any attention to them.  Luhrmann noted that, compared to Americans, Ghanaians exhibited a more interactive relationship with their voices, and in about half the cases considered their voices to be positive.

Finally, in the case of the Indian participants, two distinctive features stood out.  First, the voices were more often identified as those of kin than was the case with Americans and Ghanaians; second, the voices, whether identified as good or bad, often consisted of mundane practical injunctions, such as not to smoke or drink, or to eat this or that food.  In these cases Luhrmann noted even more interaction, which often took the form of a playful relationship with the voices.  Luhrmann described the relationship that one woman had with the voice of the Indian god Hanuman.  The voice initially told her to do despicable things, like drinking out of a toilet bowl, but eventually the relationship became such that she would have parties with Hanuman, they would play games together, and she would tickle his bottom.  Luhrmann added that the voices the Indian participants heard talked about sex more often than did the voices in the other two groups.

Having outlined some striking differences in the auditory hallucinations of schizophrenics across cultures, Luhrmann went on to speculate about the reasons for the differences.  Her central proposal highlighted differences in local theory of mind.  At some point between the ages of 3 and 5, children come to learn that others’ behavior can be explained in terms of what is going on in their minds.  When children come to learn in this way that other people have minds, developmental psychologists say they have developed a “theory of mind.”  Luhrmann explained that while theory of mind per se is universal, there are cultural variations in how the mind is understood.  In the case of Americans, Luhrmann suggested, there is a sense that the mind is a place, and that it is private.  The experience of foreign voices is consequently viewed as intrusive and unwelcome.  In Accra, by contrast, according to Luhrmann, the boundaries between mind and world are seen as more porous.  Many Ghanaians believe that thoughts can hurt people whether they are intended to do so or not – a belief consistent with the prevalence of witchcraft in Ghanaian culture, and with the emphasis on keeping thoughts clean, for example, in prayer.  Luhrmann explained how this view of the mind as porous is seen in Chennai as well, where it is commonly believed that seniors should know what juniors are thinking.

In closing, Luhrmann reflected on the significance of her findings, singling out two upshots in particular.  First, before Luhrmann’s work, most psychiatrists had not considered that auditory hallucinations might differ significantly across cultures.  The second, and perhaps more consequential, contribution concerns the treatment of schizophrenic patients.  Luhrmann mentioned some pioneering clinicians in Europe who argue that the auditory hallucinations of schizophrenics could be rendered less distressing if patients were taught to interact with their voices.  In Chennai, in general, schizophrenia has a more benign course than it does, say, in the United States.  There are a number of possible explanations for this, including the fact that patients generally remain with their families, and that there is little stress on the diagnosis of mental illness.  But part of the explanation could well be the interactive relationship with voices that Luhrmann’s research found to be a feature of Indian schizophrenic experience.  If so, then Luhrmann’s work provides support for the European theory and points the way to an effective form of treatment.  To be clear, even when people report positive experiences of their voices, schizophrenia usually remains an unpleasant affliction.

As a philosopher, I find the notion that interacting with hallucinatory voices may be palliative and, hence, encouraged by psychiatrists rather curious.  I say “as a philosopher” because philosophers have tended to hold some variation on “the truth sets you free” theme.  Spinoza, for example, thought that when we understand the causes of our afflictions, they afflict us less, because they are more in our power.  To encourage interaction with hallucinatory voices seems, at least, tantamount to encouraging a form of magical thinking.  If Chennaians can have positive experiences of voices and their schizophrenia is more benign as a result, then there may well be reason to fear that any “cure” might be worse than the disease.  It is not so clear that Americans, however, could so easily strike up the kind of interactive relationship with their voices that Chennaians maintain, if, as Luhrmann contended, Americans’ view of their voices is tied to a more fundamental theory of mind, which might be both difficult and undesirable to uproot. 

The reason I imagine it might be difficult is the same as the reason it might be undesirable: alternative theories of mind might simply be false, or at least inferior qua theories.  Even if there is much about the mind we do not know, surely there is something true about the belief that thoughts are only in our heads; and surely the notion that thoughts by themselves can cause harm to other people is not as tenable as the notion that the mind cannot influence the external world on its own.  Perhaps Americans with schizophrenia could come to interact with their voices in the same way that a parent might play along with their child’s personification of a stuffed animal.  And perhaps such a playful comportment would be palliative.  But then the difference is not in theory of mind, since, presumably, parents do not change their conception of how minds work when they play make-believe with their children; rather, the change is in the attitude adopted.  Would pretending that the voices are real, i.e., play-personifying them, suffice to render their presence sufferable, even a positive experience?  Or does such an interactive comportment with the voices only truly work if it is rooted in a theory of mind whereby such pretense is unnecessary?

There is some prima facie reason to think that merely pretending that the voices are real so as to facilitate a more interactive relationship might actually help.  Even though it would not constitute or effectuate Spinoza’s prescription to understand the causes of an affliction, it might still bring about the desired result of bringing the affliction more within one’s own power.  Treating the voices as objects of play would subject the voices to rules of one’s own making, and this element of control might mitigate the distress of hearing voices.