Philip R. Sullivan

Shortly after the terrorist attacks on Sept. 11, I went to an automobile garage and was greeted by a bearded man of Middle Eastern appearance who asked in a foreign accent how he could be of help. Rather abruptly, I was seized by a disagreeable feeling, along with the urge to say “Forget it.” I doubt that readers will be totally surprised at my less than admirable response, and I mention it now only to emphasize the point that my “ethnic profiling” had evoked an aversive response that was both strong and totally automatic.

Corrective feelings jumped onto the stage a moment later, so I managed to override my initial reaction; but its spontaneous occurrence emphasizes that some forms of “profiling” are unavoidable, because that is the way our brains work. To illustrate the basic mechanism: a small child touching the proverbial hot stove learns as a result of this one highly painful experience to avoid similar situations in the future. The brain has forged a strong aversive link directly, without any need to reason its way to the conclusion.

In fact, even if the child were capable of applying our logical powers correctly to each such problem as it arose, that procedure would not get the job done—the job of adapting the human organism to our surrounding world. Drawing rational conclusions about the world depends on inference, and fully rational inference requires adequate sampling. But ironically, if children were to follow this sort of rational sampling strategy, they would be unlikely to make it through to adulthood.

Instead, when a child’s sensory apparatus registers something extremely painful, the brain’s limbic system earmarks prominent aspects of the situation for retention in memory and connects it with negative feelings and strong aversive tendencies. The result: one experience with a hot stove is sufficient to trigger long-lasting avoidance in a normal child.

This rough and ready warning system is hardly unique to our human species. It is already well developed in creatures as lowly as the sea slug. And by the time one moves up the scale of biological complexity to mammals, the apparatus reacts not only to direct stimulation of pain receptors but to any situation perceived as extremely dangerous. If a rabbit grazing in the middle of a sunny meadow narrowly escapes a dog, it will subsequently avoid the area, munching instead on lesser tidbits near the meadow’s brush border.

If the predatory dog was present only on that one occasion, the rabbit’s subsequent behavior would be suboptimal. But if the dog remained in the neighborhood, and the rabbit sampled the situation one additional time in order to acquire more data, it might not be alive for a third trial. The rough and ready response system has proved superior in terms of overall survival, despite the fact that its conclusions are statistically less valid than those based on numerous trials. The human nervous system has retained this ancient system, and because our brain’s sophisticated computational systems encode information of a complex symbolic nature, they have even extended the range of application of this early warning system.

Returning now to my own spontaneous response: I had viewed the horror of the attack on the twin towers and had seen pictures of terrorists like Osama bin Laden and words like “Arab” and “Islamic Jihad” placed in immediate proximity to the scenes of the destruction. Furthermore, while there was but one twin-towers event, electronic imagery provided numerous reruns of the horrendous devastation. The result: my brain directly linked the extremely painful event with a visual profile and associated words.

When animals have been conditioned through one highly painful experience, the behavioral change tends to continue unabated (the rabbit may permanently avoid the sunny center of the meadow). Fortunately for us, however, the human brain’s complexity involves higher-level systems that monitor the activity of lower-level systems, allowing for much greater subtlety of response.

Note, for instance, that my immediate “gut reaction” was quickly countered by my recognition that following its lead was unlikely to be appropriate to the reality of the moment. Further reflection—and there was time for this—buttressed this initial intervention of my self-monitoring systems on a variety of fronts. I knew, for instance, that facial characteristics would be a poor guide to identifying danger in the case at hand (though such characteristics might, under higher risk circumstances, provide a statistical guide with somewhat greater than chance probability). And my sense of fairness, partly innate and partly learned, also sprang into action, so I noted that it would be quite unjust to label a man evil simply on the basis of his facial appearance.

Additionally, my memory contributed instances in which the brain’s tendency to overgeneralize led to initial error. When the Federal Building in Oklahoma City was bombed, for example, there was a strong tendency to think: Middle Eastern terrorists! What we eventually learned, of course, is that very angry people like Timothy McVeigh, who combine their rage with some overarching “principle of justification,” exist in all cultures. Such was no doubt always the case, but modern technology permits the actual execution of grandiose rage fantasies—fantasies that were previously not translatable into real-world activity.

And mention of a “principle of justification” raises another point, one that involves our built-in wariness of strangers. When John Salvi killed two abortion clinic workers in Brookline, Mass., on Dec. 30, 1994, his stated conviction was: “Those who participate in the murder of unborn children deserve to be murdered themselves.” He was a Catholic, and he insisted that his religion justified his terrorist action. But within the Catholic culture of Boston, observers could readily differentiate this “end justifies any means” approach from the moral doctrines actually held by his religion.

Note how different the situation is when an unfamiliar culture is involved. Most people in our country have needed to be newly educated about Islam (until very recently a “foreign” religion) in order to develop the same sort of distinction between that religion’s central doctrines and their deplorable distortion at the hands of murderous extremists. Just as John Salvi was a terrorist using the umbrella of Christianity, the extremists of twin-towers infamy were using the umbrella of Islam. For those who grow up within a given culture, such a distinction comes automatically; for those growing up outside the culture, the distinction must be learned. Further, there exists a lag between learning the distinction conceptually and feeling the distinction in one’s gut, so to speak.

The good news, then, is that our higher-level monitoring systems regularly provide us with the opportunity to modulate the errant tendencies of our rough-and-ready rapid-response systems. The bad news is that none of our computational systems are free from error—and that includes our meta-level monitoring apparatus as well.

Let me provide a lightweight illustration. Several years ago, my youngest daughter, while working at the Albert Einstein Medical Center in New York, lived in an apartment in a downscale part of the Bronx. Returning late one evening, she reached her stop near the end of the line and started walking home. On her way, she noticed a group of young African-American men hanging around, and she experienced an immediate gut reaction: change your course so you do not walk directly in their way. Her own self-monitoring systems jumped into action an instant later, and she ended up criticizing herself for this obvious act of “racial profiling,” which in turn enabled her to compensate for her initial gut response by continuing on her regular route. This gave one of the young men an easy opportunity to grab her shoulder bag, and the group took off. Impulsively, she took off after them, yelling at the top of her lungs: “Stop them; they stole my bag!” Fortunately, she was not able to catch up to them.

In this instance, it was, ironically, her higher-level apparatus that focused far too much on one sensory detail of little specific relevance. For if one is walking alone late at night (profile item), in an economically downtrodden district (profile item), in an inner-city area (profile item), and a group of young males (profile item) are hanging around, prudence would dictate that one avoid their close proximity—whether their skin color happened to be red, orange, yellow, green, blue, violet, black or brown. But once my daughter had homed in excessively on the single characteristic of skin color, her sense-of-fairness program jumped into action inappropriately. I speak of the event as lightweight because she learned a valuable lesson for the low price of a few dollars. But if one of the men in that group had been really bad, he probably would have used her yelling as “justification” for doing her physical harm. So in that respect, I felt only relief when I heard about the incident.

To conclude, we not only cannot avoid “profiling,” we should not even want to disrupt totally this useful adaptive mechanism that our species has inherited. We should, however, monitor our rapid-response profiling, and we should also monitor the initial monitoring of our basic response tendency. Applying this notion to the case of our government’s regulatory and policing agencies, we should keep in mind that profiling will always occur. Better then that we apply consciously articulated profiling criteria that can be explicitly regulated to avoid unfairness to individuals—balancing this need, of course, with the need of all one’s fellow citizens for appropriate protection.

Philip R. Sullivan, M.D., is an assistant clinical professor of psychiatry at Harvard Medical School in Boston, Mass.