Patricia S. Churchland is a key figure in the field of neurophilosophy, which employs a multidisciplinary lens to examine how neurobiology contributes to philosophical and ethical thinking. In her new book, “Conscience: The Origins of Moral Intuition,” Churchland makes the case that neuroscience, evolution, and biology are essential to understanding moral decision-making and how we behave in social environments.
The way we reach moral conclusions, she asserts, has a lot more to do with our neural circuitry than we realize. We are fundamentally hard-wired to form attachments, for instance, which greatly influence our moral decision-making. Also, our brains are constantly using reinforcement learning — observing consequences after specific actions and adjusting our behavior accordingly.
Churchland, who teaches philosophy at the University of California, San Diego, also presents research showing that our individual neuro-architecture is heavily influenced by genetics: political attitudes, for instance, are 40 to 50 percent heritable, recent scientific studies suggest.
While some critics are skeptical that neuroscience plays an important role in morality, and argue that Churchland’s brand of neurophilosophy undermines widely accepted philosophical principles, she insists that she is a philosopher, first and foremost, and that she isn’t abandoning traditional philosophical inquiry as much as expanding its scope.
For this installment of the Undark Five, I spoke with her about how we develop social values, the similarities between human and animal brains, and what is going on in the brains of psychopaths, among other topics. Our conversation has been edited for length and clarity.
Undark: You write that for years, other philosophers “bashed me for studying the brain.” What kind of resistance did you face when you brought neuroscience into the study of morality? And has that resistance changed over time?
Patricia Churchland: As a philosopher, when I started to study neuroscience, I wasn’t particularly interested in morality. I was interested in the nature of knowledge, perception, decision-making, and consciousness. The resistance at the time was that philosophers largely believed that philosophers were going to use “conceptual analysis,” or words, to figure out the scope and limits of science. And once philosophers had laid down those foundations, maybe science could do its work.
In particular, they thought that when it came to the nature of the mind, there were certain things philosophers understood that scientists — poor things! — did not.
Many philosophers thought it was impossible that neuroscience would ever be able to tell us anything about the nature of the self, or the nature of decision-making, or the nature of how we represent the world. They thought that my rather different approach — which was that, in the fullness of time, neuroscience was going to have a lot to say about all of these things — was not only wrong, but it was stupid, and it was undermining philosophy.
I think what has shifted is that the older guys are retiring, and the younger people can see the value of interdisciplinary work.
UD: Can you talk about how you became interested in morality, and found your way to neuroscience?
PC: Several lines of evidence conjointly convinced me that, yes, we could understand something — maybe a lot — about the neurobiological basis for moral behavior.
First, the animal studies of researchers such as Frans de Waal and Shirley Strum clearly showed that other primates were highly social, liked to be together, showed empathy for the suffering of their kin, shared food, and cooperated. Like wolves — and like us. Moreover, this was almost certainly true of ancient hominins such as Homo erectus and Homo neanderthalensis, who lived in stable groups, made stone tools, and used fire. These data strongly imply an evolutionary basis for the foundational features of social behavior in mammals.
Second, anthropologists studying existing hunter-gatherer and scavenger groups showed that these groups had stable customs and a morality suited to their ecology, even though they never took a philosophy class.
The third line of evidence involved the study of neurohormones, such as oxytocin, which is central to the regulation of attachment and bonding between individuals. Though many neurochemicals play a role in social behavior, oxytocin turns out to be essential for attachment, which then ushers in a whole range of behavioral features, such as learning local norms and customs.
UD: If human brains are basically wired the same way, how is it that people reach different conclusions when it comes to moral decision-making?
PC: While it’s true that the macro-circuitry is very similar from one person to another — which is why we’re more like each other than any of us is like, say, a vervet monkey — through learning and through very deep differences in temperament we can come to different conclusions about things.
Some of us are extroverted, some are introverted, some are prone to anxiety, some are happy to take risks — and some of those temperamental features also play into how we interact socially and, hence, how we make our moral judgments.
So those things all matter, as well as the experiences we have. Experience changes the microcircuitry, at the level of the single neuron and how one neuron talks to another. Those are really the two domains relevant to differences in moral judgment.
UD: Do animals with smaller, but similar, brains to humans possess a conscience?
PC: It’s a really interesting question. Only a few years ago, many philosophers and some ethologists — people who study animal behavior — would say, “Oh no, they don’t have anything like that!”
But de Waal, a well-known ethologist, has become convinced that when we really look at the behavior of highly social mammals, we can see very clearly that there are things like feeling bad that you did a dreadful thing. Animals do feel a kind of guilt when they do something inappropriate.
There is no doubt that when a dog does something that it knows it’s really not supposed to do, and it’s caught, it does pretty much what we do — its head goes down, it hunches over, its tail goes between its legs, and it skulks off. That guilt behavior persists as it tries to reconcile or ask for forgiveness.
De Waal believes there are no unique human emotions. We share those deep emotions, because they’re part of the subcortical structures, with all mammals. Because humans have language, we may be able to give fancier nuances, but at the same time, the emotions themselves, the circuitry supplying those emotions, is basically all the same.
UD: Some people, like psychopaths, don’t exhibit the typical signs of having a conscience. What do we know by looking at their brains?
PC: The occurrence of psychopaths is deeply disturbing — and, at the same time, fascinating. It's estimated that about 1 percent of the population has this deficit. The deficit consists of the absence of feelings of guilt, shame, or remorse, and the inability to form strong, long-term attachments, such as those between parents and children, or between mates or friends.
The wiring just isn’t there. What are the structures that are implicated here? We know they’re in the front of the brain and they relate to the reward system and to the emotional structures that are part of the ancient subcortical wiring.
Having said that, when you do an MRI scan of a psychopath's brain, you don't see any kind of hole; you don't even see consistent differences in the activity of very particular areas. This suggests that the difficulties are at the micro-structural level — that they have to do with individual neurons and the networks that those neurons form.
It’s going to be hard to access that level in order to understand psychopathy, but I think in the fullness of time, it will be done.
Hope Reese is a writer and editor in Louisville, Kentucky. Her writing has appeared in Undark, The Atlantic, The Boston Globe, The Chicago Tribune, Playboy, Vox, and other publications.