By Mathew Goldstein
Neuroscientists at the Brain and Creativity Institute at the University of Southern California (USC) Dornsife College of Letters, Arts and Sciences watched the brains of 40 self-declared liberal students in a functional MRI scanner. They compared whether, and how much, people change their minds on non-political and political issues when presented with counter-evidence. During their brain imaging sessions, participants were shown eight political statements that they had said they believed just as strongly as a set of eight non-political statements. They were then shown five counterclaims challenging each statement.
People flexibly react to changes in their environment. If a sidewalk or road is blocked, we have no difficulty understanding that we need to find a different route to our destination. But we are not consistently flexible in our reasoning, particularly with regard to beliefs that we link to our self-identity. Instead of prioritizing the best fit with the overall available evidence, we may react negatively to evidence that conflicts with our identity-linked beliefs, much as we react negatively to a threat.
My response is this: We cannot trust our intuition, or anything mostly rooted in intuition, such as faith or hope, to answer the big questions about how the universe functions, because the answers to the big questions are mostly non-intuitive or counter-intuitive. So it is a mistake to rule out anything a priori, or to rely only on logic not anchored in evidence. It is inconsistent for an assertion to be simultaneously true and false. Therefore, given that X (naturalism) is true, it is probably also the case that the opposite of X (supernaturalism) is impossible. But the impossibility of X being false when it is true is not a proper justification for concluding that X is true; we still must justify our conclusion regarding X. To justify the conclusion that it is impossible for X to be false, we paradoxically should consider what is missing that would be required to properly justify a conclusion that X is false.
Ruling out a priori even the identification of what would qualify as missing evidence favoring alternative conclusions is bad epistemology. Fairly considering what is needed to justify a conclusion entails also considering what would be needed to justify a contrary conclusion. Our justification for reaching a particular conclusion about how the universe functions is incomplete if we cannot identify the missing justifications for concluding otherwise. When a conclusion is consistently supported by an abundance of highly diversified, interconnected, and direct empirical evidence, it becomes unlikely that the available evidence will change so drastically as to favor the contrary conclusion, so we need not worry that our beliefs will be unstable if we allow the evidence to dictate our conclusions. When a conclusion is inconsistently supported by rare, narrow, unconnected, and indirect non-empirical evidence, then we should not have a strong commitment to that conclusion. Either way, there is no harm in identifying what evidence is missing that would change our conclusion if it were found.