
Why Ask Questions?

by Walter Sinnott-Armstrong, Scott Brummel, Joshua August Skorburg, and Jordan Carpenter (Duke University)


Political polarization is rampant in the United States and around the world. This epidemic undermines community and social progress. Members of different political parties misunderstand, hate, and avoid each other. Our government does little to solve pressing problems.


This complaint is common, but what can we do about it? Love is all you need, the Beatles sang. That answer is too simple, but at least we should not hate everyone who disagrees with us. The real question, however, is what practical steps can we personally take to turn hatred into love or at least respect? People need to escape their echo chambers to encounter new perspectives, but how can we convince them to leave their friends for a while and listen to their enemies? The government needs to escape gridlock, but what can we do to make Congress act? It is easy to complain and hard to solve such large problems.


Although individuals cannot change the government or prevent trolling on the internet, we can improve our own beliefs and actions, and we can affect a few of the people we meet. That achievement is limited but worthwhile. We can benefit from talking to neighbors with different views, even if we cannot talk to political leaders. We should not give up on reducing polarization in our own lives just because we cannot end it everywhere.


What can we do on a personal level to end polarization? First, we can stop abusing others by calling them crazy, stupid, ignorant, selfish, or ridiculous. Those labels are rarely accurate and hinder our own thinking as well as communication with others. Second, we can stop isolating ourselves and instead seek out people with conflicting political positions and listen charitably to them. It is usually not worth wasting time on rigid extremists on either side, but many moderates are willing to listen and learn as well as teach.


But how can we start a conversation with political opponents? One strategy is to assert your position and let them assert theirs. Bare assertions rarely work with adversaries, however. It is better for each side to give reasons why they hold their positions, but even reasoned arguments can turn people off when injected too early.


A better strategy to get communication going was illustrated by a television discussion many years ago (before YouTube!) between a biologist and a creation scientist. Many biologists disdain creation scientists, and they deserve it, but showing disdain will not fertilize fruitful exchange. If a biologist acts like a know-it-all, creation scientists will counter with their own claims and authorities, so then the discussion goes nowhere. This biologist was smarter than that. She simply asked questions: What do you believe? Are these beliefs based on scientific evidence? Which experiments have you done? How did you control for these confounds? How does that process work? Were your results replicated? Were they published? Which kind of journal? How did opponents respond? What more research is going on now? These questions were asked in a tone of voice that suggested curiosity instead of contempt. As a result, the biologist never came across as aggressive while many problems for creationism became obvious to the audience. The creationist was not convinced, of course, but careful listeners could not have missed the point.


This incident convinced me of the power of questions. If we want to understand and communicate with opponents, especially on controversial issues, it is usually more effective to ask questions than to assert truths.


Admittedly, not every question succeeds. How can we tell which questions produce desired outcomes? Our current project tries to tackle that issue empirically. In our first stage, we asked online participants which questions they would pose if they wanted to win a conversation or refute an opponent (such as “Do you realize how stupid you are?” or “Don’t you care about innocent children?”). We asked other participants which questions they would ask if their goal was to understand an issue or to be respected and liked by an interlocutor (“What do you think we can agree about?”).


In a second stage, we asked a separate sample of participants to identify which of the goals listed above was intended by questions drawn from our first stage. One initial lesson from this research is that questions intended to make people like the questioner (such as “Where did you grow up?”) usually do not increase understanding of the opposing argument, whereas questions that are seen as seeking information (such as “Could you please explain your position to me so that I can make sure that I understand it properly?”) do seem to lead to increased understanding. Another lesson is that people who are asked questions often misconstrue the intentions of the questioner. Our future research will try to figure out which kinds of questions are misconstrued and why, so that we can make positive recommendations about which questions lead to mutual understanding and fruitful dialogue.




We hope eventually to build our findings into a training program for middle and high school students. The goal is to help students build good habits of questioning in the right ways at the right times in order to increase respect and to reduce polarization. Successful programs will depend on strong empirical evidence, so we will need to continue our research. In the meantime, we can all learn to assert less and ask more, and we can try our best to introduce the right questions into the right contexts. These skills can enable us to start constructive conversations with political opponents, so we can each personally do our little bit to start to solve some small part of the big problem of polarization that is tearing our society apart.


Measures of self-rated improved understanding of the issues among people who answered questions emerging from each of the four manipulated motivations.  Participants felt like they learned little about the issue from answering questions intended to engender across-the-aisle liking and respect.





Associations Between Religious Convictions and Intellectual Humility
by Liz Mancuso


Most people hold convictions in life. Convictions, by definition, involve firmly held beliefs that are often associated with behavioral commitments. My recent work has focused on how convictions and associated commitments relate to intellectual humility. Intellectual humility can be defined as a nonthreatening awareness of one’s intellectual fallibility (Krumrei-Mancuso & Rouse, 2016). This definition assumes the intellectually humble person understands that his or her cognitive faculties are not perfect and that his or her knowledge, perceptions, and beliefs are therefore sometimes incorrect. This is paired with an attitude of acceptance, whereby the person does not feel defensive about his or her mental fallibility.


It is most fitting to study intellectual humility in the context of convictions that are of greatest importance to people. This makes religious convictions a particularly relevant domain, because they offer so much in the way of finding meaning, coping with life’s struggles, terror management, and so forth. My research has been examining whether it is possible to be intellectually humble and simultaneously deeply committed to religious beliefs.


Some may assume intellectual humility to be incompatible with firm commitments. Yet research has indicated that intellectual humility is unrelated to conformity, social confidence, or low self-regard, and has small, positive links to self-confidence (Krumrei-Mancuso & Rouse, 2016). This suggests that intellectual humility is not a matter of modifying beliefs on the basis of social influence or to fit others’ standards, and that intellectual humility may therefore co-occur with convictions. Yet this has rarely been examined with regard to specific convictions, such as religious beliefs and commitments.


This line of research has produced two particularly interesting findings so far. First, in addition to small negative linear relationships between religious variables and intellectual humility, curvilinear relationships are emerging as well (Krumrei-Mancuso, 2018). For example, in the case of religious belief salience, prayer fulfillment, and religious fundamentalism, U-shaped relationships suggest that low and high levels of these religious variables tend to be associated with higher levels of intellectual humility than moderate levels are. One possible explanation is that those with moderate levels of religion may be masking ambivalence about their worldview with a façade of overconfidence that presents as a lack of intellectual humility. Alternatively, these findings could reflect a developmental faith trend: individuals who move from no or very low faith to more substantial levels of faith may initially be encumbered by a more closed-minded approach, whereas those who progress further into more mature faith move toward greater intellectual humility as they shift away from black-and-white thinking, gain more appreciation for paradox and mystery, and experience greater acceptance of others (Fowler, 1981). Notably, a number of previous studies have found conceptually similar curvilinear relationships between religious variables and other outcomes, such as prejudice toward outgroup members (Batson, Schoenrade, & Ventis, 1993; Gorsuch & Aleshire, 1974) and mental wellbeing (Galen & Kloet, 2011). Taken together, these findings suggest that individuals toward the middle of the continuum of religion may fare the least well on a variety of outcomes, including intellectual humility.
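The U-shaped pattern described above is exactly what a quadratic regression term picks up. As a minimal illustration (with simulated data, not the study's actual measures or effect sizes), the sketch below fits linear and quadratic models to a made-up religiosity-humility relationship; a positive coefficient on the squared term, together with a better fit than the linear model, is the statistical signature of a U-shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustration only): religiosity scores on a 0-10 scale,
# with intellectual humility following a U-shape around the midpoint.
religiosity = rng.uniform(0, 10, 500)
humility = 0.08 * (religiosity - 5.0) ** 2 + rng.normal(0, 0.5, 500)

# Fit a linear and a quadratic model.
lin = np.polyfit(religiosity, humility, 1)
quad = np.polyfit(religiosity, humility, 2)

lin_resid = humility - np.polyval(lin, religiosity)
quad_resid = humility - np.polyval(quad, religiosity)

# A positive squared-term coefficient indicates a U-shape; the quadratic
# model should also leave less unexplained variance than the linear one.
print(f"quadratic coefficient: {quad[0]:.3f}")
print(f"residual variance, linear model:    {lin_resid.var():.3f}")
print(f"residual variance, quadratic model: {quad_resid.var():.3f}")
```

Real analyses would, of course, also test the significance of the quadratic term rather than merely inspecting its sign.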


Second, and perhaps more important, is the finding that the links between religious variables and intellectual humility can be almost fully accounted for by right-wing authoritarianism (Krumrei-Mancuso, 2018). This holds in both cross-sectional and longitudinal analyses. Although religious participation, religious belief salience, prayer fulfillment, and religious fundamentalism initially seem to be associated with slightly less intellectual humility, controlling for right-wing authoritarianism eliminates all of these links except for a small association between religious participation and less intellectual humility.
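"Accounted for by right-wing authoritarianism" is the kind of result that a partial-correlation analysis delivers. In the minimal, hypothetical sketch below (simulated scores, not the study's data), RWA drives both fundamentalism and, negatively, humility; the raw negative correlation between the two nearly vanishes once RWA is regressed out of each:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Simulated scores (illustration only): right-wing authoritarianism (RWA)
# influences both religious fundamentalism and, negatively, humility.
rwa = rng.normal(0, 1, n)
fundamentalism = 0.6 * rwa + rng.normal(0, 1, n)
humility = -0.5 * rwa + rng.normal(0, 1, n)

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of each."""
    x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
    y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(x_resid, y_resid)[0, 1]

raw = np.corrcoef(fundamentalism, humility)[0, 1]
ctrl = partial_corr(fundamentalism, humility, rwa)

print(f"zero-order correlation: {raw:.2f}")  # clearly negative
print(f"controlling for RWA:    {ctrl:.2f}")  # near zero
```

When the controlled correlation collapses toward zero, the third variable, rather than the religious variable itself, is doing the explanatory work.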


Right-wing authoritarianism is characterized by obedience to authority, conformity to conventional norms, and intolerance of deviance. Previous research has emphasized that right-wing authoritarianism and certain forms of religiosity can promote one another through a mutual emphasis on obedience to authority, conventionalism, and perhaps even feelings of self-righteousness or superiority (Hunsberger, 1995). Yet right-wing authoritarianism is not religious in nature. The suggestion of the current research is that, for the most part, it is not religious conviction itself that is associated with decreases in intellectual humility, but rather sociopolitical attitudes about authority, conformity, and conventionality. This parallels a theme in previous research about the links between religion and prejudice as well (Hunsberger, 1995).


Thus, the emerging picture is that intellectual humility may function independently from many religious beliefs, behaviors, and experiences. It is possible that religious convictions co-occur with openness to improving one’s knowledge and beliefs. Although further research is needed, it seems that intellectual humility, involving an appreciation for the tentative nature of one’s personal knowledge, need not conflict with religious conviction.






Batson, C. D., Schoenrade, P., & Ventis, W. L. (1993).  Religion and the individual:  A social-psychological perspective. New York, NY: Oxford University Press.

Fowler, J. W. (1981). Stages of faith: The psychology of human development and the quest for meaning. San Francisco, CA: Harper & Row.

Galen, L. W., & Kloet, J. D. (2011). Mental well-being in the religious and the non-religious: Evidence for a curvilinear relationship. Mental Health, Religion & Culture, 14(7), 673-689. doi:10.1080/13674676.2010.510829

Gorsuch, R. L., & Aleshire, D. (1974). Christian faith and ethnic prejudice: A review and interpretation of research. Journal for the Scientific Study of Religion, 13(3), 281-307. doi:10.2307/1384759

Hunsberger, B. (1995). Religion and prejudice: The role of religious fundamentalism, quest, and right-wing authoritarianism. Journal of Social Issues, 51(2), 113-129. doi:10.1111/j.1540-4560.1995.tb01326.x

Krumrei-Mancuso, E. J. (2018). Intellectual humility’s links to religion and spirituality and the role of authoritarianism. Personality and Individual Differences, 130, 65-75.

Krumrei-Mancuso, E. J., & Rouse, S. V. (2016). The development and validation of the Comprehensive Intellectual Humility Scale. Journal of Personality Assessment, 98, 209-221. doi:10.1080/00223891.2015.1068174






Trespassing Onto Other Experts’ Turf
by Nathan Ballantyne and David Dunning

Experts’ knowledge and skills are a valuable resource for individuals and communities. One economist recently estimated that 75% of the capital in the United States lies in the knowledge and intellectual abilities of its people.1

But non-experts can easily fail to make good use of experts’ superior insights. Thus, one threat to the flourishing of human beings, individually and collectively, is the inability of non-experts to accurately recognize and take advantage of genuine expertise. At times, people even reject the input and advice of experts, a move that lies at the heart of our team’s project.

In a phrase, we explore epistemic trespassing, in which non-experts move into an expert’s field, reject the expert’s judgments, and supplant the expert’s perspectives with their own. A quick survey of contemporary news (or maybe anyone’s Twitter or Facebook feed!) will reveal examples in which people replace informed opinion with their own flawed and incomplete knowledge. This behavior is not a reflection of intellectual humility, and so its patterning and prevalence are worthy of careful study, to examine how often people trespass and whether they do so out of wisdom or its opposite.

In coordinated projects, Dunning and collaborators at the University of Michigan investigate the psychological and social dynamics that promote or inhibit trespassing behavior, using the methods of empirical psychological research, while Ballantyne at Fordham University investigates normative questions about the appropriate evaluation of trespassing behavior, using the tools of philosophical analysis.

Dunning’s lab has recently completed a study, spearheaded by postdoctoral researcher Stephanie de Oliveira Chen, that explored when non-experts dismiss the conclusions of scientific experts. The key issue is knowledge of the scientific method, and whether people believe that scientists’ claims are constrained by that method or are instead free to roam wherever they please the scientist.

In the survey study, respondents were quizzed on their familiarity with the constraints built into the scientific method that limit scientific claims. Knowledge of these constraints proved to be important. Many participants expressed a distrust of scientific conclusions about vaccines, climate change, genetically modified organisms, and nuclear power. They expressed the view that scientists were neither trustworthy nor competent enough to make judgments on these topics. Importantly, this distrust of science was traced back to a belief that scientists are free to say anything they wish, that scientific data largely decorates viewpoints scientists already favor.

This dismissive opinion toward scientists was overwhelmingly predominant among subjects who lacked basic knowledge of the scientific method and how it restricts scientific conclusions. In contrast, subjects with an ample working knowledge of the scientific method and its restrictions had a more positive, trusting, and deferential view towards scientific conclusions.

Critically, epistemic trespassing does not just involve untrained novices claiming knowledge about a field they don’t possess. It also involves experts in one field who make claims beyond their actual competence in another. One example is Nobel laureate Linus Pauling, a chemist who championed the health benefits of Vitamin C even though medical experts denied them. In a forthcoming article in Mind, Ballantyne explores this sort of trespassing, arguing it is a commonplace problem in interdisciplinary research.

Researchers in multiple fields often explore shared questions about topics such as human freedom, rationality, the mind, and ethics. Many of these questions have been “hybridized.” They are answered by combining evidence and techniques from two or more distinct intellectual fields. But researchers who answer hybridized questions by drawing exclusively on the resources from one field risk trespassing on cross-field experts. These trespassers may form their opinions without the requisite skills or evidence needed to judge well.

If trespassers are reflective about their overreaching, they will often have reason to either develop cross-field expertise or avoid holding confident answers about hybridized questions. Learning about trespassing can thus be a powerful reason to change our intellectual practices.

Of course, trespassing without apology is sometimes perfectly legitimate. Astrologers have skills that non-astrologers typically lack, for example, but scientifically informed non-astrologers will believe astrologers lack reliable methods, and so they can dismiss astrologers’ claims.

But at times researchers lack defensible reasons to trespass. One standard rationale is that “critical thinking” skills transfer across fields, allowing trespassers to make competent judgments about matters far away from their home turf. Effective transfer of skill, however, also requires familiarity with the domain of use. In other words, the further trespassers stray from “home”, the less likely they are to transfer their skills successfully.

Once researchers recognize they are in danger of trespassing, they do not just have reason to adjust their confidence or develop cross-field expertise. They also have motivation to rethink the design of their research communities. Many important questions are hybridized, and researchers in cognate fields should be encouraged to rub shoulders more than they ordinarily do. In doing so, they may make the heads sitting on those shoulders a little more cautious, and better able to address the complex challenges facing contemporary society.

1 Gary Becker, “The Age of Human Capital,” in E. Lazear (ed.), Education in the Twenty-First Century, 2002.



Technological Seduction and Self-Radicalization

by Mark Alfano

The 2015 Charleston church mass shooting, which left 9 dead, was planned and executed by Dylann Roof, an unrepentant white supremacist. According to prosecutors, Roof was found not to have adopted his convictions ‘through his personal associations or experiences with white supremacist groups or individuals or others,’ but instead to have developed them through his own efforts online. The same is true of Jose Pimentel, Tamerlan and Dzhokhar Tsarnaev, Omar Mateen, and others.

The specific mechanisms by which the Internet facilitates self-radicalization are disputed. One popular view is that the Internet generates an echo-chamber effect: people interested in radical ideology tend to communicate directly or indirectly only with each other, reinforcing their predilections.

Surrounding the relatively rare problem of lone-wolf terrorism lies the much more common problem of self-radicalization that results in less dramatic but still worrisome actions and attitudes. The “Unite the Right” rally in Charlottesville, Virginia in 2017 brought together neo-Nazis, white supremacists, and white nationalist sympathizers for one of the largest in-person hate-themed meetings in the United States in decades. Many of the participants in this rally were organized and recruited via the Internet. Even more broadly, white supremacists and their sympathizers met and organized on r/The_Donald (a Reddit community), 4chan, and other online spaces during the 2016 American presidential campaign that resulted in the election of Donald Trump.

In my work on public discourse, I’ve developed a theory of online radicalization that I call the seduction model. According to John Forrester (1990, p. 42), “the first step in a seductive maneuver could be summed up as, ‘I know what you’re thinking.’” This gambit expresses several underlying attitudes. First, it evinces “the assumption of authority that seduction requires.” The authority in question is epistemic rather than the authority of force. Seduction is distinguished from assault by the fact that it aims at, requires, even fetishizes consent. The seducer insists that he is better-placed to know what the seducee thinks than the seducee himself is. Second, ‘I know what you’re thinking’ presupposes or establishes an intimate bond. Third, ‘I know what you’re thinking’ blurs the line between assertion, imperative, and declaration. This is because human agency and cognition are often scaffolded on dialogical processes. We find out what we think by expressing it and hearing it echoed back in a way we can accept, and by having thoughts attributed to us and agreeing with those.

The seduction model encompasses two ways in which information technologies play the functional role of the seducer by telling users of the Internet, “I know what you’re thinking.” What I call top-down technological seduction is imposed by technological designers who, in structuring technological architecture, invite users to accept that their own thinking is similarly structured. In so doing, designers encourage or ‘nudge’ the user towards certain prescribed kinds of choices and attitudes. Top-down technological seduction needn’t be a matter of saying or implying, “I know that you’re thinking that p.” It can instead be structural. When netizens are invited to accept that their own thinking is structured in socially dangerous or hateful ways, rather than merely simplistic and misguided ways, choice architects can seduce them into embracing prejudiced and hateful attitudes.

Using a digital humanities methodology, I compare the semantic tags on all stories published in 2016 by Breitbart with the tags used by other news organizations. It quickly becomes clear that the world will look very different to a Breitbart reader than it does to an NPR reader or a Huffington Post reader (Figures 1-3).

Figure 1. Network of the top 200 semantic tags on Breitbart in 2016. Layout = ForceAtlas. Node size = PageRank. Node color = semantic community membership. Edge width = frequency of co-occurrence.

Figure 2. Network of the top 100 semantic tags on NPR’s “The Two-Way” news section in 2016. Layout = ForceAtlas. Node size = PageRank. Node color = semantic community membership. Edge width = frequency of co-occurrence. Only 100 semantic tags were used for NPR because its overall sample size is an order of magnitude smaller than those of the Huffington Post and Breitbart.

Figure 3. Network of the top 200 semantic tags on the Huffington Post in 2016. Layout = ForceAtlas. Node size = PageRank. Node color = semantic community membership. Edge width = frequency of co-occurrence.
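For readers curious about the mechanics behind figures like these, the core computation is simple: treat each story's tags as a clique, weight edges by how often two tags co-occur, and size nodes by PageRank. The sketch below is a rough, self-contained illustration with invented story tags (the real analysis used each site's published tag metadata and dedicated network software such as Gephi's ForceAtlas layout):

```python
from collections import Counter
from itertools import combinations

# Invented tag lists, standing in for the semantic tags on published stories.
stories = [
    ["immigration", "border", "crime"],
    ["immigration", "terrorism", "europe"],
    ["election", "immigration", "border"],
    ["election", "media"],
    ["media", "terrorism"],
]

# Edge weight = number of stories on which two tags co-occur.
edges = Counter()
for tags in stories:
    for a, b in combinations(sorted(set(tags)), 2):
        edges[(a, b)] += 1

nodes = sorted({t for tags in stories for t in tags})

def pagerank(nodes, edges, damping=0.85, iters=50):
    """Weighted PageRank on an undirected co-occurrence graph."""
    neighbors = {n: {} for n in nodes}
    for (a, b), w in edges.items():
        neighbors[a][b] = w
        neighbors[b][a] = w
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Each neighbor m passes along its rank in proportion to the
            # weight of its edge to n.
            inflow = sum(
                rank[m] * w / sum(neighbors[m].values())
                for m, w in neighbors[n].items()
            )
            new[n] = (1 - damping) / len(nodes) + damping * inflow
        rank = new
    return rank

ranks = pagerank(nodes, edges)
for tag, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{tag:12s} {score:.3f}")
```

In the toy data, “immigration” co-occurs with the most other tags and so receives the highest PageRank, just as the most central tags dominate the real networks.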

Breitbart consumers see a world in which the most important news is that Mexican cartels commit atrocities in Texas, Muslim terrorist immigrants rampage through Europe, Barack Obama conspires with the United Nations and Jews, Russia and Turkey intervene in Syria to help fight the Islamic State, Donald Trump prepares to put a stop to illegal immigration to the United States, and Milo Yiannopoulos campaigns against Islam with his “Dangerous Faggot Tour.” Readers of other sites encounter a host of other phenomena that fit less easily into Breitbart’s Manichean worldview, such as right-wing attacks against the LGBT community, Dylann Roof’s sentencing, law enforcement agencies wrestling with technology companies over consumer privacy, and NASA’s exploration of the solar system.

Whereas top-down technological seduction plays out through the agency of designers, bottom-up technological seduction can occur without the involvement of anyone’s agency other than the seducee’s. It creates suggestions either by aggregating other users’ information or by personalizing for each user based on their location, search history, and other data. It takes a user’s own record of engagement as the basis for saying, “I know what you’re thinking.” Bottom-up seduction can say, “I know what you’re thinking” because it can justifiably say, “I know what you and people like you thought, and what those other people went on to think.” It occurs when profiling enables both predictive and prescriptive analytics to bypass a user’s capacity for reasoning. I understand reasoning as the iterative, potentially path-dependent process of asking and answering questions. Profiling enables online interfaces to tailor both search suggestions (using predictive analytics) and answers to search queries (using prescriptive analytics) to an individual user.

Consider a simple and familiar example: predictive analytics will suggest, based on a user’s profile and the initial text string they enter, which query they might want to run. For instance, if you type ‘why are women’ into Google’s search bar, you are likely to see suggested queries such as ‘why are women colder than men’, ‘why are women protesting’, and ‘why are women so mean’. And if you type ‘why are men’ into Google’s search bar, you are likely to see suggested queries such as ‘why are men jerks’, ‘why are men taller than women’, and ‘why are men attracted to breasts’. These are cases of predictive analytics, which not only predicts but also suggests queries to users based on profiling, with or without aggregation.
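At its simplest, this kind of query suggestion amounts to ranking past queries that share the typed prefix by their frequency. The sketch below uses an invented aggregate query log; a real system would also weight suggestions by the individual user's profile, location, and history:

```python
from collections import Counter

# Invented aggregate query log (illustration only), standing in for the
# pooled search histories of many users.
query_log = Counter({
    "why are women colder than men": 120,
    "why are women protesting": 95,
    "why are women so mean": 60,
    "why are men jerks": 110,
    "why are men taller than women": 80,
    "why do cats purr": 300,
})

def suggest(prefix, log, k=3):
    """Return the k most frequent past queries starting with `prefix`."""
    matches = [(q, n) for q, n in log.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda qn: -qn[1])[:k]]

print(suggest("why are women", query_log))
```

Even this toy version shows how the suggestion box reflects what other people asked, not what the current user was independently wondering.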

Prescriptive analytics in turn suggests answers based on both the query someone actually runs and their online profile. In response to ‘why are women colder than men’, Google suggests a post titled “Why are Women Always Cold and Men Always Hot,” which claims that differences between the sexes in the phenomenology of temperature are due to the fact that men have scrotums. In response to ‘why are men jerks’, Google suggests a post titled “The Truth Behind Why Men are Assholes,” which contends that men need to act like assholes to establish their dominance and ensure a balance of power between the sexes.

Google suggests questions and then answers to those very questions, thereby closing the loop on the first stage of an iterative, path-dependent process of reasoning. If reasoning is the process of asking and answering questions, then the interaction between predictive and prescriptive analytics can largely bypass the individual’s contribution to reasoning, supplying both the question and the answer to it. When such predictive and prescriptive analytics are based in part on the user’s profile, Google is in effect saying, “I know what you’re thinking because I know what you and those like you thought.”
Forrester, John. (1990). The Seductions of Psychoanalysis: Freud, Lacan, and Derrida. Cambridge University Press.