Research
As we transition to a digital — and now an AI-powered — age, we are flooded with ever-increasing amounts of information. Reviews, advertisements, news articles, and social media posts compete for our attention and cognitive resources. My research explores how people — constrained by finite attention, knowledge, and processing capacity — form beliefs and make decisions in complex information environments.
I use lab experiments, computational models, and field data to understand these processes. In the first stream of my research, I study how cognitive processes adapt to properties of the information landscape, pairing the framework of adaptive rationality with insights from cognitive science to explore how cognition adjusts to aid interpretation and decision-making — and how these generally effective processes also produce systematic, predictable errors. In a second stream, I explore how people construct mental representations of ambiguous information to infer meaning, make predictions, and allocate scarce cognitive resources.
Published
Replicability and generalizability of the repeated exposure effect on moral condemnation of fake news
Orchinik, R., Bhui, R., Rand, D. G.
Nature Communications, 2025
Abstract
Repeated exposure to misinformation reduces moral condemnation of those falsehoods, as shown by Effron & Raj (2020) — and moral condemnation may play an important role in stopping the spread of online misinformation. In this registered report, we conceptually replicate previous findings on the effect of repetition on moral condemnation and investigate the generalizability of those findings, using an updated and larger set of false headlines. We also investigate whether asking for accuracy evaluations of the headlines, a type of accuracy prompt that is standard in repeated exposure tasks, alters the effect of repetition on moral condemnation, as inattention to the veracity of headlines may decrease outrage and thus moral condemnation. We find a clear conceptual replication of the negative effect of repetition on moral condemnation, and insufficient evidence for a relationship between accuracy prompts and the effect of repetition.
Learning from and about scientists: Consensus messaging shapes perceptions of climate change and climate scientists
Orchinik, R., Dubey, R., Gershman, S. J., Powell, D. M., Bhui, R.
PNAS Nexus, 2024
Abstract
Despite overwhelming scientific consensus on the existence of human-caused climate change, public opinion among Americans remains split. Directly informing people of scientific consensus is among the most prominent strategies for climate communication, yet the reasons for its effectiveness and its limitations are not fully understood. Here, we propose that consensus messaging provides information not only about the existence of climate change but also traits of climate scientists themselves. In a large (n=2,545) nationally representative survey experiment, we examine how consensus information affects belief in human-caused climate change by shaping perceptions of climate scientist credibility. In the control group (n=847), we first show that people learn both from and about climate scientists when presented with consensus and that perceived scientist credibility (especially skill) mediates up to about 40% of the total effect of consensus information on climate belief. We demonstrate that perceptions of climate scientists are malleable with two novel interventions that increase belief in climate change above and beyond consensus information.
Orchinik, R., Dubey, R., Gershman, S., Powell, D., Bhui, R.
Proceedings of the Annual Meeting of the Cognitive Science Society, 2023
Abstract
Informing people of the overwhelming consensus among climate scientists that human-caused climate change is occurring increases belief in the proposition and the importance of policy action. However, consensus may not be interpreted in the same way; it could emerge from skilled experts converging on the truth, or a biased cabal working for their own gain. We show that the weight that an individual places on the skill and bias of experts affects whether they are persuaded by strong consensus. We demonstrate that beliefs about the skill and bias of pro-consensus scientists (those who express that climate change is occurring) and anti-consensus scientists (those who do not) are central components of a belief system about climate change, determining what individuals learn from climate scientists. However, these characteristics are not fixed as individuals also learn about scientists from consensus. In this way, people learn both from and about climate scientists given consensus.
Hattersley, M., Orchinik, R., Ludvig, E. A., Bhui, R.
Proceedings of the Annual Meeting of the Cognitive Science Society, 2023
Abstract
Good explanations can be distinguished from bad ones in different ways, for instance by how well they explain (i.e., maximise the likelihood of) the available data. Here, we consider two different components of likelihood: descriptiveness (the likelihood of the individual data points) and co-explanation (the likelihood of the specific subset of data under consideration). We consider whether people prefer explanations that are high in descriptiveness vs. co-explanation. Moreover, we consider whether people who endorse conspiracy theories prefer explanations with either quality. In a medical diagnosis task, participants make binary choices between two fictional disease variants: one higher in descriptiveness versus another higher in co-explanation. Overall, participants displayed a weak preference for descriptiveness. This preference, however, did not vary across increasing levels of descriptiveness. Moreover, such preferences were unrelated to conspiracy mentality. Thus, both explanatory virtues may play a role in the appeal of likely explanations.
Working Papers: Heuristics, Intuitions, and Adaptive Rationality
Orchinik, R., Martel, C., Rand, D. G., Bhui, R.
Minor revisions at Management Science
Abstract
Belief in misinformation has been attributed to digital media environments that promote intuitive thinking, which is thought to foster uncritical acceptance of content. We propose that this intuitive "truth bias" may be an ecologically rational adaptation to environments where information is typically accurate. Across a large-scale pre-registered survey experiment and an incentivized replication, we test whether intuitions indeed adapt to the base rate of true versus false content. Participants viewed news feeds composed primarily of either true or false headlines. We find that individuals make more — and faster — errors when encountering the less frequent headline type, and fewer errors with the more common type. Computational modeling of the deliberative process reveals these effects are driven by intuitive responses that function like Bayesian priors about content accuracy, which exhibit some persistence. Our findings suggest that susceptibility to misinformation may not merely reflect a cognitive failure, but rather a rational byproduct of learning from statistical regularities in digital environments.
Orchinik, R., Rand, D. G., Bhui, R.
Revise & Resubmit at Psychological Science
Abstract
The illusory truth effect (the finding that repeated statements are believed more) is understood as a cognitive bias at the core of the psychology of beliefs. Here, we propose that the effect, rather than representing a flaw in human cognition, is a rational adaptation to generally high-quality information environments. Using a formal model, we show that increasing belief in repeated statements improves belief accuracy when a source is credible (i.e., likely to tell the truth) but sometimes makes errors. The theory unifies four key findings in the literature while predicting a testable edge case for the illusory truth effect: when a source is likely to convey falsehoods. Using a large (N = 4,947) pre-registered online experiment, we show that the illusory truth effect is substantially smaller in a low-quality (mostly false) relative to a high-quality (mostly true) information environment. In fact, a majority of participants in the low-quality condition do not demonstrate any illusory truth effect. We identify the deployment of an alternative strategy in the low-quality condition, where participants decrease their belief given repetition. Three process-level indicators (response times, cognitive reflection, and the prior plausibility of items) confirm an adaptively rational interpretation. In sum, we suggest the illusory truth effect may not be purely illusory, highlighting its adaptive foundations and the ability of people to efficiently navigate complex environments.
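The formal model is only summarized above; as a purely illustrative sketch (not the paper's actual model), the adaptive logic can be captured by treating each repetition of a claim as a conditionally independent signal that is correct with probability equal to the source's credibility. Under that toy assumption, repetition raises the posterior that a claim is true when credibility exceeds 0.5 and lowers it otherwise:

```python
def posterior_truth(prior, credibility, repetitions):
    """Posterior probability a claim is true after a source asserts it
    `repetitions` times, assuming (illustratively) that each assertion is
    an independent signal that is correct with probability `credibility`."""
    like_true = credibility ** repetitions          # P(assertions | claim true)
    like_false = (1 - credibility) ** repetitions   # P(assertions | claim false)
    evidence = prior * like_true + (1 - prior) * like_false
    return prior * like_true / evidence
```

For example, starting from a neutral prior of 0.5, a mostly-accurate source (credibility 0.8) pushes belief upward with each repetition, while a mostly-inaccurate one (credibility 0.3) pushes it downward — mirroring the belief-decreasing strategy observed in the low-quality condition.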
Orchinik, R., Rand, D. G., Pennycook, G., Fazio, L.
Revise & Resubmit at Cognition
Abstract
The finding that repetition causes increased belief — termed the illusory truth effect — has raised important concerns about the possibility of repetition leading to widespread belief in falsehoods. For this concern to be founded, the illusory truth effect must act meaningfully on statements that are initially thought to be implausible. Using a large (N=974) online experiment, we test the relationship between the illusory truth effect and prior belief in trivia statements. In contrast to previous work, we measure the prior belief that an individual assigns to a specific statement, allowing for a precise estimation of this relationship. Concerningly, we find that not only does the illusory truth effect remain meaningful for statements where participants have low prior belief, but the effect is also stronger for these initially implausible statements. Our findings suggest that repetition can be particularly harmful when statements are initially seen as implausible.
Orchinik, R., Bhui, R., Rand, D. G.
Working paper
Abstract
Repeating falsehoods is a common political tactic, and a large body of research on the illusory truth effect suggests that such repetition should increase belief in these claims. This repetition effect is generally thought to be a low-level cognitive bias that applies broadly across scenarios and people, making it a powerful force in political persuasion. In contrast, we adopt the theoretical framework of adaptive rationality and argue that repetition should not increase belief in claims if they are made by distrusted sources. We test this prediction in a large (N = 2,484) pre-registered experiment in which American partisans are shown real claims made by Donald Trump and Joe Biden, and a randomly selected subset of claims are repeated. Consistent with our predictions, we find that repetition does not increase belief if the participant distrusts the politician making the claim. By showing that source credibility is a powerful moderator of the illusory truth effect, we demonstrate an important limitation on the power of repetition for inducing belief.
Levari, D. E., Martel, C., Orchinik, R., Bhui, R., Seli, P., Pennycook, G., Rand, D. G.
Working paper
Abstract
What are the consequences of encountering blatant falsehoods and "fake news"? Here we show that exposure to a high prevalence of very implausible claims can increase belief in other, more ambiguous false claims, as they seem more believable in comparison. Participants in five preregistered experiments (N=5,476) were exposed to lower or higher rates of news headlines that seemed blatantly false, as well as some more plausible true and false headlines. Being exposed to a higher prevalence of extremely implausible headlines increased belief in unrelated headlines which were more ambiguous (or even plausible), regardless of whether they were true or false. The effect persisted for headlines describing hypothetical events, as well as actual true and false news headlines. It occurred whether people actively evaluated the headlines or read them passively, among liberals and conservatives, and among those high or low in cognitive reflection. We observed this effect in environments where the plausibility of a claim was a reliable and useful cue to whether it was true or false, and in environments where plausibility and truth were unrelated. We argue that a high prevalence of blatantly implausible claims lowers the threshold of plausibility for other claims to seem believable. Such relative comparisons are a hallmark of the brain's tendency towards efficient computations in perception and judgment. Even when consumers can reliably identify and disregard blatantly false news content, encountering such content may make subtler falsehoods more likely to be believed.
Working Papers: Persuasion
Political narratives in the 2024 election cycle
Orchinik, R., Howland, B., Watts, D., Rothschild, D.
Working paper · Available upon request
Abstract
It is not feasible for citizens to continuously process new political information and update their beliefs accordingly. Instead, people rely on narratives — simplified causal models of the world — that help them interpret political developments and assimilate or dismiss new information. To understand how such narratives are produced in the news media, we track every article appearing in the top 20 positions on the landing pages of six major U.S. news outlets from January 1, 2024, through the 2024 presidential election. Using a natural language processing pipeline that combines text embeddings, clustering, and large language models, we identify and characterize the narratives produced by these publications. We analyze narratives along six dimensions related to their potential impact: quantity, repetition, temporal breadth, coherence, sentiment, and partisanship. We document large asymmetries in the narrative landscape. Right-leaning outlets produce substantially more narratives than center or left-leaning outlets, and these narratives are repeated more frequently, applied across a wider range of events, and expressed with greater internal coherence despite their breadth. In addition, narratives from right-leaning outlets are more partisan, while center outlets skew toward anti-incumbent narratives and left-leaning outlets are politically mixed. Together, these patterns reveal a substantial Republican advantage in narrative production and dissemination in the lead-up to the 2024 election, with important implications for political understanding in contemporary information environments.
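The actual pipeline is not reproduced here; as a loose illustration of its embed-and-cluster step, the sketch below groups near-duplicate headlines, using bag-of-words counts as a stand-in for learned text embeddings and a hypothetical similarity threshold:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words term counts (a stand-in for learned text embeddings)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def greedy_cluster(headlines, threshold=0.3):
    """Assign each headline to the first cluster whose seed it resembles;
    otherwise start a new cluster. Returns a list of headline groups."""
    clusters = []  # each entry: (seed_vector, [member headlines])
    for h in headlines:
        v = vectorize(h)
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(h)
                break
        else:
            clusters.append((v, [h]))
    return [members for _, members in clusters]
```

The threshold and the greedy assignment rule are illustrative choices; a production pipeline would tune both and use dense embeddings rather than raw term counts.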
LLMs as Scalable Tools for Interactive Consumer Behavior Experiments: Comparing Persuasion Strategy Effectiveness
Nam, J., Orchinik, R., Rand, D. G.
Submitted to Journal of Marketing Research
Abstract
Recent advances in large language models (LLMs) present an opportunity to address methodological limitations in traditional online experimental approaches. Here, we demonstrate the value of LLMs as experimental tools for consumer behavior research using a case study on persuasion. We present two studies in which LLMs engage participants in back-and-forth conversations with the goal of shifting consumption-related attitudes. Study 1 examines which strategies are naturally employed by the LLM, and the effectiveness of these strategies, when persuading participants about smartphones (iPhone versus Android). Study 2 experimentally manipulates which persuasive strategy the LLM uses when persuading about meat consumption or online shopping (Amazon versus Walmart.com). Our results suggest that conversations with LLMs can effectively shift both self-reported attitudes and actual behavioral choices, with factual and informational appeals proving particularly effective across outcomes. Conversely, overtly manipulative strategies — including social consensus appeals and bias framing — were less effective. Moderation analysis reveals that pre-existing trust in AI technology significantly enhances persuasive effectiveness, while mediation analysis identifies perceived AI manipulation and expressed skepticism as key psychological mechanisms underlying consumer resistance. Beyond these substantive findings, our research demonstrates the value of LLMs in experiments, allowing for the efficient testing of multiple theoretical predictions within a single experimental framework, and bridges observational and experimental paradigms through automated content analysis.
Czarnek, G., Orchinik, R., Lin, H., Xu, H. G., Costello, T., Pennycook, G., Rand, D. G.
Revise & Resubmit at Nature Communications
Abstract
We ask whether facts and evidence, tailored by an AI model to address each person's specific concerns, can address climate skepticism and inaction. Participants first described their main climate change reservation. The most prevalent were the belief that climate change has natural causes (15%), feeling overwhelmed by the problem (10%), and concerns about the economic consequences of climate policies (8%). Participants were then randomized to (1) have a conversation with a Large Language Model (LLM) that was given the goal of addressing their climate reservations, (2) discuss an irrelevant topic with the LLM (i.e., control), or (3) receive static information about the scientific consensus around climate change (i.e., "standard-of-care"). The LLM treatment significantly and substantially reduced participants' conviction in their specific reservations, while consensus messaging did not. Both treatments had significant, albeit small, effects on general pro-climate beliefs/attitudes. Critically, however, the LLM treatment was significantly more effective, particularly for increasing willingness to make sacrifices to address climate change and donations to a pro-climate charity. The LLM primarily presented facts, evoked positive emotions, reduced psychological distance, and fostered motivation to act. It rarely invoked values or ingroup sources, and when it did, their use was associated with reduced belief change. The treatment substantially reduced Republicans' reservations (although less than for Independents or Democrats), and roughly 35% to 40% of the LLM treatment effect persisted after one month. These findings demonstrate that it is possible to reach many climate-skeptical or hesitant people with the right facts and evidence.
Working Papers: Economics and Finance
Remer, M., Orchinik, R.
Working paper
Abstract
We study the US airline merger wave from 2008 through 2013, which included mergers between Delta/Northwest, United/Continental, Southwest/AirTran, and American/USAir. We first show these mergers occurred between airlines with complementary networks and very little head-to-head competition on overlap, nonstop routes. Consequently, each merger led to minimal changes, on average, in route-level HHI but large increases in multimarket contact. We analyze the causal impact of the mergers on prices using synthetic difference-in-differences and the synthetic control method. We find that merger-induced increases in multimarket contact led to higher prices, especially in the latter two legacy mergers. We therefore find that these mergers led to coordinated price effects. In contrast to the previous literature, we implement econometric methods that match on pre-merger price trends, and we do not find a significant impact on overlap routes in legacy airline mergers, suggesting that a primary channel through which mergers affect prices is an increase in multimarket contact.
Olsen, A., Orchinik, R., Remer, M.
Working paper
Abstract
We conduct a meta-analysis of U.S. merger retrospectives, a large and growing literature that investigates ex post merger outcomes — typically price — through reduced-form methods. We first document that the mergers studied are unrepresentative of the universe of U.S. mergers but are observably similar to mergers that receive antitrust scrutiny. We then synthesize the literature by computing a summary price effect using a Bayesian hierarchical model. For the subset of papers in our sample that estimate log-price effects, we find a mean price increase of 7.21%. We also develop a new meta-analytic approach that allows us to combine and analyze different outcomes affected by mergers. Using this methodology, for the entire sample of papers that study price, we find that anticompetitive outcomes are 1.7 times more likely than procompetitive outcomes. We also find that price effects are higher for healthcare and airline mergers.
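The paper's Bayesian hierarchical model is not shown here; a simple frequentist analogue of its partial pooling is the DerSimonian-Laird random-effects estimator, sketched below with made-up study-level effects and variances (illustrative only, not the paper's data or method):

```python
def random_effects_summary(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate: a frequentist
    stand-in for the partial pooling a Bayesian hierarchical model performs.
    `estimates` are study-level effects; `variances` their sampling variances."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q heterogeneity statistic around the fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    # Re-weight with between-study variance added to each study's variance
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
```

With homogeneous studies the estimate collapses to the inverse-variance-weighted mean; heterogeneity inflates the between-study variance and pulls the weights toward equality, which is the shrinkage behavior the hierarchical model formalizes.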