Work with Gülden Ülkümen (University of Southern California)
Accurate forecasts of task completion times are essential for effective planning and coordination across all domains of human activity. Research on time estimation has predominantly focused on underestimation, with unpacking tasks into constituent steps commonly prescribed as an intervention. However, empirical evidence for unpacking remains surprisingly mixed: some studies demonstrate that it reduces optimistic bias, while others find no effect or even a backfire effect in which unpacking increases underestimation. This paper proposes a novel theoretical framework that reconciles these conflicting findings by recognizing that forecasters may engage in two distinct modes of reasoning to generate time estimates. Singular reasoning, associated with epistemic (knowable) uncertainty, prompts mental simulation of how long a particular action may take, whereas distributional reasoning, associated with aleatory (random) uncertainty, draws on knowledge of how long comparable classes of actions tend to take. Across six studies, we demonstrate that unpacking's impact varies systematically with these uncertainty perceptions. When tasks are characterized by predominantly epistemic uncertainty, unpacking decreases completion time estimates by enhancing mental simulation fluency. Conversely, when tasks involve predominantly aleatory uncertainty, unpacking increases estimates through aggregation of positively skewed time distributions. Theoretically, this research delineates two distinct cognitive pathways through which unpacked information influences time estimates and provides a unifying framework that reinterprets previously conflicting findings. Practically, our findings offer implications for individuals, project managers, and organizations seeking to mitigate the costly consequences of inaccurate time estimates.
Work with Ignacio Riveros (University of Southern California) and Stephanie Tully (University of Southern California)
Under review, preprint available on SSRN
Social media shapes how people connect, communicate, and consume information. As generative artificial intelligence (AI) becomes an increasingly common tool for content creation, many platforms have introduced disclosure requirements to inform consumers when content has been created or significantly edited by AI. Yet, little is known about how such AI-generated content (AIGC) disclosures influence consumer engagement—a key metric for creators, platforms, and brands—in part due to the unique setting of social media relative to other examinations of responses to AI. This research examines whether and why AIGC disclosures affect engagement on social media. Analysis of engagement behavior on TikTok following the introduction of AIGC disclosures and six preregistered experiments find that disclosures reduce consumer engagement. This reduction does not stem from content-related explanations such as lower perceived quality or concerns about manipulation. Instead, we identify a novel process: AIGC disclosures reduce parasocial connection—one-sided emotional bonds between consumers and creators—by signaling reduced effort from the creator. As such, disclosures that signal greater effort can mitigate reductions in engagement. We discuss the implications of these findings for platform policy, content creator strategy, and the future design of AI disclosure practices.
Work with Mary Steffel (Northeastern University) and Nora Williams (Washington University in St. Louis)
at the Journal of the Association for Consumer Research
We investigate whether increasing the subjective ease with which medical information can be processed (i.e., fluency) increases participation in medical decisions. In two experiments, consumers were more likely to participate in medical decisions (versus delegate to a healthcare provider) when information about their options was presented in an easy-to-process format. Participation was driven by consumers’ confidence in their own decision-making abilities, rather than confidence or trust in their doctor’s judgment. The effect of fluency was strongest among consumers with inadequate health literacy and persisted regardless of past experience with a particular health condition, even when fluency had no effect on comprehension.
Work with Ananya Oka (Northeastern University) and Mary Steffel (Northeastern University)
Advances in artificial intelligence (AI) technology have allowed for the integration of self-driving features into consumer cars, but under what conditions do people delegate control of the vehicle to AI? Prior work suggests that people delegate difficult choices to others to avoid blame, yet people often exhibit aversion to AI. This work investigates whether the desire for control over the vehicle depends on who is in the car. In two pre-registered experiments (N=804), we manipulated whether parents imagined driving to the airport alone or with their families. Study 1 revealed that parents exhibit more discomfort with self-driving AI and a greater desire for control when imagining driving their families than when imagining driving alone. Study 2 found this effect regardless of whether the choice was between using self-driving AI vs. driving one’s own vehicle or riding in a self-driving vs. human-driven rideshare car, suggesting the effect is specific to AI: people can blame another driver but feel less inclined to blame an insentient AI. This work contributes to our understanding of when we recruit others, including AI, to complete tasks on our behalf.
Work with Laurence Ashworth (Queen's University)
Online search is likely one of the first places consumers look for product information. Not surprisingly, marketers frequently attempt to elevate their own information through sponsored listings. This research investigates how consumers react to such non-organic search content. In one experiment, we investigate whether consumers avoid online search results disclosed as an “ad,” and whether this disclosure threatens their perceived competence. We find that persuasive content can specifically threaten consumers’ self-perceived competence, undermining their confidence in their ability to make a good decision and causing them to opt for another source of information altogether.