Work with Ignacio Riveros (University of Southern California) and Stephanie Tully (University of Southern California)
Under review, preprint available on SSRN
Social media shapes how people connect, communicate, and consume information. As generative artificial intelligence (AI) becomes an increasingly common tool for content creation, many platforms have introduced disclosure requirements to inform consumers when content has been created or significantly edited by AI. Yet little is known about how such AI-generated content (AIGC) disclosures influence consumer engagement—a key metric for creators, platforms, and brands—in part because social media differs from the settings in which consumer responses to AI have typically been studied. This research examines whether and why AIGC disclosures affect engagement on social media. An analysis of engagement behavior on TikTok following the introduction of AIGC disclosures, together with six preregistered experiments, shows that disclosures reduce consumer engagement. This reduction does not stem from content-related explanations such as lower perceived quality or concerns about manipulation. Instead, we identify a novel process: AIGC disclosures reduce parasocial connection—one-sided emotional bonds between consumers and creators—by signaling reduced effort from the creator. Accordingly, disclosures that signal greater effort can mitigate reductions in engagement. We discuss the implications of these findings for platform policy, content creator strategy, and the future design of AI disclosure practices.
Work with Mary Steffel (Northeastern University) and Nora Williams (Washington University in St. Louis)
at the Journal of the Association for Consumer Research
We investigate whether increasing the subjective ease with which medical information can be processed (i.e., its processing fluency) increases participation in medical decisions. In two experiments, consumers were more likely to participate in medical decisions (versus delegate them to a healthcare provider) when information about their options was presented in an easy-to-process format. Participation was driven by consumers’ confidence in their own decision-making abilities rather than by confidence or trust in their doctor’s judgment. The effect of fluency was strongest among consumers with inadequate health literacy, persisted regardless of past experience with the health condition in question, and held even when fluency had no effect on comprehension.
Work with Gülden Ülkümen (University of Southern California)
Consumers often underestimate task completion times, a systematic prediction error known as the planning fallacy. Prior literature suggests that unpacking tasks into multiple steps can sometimes decrease this error (Kruger & Evans, 2004) and sometimes increase it (Buehler & Griffin, 2003). Reconciling these mixed findings, we show in three preregistered studies that when decision-makers perceive task uncertainty as aleatory (i.e., as random or due to chance), they buffer their time estimates for projects unpacked into more steps. In contrast, perceiving task uncertainty as epistemic (i.e., as stemming from a lack of knowledge or expertise) does not lead to longer time estimates.
Work with Ananya Oka (Northeastern University) and Mary Steffel (Northeastern University)
Advances in artificial intelligence (AI) technology have allowed self-driving features to be integrated into consumer cars, but under what conditions do people delegate control of the vehicle to AI? Prior work suggests that people delegate difficult choices to others to avoid blame, yet people often exhibit widespread aversion to AI. This work investigates whether the desire for control over the vehicle depends on who is in the car. In two preregistered experiments (N=804), we manipulated whether parents imagined driving to the airport alone or with their families. The first study revealed that parents exhibit more discomfort with self-driving AI and a greater desire for control when imagining driving their families than when imagining driving themselves. The second study found this effect regardless of whether the choice was between using self-driving AI and driving one’s own vehicle or between riding in a self-driving and a human-driven rideshare car, suggesting that the effect is specific to AI: people can blame another human driver but feel less inclined to blame an insentient AI. This work contributes to our understanding of when we recruit others, including AI, to complete tasks on our behalf.
Work with Laurence Ashworth (Queen's University)
Online search is likely one of the first places consumers look for product information. Not surprisingly, marketers frequently attempt to elevate their own information through sponsored listings. This research investigates how consumers react to such non-organic search content. In one experiment, we examine whether consumers avoid online search results that are disclosed as an “ad” and whether this disclosure threatens their perceived competence. We find that persuasive content may specifically threaten consumers’ self-perceived competence, undermining their confidence in their ability to make a good decision and causing them to opt for another source of information altogether.