Research

Work with Ignacio Riveros (University of Southern California) and Stephanie Tully (University of Southern California)

Under review, preprint available on SSRN

Artificial intelligence-generated content (AIGC) is revolutionizing how media is created and consumed. Calls for transparency have led media platforms to introduce disclosures that identify the use of AIGC. However, these disclosures may have consequences beyond transparency. This research examines how AIGC disclosures impact consumer engagement, a critical success metric for content creators, platforms, and brands. Analyses of engagement behavior on TikTok following the introduction of AIGC disclosures, together with preregistered experiments, show that disclosures reduce consumer engagement with the content that carries them. This reduction does not stem from differences in the real or perceived quality of the content, nor from perceptions of inauthenticity or deception. Moreover, our results suggest that differences in disclosure design or implementation are unlikely to mitigate these effects. We provide evidence that the lower engagement results from a reduced sense of connection with the content creator. We discuss the implications of these findings for content creator compliance and the downstream consequences for media platforms, marketers, and policymakers.

Work with Mary Steffel (Northeastern University) and Nora Williams (Washington University in St. Louis)

at the Journal of the Association for Consumer Research

We investigate whether increasing the subjective ease with which medical information can be processed increases participation in medical decisions. In two experiments, consumers were more likely to participate in medical decisions (versus delegate to a healthcare provider) when information about their options was presented in an easy-to-process format. Participation was driven by consumers’ confidence in their own decision-making abilities rather than confidence or trust in their doctor’s judgment. The effect of fluency was strongest among consumers with inadequate health literacy and persisted regardless of past experience with a particular health condition, even when fluency had no effect on comprehension.

Work with Gülden Ülkümen (University of Southern California)

Consumers often underestimate task completion times, a systematic prediction error known as the planning fallacy. Prior literature suggests that unpacking tasks into multiple steps can sometimes decrease this error (Kruger & Evans, 2004) and sometimes increase it (Buehler & Griffin, 2003). Reconciling these mixed findings, we show in three pre-registered studies that when decision-makers perceive task uncertainty as aleatory (i.e., as random or due to chance), they buffer their time estimates for projects unpacked into more steps. In contrast, perceiving task uncertainty as epistemic (i.e., as due to a lack of knowledge or expertise) does not lead to longer time estimates.

Delegating to Self-Driving Technology for Self vs. Others

Work with Ananya Oka (Northeastern University) and Mary Steffel (Northeastern University)

Advances in artificial intelligence (AI) technology have allowed for the integration of self-driving features into consumer cars, but under what conditions do people delegate control of the vehicle to AI? Prior work suggests that people delegate difficult choices to others to avoid blame, yet people often exhibit widespread aversion to AI. This work investigates whether the desire for control over the vehicle depends on who is in the car. In two pre-registered experiments (N=804), we manipulated whether parents imagined driving to the airport alone or with their families. Our first study revealed that parents exhibit more discomfort with self-driving AI and a greater desire for control when imagining driving their families than when imagining driving alone. Our second study found this effect regardless of whether the choice was between using self-driving AI and driving one’s own vehicle or between riding in a self-driving and a human-driven rideshare car, suggesting the effect is specific to AI: people can blame another driver but feel less inclined to blame an insentient AI. This work contributes to our understanding of when we recruit others, including AI, to complete tasks on our behalf.

Work with Laurence Ashworth (Queen's University)

Online search is likely one of the first places consumers look for product information. Not surprisingly, marketers frequently attempt to elevate their own information through sponsored listings. This research investigates how consumers react to such non-organic search content. In one experiment, we examine whether consumers avoid online search results that are disclosed as an “ad” and whether this disclosure threatens their perceived competence. We find that persuasive content may specifically threaten consumers’ self-perceived competence, undermining their sense that they can make a good decision and causing them to opt for another source of information altogether.