IMPACT OF DEEP-FAKE ADVERTISING DISCLOSURE ON PURCHASE INTENTION WITH MEDIATING ROLES OF PERCEIVED REALITY, TRUST, PERCEIVED ETHICALITY, AND IRRITATION
Keywords:
Deep-fake advertising, disclosure, perceived reality, trust, perceived ethicality, irritation, purchase intention

Abstract
Deep-fake technology, a product of generative artificial intelligence (GAI), greatly simplifies the creation of hyper-realistic videos. To date, the technology has been used mainly for identity theft, pornography, propaganda, and the spread of misinformation. Its use in advertising has ignited a heated debate about its ethical implications and psychological effects on consumer behavior. The unsettling realism of deep-fake advertisements, in which AI-altered videos of influencers hawk products they have never touched, has turned digital marketing into a minefield of epistemological uncertainty. Prior studies have focused mainly on the technology's capabilities and malicious applications. This thesis, by contrast, investigates a pressing question: when consumers discover that the smiling celebrity endorsing a product is a synthetic puppet, does that revelation kindle skepticism or morbid curiosity? By dissecting how disclosures of synthetic media in deep-fake advertising alter purchase intentions, the research illuminates the fragile relationship between technological awe and ethical unease that defines modern consumerism. Building on Mehrabian and Russell's Stimulus-Organism-Response paradigm (1974) and Barnett's categorization of advertising deception (2014), the study adopts a quasi-experimental design involving 200 participants in Islamabad, a city emblematic of Pakistan's uneven digital adoption, where viral content often outpaces regulatory scrutiny. One cohort viewed a deep-fake advertisement for a fictional skincare product starring a meticulously engineered Ryan Reynolds avatar and was forewarned of its artificial origins; the other cohort viewed identical content without that context. The results reveal a paradox: while disclosure heightens perceived ethicality (β = 0.323, p < 0.05), it simultaneously erodes perceived reality (β = -0.239, p = 0.017) and trust (β = -0.370, p = 0.003), while amplifying viewer irritation (β = 0.448, p = 0.008).
Mediation analyses uncover a stalemate: ethical gains partly offset distrust, but diminished realism and frustration anchor reactions in skepticism. Cultural and generational fissures further complicate the outcomes. Older participants likened undisclosed deep-fakes to "bazaar-grade deception," a metaphor rooted in Pakistan's informal economy, whereas younger audiences dismissed disclosures as redundant in an already digitally manipulated world ("If Instagram filters lie, why wouldn't ads?"). These findings challenge the universal efficacy of disclosure mandates, revealing how cultural memory and generational desensitization shape responses. The study advances the Persuasion Knowledge Model by introducing the notion of synthetic skepticism, a state in which transparency fuels doubt rather than trust, and reframes the Stimulus-Organism-Response framework to prioritize cultural and emotional mediators. Practically, the results advocate participatory disclosure strategies, in which audiences co-design norms for synthetic media, and culturally adaptive regulations that balance transparency with narrative immersion. For marketers, the takeaway is stark: in contexts where distrust is culturally ingrained, ethical transparency must be woven delicately into storytelling rather than stamped on as an afterthought.
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.