The Intersection of Deepfakes and Intellectual Property Law: Applying the Concept of Passing Off
Introduction
The advent of artificial intelligence (AI) has led to groundbreaking developments in various fields, among which the creation of deepfakes stands out. Deepfakes, a portmanteau of "deep learning" and "fake"[i], are hyper-realistic digital manipulations of audio and video content, often indistinguishable from the real thing[ii].
As this technology becomes more accessible and sophisticated, it raises significant legal questions, particularly in the realm of intellectual property law. A crucial aspect to consider is whether the impersonation of persons through deepfakes falls under the definition of "passing off" in intellectual property law. This article delves into this complex intersection, exploring the legal, ethical, and technological nuances of deepfakes in the context of passing off.
Understanding Deepfakes
Technology Behind Deepfakes
Deepfakes are created using advanced AI algorithms, particularly those based on machine learning and neural networks[iii]. These algorithms analyze vast amounts of data from real images or videos of a person to learn how to replicate their appearance and voice convincingly.
The process involves feeding the AI system large datasets of an individual's images or voice recordings. The AI then synthesizes this data into new content that mimics the person's appearance, movements, and voice with startling accuracy.
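For readers unfamiliar with the underlying machinery, the data flow described above can be caricatured in a few lines of Python. A common face-swap architecture trains one shared encoder with a separate decoder per person; the sketch below illustrates only that flow. It is emphatically not a working deepfake system: the "encoder" and "decoders" are placeholder arithmetic, and every function name and number is invented for illustration.

```python
# Conceptual sketch of the classic face-swap pipeline: one shared encoder,
# one decoder per person. Faces are stand-in lists of numbers; the
# "networks" are toy functions, purely to show where the data goes.

def encoder(face):
    # Compress a face into a small latent code (in a real system, this would
    # capture identity-agnostic features such as pose and expression).
    return [sum(face) / len(face), max(face) - min(face)]

def make_decoder(person_mean):
    # Each decoder is trained to reconstruct ONE person's appearance from
    # the shared latent code; person_mean stands in for learned identity.
    def decoder(latent):
        avg, spread = latent
        return [person_mean + (avg - person_mean) + spread * 0.1] * 4
    return decoder

decoder_a = make_decoder(person_mean=0.2)  # "trained" on person A's images
decoder_b = make_decoder(person_mean=0.8)  # "trained" on person B's images

# The swap itself: encode a frame of person A, then decode it with B's
# decoder, yielding B's "appearance" driven by A's pose and expression.
frame_of_a = [0.1, 0.2, 0.3, 0.2]
fake_frame = decoder_b(encoder(frame_of_a))
```

The legally salient point the sketch makes concrete is that the output is neither a copy of person A's frame nor of person B's training images: it is newly synthesized content combining attributes of both, which is precisely what strains copyright- and confusion-based doctrines.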
Common Uses and Misuses
Deepfakes have found applications in various sectors, including entertainment, where they are used to create realistic effects or to resurrect deceased actors[iv]. However, they have also been misused to create false propaganda, manipulate public opinion, and even carry out personal harassment. China's Spamouflage network, for instance, utilised deepfakes to disseminate pro-PRC propaganda across various social media platforms[v].
Passing Off
Historical Context
Passing off is a common law tort used to enforce unregistered trademark rights. The concept originated in the United Kingdom and has evolved over time to address various forms of unfair competition[vi]. The essential elements of passing off are[vii]:
- The existence of goodwill attached to goods or services;
- Misrepresentation by the defendant leading to confusion; and
- Resultant damage to the plaintiff’s goodwill.
Deepfakes and Passing Off
Comparing Deepfakes to Traditional Passing Off Cases
Deepfakes represent a paradigm shift in the landscape of passing off. Traditional passing off cases involve one party misrepresenting its goods or services as those of another, leading to consumer confusion. In such cases, the harm typically centres on damage resulting from consumer confusion and the attendant financial loss.
With deepfakes, the scope of harm expands to include severe damage to an individual's reputation and personal rights. For instance, a deepfake could depict a public figure engaging in illegal or immoral conduct, causing irreparable harm to their public image and personal brand. This is not merely a case of mistaken identity or brand confusion: it is the creation of a false reality that can have profound personal and professional consequences.
Moreover, the ease and accessibility of deepfake technology add another layer of complexity. In the past, passing off required a certain level of effort and commercial intent. Deepfakes, by contrast, can be created and disseminated with alarming ease, often by individuals without significant resources or traditional commercial motives[viii]. This democratization of the means to create such content raises questions about intent, harm, and the appropriate legal remedies in passing off cases.
Challenges in Applying Existing Laws
The advent of deepfakes poses substantial challenges to existing intellectual property (IP) laws, which were largely devised in an era before the emergence of advanced AI technologies. These laws are predicated on principles that do not neatly apply to AI-generated content, particularly content that so closely mimics real human characteristics. Deepfakes blur the lines between reality and fiction in ways that traditional IP laws are ill-equipped to handle.
One of the primary challenges is categorising the nature of the wrong committed by deepfakes. While passing off traditionally involves misrepresenting one's goods or services as those of another, deepfakes involve impersonating an individual. This raises the question: does creating and distributing a deepfake amount to passing off? If so, who is the injured party – the individual whose likeness is used, or the public that is deceived?
Furthermore, existing laws on passing off require the demonstration of goodwill and reputation, misrepresentation, and resultant damage. In the context of deepfakes, proving these elements can be particularly challenging. For instance, proving misrepresentation in a deepfake scenario is not straightforward, as the deepfake itself might not directly promote goods or services yet still cause reputational damage. Additionally, the notion of 'damage' in the context of deepfakes can extend beyond financial loss to include psychological and social harm, aspects that traditional IP laws may not adequately address.
Moreover, the rapid development and dissemination of AI technology exacerbate these legal challenges. Laws typically evolve far more slowly than technology, creating a legal grey area in which deepfakes proliferate. This gap between legal frameworks and technological capability means that victims of deepfakes may struggle to obtain recourse under existing laws, necessitating a re-evaluation, and potentially reform, of IP and privacy laws to address the unique challenges posed by deepfakes. The example of image rights and passing off below illustrates the point.
Image rights and Passing Off
Image rights are "…an individual's proprietary rights in their personality and the ability to exploit, and to prevent unauthorised third parties from making use of, an individual's persona, including their name, nickname, image, likeness, signature and other indicia that are inextricably connected with that individual"[ix].
In English law, there are no specific image rights; celebrities instead often rely on the tort of passing off to protect their image. This approach, however, faces challenges owing to the strict requirement of establishing goodwill, as seen in the Starbucks case[x].
The court's decision in Starbucks emphasised that mere reputation is insufficient for a passing off action; claimants must demonstrate goodwill in the form of customers within the jurisdiction[xi]. This ruling has significant implications for celebrities with global reputations, who must now demonstrate local goodwill, through activities such as endorsements or a commercial presence in the specific jurisdiction, to succeed in passing off actions.
Clearly, absent recognition of image rights, it will be difficult (if not impossible) for an individual to bring a civil action against a person or entity creating deepfakes of their likeness.
In Malaysia, however, some authors[xii] have commented that the Communications and Multimedia Act 1998 could apply to deepfakes: section 211(1) of that Act prohibits content which is indecent, obscene, false, menacing, or offensive in character with intent to annoy, abuse, threaten or harass any person. While this may be possible in theory, so far as our research shows, the argument has yet to be tested in any Malaysian court.
Global Response to Deepfakes
The global response to the rise of deepfakes is varied, with different countries enacting laws and proposing legislation to address the challenges posed by this technology.
In the European Union, a proactive approach has been adopted with the Digital Services Act, which came into force in November 2022[xiii]. The Act increases the monitoring of digital platforms for misuse, including deepfakes, and requires social media companies to remove deepfakes and other disinformation from their platforms, with penalties for violators of up to 6% of global revenue. Additionally, the EU's AI Act, which is moving closer to becoming law, aims to regulate the use of AI technology and includes guidelines for how AI may be used in various sectors, including the creation of deepfakes[xiv].
South Korea, known for its technological advancements, has also taken significant steps: it has enacted a law making it illegal to distribute deepfakes that could "cause harm to public interest"[xv]. Violators face up to five years in prison or fines of up to 50 million won.
In the United Kingdom, the government is exploring a law that would mandate the labelling of all AI-generated photos and videos to combat deepfakes[xvi]. This proposed legislation aims to enhance transparency and accountability within the AI industry. The UK is also seeking to establish national guidelines for the AI industry and has set up the AI Safety Institute to assess powerful AI models[xvii].
In Malaysia, legislating AI falls within the purview of the Ministry of Science, Technology and Innovation, currently helmed by Chang Lih Kang. Horizon 2 of the Ministry's Artificial Intelligence Roadmap for 2021 – 2025 does state that a review of existing "…laws, policies, regulations and guidelines" is to be performed from 2023 to 2024[xviii]. At present, however, there is no law specifically governing deepfakes, nor any proposal to introduce one, although, as noted above, some authors have commented that the Communications and Multimedia Act 1998 may be applicable.
These varied responses highlight the global recognition of the challenges posed by deepfakes and the urgency to address them through legal and regulatory frameworks. The need for international cooperation and standardized practices is increasingly recognized as deepfakes become a more significant part of our digital landscape. Traditional IP laws, primarily designed to protect tangible creations and direct infringements, may struggle to address the complex issues arising from AI-generated content that blurs the line between reality and fabrication. The ease of creating and disseminating deepfakes amplifies this challenge, as IP infringement can occur on a global scale within a matter of seconds.
The Future of Deepfakes and Intellectual Property Law
Emerging Challenges
Addressing the challenges posed by deepfakes in the realm of IP law requires a proactive and forward-thinking approach from lawmakers and legal experts. There is a pressing need for the legal system to adapt and respond to these technological advancements. This may involve revising existing laws or introducing new legislation specifically targeting deepfake technology and its implications for IP rights. Key considerations include defining the scope of rights and protections for individuals' likenesses and performances, especially in the context of AI-generated content.
One emerging challenge is the determination of liability and enforcement. Identifying and prosecuting the creators of deepfakes can be arduous, especially given the anonymity and decentralized nature of the internet. Moreover, current IP laws may not adequately cover the non-commercial use of deepfakes, which can still cause significant harm to individuals and their IP rights.
Another aspect is the need for international cooperation and harmonization of laws. Given the global reach of the internet, deepfakes created in one jurisdiction can easily affect IP rights in another. International agreements and collaborations could play a crucial role in establishing universal standards and effective cross-border enforcement mechanisms.
Additionally, there is a growing discourse around the ethical implications of deepfakes and the responsibility of platforms and technology providers in regulating this content. This includes discussions on transparency, consent, and the right to privacy, which intersect with IP rights and require careful consideration in legislative developments.
Conclusion
As deepfakes continue to advance, they will undoubtedly pose complex challenges to the field of IP law. The legal framework will need to evolve, balancing the protection of IP rights with the realities of technological innovation, to effectively address these challenges. The future of IP law in the era of deepfakes will likely involve a multifaceted approach, combining legislative action, international cooperation, and ethical considerations.
_______________________________________________
[i] Hazel Baker, 'Making a "Deepfake": How Creating Our Own Synthetic Video Helped Us Learn to Spot One' Reuters (11 March 2019) <https://www.reuters.com/article/idUSKBN1QS2F1/> accessed 5 February 2024.
[ii] Thanh Thi Nguyen and others, 'Deep Learning for Deepfakes Creation and Detection: A Survey' (2022) 223 Computer Vision and Image Understanding 103525.
[iii] Gourav Gupta and others, 'A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods' (2024) 13 Electronics 95.
[iv] Hang Lu and Haoran Chu, 'Let the Dead Talk: How Deepfake Resurrection Narratives Influence Audience Response in Prosocial Contexts' (2023) 145 Computers in Human Behavior 107761.
[v] Agence France-Presse, 'Research: Deepfake "News Anchors" in Pro-China Footage' (Voice of America, 8 February 2023) <https://www.voanews.com/a/research-deepfake-news-anchors-in-pro-china-footage/6953588.html> accessed 5 February 2024.
[vi] Stavroula Karapapa and Luke McDonagh, '8. Passing Off', Intellectual Property Law (Oxford University Press) <https://www.oxfordlawtrove.com/display/10.1093/he/9780198747697.001.0001/he-9780198747697-chapter-8> accessed 5 February 2024.
[vii] Skyworld Holdings Sdn Bhd & Ors v Skyworld Development Sdn Bhd & Anor (2022) 5 CLJ 74 (Federal Court - Malaysia) [22].
[viii] Lutz Finger, 'Overview Of How To Create Deepfakes - It's Scarily Simple' (Forbes) <https://www.forbes.com/sites/lutzfinger/2022/09/08/overview-of-how-to-create-deepfakesits-scarily-simple/> accessed 5 February 2024.
[ix] 'Image Rights Definition | Legal Glossary | LexisNexis' <https://www.lexisnexis.co.uk/legal/glossary/image-rights> accessed 5 February 2024.
[x] Starbucks (HK) Ltd & Anor v British Sky Broadcasting Group PLC & Ors (Rev 1) [2015] UKSC 31.
[xi] ibid 52.
[xii] Zec Kie Tan and others, 'Individual Legal Protection in the Deepfake Technology Era' (Atlantis Press 2023) <https://www.atlantis-press.com/proceedings/icld-23/125995062> accessed 5 February 2024.
[xiii] 'The Digital Services Act Package | Shaping Europe's Digital Future' <https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package> accessed 5 February 2024.
[xiv] 'Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI | News | European Parliament' (12 September 2023) <https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai> accessed 5 February 2024.
[xv] Amanda Lawson, 'A Look at Global Deepfake Regulation Approaches' (RAI Institute, 24 April 2023) <https://www.responsible.ai/post/a-look-at-global-deepfake-regulation-approaches> accessed 5 February 2024.
[xvi] Ramsha Khan, 'UK Considers Clear Labelling Law to Combat AI Deepfakes' (Open Access Government, 26 June 2023) <https://www.openaccessgovernment.org/uk-considers-clear-labeling-law-combat-ai-deepfakes/161861/> accessed 5 February 2024.
[xvii] 'Introducing the AI Safety Institute' (GOV.UK) <https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute> accessed 5 February 2024.
[xviii] 'Artificial Intelligence Roadmap for 2021 – 2025 - Ministry of Science, Technology and Innovation' <https://airmap.my/wp-content/uploads/2022/08/AIR-Map-Playbook-final-s.pdf> accessed 5 February 2024.