

VOLUME: 10, ISSUE NO: 10, APRIL 15, 2024

ISSN (ONLINE): 2584-1106

Website: www.the lawway with



Authored by:- Ankit Burman 

MBA, B. Com, LLB(Final Yr.) 


Abstract

This research paper delves into the complex legal challenges posed by deepfake technology, which has blurred the line between reality and fiction. The paper explores the nature of deepfake technology, the challenges it presents in the legal sphere, and how existing laws and regulations fall short of effectively combating these digital manipulations. It also examines potential solutions to address these challenges, including developing more specialized legal frameworks and strengthening existing laws related to defamation, privacy, intellectual property, and cybercrime.


Keywords: Deepfakes, Legal Challenges, Artificial Intelligence, Privacy, Cybercrime, Intellectual Property, Defamation, Regulation, Identity Theft, Obscenity, Public Awareness, International Cooperation.


Meaning and Background

Deepfakes, a portmanteau of 'deep learning' and 'fake,' are AI-driven manipulation techniques that can alter audio, video, or images so convincingly that they depict someone saying or doing something they never did.1 This technology leverages powerful machine learning algorithms and a wealth of training data to create nearly indistinguishable digital manipulations, raising grave concerns in the realms of privacy, security, and misinformation.


Deepfake technology utilizes machine learning algorithms to create highly convincing synthetic media by mimicking voices, swapping faces, and altering body language in videos, which poses significant legal challenges in India.2 The lack of a stand-alone law against deepfake technology has brought the issue into the limelight, with several provisions in the

1 Khalid Khan, “The Legal Landscape of Deepfakes: Challenges and Potential Solutions,” LinkedIn, November  8, 2023, 0umdf 

2Jessica Ice, “Defamatory Political Deepfakes and the First Amendment,” Case Western Reserve Law Review,  2019.

existing acts being invoked, such as Section 66D of the Information Technology Act, 2000, and Section 66E of the IT Act, 2000. These sections punish, respectively, cheating by personation using communication devices or computer resources, and the intentional or knowing capture, publication, or transmission of the image of a private area of any person without their consent.3

Legal Challenges

  1. Privacy Invasion: 

Deepfake technology can be used to invade an individual's privacy by capturing, publishing, or transmitting their image without their consent, leading to potential violations of their right to privacy under Article 21 of the Constitution of India, 1950.

  2. Identity Theft: 

Deepfake technology can also be used for identity theft, which is punishable under Section 66C of the IT Act, 2000, with imprisonment of either description for a term that may extend to three years and a fine which may extend to one lakh rupees.

  3. Obscene Materials: 

Deepfake technology can be used to create obscene materials, which are punishable under Section 294 of the Indian Penal Code, 1860, with imprisonment of either description for a term that may extend to three months, or with a fine, or with both.

The Indian Position: 

In India, the doctrine of fair dealing4 under Section 52 of the Indian Copyright Act, 1957 (ICA)5 deals with which works are excluded from being considered infringing works. While the US judges fair use on a case-by-case basis, India's copyright law offers a clearer path through its fair dealing provisions, which explicitly outline non-infringing acts.

While the Indian position on fair dealing is often criticized for being rigid,6 it proves convenient in tackling deepfake technology created with malicious intent, as the use of this technology does not fall under any of the acts mentioned in Section 52 of the ICA.7 However, the provision might not protect the use of deepfake technology for authentic purposes.

Section 57 of the ICA8 provides for the right to paternity and integrity in compliance with the moral rights requirement under the Berne Convention, 1886. While considering deepfakes, the right

3 "The Deepfake Menace: Legal Challenges in the Age of AI," Research Centre, TRT World, March 19, 2024, of-ai.

4 'Using Copyright and Licensed Content: Copyright & Fair Use' (Indian Institute of Management) <> accessed 31 August 2020.

5 ICA 1957, s 52.

6 Ayush Sharma, 'Indian Perspective of Fair Dealing under Copyright Law: Lex Lata or Lex Ferenda?' (2009) 14 Journal of Intellectual Property Rights 523, 529.

7 ICA 1957, s 52.

8 ICA 1957, s 57.

to integrity provided under Section 57(1)(b) of the ICA9 plays an essential role, since deepfakes can be regarded as distortion, mutilation, or modification of a person's work. Provisions for civil and criminal liability exist under Section 5510 and Section 63 of the ICA,11 which provide damages, injunctive relief, imprisonment, and fines against infringers. These provisions arguably provide adequate safeguards against deepfakes created for malicious purposes but fail to extend protection to deepfakes created for lawful purposes.

Existing Legal Framework: 

India lacks a specific law for deepfake technology; however, several provisions of existing acts are invoked. Section 66D of the IT Act, 2000, punishes cheating by personation using communication devices or computer resources, while Section 66E of the IT Act, 2000, punishes whoever intentionally or knowingly captures, publishes, or transmits the image of a private area of any person without their consent, under circumstances violating the privacy of that person. Section 51 of the Indian Copyright Act, 1957, covers the conditions for infringement of copyright without any licence granted by the owner.

Deepfake in the context of Right to Privacy

Rampant use of deepfake videos is bound to infringe on the privacy of the individuals whose fake videos are made. The Right to Privacy has been recognized as a fundamental right by the Supreme Court in Justice K.S. Puttaswamy (Retd.) v Union of India. The nine-judge bench stated that privacy safeguards the autonomy of individuals and gives them the right to control various aspects of their lives. Privacy was described as a facet of dignity for human beings, since life enjoyed with dignity is liberation in the actual sense. The Apex Court went on to state that privacy is central to a democratic state.12

The Court highlighted that an implicit part of the Right to Privacy is the ability to choose what information about oneself is released into the public space. Individuals are the primary decision-makers in such cases.13 Therefore, it is clear that deepfake videos violate the Right to Privacy of individuals in almost all aspects. Even though there is no law expressly banning them, claims against them would succeed in a court of law.

The Personal Data Protection Bill, 2019

This section will examine whether the Personal Data Protection Bill, 2019 ("Bill") prohibits the circulation of deepfake videos. The Bill provides for the protection of the personal data of individuals. Its application is both territorial and extraterritorial, which means it applies even to those situated outside India who are involved in the creation of deepfakes of Indians.

9 ibid s 57(1)(b).

10 ibid s 55.

11 ibid s 63.

12 S. 67, Information Technology Act 2000.

13 ibid [132].

Section 8 lays down that such data should be processed in a manner that is not misleading. Further, individuals' data must be processed only after their consent is obtained, and any unauthorized disclosure of an individual's data that leads to a breach of the integrity or privacy of that person amounts to a personal data breach.

Section 20 of the Bill has an important provision regarding the 'right to be forgotten.' According to this provision, if personal data is processed without authorization and circulated in the public domain, it can be stopped and erased from the public domain altogether by order of the Court. This provision is particularly vital in cases of revenge porn. Penalties are also laid down under this Bill in case its provisions are contravened.

Further, the Government and other regulatory bodies should take steps to ensure that videos circulated in the public domain are authentic. For example, the Election Commission should make it mandatory for all political parties to use Digital Signatures, as defined under Section 3 of the IT Act, on any video they use for campaigning. This would ensure that authentic videos are circulated in the public domain.
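The verify-before-trust workflow suggested above can be sketched in code. The following is a purely illustrative Python example, not the statutory scheme: Section 3 of the IT Act contemplates asymmetric-key digital signatures issued under a certifying authority, whereas this sketch uses a simple HMAC (shared-key) tag as a stand-in. The signing key, function names, and sample data are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key; a real Section 3 digital signature would use an
# asymmetric key pair certified by a licensed Certifying Authority.
SIGNING_KEY = b"election-commission-demo-key"

def sign_video(video_bytes: bytes) -> str:
    """Return an authentication tag for the official copy of a video."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """True only if the video is byte-for-byte identical to the signed copy."""
    expected = hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

official = b"...original campaign footage..."
tag = sign_video(official)

assert verify_video(official, tag)             # the authentic copy verifies
assert not verify_video(official + b"x", tag)  # any alteration fails
```

Even in this simplified form, the sketch shows why signing helps against deepfakes: a manipulated copy of a signed campaign video, however convincing visually, can no longer produce a matching tag.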

Currently, under Rule 3 of the Information Technology (Intermediary Guidelines) Rules, 2011, intermediaries are supposed to exercise due diligence while discharging their duties.14 These intermediaries include social media platforms.

Intermediary liability under Section 79 of the Information Technology Act, 2000 (IT Act)15 is imposed for copyright infringement following the judgment in Myspace Inc. v. Super Cassettes Industries Ltd.16 The Delhi High Court adopted a harmonious interpretation of the provisions of the ICA and the IT Act and laid down that, in cases of copyright infringement, intermediaries have a responsibility to take down infringing content when notified by private parties, even without a court order. However, issues may still arise concerning the detection of deepfakes: detection technology remains immature, and taking down deepfake content challenges the content moderation policies of intermediaries.

Suggested legislative amendments

There have been instances of programs that learn various characteristics of human faces and generate new versions of them.17

This means that, using such programs, the face of a person who does not exist can be created. This technology is very likely to be used to create deepfake videos for nefarious purposes, and through fake accounts of such non-existent persons, these videos can be uploaded to various social media sites. This act would not be covered under the offence of personation under Indian law. Personation is penalized under Section 416 of the Indian Penal Code, wherein the offence is said to occur if 'a person cheats by pretending to be some other person, or by knowingly substituting one person for another, or representing that he or any other person is a person other than he or such other person really is'.18

14 Rule 3, Information Technology (Intermediary Guidelines) Rules 2011.

15 The Information Technology Act 2000, No. 21, Acts of Parliament, 2000 (India).

16 Myspace Inc v Super Cassettes Industries Ltd (2016) SCC OnLine Del 6382.

17 Udit Verma, 'Creepy! This website creates human faces of people who don't exist' (Business Today, 18 February 2019) accessed 3 October 2020.

18 S. 416, Indian Penal Code 1860.

In such cases, since the fake persons created do not substitute any real person, this section will not be attracted. Personation is also criminalized under Section 66D of the Information Technology Act 2000, which states that 'whoever, by means of any communication device or computer resource cheats by personation, shall be punished [...]'.

Since the said act is not covered under personation, it is clear that it will also not be covered under Section 66D.

Therefore, there are currently no laws in the country that address this threat. A possible step could be to amend Rule 3 of the Information Technology (Intermediary Guidelines) Rules, 2011 and specifically instruct intermediaries to add clauses to their privacy policies prohibiting users from uploading deepfake videos of any kind.

However, as mentioned earlier, Indian legal authorities would have to set up deepfake detection mechanisms for such a prohibition to be effective. Until there is a reliable mechanism for the detection of deepfakes, the Government can instruct these intermediaries to report any video uploaded on their platforms whose contents are capable of causing unrest in society, so that its further circulation can be stopped until it is verified whether or not it is a deepfake.

Further, there are currently no provisions for the protection of the data of deceased persons.19

Such provisions are important because deepfake videos of spiritual leaders, political leaders, or public figures who have died under mysterious circumstances could be made to manipulate the masses. This could also be detrimental to the system of electronic evidence in criminal trials, as deepfake videos of deceased persons can be made and presented to sabotage a trial. Thus, there is a need to incorporate provisions under the Bill which extend the protection of personal data to deceased persons after their death. Under the current Bill, only data principals who are natural persons can institute a suit for breach of privacy; there is no provision for bringing suit for unauthorized use of the data of deceased persons. It should be made mandatory to seek the consent of the heirs of the deceased before making use of any of their data meant for circulation in public. Hence, along with protecting the personal data of deceased persons, provisions must be added to the Bill authorizing the heirs of the deceased, or persons interested in the protection of their data, to file a suit. Such provisions exist in the Privacy Act of Hungary under section 25.20 The Spanish Data Protection Act21 gives heirs the right to modify or erase data in the public domain unless it is established that the deceased would have prohibited such erasure or modification.

Potential Solutions

  1. Developing Specialized Legal Frameworks: 

19 Simran Jain and Piyush Jha, ‘Deepfakes in India: Regulation and Privacy’ (South Asia @ London School of  Economics, 21 May 2020) < privacy> accessed 

20 Act CXII of 2011 on the right to informational self-determination and on the freedom of information 2019.

21 The Spanish Data Protection Act (Organic Law 3/2018).

Specialized legal frameworks can be developed to address the unique challenges posed by deepfake technology, considering its constantly evolving nature and the difficulty in determining its definition and scope.

  2. Strengthening Existing Laws: 

Existing laws related to privacy, identity theft, and obscenity can be strengthened to better address the challenges posed by deepfake technology.

  3. Promoting Transparency and Accountability: 

Government regulatory measures can be developed to promote transparency, accountability, and the ethical use of deepfake technology, requiring companies to disclose their use of deepfake technology and implementing measures to prevent deepfake-related fraud.

  4. Encouraging Technological Solutions: 

Technological solutions can be developed to detect and mitigate the impact of deepfakes, such as more sophisticated detection algorithms, alongside promoting awareness about deepfake technology.

  5. Collaboration and Dialogue: 

Collaboration and dialogue between legal professionals, policymakers, and technology companies are crucial to addressing the challenges posed by deepfakes. This can include sharing best practices, promoting awareness, and fostering a culture of responsibility and ethics in the development and use of deepfake technology.

Recent Cases and Incidents in India

  1. In 2020, a series of deepfake videos featuring Indian politicians were circulated on social media, causing widespread outrage and highlighting the potential for deepfakes to be used for political manipulation.1
  2. In 2019, a morphed video of an Indian actress was circulated, raising concerns about the potential harm of deepfakes and the lack of a stand-alone law against deepfake technology.1


Conclusion

Deepfake technology poses significant legal challenges that necessitate a proactive and adaptable approach. By developing more specialized legal frameworks, strengthening existing laws, promoting transparency and accountability, encouraging technological solutions, and fostering collaboration and dialogue, we can mitigate the harmful impacts of deepfake technology and preserve the trustworthiness of our digital world. Legal professionals, policymakers, and technology companies must work together to address these challenges and ensure that deepfake technology is used ethically, responsibly, and safely.


This opinion is confidential and intended solely for The Lawway With Lawyers. It may not be disclosed or used for any other purpose, as mentioned in the internship terms and conditions, without prior consent. If further assistance is needed or additional details are provided by the client, please contact us promptly.

Thanks, and regards. 

Ankit Burman

