
Deepfakes are AI-generated audio, video, or images that convincingly mimic real people or events. Created with generative models, including large language models (LLMs) and multimodal AI systems, such content can fabricate political speeches, fake news clips, or celebrity endorsements. The growing sophistication of this content poses serious risks to individual reputation, electoral integrity, and public trust in digital information.
The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in response to the rising misuse of generative AI for misinformation, impersonation, and fraud. Recent incidents, including AI-cloned voices, fabricated videos of public figures, and fake news anchors, highlighted the urgent need for stronger regulatory safeguards. The key provisions of the draft amendments are:
• Mandatory Disclosure: All AI-generated or altered content must carry a clear label or disclaimer indicating that it is synthetic.
• Platform Responsibility: Social media platforms and digital intermediaries must identify, restrict, and remove harmful or deceptive AI-generated content.
• Transparency Obligation: Platforms must disclose when AI tools have been used to create or modify visual or audio material uploaded by users.
• Accountability Clause: Misuse of AI for impersonation, defamation, or electoral interference will attract penalties under the IT Act.
The IT Rules, 2021 already require intermediaries to remove unlawful or harmful content. The new draft expands this framework by bringing AI-generated and synthetic media within its ambit. This aligns India’s approach with emerging global frameworks, such as those in the United States and European Union, aimed at regulating synthetic content and ensuring digital accountability. Implementation, however, faces several challenges:
• Detection Complexity: Identifying deepfakes requires sophisticated forensic tools and continuous monitoring, which most platforms currently lack.
• Freedom of Expression: Over-regulation may restrict legitimate uses of AI, such as satire, art, or education.
• Jurisdictional Limits: Deepfake content often circulates across borders, making enforcement difficult.
• Public Awareness: Many users share manipulated media without verifying its authenticity, worsening misinformation.
Users will be required to self-declare AI-generated uploads, while platforms such as X, Meta, and YouTube must deploy automated detection systems, maintain transparency logs, and display visible disclaimers. Repeated non-compliance may attract fines or even blocking of services under provisions of the IT Act.
Experts recommend the following measures to strengthen implementation:
• Establish a dedicated “Deepfake Monitoring Cell” within MeitY.
• Develop public AI verification tools to help identify manipulated content.
• Encourage collaboration between government, academia, and AI companies for developing reliable detection technologies.
• Launch digital literacy campaigns to educate citizens about identifying and reporting deepfakes.
India’s draft deepfake rules amend the IT Rules, 2021 to ensure AI-generated content is properly identified, platforms are held accountable, and malicious misuse is penalised. While the amendments mark a crucial step toward digital responsibility, enforcement challenges persist, including detection limitations, cross-border circulation, and limited public awareness. A balanced approach between regulation and freedom of expression remains vital as AI tools increasingly shape digital narratives.