
ONLiNE UPSC
India has unveiled governance guidelines for Artificial Intelligence (AI) to balance innovation with accountability and growth with safety. The approach favours agile, sector-specific regulation over an immediate new AI law. It proposes an India-specific risk framework, an AI incident database, content authentication tools to tackle deepfakes, and techno-legal safeguards built into system design—marking a major policy step ahead of the India–AI Impact Summit 2026.
India seeks to use AI for inclusive growth and global competitiveness while managing risks to people and society. As AI applications—especially large language models—expand rapidly, clarity is needed on responsibility, safety research, and risk classification.
The government’s stance is to avoid premature over-regulation. The goal is to nurture innovation while setting guardrails such as risk assessment methods, voluntary frameworks, and grievance redress mechanisms.
The rise of synthetic media and deepfakes has prompted draft amendments to the IT Rules mandating content authentication. Uploaders must declare AI-generated content; platforms must verify and visibly label it. Non-compliant platforms risk losing safe-harbour protection.
AI use in the public sector poses privacy and inference risks: prompts may inadvertently reveal policy priorities or operational data. India is debating safeguards around the use of foreign AI services and the protection of anonymised government datasets from misuse by global firms.
The central ethic is “Do No Harm.” Innovation is encouraged within regulatory sandboxes that allow experimentation with built-in risk mitigation and adaptive oversight.
India’s framework is human-centric, relying on existing laws like the IT Act and the Digital Personal Data Protection Act, while filling policy gaps through targeted amendments instead of a standalone AI statute.
India’s AI governance vision identifies six key pillars:
1. Expand access to data and compute infrastructure, attract investments, and leverage Digital Public Infrastructure (DPI) for innovation, scale, and inclusion.
2. Launch education, training, and skilling programs to build AI literacy, trust, and awareness of both risks and opportunities.
3. Adopt balanced, agile, and flexible frameworks that support innovation and mitigate AI risks. Review laws and address gaps through targeted legal amendments.
4. Develop an India-specific risk assessment framework based on real-world evidence. Encourage voluntary compliance through techno-legal measures and apply extra obligations for sensitive or high-risk AI applications.
5. Implement a graded liability system based on risk level and due diligence. Ensure transparency across the AI value chain and enforce compliance through existing laws supported by practical guidelines.
6. Adopt a whole-of-government approach involving ministries, regulators, and public bodies. Establish the AI Governance Group (AIGG) supported by a Technology & Policy Expert Committee (TPEC) and a National AI Safety Institute (AISI) for technical expertise.
| Timeframe | Key Priorities |
|---|---|
| Short-term | |
| Medium-term | |
| Long-term | |
India’s AI governance journey is people-first and risk-based, anchored in the principle of “Do No Harm.” It focuses on sector-specific regulation through existing legal frameworks, prioritises deepfake detection and content authentication, and operationalises an India-specific risk system. With graded accountability, institutional coordination (AIGG–TPEC–AISI), and initiatives like AIKosh for compute and dataset access, India is positioning itself to scale trustworthy, inclusive, and globally competitive AI.
Practice Question: Examine how India’s AI Governance Guidelines aim to balance innovation with accountability while addressing key risks associated with artificial intelligence.