India's New IT Rules Clamp Down on AI-Generated Content and Rampant Misuse

India is asking intermediaries to implement strict labelling, traceability, and rapid takedown protocols for synthetically generated information (SGI).

By Kul Bhushan

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.


India is making social networking platforms and individuals more accountable for AI-generated content.

Earlier this week, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These amendments focus extensively on what the government describes as "synthetically generated information," such as deepfakes. The rules are set to come into effect on February 20, 2026, according to MediaNama.

As expected, these amendments will have a big impact on the way social media functions today. Now that AI has come under the ambit of the IT Rules, here is how MeitY defines "synthetically generated information" (SGI):

SGI "means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event."

In simpler terms, the amendments formally bring SGI, including AI-generated audio, visual and audio-visual content such as deepfakes, within the statutory framework of intermediary due diligence obligations. Platforms are now required to clearly label AI-generated and synthetic content, capture user declarations at upload, and embed persistent identifiers or metadata to trace such content. These amendments expand intermediary compliance duties and strengthen mechanisms for identifying and managing artificial content under the rules, taking effect from 20 February 2026.

Moreover, the government is pushing for a much faster turnaround on takedown requests. According to the notification, the window under Rule 3(1)(d) has been cut from 36 hours to as little as three hours, with correspondingly tighter deadlines for other levels of escalation.

Impact and Implementation Challenges

As mentioned above, these sweeping changes to the IT Rules will have a big impact on social networking companies. Generative AI content has already flooded these networks and grown steadily more realistic, and there have been multiple instances of misuse, such as the spread of misinformation and fake news. Most recently, X (formerly Twitter) came under the scanner after obscene images of individuals were generated and posted through its Grok AI.

For large platforms, industry watchers say, the inclusion of synthetically generated information (SGI) in the amended rules means that disclosure and traceability requirements are now statutory obligations rather than optional measures. Under the notified framework, intermediaries must prompt users to declare whether content is AI-generated, deploy systems to assess the nature of uploaded material, apply visible labels where content is flagged as synthetic, and embed metadata or identifiers that cannot be removed or suppressed.
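As an illustration only, the duty chain described above (user declaration, automated assessment, visible labelling, provenance metadata) might be wired together roughly as follows. The class names, the detection threshold, and the metadata fields here are hypothetical assumptions for the sketch; the rules do not prescribe any particular implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool   # the declaration captured at upload
    detector_score: float           # hypothetical SGI classifier output, 0..1

@dataclass
class Moderation:
    label_visible: bool = False
    provenance: dict = field(default_factory=dict)

SGI_THRESHOLD = 0.8  # assumed confidence cut-off, not specified by the rules

def apply_sgi_duties(upload: Upload) -> Moderation:
    """Sketch of the intermediary duties described in the amended rules:
    honour the user's declaration, cross-check it with automated detection,
    and attach a visible label plus traceability metadata when flagged."""
    result = Moderation()
    flagged = upload.user_declared_synthetic or upload.detector_score >= SGI_THRESHOLD
    if flagged:
        result.label_visible = True
        result.provenance = {
            "content_id": upload.content_id,
            "declared_by_user": upload.user_declared_synthetic,
            "detector_score": upload.detector_score,
            "label": "synthetically generated information",
        }
    return result
```

The sketch also surfaces the verification problem discussed later in this piece: because detection is probabilistic, any fixed threshold trades false positives against false negatives.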

These obligations, however, come with challenges, especially around implementation at scale.

Platforms must implement these obligations across vast volumes of audio-visual uploads and multiple languages, while distinguishing between deceptive synthetic media and exempt categories such as routine editing, translation, or academic and illustrative material. The rules also require verification beyond user self-declaration, which implies greater reliance on automated detection systems—an area that remains technically complex for partial or heavily edited content.

According to an SFLC spokesperson, a key implementation challenge is the technological feasibility of permanent labelling that cannot be suppressed: watermarks can be digitally removed, and metadata-based labels can be circumvented through actions as simple as screenshotting.

"This infeasibility had been brought to light before the Committee constituted on the issue of Deepfakes by Meta, during the Stakeholder Discussions (Appendix II). Without adequate technological backing, it is likely that 'permanent' metadata might not be so. The Amendments must specify the specific standards for watermarking and labelling, accounting for the inherent limitations of such tools, which it does not," the spokesperson told Entrepreneur India.

"The rules call upon intermediaries to deploy reasonable and proportionate technical measures to verify the correctness of user-declarations and to ensure that no synthetically generated information is published without such declaration or label. The rules also state that an intermediary is deemed to have failed to exercise its due diligence obligations either if it "fails to act upon" synthetically generated content. This high penalty means that the intermediary will have to verify all content that is to be published, regardless of the user declaration, to see if it is synthetically-generated. However, the existing verification tools are not reliably accurate, and even C-DAC's Deepfake Detection Tool has a maximum accuracy of 89%. The Amendments do not take into account the possibilities of false positives and false negatives. While the 2026 SGI rules mention technical feasibility as the extent to which such measures need to be deployed, considering the consequences on intermediaries, there must be specified standards and methods prescribed by the Government."

Alisha Butala, research consultant at FutureShift Labs, further explains that the reduction of response windows to as little as three hours fundamentally alters the compliance architecture for large platforms such as Meta, Google, and X.

It necessitates continuous monitoring systems, high-confidence automated detection tools, multilingual review capacity, and round-the-clock legal escalation teams. Operational costs will rise sharply, particularly for cross-format analysis of audio-visual deepfakes and provenance checks across billions of uploads. There is also heightened regulatory exposure: errors in either direction, whether over-removal or delayed action, carry legal and reputational consequences.

"On the other hand, Faster takedowns directly target the virality mechanics of online disinformation. Synthetic videos and audio clips often derive maximum impact in the first few hours of circulation—especially during elections or crises. Compressing enforcement windows increases the likelihood that manipulated content is disrupted before it reaches mass audiences, thereby limiting narrative entrenchment and reducing the downstream costs of correction," Butala added.

That said, is the government taking a stricter position against intermediaries on AI-based content? Harsh Walia, Partner at Khaitan & Co, told Entrepreneur India that the rules use a mix of consequences to push compliance.

"For platforms, not following the required safeguards can mean losing safe harbour protection, being forced to take down content or suspend accounts, and having to share user details with authorities or affected individuals. There could also be reputational fallout. For individuals, the immediate consequences are mostly on the platform side, as their content can be taken down, accounts can be suspended or shut. Where the misuse is serious, such as deepfakes involving sexual harm, fraud, or impersonation, it lead to action under existing criminal laws, depending on the type of offence committed," Walia explained.

To sum it up, the updated IT Rules signal India's seriousness about the misuse of AI and about making intermediaries bear more accountability. By mandating labelling, traceability, and rapid takedowns, the rules aim to prevent the virality of disinformation, which is becoming a big menace. But while the changes come with noble intentions, one cannot ignore the significant implementation challenges that lie ahead.
