The government plans stricter AI and deepfake regulations; proposes platform liability


The government on Wednesday proposed changes to IT rules, mandating clear labeling of AI-generated content and increasing the responsibility of major platforms such as Facebook and YouTube to verify and flag synthetic information, in order to limit harm to users from deepfakes and misinformation.

The IT ministry noted that deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create “persuasive falsehoods,” where such content can be “weaponized” to spread disinformation, damage reputations, manipulate or influence elections, or commit financial fraud.

The proposed changes to the IT rules provide a clear legal basis for labeling, traceability and accountability regarding synthetically generated information.

In addition to clearly defining synthetically generated information, the draft amendment, which invited comments from stakeholders by November 6, 2025, mandates labeling, visibility, and metadata inclusion for synthetically generated or modified information to distinguish such content from authentic media.

The stricter rules would increase the responsibility of significant social media intermediaries (those with 50 lakh or more registered users) to verify and flag synthetic information through reasonable and appropriate technical measures.

The draft rules require platforms to label AI-generated content with prominent marks and identifiers, covering at least 10 percent of the visual representation or the first 10 percent of an audio clip’s duration.

It requires major social media platforms to obtain a user statement as to whether uploaded information is synthetically generated, take reasonable and proportionate technical measures to verify such statements, and ensure that AI-generated information is clearly labeled or accompanied by a notice indicating the same.

The draft rules further prohibit intermediaries from altering, suppressing or removing such labels or identifiers.

“There are demands in Parliament and many forums to do something about deepfakes, which are harmful to society. People using the image of a prominent person, which then affects their personal lives and their privacy. The steps we have taken are aimed at ensuring that users know whether something is synthetic or real. It is important that users know what they are seeing,” said IT Minister Ashwini Vaishnaw, adding that mandatory labeling and visibility will enable clear distinctions between synthetic and authentic content.

Once the rules are finalized, failure to comply could cost major platforms the safe harbor protection they currently enjoy.

With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (deepfakes), the potential for misuse of such technologies to harm users, spread disinformation, manipulate elections or impersonate individuals has increased significantly, the IT ministry said.

Accordingly, the IT Ministry has prepared draft amendments to the IT Rules, 2021, with the aim of strengthening due diligence obligations for intermediaries, especially for significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.

The draft introduces a new clause that defines synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.

An IT ministry note said that policymakers worldwide, including in India, are increasingly concerned about fabricated or synthetic images, videos and audio clips (deepfakes) that are indistinguishable from genuine content and are brazenly used to produce non-consensual intimate or obscene images, deceive the public with fabricated political or news content, commit fraud, or impersonate individuals for financial gain.

The latest move assumes significance as India is among the top markets for global social media platforms such as Facebook, WhatsApp and others.

A senior Meta official said last year that India has become the largest market for Meta AI usage. In August this year, OpenAI CEO Sam Altman said that India, which is currently the company’s second-largest market, could soon become the largest in the world.

Asked whether the changed rules would also apply to content generated on OpenAI’s Sora or Gemini, sources said that while many videos are generated but never distributed, the obligation takes effect once a video is posted for distribution. Responsibility in such cases would lie with the intermediaries that display the media to the public and the users who host it on the platforms.

On the handling of AI content on messaging platforms like WhatsApp, sources said that once such content is brought to their attention, the platforms will have to take steps to prevent its virality.

India has witnessed an alarming increase in AI-generated deepfakes, prompting judicial interventions. A recent viral case involved misleading advertisements depicting a fake arrest of Sadhguru, which the Delhi High Court ordered US digital giant Google to remove.

Earlier this month, Aishwarya Rai Bachchan and Abhishek Bachchan filed a lawsuit against YouTube and Google seeking damages of Rs 4 crore over alleged AI deepfake videos.


Published on:

October 22, 2025

