SFLC.in’s Comments on the Draft Amendments to the IT Rules, 2021 relating to Synthetically Generated Information

SFLC.in thanks the Ministry of Electronics and Information Technology (“Ministry”) for circulating the Draft Amendments to the IT Rules, 2021 relating to Synthetically Generated Information (“Draft Amendments”) and for inviting comments, objections and suggestions as part of a wider public consultation. These regulations are part of a larger attempt to curb the misuse of advanced tools to cause digital harms to citizens, and assume particular significance in light of the vast proliferation of AI systems.

It must be stated that the aim of the Ministry is commendable: the growing harms from deepfakes and the misuse of AI-generation tools to spread misinformation and mislead users do require urgent regulation. However, the solution offered by the Draft Amendments is impractical and overbroad where it needs to be grounded and targeted.

Legally Infeasible and Invalid: At the outset, it is important to examine whether the Draft Amendments are legally feasible at all.

The Information Technology Act, 2000, defines intermediaries as:

Section 2(1)(w)– “intermediary”, with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes.

Section 2(1)(za) defines originator as:

originator means a person who sends, generates, stores or transmits any electronic message or causes any electronic message to be sent, generated, stored or transmitted to any other person but does not include an intermediary;

Section 79 of the IT Act then provides for the exemption of intermediaries from liability:

79. Exemption from liability of intermediary in certain cases. – (1) An intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him.

(2) The provisions of sub-section (1) shall apply if–

(a) the function of the intermediary is limited to providing access to a communication system over which information made available by third parties is transmitted or temporarily stored or hosted; or

(b) the intermediary does not–

(i) initiate the transmission,

(ii) select the receiver of the transmission, and

(iii) select or modify the information contained in the transmission;

(c) the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf.

To further lay down the due diligence requirements mentioned, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were notified; these Rules are the subject of the Draft Amendments.

The intent of the Draft Amendments is to regulate intermediaries which allow users to create, modify, generate, or alter synthetically generated information. However, can platforms such as ChatGPT, Perplexity, or Claude even be defined as intermediaries? From the definitions of intermediaries and originators provided under the IT Act, it is clear that an intermediary cannot generate content on its own. The platforms in question, however, clearly generate content on the basis of prompts entered by users, rather than simply hosting or transmitting such content. Therefore, Generative AI platforms, as the law stands, would not fall under the definition of intermediaries under the IT Act, and consequently would not be subject to regulation under the IT Rules, 2021 either.

If the intent of the Government is to bring Generative AI platforms within the definition of intermediaries, an amendment to the parent legislation, the IT Act, 2000, would be required, rather than an amendment to a subordinate legislation.

Overbroad Definition: Coming to the text of the Draft Amendments, the definition of “synthetically generated information” is overbroad and could lead to over-censorship, as it covers all instances where a computer resource had any role in generating or modifying the information. This could bring even benign uses of computer resources, such as editing, enhancing or upscaling content, or even rewriting and summarising text, under the purview of “synthetically generated content.” Without sufficient clarity, this overbroad definition could lead intermediaries to over-censor the content published on their platforms, regardless of context and intent, potentially suppressing the fundamental rights of their users to freedom of speech and expression.

Rigid Requirements: In addition to the vague definition, the newly inserted Rule 3(3) states that intermediaries offering users the ability to create, generate, modify or alter synthetically generated information must label or embed such information with a “permanent unique metadata or identifier” in a prominent manner:

  • In case of visual content, covering at least ten percent of the surface of the visual display;
  • In case of audio content, during the initial ten percent of its duration.

The label needs to identify that the information is synthetically generated using that specific computer resource of the intermediary. The rule mandates that such an intermediary “shall not enable the modification, suppression, or removal of such label, permanent unique metadata, or identifier.” However, the particular measures to be taken are not specified, which could result in an arbitrary requirement, as it is not technically feasible to guarantee that such identifiers remain intact once content is re-encoded, cropped, or shared across platforms.
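The fragility of embedded identifiers is easy to demonstrate. The following Python sketch (a simplified illustration, not a reference to any specific labelling scheme the Draft Amendments mandate) embeds a hypothetical provenance label in a PNG ancillary text chunk, then shows that a routine re-write keeping only the critical chunks, which is effectively what many re-encoding or “save as” pipelines do, silently discards it:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialise one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a minimal 1x1 greyscale PNG carrying a hypothetical provenance
# label in an ancillary tEXt chunk (a stand-in for the rule's
# "permanent unique metadata or identifier").
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)   # 1x1, 8-bit, greyscale
idat = zlib.compress(b"\x00\x00")                      # filter byte + 1 pixel
label = b"Label\x00synthetically-generated"
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", ihdr)
       + chunk(b"tEXt", label)
       + chunk(b"IDAT", idat)
       + chunk(b"IEND", b""))

def keep_critical_chunks(data: bytes) -> bytes:
    """Re-write the PNG, copying only the critical chunks.

    Ancillary chunks (including the provenance label above) are dropped
    without the image itself changing in any visible way.
    """
    out, pos = [data[:8]], 8          # keep the PNG signature
    while pos < len(data):
        length = struct.unpack(">I", data[pos:pos + 4])[0]
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length       # 4 length + 4 type + data + 4 CRC
        if ctype in (b"IHDR", b"IDAT", b"IEND"):
            out.append(data[pos:end])
        pos = end
    return b"".join(out)

stripped = keep_critical_chunks(png)
print(b"tEXt" in png, b"tEXt" in stripped)  # True False
```

The stripped file remains a valid PNG displaying the identical pixel, yet the “permanent” identifier is gone, illustrating why permanence cannot be enforced by the originating intermediary once the file leaves its platform.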

The newly inserted Rule 4(1A) provides additional due diligence obligations for Significant Social Media Intermediaries (SSMIs) when displaying, uploading, or publishing any information. SSMIs now have to obtain a declaration from the user as to whether such information is synthetically generated, as well as verify the accuracy of such declaration. If it is verified that the information is synthetically generated, the SSMIs have to indicate the same with a clear and prominent label. The proviso to the rule states that “where such intermediary becomes aware, or it is otherwise established, that the intermediary knowingly permitted, promoted, or failed to act upon such synthetically generated information in contravention of these rules”, the intermediary is deemed to have failed to exercise due diligence. This is problematic because deeming a mere “failure to act upon” synthetically generated information a failure of due diligence could lead intermediaries to over-monitor, excessively interfere with, and pre-censor the posts and publications on their platforms.

Absence of Harm-Specific Regulation: Although the Draft Amendments have been formulated to combat “deepfakes, misinformation, and other unlawful content,” they contain no provisions specifically addressing these harms, imposing requirements to prevent such harms from occurring, or remedying the damage caused to users. Instead of addressing the creation and distribution of harmful synthetically generated information, the proposed amendments would regulate the entire spectrum of such information. In contrast, the ECI advisory dated 24th October, which requires political parties, candidates, and their campaign managers to adopt similar labelling and disclosure mandates when using synthetically generated or altered information for campaign purposes, provides a stronger framework. The advisory states that content that is unlawful and misrepresents the identity, appearance, or voice of any person shall not be used without their consent, and that any instance of content that fails this requirement should be taken down within 3 hours of being noticed or reported. The ECI also requires that political parties maintain internal records of all AI-generated campaign materials, including creator details and timestamps, ensuring the use of such tools remains accountable. Juxtaposing the two texts clearly shows that the Draft Amendments use vague language and would place arbitrary, disproportionate obligations on intermediaries.


In light of the vast proliferation of AI tools and the potential for pre-censorship and overregulation, SFLC.in submits the following comments: