Event Note: Second IT Rules Stakeholder Consultation, 24 April 2026

On 24 April 2026, SFLC.in organised a second stakeholder consultation on the second draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Draft Amendments”). The consultation brought together legal experts, journalists, civil society members, and platform representatives to examine the proposed changes. The closed-door roundtable discussion was structured around a series of questions on how the IT Rules 2026 amendments alter the current regulatory framework and how those changes are likely to play out in practice.

 

What will actually change in the law?

 

One set of participants approached the amendments by separating the overarching substantive law from the underlying enforcement architecture. From this perspective, the substantive definition of unlawful speech in India remains unchanged; what has shifted is the distribution of enforcement power. Earlier, takedown obligations were mediated through intermediaries, which could apply some level of review, but under the amendments that same enforcement can now extend directly to individuals who post or share content that is considered “news and current affairs.”

 

However, this framing was challenged by other participants who argued that who is regulated is as important as what is regulated. Expanding enforcement to individuals fundamentally alters the system by increasing the number of actors exposed to legal risk and lowering the threshold for governmental intervention. This is particularly significant given that “news and current affairs” is broadly and vaguely defined in the Rules themselves. As one participant pointed out, political opinion, commentary, and even casual discussion of public events on platforms like X could fall within that category.

 

A further concern was how enforcement through advisories and guidelines can shift the window of acceptable speech over time in unpredictable ways. Expression that was clearly acceptable when it was originally posted could later be flagged as unlawful and acted upon, creating uncertainty not just about what is allowed, but about how long content remains safe from scrutiny.

 

A number of participants explicitly framed the amendments as marking a shift in the regulatory burden from platforms toward direct regulation of users. One view was that the framework now allows the government to initiate action without relying on intermediary-triggered processes, and even without waiting for a complaint. This changes intermediaries from being the primary points of accountability to being just one of several enforcement channels. Another participant extended this argument, noting that the amendments blur the distinction between publisher and user. If an individual posting a video explaining a policy issue can be treated as a publisher of news and current affairs content, then regulatory obligations traditionally applied to institutions are being pushed onto individuals. This was linked to a broader concern that these regulatory categories could keep expanding, so that almost any online activity could eventually fall within the scope of regulation.

 

What role will advisories and executive directions play?

 

There was detailed discussion on the implications of requiring compliance with advisories, SOPs, and similar instruments. One participant described this as a shift from a law-based system to a direction-based system, where obligations are shaped dynamically by executive instructions rather than formal rule-making. Examples were cited of past advisories, particularly the advisory on AI, that imposed strict requirements but were later rolled back after pushback. While this allows for greater regulatory flexibility, the concern is that if such instruments become binding, platforms will have to treat evolving or even unstable directions and advisories, which have traditionally served only an interpretive role, as enforceable obligations.

 

Another issue raised was the lack of procedural safeguards. Advisories may not be publicly notified, may not go through consultation, and may not clearly define scope or applicability. This makes it harder for different actors to know what is expected of them or to respond consistently. Participants noted that this system encourages anticipatory compliance by platforms, making them act not only on formal orders but also on the perceived expectations of regulators.

 

How do the amendments interact with the reduced compliance timelines?

 

Participants examined how the shorter timelines under the SGI Rules, 2026 are already affecting decision-making in practice, and noted that the Draft Amendments would exacerbate these ill effects. One participant noted that even the difference between a 24-hour and a 3-hour window significantly affects whether any meaningful review can take place. At a 3-hour timeline, decisions become effectively automatic: there is little time to assess whether a complaint is made in good faith, whether the content meets legal thresholds, or whether there are competing considerations such as public interest. As one example, malicious complaints could be used to target individuals, with platforms unable to distinguish them from legitimate reports within the timeframe.

 

Another participant argued that prioritising speed of compliance may actually undermine victim protection. Systems built around rapid response may lack consistency and reliability, and harmful content may shift to less regulated platforms while legitimate content is removed on mainstream ones. There was also concern about evidence preservation. Rapid takedowns may remove content before it can be documented or used in legal proceedings, particularly in cases involving harassment or abuse.

How does this affect platform behaviour in practice?

 

Platform representatives and advisors described compliance as becoming increasingly unstable. One participant characterised it as a “moving goalpost,” where each new advisory or directive requires changes to moderation policies and internal architecture. This is especially difficult for platforms operating globally. Another key issue is risk management: even if directives and advisories can be legally challenged, the immediate risk of non-compliance, including blocking or loss of safe harbour, pushes platforms toward over-compliance.

 

Participants working with smaller organisations highlighted a different set of pressures. One point made was that the regulatory environment already includes multiple layers of enforcement, including takedowns, blocking, and even criminal proceedings. The amendments add another layer, but do not replace the existing ones. For smaller platforms, unclear, broad definitions and the potential for retrospective enforcement make it difficult to assess risk. Content posted months earlier may suddenly be flagged, and smaller platforms may not have the capacity to build systems that can respond within strict timelines or adapt to frequent regulatory changes. This creates a structural disadvantage and may push them out of the market, leading to consolidation.

 

A detailed intervention from the perspective of collaborative platforms, which rely on decentralised, community-driven moderation systems, highlighted how regulatory pressure may push them toward centralised decision-making and undermine their core model. Contributors may avoid topics that are legally risky, especially those related to current events, leading to gaps in knowledge production. An additional point raised was the downstream impact on AI systems, which rely on such knowledge bases: reduced participation and coverage could affect the quality of data used in training and, in turn, AI outputs.

 

How are users likely to be affected by these changes?

 

Participants described user impact primarily in terms of uncertainty and behavioural change. If users cannot clearly determine whether their content falls within “news and current affairs,” they may avoid posting on certain topics altogether. There were also concerns about access to grievance mechanisms. In many cases, users already need to go through law enforcement channels to initiate takedowns. If the state becomes the primary issuer of orders, individual users may be further sidelined.

 

Transparency was another key issue raised. Users may not know why content was removed, what standard was applied, or how to challenge the decision. Without visibility into takedown processes, accountability remains limited. These problems may be exacerbated if the amendments come into force.

 

Participants working on gender-based abuse and online harm raised specific concerns. One example highlighted how reporting content to a platform led to the perpetrator discovering the complaint, resulting in further harassment. Systems that prioritise speed or centralised enforcement, they argued, may not adequately safeguard sensitive information. There was also concern that when the state becomes the dominant actor in issuing takedown orders, individual complaints may be deprioritised. Complainants who do not want to engage with law enforcement may have fewer options, and the combined effect of uncertainty and exposure risk is likely to increase self-censorship among already vulnerable groups.

 

What uncertainties remain, and what are the possible remedies?

 

Several unresolved questions were identified. One participant asked whether the amendments create a parallel regime alongside existing safe harbour provisions, or whether they are intended to operate within that framework. Another issue is the interaction with ongoing litigation: Part III of the existing Rules is under judicial challenge, and it is unclear how the new amendments relate to those proceedings. There were also questions about procedural clarity, i.e., how requests will be processed, what standards will be applied, and how different authorities will coordinate.

 

Participants acknowledged a degree of fatigue with repeated cycles of amendments and consultations, but identified several possible approaches. One was to engage directly with the government’s stated objectives as part of advocacy, highlighting contradictions, for example, increasing criminal exposure while broader policy efforts aim toward decriminalisation. Another was to revive institutional mechanisms under the IT Act, such as the Cyber Regulations Advisory Committee, which has historically met infrequently, to enable more structured consultations. Participants also suggested exploring alternative engagement channels, including international forums, and increasing transparency by publishing takedown orders.

 

There was also an emphasis on addressing the underlying speech regulation standards rather than focusing solely on procedural issues, and on reframing advocacy to focus more clearly on user impact and the broader implications for the digital ecosystem.

 

Conclusion

 

The Draft Amendments suggest a broader shift toward centralised, executive-driven control over online content, accompanied by increased compliance burdens and reduced procedural safeguards. Participants differed on whether this represents a continuation of existing trends or a more fundamental shift. However, there was broad agreement that the framework is becoming harder to interpret, harder to comply with, and more difficult to challenge, especially for those without institutional resources.