Summary Report: ‘Defending Free Speech and Privacy on the Internet’, a Round Table Discussion on the Future of Intermediary Liability in India

SFLC.in organized a round table discussion on the future of intermediary liability in India, titled ‘Defending Free Speech and Privacy on the Internet’. The round table took place on 13th February, 2019 at the India International Centre, New Delhi. Attendees included members of civil society, industry organizations, academia and the media, along with representatives from tech companies and intermediaries. The round table comprised two sessions, and the following is a report on the major discussions that took place.

Session 1: Discussing key recommendations made by various stakeholders

The discussions during the first session of the round table focused on:

  1. Classification of intermediaries;

  2. Rule 3(4) of the proposed Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018 (“Draft Rules”), which requires intermediaries to inform users about the privacy policy at least once every month;

  3. Rule 3(9) of the Draft Rules, which directs intermediaries to deploy automated tools for proactively filtering unlawful information or content;

  4. Rule 3(5) of the Draft Rules, which mandates enabling the tracing of the originator of information; and

  5. Incorporation of intermediaries under Rule 3(7) of the Draft Rules.

Participants reasoned that the Draft Rules should be framed according to the role of intermediaries and their control over content. As proposed, the Draft Rules apply to all intermediaries, from online platforms to cyber cafes. This one-size-fits-all approach is problematic: there is a need to identify the categories of intermediaries the Draft Rules would apply to and tailor the conditions for each category.

The requirement to inform users at least once every month is counterproductive, as it will cause consent fatigue. As an alternative, platforms should be required to publish the changes to their privacy policies, so that users are kept meaningfully abreast of relevant policy updates rather than lost in a barrage of messages.
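In practice, change-triggered notification can be as simple as comparing a fingerprint of the policy text between releases. The sketch below is a minimal illustration of that idea under assumptions of our own (the notify_users callback and the stored fingerprint are hypothetical), not anything prescribed by the Draft Rules:

```python
import hashlib

def policy_fingerprint(policy_text: str) -> str:
    """Return a stable fingerprint of the current privacy policy text."""
    return hashlib.sha256(policy_text.encode("utf-8")).hexdigest()

def notify_if_changed(policy_text: str, stored_fingerprint: str, notify_users) -> str:
    """Notify users only when the policy has actually changed.

    Returns the fingerprint to store for the next comparison, so users
    see one notice per substantive update instead of a monthly reminder.
    """
    current = policy_fingerprint(policy_text)
    if current != stored_fingerprint:
        notify_users("Our privacy policy has changed; here is what is new.")
    return current
```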

On the use of automated content filtering, the participants agreed that this requirement would not only effectuate pre-censorship but would also have a chilling effect on free speech. Read with Draft Rule 3(2)(b), which contains a broad category of vague terms such as “blasphemous”, “grossly harmful” and “harassing”[1], the deployment of technologies to proactively monitor and weed out such content would violate the right to freedom of speech and expression. It also implies that the intermediary would be applying its own judgement in deciding whether to remove content posted by third parties, which undermines the basic conditions of safe harbour for intermediaries under Section 79 of the Information Technology Act, 2000 (“IT Act”).

Taking the discussion further, the participants examined the issue of traceability. Traceability depends on the kind of information the intermediary collects and has nothing to do with encryption. Fulfilling this requirement would force intermediaries to maintain impossibly vast databases of information, which offends the principle of data minimisation, an essential component of implementing the right to privacy. Moreover, creating backdoors or deliberate vulnerabilities in end-to-end encryption threatens the security of users, exposing them to hackers. To facilitate tracing, applications like ShareChat and TikTok include a watermark with the user’s handle, which does not reveal the user’s identity but ties shared content to the account.
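One possible realisation of such a watermark is sketched below purely for illustration; the HMAC-based derivation and the function names are our assumptions, not a description of how ShareChat or TikTok actually implement it. The idea is to tag outgoing content with an opaque token that the platform can trace back to an account without exposing the user’s identity or collecting any additional data:

```python
import hmac
import hashlib

def opaque_handle(username: str, platform_key: bytes) -> str:
    """Derive a short, opaque watermark token from a user's handle.

    The platform can recompute the token to trace content back to an
    account, but the token alone does not expose the username.
    """
    digest = hmac.new(platform_key, username.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

def watermark_caption(caption: str, username: str, platform_key: bytes) -> str:
    """Stamp shared content's caption with the opaque handle."""
    return f"{caption} [via:{opaque_handle(username, platform_key)}]"

# Example: the watermark travels with the content, not the identity.
print(watermark_caption("Good morning!", "alice_01", b"platform-secret-key"))
```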

Further, a mandatory requirement of incorporation under India’s Companies Act, 2013 cannot be introduced through subordinate legislation under the IT Act; this amounts to excessive delegation and goes beyond the rule-making power the Act confers. The condition would also impose an onerous burden on start-ups and dis-incentivise them from growing. Alternatives such as opening a representative office or designating an officer for constant coordination with law enforcement agencies could be explored.

Session 2: Safe Harbour Is Essential, But How Can We Ensure Accountability?

The second session of the round table focused on balancing safe harbour protection for intermediaries with making them accountable for unlawful content. The discussion opened with an acknowledgement that intermediaries themselves want unlawful content such as hate speech, misinformation, child pornography and copyright infringement taken down at the earliest, but that free speech should not be made a casualty in the process.

The Draft Rules could be seen as excessive delegation under the IT Act and a form of state censorship. Section 69A of the IT Act already allows directions to be issued for ‘blocking for public access of any information through any computer resource’. Section 79 of the IT Act, on the other hand, relates only to safe harbour protection. It is an exception clause, and imposing elaborate content take-down requirements under it is outside its purview. This view was supported by the Supreme Court in Shreya Singhal v. Union of India [WP (Crim) 167 of 2012] and the Delhi High Court in MySpace Inc. v. Super Cassettes Industries Ltd. [FAO(OS) 540/2011], where the courts found that Section 79 is an exception and should not be used as a substantive provision for content take-down. Section 69A also carries procedural safeguards that must be followed before take-down notices are issued, whereas Section 79, being an exception clause, lacks adequate safeguards, and the Draft Rules dilute the checks needed for content take-down. It was therefore suggested that Section 69A, rather than Section 79, should be invoked for this purpose.

Concerns were raised about the stringent response times for take-down notices imposed under the Draft Rules. In some cases the government notice is in a vernacular language, and translating it or obtaining a legal opinion on the validity of the order takes time.

Attendees acknowledged that the government has bona fide concerns about the rise of misinformation, terrorist propaganda and other forms of unlawful content in the digital space. Intermediaries should be made responsible, but rules and regulations should be proportionate. Suggested solutions included IAMAI’s self-regulatory model, intermediaries contractually prohibiting users from uploading such content (through terms and conditions) and updating community guidelines. The knowledge gap should be filled by regular interaction between a consortium of technology companies and the government. More transparency is also needed from tech companies.

On the issue of fake news, it was suggested that fake news should be clearly defined and the definition applied uniformly across intermediaries. A common benchmark list of fake news could be created, with social media entities working with fact-checking organizations to collate it. Platforms’ prediction of user behaviour was criticized as biased.

There is a social and literacy gap that needs to be filled: low literacy rates and a large population are challenges in dealing with fake news. A holistic approach should be used, including capacity development, educating users on ‘netiquette’ and generating awareness. Efforts in some Kerala schools to train students to identify fake news could be replicated at the regional and national levels. Correcting fake news with accurate information was suggested, and police could counter fake news using the same social media platforms: the Kolkata Police runs a Facebook page to generate awareness, and the Maharashtra Police is using technology to take down copyright-infringing content.

On the incorporation requirement in the Draft Rules, an issue was raised that tech companies do not cooperate with law enforcement agencies. It was suggested that standard operating procedures be established and process gaps be addressed.

The use of fake news in election campaigns was also discussed. The government, being a major stakeholder in the election process, has a larger role to play; it should work with the Election Commission of India (“EC”) to take out advertisements against fake news. Simple steps like asking users to fact-check could bring about a huge change. The recent deliberations at the EC on Section 126 of the Representation of the People Act, 1951 could have been broader and more transparent.

Certain clarifications to the IT Act were suggested. It is unclear what consequences would follow from the loss of safe harbour protection, and whether intermediaries would then be exposed to the entire list of offences under the IT Act and other laws such as the IPC. Some attendees were of the view that intermediaries should not be given blanket safe harbour protection, especially for content or fake news that leads to ethnic cleansing or lynchings. The recent exposé by The New York Times, reporting that Facebook knew about unprecedented Russian advertisement spending, was also discussed, and it was reiterated that platforms must be made responsible and transparent.

In summary, suggestions included building transparency and accountability in industry and tech companies; training and capacity building in digital literacy and security; involving the government in the process; addressing implementation challenges by building a centralized complaint mechanism for fake news and online harassment; and framing a stronger encryption policy. Incorporating statements of policy intent and recitals would also help readers better understand the Draft Rules.

[1] In Shreya Singhal v. Union of India [WP (Crim) 167 of 2012], the Supreme Court declared that Section 66A of the IT Act suffered from the vice of vagueness and struck it down. Terms used in Section 66A such as “grossly harmful” and “harassing” are still used in the Draft Rules.
