Beyond Safe Harbor: The Rise of Personal Liability in Platform Regulation

In recent times, online platform owners and representatives have faced personal liability for user-generated content, be it the arrest of Telegram CEO Pavel Durov or the suspension of social media platform X (previously Twitter) in Brazil. Authorities are compelling digital platforms to take down content and block accounts in order to avoid criminal consequences, including imprisonment of and personal liability for their executives over the actions of platform users. The U.K. government has recently proposed holding senior executives of online companies personally liable for failing to remove specific pieces of content, such as advertisements for the sale of weapons, within two days; the executives could face a fine of up to £10,000, as determined by the court. These instances resemble a “hostage-taking” situation, as intermediaries are forced to choose between safeguarding employee safety and upholding freedom of speech. This blog post explores how freedom of speech on social media platforms is affected in an era where personal liability and accountability are being imposed on platform owners and their employees.

 

Analysis

The term “hostage-taking” is symbolic, describing how governments and legal authorities use statutory provisions to hold local employees of global companies responsible for ensuring compliance with government demands. As a general principle, users should be held responsible for the content they post on social media. Intermediaries are granted safe harbor protections, and founders and employees are generally shielded from personal or criminal liability unless they are directly involved in, or have personal knowledge of, the dissemination of harmful content. However, there is a growing trend of employees being threatened with personal liability for content moderation decisions. Compelling employees to bear the brunt of legal responsibility indirectly causes a chilling effect on free speech, as platforms tend to take down content that may be legitimate out of fear of legal sanctions. This pre-emptive censorship, whereby intermediaries excessively police content or comply too quickly with government requests to avoid legal risk, directly undermines users’ right to free speech.

Moreover, social media giants are increasingly withdrawing from certain countries, citing government content moderation requests that violate free speech and privacy and conflict with international law. Examples include Google’s exit from China, the bans on Telegram in several countries, and the temporary withdrawal of X (formerly Twitter) from Brazil, all of which highlight serious concerns about free speech.

Stakeholders such as shareholders, directors and employees are considered distinct from the company and are generally not personally liable for its actions under the ‘corporate veil’ doctrine. Under this doctrine, the company is treated as a separate legal entity from its stakeholders, shielding individuals from being sued for the company’s acts. The corporate veil enables stakeholders to take business risks without fear of losing their personal assets. However, if stakeholders engage in illegal activities or serious wrongdoing such as fraud or misconduct, courts may pierce the corporate veil and hold them personally responsible. If personal liability must indeed be imposed, the threshold for doing so should be set high, for instance where stakeholders deliberately ignored concerns raised about platform content, promoted illegal activities on the platform, or refused to cooperate with authorities following due process.

 

India

The need for safe harbor protection was first felt in India in Avnish Bajaj v. State, where the CEO of Baazee.com was arrested and charged over the sale of illegal content by a user; the Information Technology Act, 2000 was subsequently amended to include safe harbor protections for intermediaries. Currently, under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“the Rules”), intermediaries are required to designate a grievance officer. Further, a significant social media intermediary, i.e. an intermediary with fifty lakh (five million) or more registered users, must appoint a resident grievance officer, a nodal contact person for 24×7 coordination with law enforcement agencies, and a chief compliance officer, all of whom must reside in India. The chief compliance officer must be a key managerial personnel or senior employee of the company and can face imprisonment of up to five years. Under the Information Technology Act, an intermediary or any person who fails to assist the authorised agency is punishable with imprisonment of up to seven years and a fine. These provisions can subject individual employees to personal criminal liability, reflecting an overreliance on stringent statutory mechanisms to secure third-party compliance with government action. Over time, such practices erode public trust in the online ecosystem.

European Union and the Digital Services Act (DSA)

The DSA requires platforms to assess and mitigate systemic risks related to the spread of illegal content, disinformation, and impacts on users’ rights such as freedom of expression. While these safeguards are essential in addressing the global challenge of disinformation, such regulations may push platforms towards over-compliance and the over-removal of legitimate content in order to avoid fines and penalties. The consequences are felt more acutely when platforms adopt a ‘better safe than sorry’ approach, removing or restricting user-generated content at the cost of free expression. The DSA provides for fines of up to 6% of a company’s global annual turnover for non-compliance, adding significant financial risk. Such a high level of accountability raises concerns about a chilling effect on freedom of speech, as platforms may become more conservative in allowing certain types of speech or content to mitigate their legal exposure. However, the liability at present is platform-centric, with no personal liability for representatives.

United States: Section 230 and Legislative Pushbacks

In the United States, platform liability has centered on Section 230 of the Communications Decency Act (CDA). Section 230 shields platforms from liability for user-generated content, while also granting them the flexibility to moderate content in good faith. This provision has been regarded as the legal foundation of free speech online, allowing platforms to host a wide variety of content without fear of liability.

However, there have been increasing calls to reform or repeal Section 230, especially in light of concerns about the spread of disinformation, hate speech, and other harmful content on social media platforms.

 

Brazil: Court Orders and Platform Suspensions

X (previously Twitter) recently faced suspension and temporary withdrawal of its services in Brazil by court order after failing to appoint a legal representative in the country. Read about it in detail in our blog post here.

This episode exemplifies the broader trend of governments attempting to hold platforms and their employees accountable for content moderation failures, particularly in the context of elections and political misinformation. Brazil’s approach, like India’s, involves using personal liability as a tool to ensure compliance, often threatening imprisonment of employees if platforms fail to comply with takedown requests or provide user data.

 

Russia and the Escalating Battle with Global Platforms

Russia has escalated its control over digital platforms, using a combination of fines, takedowns, and arrests of local employees to enforce compliance with its increasingly restrictive digital laws. The introduction of Russia’s Sovereign Internet Law allows the government to isolate the country from the global internet, further amplifying concerns about censorship and surveillance. In recent years, platforms like Google, Twitter, and Facebook have been fined and compelled to remove content deemed illegal by the Russian government.

 

China: A Closed Digital Ecosystem

China has long maintained tight control over its digital ecosystem, with strict censorship laws that demand platforms comply with the Great Firewall—a vast system of surveillance and content regulation. Global platforms like Google and Facebook have exited China due to the impossibility of operating within China’s strict censorship regime while adhering to international norms on free speech.

Platforms that operate within China, like WeChat, reportedly censor and surveil messages shared on the platform. Platforms must also train personnel to conduct human review of uploaded content. If they fail to fulfill these monitoring responsibilities, they risk warnings, fines, service suspension, and the cancellation of business permits or licenses.

 

The Rise of Disinformation and AI-Generated Content

Another key reason for the increase in personal liability for platforms is the rise of disinformation proliferating through AI-generated content, particularly during elections and crises. Governments are enacting new regulations to tackle the spread of AI-generated disinformation, such as deepfakes, and the challenges posed by the rapid dissemination of false narratives; such laws expand the scope of intermediary liability and limit the applicability of safe harbor provisions. For instance, Germany passed the Network Enforcement Act (NetzDG) to tackle hate speech and false information online. Investigations were also opened against Mark Zuckerberg and other Facebook executives in Germany for failing to act against hate speech on the platform.

 

Way Forward

As governments worldwide tighten regulations around digital platforms, the tension between enforcing laws and upholding free speech continues to grow. Imposing personal liability on employees for content moderation introduces a troubling dynamic, as intermediaries may resort to over-censorship and over-comply with takedown requests even when those requests violate fundamental rights.

A path forward is to establish stronger procedural safeguards and transparency mechanisms, boost media literacy, and provide better fact-checking tools. Digital platforms can implement robust systems to flag and act upon illegal content, guided by clear policies on harmful content, without overreaching. For example, WhatsApp’s limits on forwarded messages show that platforms can take technical steps to curb the spread of harmful content like disinformation while still preserving the principles of free speech, as sketched below. AI-generated content should be identified and labeled by platforms so that users can distinguish synthetic material from authentic content. Platforms can also be mandated to publish transparency reports tracking government takedown requests and how they were handled, which will help build trust between governments, platforms, and users.
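To make the forwarding-limit idea concrete, here is a minimal, purely illustrative sketch, not WhatsApp’s actual implementation: the cap values, the `Message` class, and the `forward` function are all hypothetical, chosen only to show how friction and labelling can slow virality without removing speech.

```python
# Illustrative sketch only: the limits, class, and function below are
# hypothetical and not any platform's actual implementation.

from dataclasses import dataclass

MAX_FORWARD_TARGETS = 5       # hypothetical cap on chats per forwarding action
FREQUENTLY_FORWARDED_AT = 10  # hypothetical threshold for a "frequently forwarded" label


@dataclass
class Message:
    text: str
    forward_count: int = 0   # how many chats this message has been forwarded to so far
    label: str | None = None


def forward(message: Message, target_chats: list[str]) -> list[str]:
    """Forward a message to at most MAX_FORWARD_TARGETS chats and label
    (rather than remove) content that is being forwarded very widely."""
    allowed = target_chats[:MAX_FORWARD_TARGETS]
    message.forward_count += len(allowed)
    if message.forward_count >= FREQUENTLY_FORWARDED_AT:
        message.label = "Frequently forwarded"
    return allowed


# Example: an attempt to forward to eight chats only reaches the first five,
# and repeated forwarding eventually attaches a label instead of a takedown.
msg = Message("Unverified claim doing the rounds ...")
print(forward(msg, [f"chat-{i}" for i in range(8)]))   # ['chat-0', ..., 'chat-4']
print(forward(msg, [f"group-{i}" for i in range(6)]))  # ['group-0', ..., 'group-4']
print(msg.label)                                       # 'Frequently forwarded'
```

The design choice this sketch encodes is the point: friction and labelling reduce the reach of harmful content without requiring anyone, whether an algorithm or an employee acting under threat of personal liability, to decide whether a particular piece of speech is unlawful.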

 

Conclusion

The imposition of personal liability on platform employees is a reflection of the broader regulatory challenges posed by social media and digital content platforms. While the need to regulate harmful content is undeniable, holding employees personally responsible is not the solution.

Rather than relying on punitive measures that target individual employees, governments and digital platforms must work together to find balanced solutions that prioritize transparency, due process, and procedural safeguards. Regulations should focus on clear, fair systems for identifying and acting on illegal content, while preserving the corporate veil that enables platforms to operate without excessive fear of sanctions. By embracing these principles, we can ensure that digital platforms remain open spaces for free expression, while still addressing the legitimate concerns of content moderation in the modern era.