Platform Governance & AI: Global Majority Perspectives | Event Summary

On 18 February 2026, SFLC.in convened a closed-door roundtable in partnership with Global Partners Digital titled Platform Governance & AI: Global Majority Perspectives. The session took place in New Delhi on the sidelines of the IndiaAI Impact Summit 2026, bringing together a diverse group of policymakers, researchers, technologists, and civil society advocates working at the intersection of AI, platform governance, and human rights.

The event comprised two roundtables:

1. The Landscape of Challenges for Platform Governance Coming from AI

2. International Guidance to Navigate Platform Governance & AI Challenges

Why This Conversation Matters

Artificial intelligence is rapidly transforming the architecture of digital platforms. From content moderation and recommendation systems to targeted advertising, generative AI, deepfakes, and emerging agentic services, AI is no longer a peripheral tool; it is becoming embedded in the core governance mechanisms of online ecosystems.

These developments raise urgent questions:

  • Who holds power in AI-mediated platform environments?
  • How is accountability enforced when decision-making becomes automated and opaque?
  • What happens to human rights protections when AI systems scale harm across borders?
  • And crucially, whose perspectives shape global governance standards?

Despite being deeply affected by platform policies, voices from the Global Majority remain underrepresented in international platform governance debates. This roundtable sought to address that gap by creating a focused, candid space for exchange and strategy-building.

Session I: The Landscape of Challenges for Platform Governance Coming from AI

The first session mapped the evolving challenges AI introduces into platform governance frameworks.

Participants examined:

  • The opacity of algorithmic systems and limited transparency around automated moderation decisions
  • The expansion of AI-driven content ranking and its implications for political discourse and public interest information
  • Cross-border enforcement inconsistencies and linguistic bias in moderation systems
  • The scaling of harm through generative AI, including synthetic media and misinformation
  • Growing concentration of infrastructural and data power among a small number of dominant actors

Discussions underscored that Global Majority contexts often experience heightened vulnerability: weaker regulatory leverage, limited access to platform accountability mechanisms, reduced investment in local language moderation, and disproportionate exposure to disinformation and digital repression.

Participants also reflected on the increasing automation of governance itself, where AI not only moderates content but also shapes visibility, monetization, and access, fundamentally restructuring online power dynamics.

Session II: International Guidance to Navigate Platform Governance & AI Challenges

The second session turned toward pathways for accountability and rights-based governance.

Participants engaged with international human rights guidance frameworks, including the OHCHR B-Tech Project, and discussed how these standards can be operationalized in diverse legal and political contexts.

Key themes included:

  • Human rights due diligence obligations for technology companies
  • Embedding transparency and explainability requirements into regulatory frameworks
  • Strengthening grievance redressal and remedy mechanisms
  • Ensuring that AI governance does not erode intermediary accountability safeguards
  • The importance of cross-regional peer learning and coalition-building

There was broad agreement that global standards must not remain abstract principles. Instead, they must be translated into enforceable regulatory mechanisms and advocacy strategies that reflect local realities and democratic values.

Building Sustained Collaboration

The roundtable emphasized that AI governance is not merely a technical issue; it is fundamentally about power, rights, and institutional accountability. As AI systems become embedded within both private platforms and public sector infrastructures, governance frameworks must evolve to address structural asymmetries and prevent the entrenchment of dependency.

Participants expressed a shared commitment to strengthening Global Majority representation in international governance debates, deepening cross-regional collaboration, and building sustained advocacy networks capable of responding to rapidly evolving AI systems.

SFLC.in remains committed to advancing transparent, accountable, and rights-respecting approaches to AI and platform governance. This convening marked an important step toward centering Global Majority perspectives in shaping the future of digital power and human rights in the age of AI.