The rise of deepfake technology presents unparalleled challenges to the integrity of our electoral systems, allowing the creation of convincingly altered media that is hard to distinguish from reality. On March 7th, 2024, we convened a panel of experts for a discussion titled ‘Ballots and Bots: Elections 2024 In a Digital World’, focusing on digital interference in India’s upcoming national elections. During this session, experts explored the threats posed by political parties’ IT cells, misinformation campaigns, deepfakes, and AI-driven manipulation of audio and video content in shaping the outcome of the impending general elections in India.
As a follow-up to our event, SFLC.in, along with other civil society organisations and concerned citizens, has written to the Election Commission of India (ECI) and platform companies, urging them to adopt stringent measures regarding generative AI and manipulated media as our nation gears up for the national elections, which will be held between 19 April and 1 June 2024.
As stakeholders committed to upholding democratic principles, we believe proactive measures are essential. In light of this concern, we urge the ECI and the platform companies to take immediate and decisive action to ensure that their policies and practices effectively counter deepfakes and manipulative content.
The letter addressed to the Election Commission of India highlights the urgency of intervention to uphold electoral integrity in the face of threats posed by deepfakes and manipulated media. We urge the ECI to direct intermediaries and platforms to reinforce their internal processes and implement effective measures against misinformation, tailored to electoral risks.
Similarly, we wrote to platform companies underscoring the need for robust policies and practices to counter the menace of deepfakes, generative AI, and manipulated media content. We ask them to implement measures tailored to mitigate the risks associated with the creation and dissemination of such content, including access to official information on electoral processes, media literacy initiatives, fact-checking labels, and more.
● Letter to ECI:
To,
The Honourable Chief Election Commissioner
Election Commission of India
Nirvachan Sadan, Ashoka Road,
New Delhi – 110001
Sir,
Subject: Urgent need for intervention on generative AI and manipulated media content to uphold electoral integrity
The emergence of deepfake technology, which enables the creation of highly realistic and difficult-to-detect manipulated media, poses unprecedented threats to the sanctity and integrity of our electoral processes. On March 7th, 2024, we held an experts’ discussion on digital interference in India’s forthcoming national elections – ‘Ballots and Bots: Elections 2024 In a Digital World’. There, experts deliberated on the potential influence of political parties’ IT cells, misinformation, deepfakes, and AI-powered audio and video manipulation during the upcoming general elections in India.
The Election Commission of India (ECI), as the apex body mandated to conduct free and fair elections, has a constitutional obligation to safeguard our democracy from any form of external manipulation or influence. Under Article 324 of the Constitution of India, the ECI is endowed with plenary powers to direct, control, and supervise all electoral processes, ensuring their conduct in a free and fair manner. This includes the authority to make necessary interventions to address emerging threats that can undermine the fairness of elections.
In light of the recent surge in instances involving deepfakes, which can be exploited to spread misinformation, impersonate political figures, and manipulate voter perception, the ECI must take decisive and urgent action. Such manipulated media distort public discourse and threaten to erode trust in our democratic institutions and processes.
Therefore, we respectfully urge the Election Commission of India to:
1. Direct intermediaries and platforms to identify systemic risks related to electoral processes and to reinforce their internal processes for identifying and implementing reasonable, proportionate, and effective mitigation measures.
2. Direct intermediaries and platforms to set up an internal task force for elections-specific risk mitigation, covering cybersecurity, threat disruption, content moderation, and disinformation.
3. Direct intermediaries and platforms to implement reasonable, proportionate, and effective mitigation measures tailored to the risks associated with the creation and dissemination of generative AI and manipulated media content.
These measures may include:
a. access to official information on electoral processes,
b. media literacy initiatives,
c. fact-checking labels and additional contextual information,
d. tools and information to help users assess the provenance of information sources, and
e. measures to ensure that generative AI and manipulated media content are clearly distinguishable for users and labelled as such.
The Election Commission’s proactive engagement in directing platform companies to enhance their policies against manipulated media is crucial in maintaining the integrity of our electoral processes. Such measures are essential to ensure that elections in India remain free from manipulation, thereby upholding the democratic principles that are the cornerstone of our nation.
Thank you for considering this urgent and important matter.
Sincerely,
[in alphabetical order]
Organisations:
Access Now
Internet Freedom Foundation
Software Freedom Law Center, India
Citizens:
Abhishek Baxi
Abhishek Bhatt
Amar Kumar
Amit Kumar Ukey
Ashish Asgekar
Ashish Gupta
Asif
Gurumurthy Kasinathan
Jahnabi Mitra
Kanika Saxena
Kuldeep Nakra
Mayank
Nimitt Dixit
Osama Manzar
R Swarnalatha
Radhe Shyam
Rahul Batra
Rai Vikrant
Ratna Singh
Rishi Seth
Ritobhash
Romil
Shrikrishna Kachave
Shuvam
Vickram Crishna
Vishal Singh
● Letters to platform companies:
1. Google:
To,
Mr. Sreenivasa Reddy
Head – Government Affairs and Public Policy, India
No 3, RMZ Infinity – Tower E, Old Madras Road, 4th & 5th Floors,
Bangalore, Karnataka 560016, India
Email: srinivasareddy@google.com
Subject: Joint letter on concerns of generative AI and manipulated media content on your platform
We are writing to address the critical issue of generative AI and manipulated media content and their potential to undermine the sanctity of elections in India. The emergence of deepfake technology presents a profound challenge, enabling the creation of highly realistic and manipulated media that can distort truth, manipulate public perception, and jeopardise the foundation of trust upon which our democracy rests.
As you are likely aware, generative AI and manipulated media content represent a profound and emerging threat to the integrity of information shared online. These highly sophisticated manipulations of media content can undermine the public’s trust in digital platforms and, more critically, in democratic processes themselves. With this technology’s increasing pervasiveness, it has become imperative to address the potential for misuse, especially in the context of elections.
Apart from ethical concerns, generative AI and manipulated media content entail legal risks as well, which may lead to instances of defamation, impersonation, identity theft, and sexual harassment, amongst other harms. Rule 3(2)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandates due diligence concerning artificially morphed images impersonating an individual in any sexual act or conduct. It requires intermediaries to take all reasonable measures to remove or disable access to such content hosted, published, or transmitted on their platform within 24 hours of receiving a complaint. Failure to expeditiously remove or disable access upon knowledge of such content would strip intermediaries of the safe harbour protection under Section 79 of the Information Technology Act, 2000, and may also attract penalties under Sections 66C, 66E, and 67 of the Act, as well as Sections 354A, 354C, and 499 of the Indian Penal Code, 1860.
We recently held an experts’ discussion on digital interference in India’s forthcoming national elections – ‘Ballots and Bots: Elections 2024 In a Digital World’ on March 7th, 2024. Experts deliberated on the potential influence of IT cells, misinformation, deepfakes, and AI-powered audio and video manipulation during the impending general elections in India.
In the face of this growing concern, we call upon Google to take immediate and decisive action to ensure that its policies and practices robustly counter the menace of deepfakes, generative AI, and manipulated media content. It is imperative that we work together to safeguard our electoral processes from external manipulations that threaten the democratic rights and freedoms of our citizens.
We urge you to:
1. Identify systemic risks related to electoral processes: Google must reinforce internal processes to identify and design reasonable, proportionate, and effective mitigation measures.
2. Adopt elections-specific risk mitigation measures: Google must set up an internal task force covering all relevant areas: cybersecurity, threat disruption, content moderation, and disinformation.
3. Implement tailored mitigation measures: Google must implement reasonable, proportionate, and effective mitigation measures tailored to the risks associated with creating and disseminating generative AI and manipulated media content.
These measures may include:
a. access to official information on electoral processes,
b. media literacy initiatives,
c. fact-checking labels and additional contextual information,
d. tools and information to help users assess the provenance of information sources, and
e. measures to ensure that generative AI and manipulated media content are clearly distinguishable for users and labelled as such.
The integrity of our electoral processes is a cornerstone of our democratic society. As such, it is incumbent upon all stakeholders, including large platforms such as Google, to actively protect discourse from emerging threats. The measures we propose are crucial for the upcoming elections and imperative for preserving the long-term health and vibrancy of our democratic discourse.
We look forward to your positive response and are keen to discuss this matter further with you. Together, we can ensure that the digital landscape remains a space for free, fair, and truthful engagement, reflecting India’s democratic spirit.
Sincerely,
[in alphabetical order]
Organisations:
Internet Freedom Foundation
Software Freedom Law Center, India
Citizens:
Abhishek Baxi
Akshay Deshmane
Amar Kumar
Amit Kumar Ukey
Angela Thomas
Animesh Narayan
Ashish Asgekar
Ashish Gupta
Gurumurthy Kasinathan
Jahnabi Mitra
Kanika Saxena
Kuldeep Nakra
Osama Manzar
Rahul Batra
Ratna Singh
Ritobhash
Romil
Selvaraj
Shrikrishna Kachave
Shuvam
Vickram Crishna
Vishal Singh
2. Meta:
To,
Mr. Shivnath Thukral
Director and Head of India Public Policy, Meta
DLF ATRIA, Gulmohar Marg, DLF Phase 2, Sector 25, Gurugram, Haryana 122002
Email: sthukral@fb.com
Subject: Joint letter on concerns of generative AI and manipulated media content on your platform
We are writing to address the critical issue of generative AI and manipulated media content and their potential to undermine the sanctity of elections in India. The emergence of deepfake technology presents a profound challenge, enabling the creation of highly realistic and manipulated media that can distort truth, manipulate public perception, and jeopardise the foundation of trust upon which our democracy rests.
As you are likely aware, generative AI and manipulated media content represent a profound and emerging threat to the integrity of information shared online. These highly sophisticated manipulations of media content can undermine the public’s trust in digital platforms and, more critically, in democratic processes themselves. With this technology’s increasing pervasiveness, it has become imperative to address the potential for misuse, especially in the context of elections.
Apart from ethical concerns, generative AI and manipulated media content entail legal risks as well, which may lead to instances of defamation, impersonation, identity theft, and sexual harassment, amongst other harms. Rule 3(2)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandates due diligence concerning artificially morphed images impersonating an individual in any sexual act or conduct. It requires intermediaries to take all reasonable measures to remove or disable access to such content hosted, published, or transmitted on their platform within 24 hours of receiving a complaint. Failure to expeditiously remove or disable access upon knowledge of such content would strip intermediaries of the safe harbour protection under Section 79 of the Information Technology Act, 2000, and may also attract penalties under Sections 66C, 66E, and 67 of the Act, as well as Sections 354A, 354C, and 499 of the Indian Penal Code, 1860.
We recently held an experts’ discussion on digital interference in India’s forthcoming national elections – ‘Ballots and Bots: Elections 2024 In a Digital World’ on March 7th, 2024. Experts deliberated on the potential influence of IT cells, misinformation, deepfakes, and AI-powered audio and video manipulation during the impending general elections in India.
In the face of this growing concern, we call upon Meta to take immediate and decisive action to ensure that its policies and practices robustly counter the menace of deepfakes, generative AI, and manipulated media content. It is imperative that we work together to safeguard our electoral processes from external manipulations that threaten the democratic rights and freedoms of our citizens.
We urge you to:
1. Identify systemic risks related to electoral processes: Meta must reinforce internal processes to identify and design reasonable, proportionate, and effective mitigation measures.
2. Adopt elections-specific risk mitigation measures: Meta must set up an internal task force covering all relevant areas: cybersecurity, threat disruption, content moderation, and disinformation.
3. Implement tailored mitigation measures: Meta must implement reasonable, proportionate, and effective mitigation measures tailored to the risks associated with creating and disseminating generative AI and manipulated media content.
These measures may include:
a. access to official information on electoral processes,
b. media literacy initiatives,
c. fact-checking labels and additional contextual information,
d. tools and information to help users assess the provenance of information sources, and
e. measures to ensure that generative AI and manipulated media content are clearly distinguishable for users and labelled as such.
The integrity of our electoral processes is a cornerstone of our democratic society. As such, it is incumbent upon all stakeholders, including large platforms such as Meta, to actively protect discourse from emerging threats. The measures we propose are crucial for the upcoming elections and imperative for preserving the long-term health and vibrancy of our democratic discourse.
We look forward to your positive response and are keen to discuss this matter further with you. Together, we can ensure that the digital landscape remains a space for free, fair, and truthful engagement, reflecting India’s democratic spirit.
Sincerely,
[in alphabetical order]
Organisations:
Internet Freedom Foundation
Software Freedom Law Center, India
Citizens:
Abhishek Baxi
Akshay Deshmane
Amar Kumar
Amit Kumar Ukey
Angela Thomas
Animesh Narayan
Ashish Asgekar
Ashish Gupta
Gurumurthy Kasinathan
Jahnabi Mitra
Kanika Saxena
Kuldeep Nakra
Osama Manzar
Rahul Batra
Ratna Singh
Ritobhash
Romil
Selvaraj
Shrikrishna Kachave
Shuvam
Vickram Crishna
Vishal Singh
3. Snap:
To,
Ms. Uthara Ganesh
Head of Public Policy, India and South Asia, Snap Inc.
Diamond Centre, Unit No 26, Ground Floor, near Vardhman Industrial Estate,
Vikhroli (West), Mumbai,
Maharashtra, India – 400043
Email: uthara.ganesh@snapchat.com
Subject: Joint letter on concerns of generative AI and manipulated media content on your platform
We are writing to address the critical issue of generative AI and manipulated media content and their potential to undermine the sanctity of elections in India. The emergence of deepfake technology presents a profound challenge, enabling the creation of highly realistic and manipulated media that can distort truth, manipulate public perception, and jeopardise the foundation of trust upon which our democracy rests.
As you are likely aware, generative AI and manipulated media content represent a profound and emerging threat to the integrity of information shared online. These highly sophisticated manipulations of media content can undermine the public’s trust in digital platforms and, more critically, in democratic processes themselves. With this technology’s increasing pervasiveness, it has become imperative to address the potential for misuse, especially in the context of elections.
Apart from ethical concerns, generative AI and manipulated media content entail legal risks as well, which may lead to instances of defamation, impersonation, identity theft, and sexual harassment, amongst other harms. Rule 3(2)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandates due diligence concerning artificially morphed images impersonating an individual in any sexual act or conduct. It requires intermediaries to take all reasonable measures to remove or disable access to such content hosted, published, or transmitted on their platform within 24 hours of receiving a complaint. Failure to expeditiously remove or disable access upon knowledge of such content would strip intermediaries of the safe harbour protection under Section 79 of the Information Technology Act, 2000, and may also attract penalties under Sections 66C, 66E, and 67 of the Act, as well as Sections 354A, 354C, and 499 of the Indian Penal Code, 1860.
We recently held an experts’ discussion on digital interference in India’s forthcoming national elections – ‘Ballots and Bots: Elections 2024 In a Digital World’ on March 7th, 2024. Experts deliberated on the potential influence of IT cells, misinformation, deepfakes, and AI-powered audio and video manipulation during the impending general elections in India.
In the face of this growing concern, we call upon Snap to take immediate and decisive action to ensure that its policies and practices robustly counter the menace of deepfakes, generative AI, and manipulated media content. It is imperative that we work together to safeguard our electoral processes from external manipulations that threaten the democratic rights and freedoms of our citizens.
We urge you to:
1. Identify systemic risks related to electoral processes: Snap must reinforce internal processes to identify and design reasonable, proportionate, and effective mitigation measures.
2. Adopt elections-specific risk mitigation measures: Snap must set up an internal task force covering all relevant areas: cybersecurity, threat disruption, content moderation, and disinformation.
3. Implement tailored mitigation measures: Snap must implement reasonable, proportionate, and effective mitigation measures tailored to the risks associated with creating and disseminating generative AI and manipulated media content.
These measures may include:
a. access to official information on electoral processes,
b. media literacy initiatives,
c. fact-checking labels and additional contextual information,
d. tools and information to help users assess the provenance of information sources, and
e. measures to ensure that generative AI and manipulated media content are clearly distinguishable for users and labelled as such.
The integrity of our electoral processes is a cornerstone of our democratic society. As such, it is incumbent upon all stakeholders, including large platforms such as Snap, to actively protect discourse from emerging threats. The measures we propose are crucial for the upcoming elections and imperative for preserving the long-term health and vibrancy of our democratic discourse.
We look forward to your positive response and are keen to discuss this matter further with you. Together, we can ensure that the digital landscape remains a space for free, fair, and truthful engagement, reflecting India’s democratic spirit.
Sincerely,
[in alphabetical order]
Organisations:
Internet Freedom Foundation
Software Freedom Law Center, India
Citizens:
Abhishek Baxi
Akshay Deshmane
Amar Kumar
Amit Kumar Ukey
Angela Thomas
Animesh Narayan
Ashish Asgekar
Ashish Gupta
Gurumurthy Kasinathan
Jahnabi Mitra
Kanika Saxena
Kuldeep Nakra
Osama Manzar
Rahul Batra
Ratna Singh
Ritobhash
Romil
Selvaraj
Shrikrishna Kachave
Shuvam
Vickram Crishna
Vishal Singh
4. OpenAI:
To,
Ms. Anna Adeola Makanju
Global Head, Public Policy, OpenAI
3180 18th Street, San Francisco,
California 94110, United States
Email: amakanju@openai.com
Subject: Joint letter on concerns of generative AI and manipulated media content on your platform
We are writing to address the critical issue of generative AI and manipulated media content and their potential to undermine the sanctity of elections in India. The emergence of deepfake technology presents a profound challenge, enabling the creation of highly realistic and manipulated media that can distort truth, manipulate public perception, and jeopardise the foundation of trust upon which our democracy rests.
As you are likely aware, generative AI and manipulated media content represent a profound and emerging threat to the integrity of information shared online. These highly sophisticated manipulations of media content can undermine the public’s trust in digital platforms and, more critically, in democratic processes themselves. With this technology’s increasing pervasiveness, it has become imperative to address the potential for misuse, especially in the context of elections.
Apart from ethical concerns, generative AI and manipulated media content entail legal risks as well, which may lead to instances of defamation, impersonation, identity theft, and sexual harassment, amongst other harms. Rule 3(2)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandates due diligence concerning artificially morphed images impersonating an individual in any sexual act or conduct. It requires intermediaries to take all reasonable measures to remove or disable access to such content hosted, published, or transmitted on their platform within 24 hours of receiving a complaint. Failure to expeditiously remove or disable access upon knowledge of such content would strip intermediaries of the safe harbour protection under Section 79 of the Information Technology Act, 2000, and may also attract penalties under Sections 66C, 66E, and 67 of the Act, as well as Sections 354A, 354C, and 499 of the Indian Penal Code, 1860.
We recently held an experts’ discussion on digital interference in India’s forthcoming national elections – ‘Ballots and Bots: Elections 2024 In a Digital World’ on March 7th, 2024. Experts deliberated on the potential influence of IT cells, misinformation, deepfakes, and AI-powered audio and video manipulation during the impending general elections in India.
In the face of this growing concern, we call upon OpenAI to take immediate and decisive action to ensure that its policies and practices robustly counter the menace of deepfakes, generative AI, and manipulated media content. It is imperative that we work together to safeguard our electoral processes from external manipulations that threaten the democratic rights and freedoms of our citizens.
We urge you to:
1. Identify systemic risks related to electoral processes: OpenAI must reinforce internal processes to identify and design reasonable, proportionate, and effective mitigation measures.
2. Adopt elections-specific risk mitigation measures: OpenAI must set up an internal task force covering all relevant areas: cybersecurity, threat disruption, content moderation, and disinformation.
3. Implement tailored mitigation measures: OpenAI must implement reasonable, proportionate, and effective mitigation measures tailored to the risks associated with creating and disseminating generative AI and manipulated media content.
These measures may include:
a. access to official information on electoral processes,
b. media literacy initiatives,
c. fact-checking labels and additional contextual information,
d. tools and information to help users assess the provenance of information sources, and
e. measures to ensure that generative AI and manipulated media content are clearly distinguishable for users and labelled as such.
The integrity of our electoral processes is a cornerstone of our democratic society. As such, it is incumbent upon all stakeholders, including large platforms such as OpenAI, to actively protect discourse from emerging threats. The measures we propose are crucial for the upcoming elections and imperative for preserving the long-term health and vibrancy of our democratic discourse.
We look forward to your positive response and are keen to discuss this matter further with you. Together, we can ensure that the digital landscape remains a space for free, fair, and truthful engagement, reflecting India’s democratic spirit.
Sincerely,
[in alphabetical order]
Organisations:
Internet Freedom Foundation
Software Freedom Law Center, India
Citizens:
Abhishek Baxi
Akshay Deshmane
Amar Kumar
Amit Kumar Ukey
Angela Thomas
Animesh Narayan
Ashish Asgekar
Ashish Gupta
Gurumurthy Kasinathan
Jahnabi Mitra
Kanika Saxena
Kuldeep Nakra
Osama Manzar
Rahul Batra
Ratna Singh
Ritobhash
Romil
Selvaraj
Shrikrishna Kachave
Shuvam
Vickram Crishna
Vishal Singh
5. X:
To,
X
Subject: Joint letter on concerns of generative AI and manipulated media content on your platform
We are writing to address the critical issue of generative AI and manipulated media content and their potential to undermine the sanctity of elections in India. The emergence of deepfake technology presents a profound challenge, enabling the creation of highly realistic and manipulated media that can distort truth, manipulate public perception, and jeopardise the foundation of trust upon which our democracy rests.
As you are likely aware, generative AI and manipulated media content represent a profound and emerging threat to the integrity of information shared online. These highly sophisticated manipulations of media content can undermine the public’s trust in digital platforms and, more critically, in democratic processes themselves. With this technology’s increasing pervasiveness, it has become imperative to address the potential for misuse, especially in the context of elections.
Apart from ethical concerns, generative AI and manipulated media content entail legal risks as well, which may lead to instances of defamation, impersonation, identity theft, and sexual harassment, amongst other harms. Rule 3(2)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandates due diligence concerning artificially morphed images impersonating an individual in any sexual act or conduct. It requires intermediaries to take all reasonable measures to remove or disable access to such content hosted, published, or transmitted on their platform within 24 hours of receiving a complaint. Failure to expeditiously remove or disable access upon knowledge of such content would strip intermediaries of the safe harbour protection under Section 79 of the Information Technology Act, 2000, and may also attract penalties under Sections 66C, 66E, and 67 of the Act, as well as Sections 354A, 354C, and 499 of the Indian Penal Code, 1860.
We recently held an experts’ discussion on digital interference in India’s forthcoming national elections – ‘Ballots and Bots: Elections 2024 In a Digital World’ on March 7th, 2024. Experts deliberated on the potential influence of IT cells, misinformation, deepfakes, and AI-powered audio and video manipulation during the impending general elections in India.
In the face of this growing concern, we call upon X to take immediate and decisive action to ensure that its policies and practices robustly counter the menace of deepfakes, generative AI, and manipulated media content. It is imperative that we work together to safeguard our electoral processes from external manipulations that threaten the democratic rights and freedoms of our citizens.
We urge you to:
1. Identify systemic risks related to electoral processes: X must reinforce internal processes to identify and design reasonable, proportionate, and effective mitigation measures.
2. Adopt elections-specific risk mitigation measures: X must set up an internal task force covering all relevant areas: cybersecurity, threat disruption, content moderation, and disinformation.
3. Implement tailored mitigation measures: X must implement reasonable, proportionate, and effective mitigation measures tailored to the risks associated with creating and disseminating generative AI and manipulated media content.
These measures may include:
a. access to official information on electoral processes,
b. media literacy initiatives,
c. fact-checking labels and additional contextual information,
d. tools and information to help users assess the provenance of information sources, and
e. measures to ensure that generative AI and manipulated media content are clearly distinguishable for users and labelled as such.
The integrity of our electoral processes is a cornerstone of our democratic society. As such, it is incumbent upon all stakeholders, including large platforms such as X, to actively protect discourse from emerging threats. The measures we propose are crucial for the upcoming elections and imperative for preserving the long-term health and vibrancy of our democratic discourse.
We look forward to your positive response and are keen to discuss this matter further with you. Together, we can ensure that the digital landscape remains a space for free, fair, and truthful engagement, reflecting India’s democratic spirit.
Sincerely,
[in alphabetical order]
Organisations:
Internet Freedom Foundation
Software Freedom Law Center, India
Citizens:
Abhishek Baxi
Akshay Deshmane
Amar Kumar
Amit Kumar Ukey
Angela Thomas
Animesh Narayan
Ashish Asgekar
Ashish Gupta
Gurumurthy Kasinathan
Jahnabi Mitra
Kanika Saxena
Kuldeep Nakra
Osama Manzar
Rahul Batra
Ratna Singh
Ritobhash
Romil
Selvaraj
Shrikrishna Kachave
Shuvam
Vickram Crishna
Vishal Singh