Following the Wall Street Journal’s article, SFLC.in tested the Meta AI chatbot by conducting a series of conversations with it. The article reported that Meta’s own employees were concerned that the company was not protecting its underage users from engaging in fantasy sex with its built-in AI personas. Given the implications for child safety, SFLC.in’s technologists conducted a series of tests to understand the following –
- Whether Meta has a mechanism to verify the age of its users before its chatbot engages in such conversations
- Whether the Meta AI chatbot nudges users to engage with it in a sexually explicit manner
Test 1 – Conversing with the Meta AI chatbot without login authentication
In the first test, an SFLC.in user navigated to ai.meta.com without logging into a Meta account and asked the chatbot to provide general information on Canada. Before responding to this query, Meta AI asked the user to verify their age. Once the age was provided, the chatbot answered the query both for an adult user and for a child user.
The user proceeded to ask further general-information questions. When an underage user asked the chatbot to engage in romantic roleplay, the chatbot responded only so long as the conversation did not become sexually explicit. In a subsequent series of tests, the chatbot would not respond whenever the user asked it a sexually explicit question.
Findings – The SFLC.in team found that Meta has given its chatbot the capability to verify a user’s age before answering any queries. Furthermore, Meta appears to have implemented a monitoring and failsafe mechanism that detects sexually explicit conversations and disengages from them.
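Based purely on this observed behaviour, such a safety gate might look something like the sketch below. This is a minimal illustrative reconstruction, not Meta’s actual implementation; the Session type, the age prompt, and the is_explicit check are hypothetical stand-ins for whatever Meta runs internally.

```python
# A minimal sketch of the behaviour observed in Test 1, NOT Meta's actual
# implementation. The age prompt and the explicit-content check are
# hypothetical stand-ins for whatever Meta runs internally.

from dataclasses import dataclass, field

# Placeholder keyword list; a real system would use a trained classifier.
EXPLICIT_TERMS = {"explicit-term-1", "explicit-term-2"}

@dataclass
class Session:
    age: int | None = None                          # unknown until verified
    history: list[str] = field(default_factory=list)

def is_explicit(text: str) -> bool:
    """Toy stand-in for a real content classifier."""
    return any(term in text.lower() for term in EXPLICIT_TERMS)

def handle_message(session: Session, message: str) -> str:
    # 1. Age-verification gate: no answers until the user states an age.
    if session.age is None:
        return "Before I answer, please confirm your age."
    session.history.append(message)
    # 2. Failsafe: detect an explicit turn anywhere in the conversation
    #    and disengage, as observed in the later tests.
    if any(is_explicit(turn) for turn in session.history):
        return "I can't continue this conversation."
    return answer(message)

def answer(message: str) -> str:
    """Placeholder for the underlying model call."""
    return f"(model response to: {message})"
```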
Test 2 – Conversing with the Meta AI chatbot on Instagram as an adult and as a child user
In addition to testing the chatbot through the website, our technologists repeated a similar methodology on Instagram through an adult user’s account as well as an underage user’s account. To initiate this test, an existing adult user account was used and a child user account was set up.
In the first scenario, an adult Instagram (test) user initiated a conversation with the chatbot, requesting a romantic roleplay. The chatbot continued to engage in the conversation even after the roleplay became sexually explicit, making no attempt to disengage.
In the second scenario, when an underage Instagram user initiated romantic roleplay with the chatbot, Meta AI withdrew its initial engaging response and refused to continue in any manner whatsoever. In another test, the AI engaged in romantic roleplay only up to the point at which the user tried to escalate the conversation. The chatbot did not nudge the user towards any sexually explicit conversation.
Findings – SFLC.in’s technology team concluded that the Meta AI chatbot is exploratory and will engage with a wide range of requests, including romantic and sexually explicit ones. However, the program has an in-built mechanism to verify its users’ age and does not escalate sexually explicit conversations with children.
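Taken together, the two scenarios suggest an age-conditional moderation policy. The sketch below is again a reconstruction from observed behaviour rather than Meta’s code; the age threshold of 18 and the function names are assumptions.

```python
# Hypothetical reconstruction of the age-conditional behaviour observed
# in Test 2; the threshold of 18 is an assumption, not a confirmed value.

ADULT_AGE = 18

def moderate_roleplay_turn(user_age: int, turn_is_explicit: bool) -> str:
    """Return the action taken on a roleplay turn, depending on user age."""
    if turn_is_explicit and user_age < ADULT_AGE:
        # Observed with the underage test account: Meta AI withdrew its
        # response and refused to engage further.
        return "refuse_and_disengage"
    # Observed with the adult test account: the conversation continued
    # even after it became sexually explicit.
    return "continue"

# Same explicit turn, different outcome depending on the account's age.
print(moderate_roleplay_turn(user_age=25, turn_is_explicit=True))  # continue
print(moderate_roleplay_turn(user_age=15, turn_is_explicit=True))  # refuse_and_disengage
```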
Examining Meta’s policies relating to the protection of children online
Meta uses its Community Standards and technological means to protect children on its platforms from sexual exploitation and endangering content. In all cases, Meta removes images or content involving child nudity (even where such images might be shared with good intentions), given their potential for abuse and misappropriation. Artificial intelligence can now generate photo-realistic imagery, increasing the risk that even images containing no nudity may be maliciously altered using deepfake technology.
In its child safety policy, Meta categorically requests its users not to post content that qualifies as child sexual exploitation, solicitation, inappropriate interactions with children, exploitative intimate imagery and sextortion, sexualisation of children, child nudity and non-sexual child abuse.
On the technology front, Meta has built various initiatives to make its platforms a safer space for children. It has hosted child safety hackathons with NGOs and has implemented open-source photo- and video-matching technologies. Meta also monitors adult accounts for potentially suspicious activity and prevents such users from interacting with children through comments, recommendations or friend/follow requests.
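Photo- and video-matching technologies of this kind (Meta has open-sourced PDQ for photos and TMK+PDQF for videos) generally work by comparing perceptual hashes of uploaded media against hashes of known harmful material. The sketch below illustrates only the general idea; the hash values, the threshold, and the example are placeholders, not Meta’s actual parameters.

```python
# Illustrative sketch of perceptual-hash matching, in the spirit of
# open-source tools such as Meta's PDQ. The hash values and threshold
# below are made-up placeholders; real systems derive hashes from image
# content and tune the match threshold empirically.

# Hypothetical 64-bit perceptual hashes of known harmful images.
KNOWN_HASHES = {0x9F3A5C7E12B4D608, 0x0123456789ABCDEF}
MATCH_THRESHOLD = 10  # max Hamming distance treated as a match (assumed)

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two 64-bit hashes differ."""
    return (a ^ b).bit_count()  # requires Python 3.10+

def is_known_match(image_hash: int) -> bool:
    """True if the hash is within MATCH_THRESHOLD of any known hash."""
    return any(hamming_distance(image_hash, h) <= MATCH_THRESHOLD
               for h in KNOWN_HASHES)

# A hash differing from a known hash in only a few bits still matches,
# which is what makes perceptual hashing robust to re-encoding or resizing.
print(is_known_match(0x9F3A5C7E12B4D609))  # True (1 bit away)
```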
However, apart from its Terms of Service and child safety policy, Meta does not have any specific policy explaining how child safety protocols are implemented in its AI chatbot. The rationale behind, and the kinds of, safeguard mechanisms built into the chatbot remain unclear.
Conclusions
In order to ensure your child’s safety on Meta’s platforms, please keep in mind the following –
- While Meta adopts several technological means and enforces Community Standards on its platforms, the presence of harmful content cannot be entirely eliminated from digital spaces. It is therefore important to become digitally resilient by reporting illegal content.
- Like all platforms, Meta is obligated under Indian law to remove paedophilic content and content harmful to children. Such instances can be reported to the Grievance Officer of Meta here.
- While it remains unclear how Meta ensures child safety for users interacting with its chatbot, the WSJ article as well as SFLC.in’s limited testing reveal that Meta AI will not engage in sexually explicit conversations with its underage users.