Oversight Board to review Meta's handling of explicit AI-generated images of women

The Oversight Board on Tuesday selected two cases, one from India and one from the US, to assess Meta's policies and enforcement practices concerning explicit images of female public figures generated by artificial intelligence (AI).

The Board intends to address Meta's content decisions in the two cases, one involving Instagram and the other Facebook.

In each case, it will decide “whether the content should be allowed”.

“Deepfake pornography is a growing cause of gender-based harassment online and is increasingly used to target, silence, and intimidate women — both on and offline,” Oversight Board Co-Chair Helle Thorning-Schmidt said.

“Multiple studies show that deepfake pornography overwhelmingly targets women. This content can be extremely harmful to victims, and the tools used for creating it are becoming more sophisticated and accessible,” Thorning-Schmidt added.

With the selection of two cases from the US and India, the Board aims to understand “whether Meta is protecting all women globally in a fair way” and whether its “policies and enforcement practices are effective at addressing this problem”.

The first case involves an AI-generated image of a nude woman, resembling a public figure from India, posted on Instagram. The second concerns an image posted to a Facebook group for AI creations, featuring an AI-generated nude woman with a man groping her breast.

“The Board is seeking public comments on the two new cases for consideration. As part of this, we are inviting people and organisations to submit public comments,” it said.
