Meta said the post did not violate its rules, which apply only to deepfakes (photos, videos or audio created with artificial intelligence to impersonate a person) that alter someone's words.
Meta's Oversight Board, an independent group of academics, experts and lawyers who review the platform's toughest content decisions, supported the social media giant's decision to leave the video up. But amid widespread concerns about the risks of artificial intelligence, the board called on the company to clarify its policies.
Decisions by the Meta-funded Oversight Board on specific cases are considered binding, but its recommendations for policy changes are not.
“The volume of misleading content is increasing, and the quality of the tools to create it is rapidly improving,” Oversight Board co-chair Michael McConnell said in a statement. “Platforms need to keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”
Meta spokesman Corey Chambliss said the company was reviewing the board's recommendations.
The rebuke comes as experts warn that AI-generated misinformation is already spreading online and has the potential to confuse voters in a critical election year.
According to the oversight board, the video posted to Facebook in May 2023 used actual footage of Biden voting in the 2022 midterm elections with his granddaughter, who was a first-time voter at the time.
The video loops the moment Biden placed an “I Voted” sticker on his adult granddaughter's chest, and the post's caption suggests the touch was inappropriate.
Because the video does not alter Biden's speech, the Oversight Board agreed that it did not violate Meta's rules. The board also said it was obvious the video had been edited.
But the video raises questions about Meta's existing policies, which the Oversight Board said focus on how content is created rather than on the harm it can cause, including voter suppression. The board asked Meta to expand its manipulated-media policy to cover altered audio as well as videos that show people doing things they did not do.
The Oversight Board also recommended that, rather than removing manipulated media that does not violate other rules, the company attach a label notifying users that the content has been altered.