“Help us,” began Twitter’s VP of trust and safety Del Harvey, as she announced that the social media company has drawn up a new draft policy to tackle the spread of deepfakes on its platform.
The company is urging Twitter users to provide some feedback on the new strategy, which will then be reviewed and adjusted, before being added to the Twitter Rules.
Created to address what it calls “synthetic and manipulated media”, or deepfakes, the draft outlines the actions that Twitter will take when it sees such content being posted with the purpose of misleading people.
The new policy would allow Twitter to place a label next to tweets sharing what it has identified as manipulated content. The platform would also warn users before they share or like tweets containing such labelled media, and provide links to sources explaining why it believes the media is fabricated.
Harvey added that tweets may be removed if the fake content they contain could threaten someone’s physical safety “or lead to other serious harm.”
“We want to hear from you,” Harvey continued, calling for Twitter users to complete a survey about the new policy.
Among other things, the survey asks whether it is Twitter’s responsibility to remove misleading media, whether it has the right to remove tweets that share deepfakes, and whether the social media giant should do something about the accounts that share fake content.
Alternatively, users can tweet about the draft using the hashtag #TwitterPolicyFeedback.
This move follows another recent update to Twitter’s rules, which likewise came after the company asked its users to contribute their thoughts and ideas, that time about its policy on hateful conduct.
Last July, after reviewing 8,000 responses from more than 30 countries, the platform expanded its rules against hateful conduct to include language that dehumanized others on the basis of religion.
Traditionally, changes to Twitter’s rules have been made exclusively through consultations with internal teams and with the company’s own Trust and Safety Council.
Twitter’s attempt to gather user feedback on its deepfake policy comes as social media platforms ramp up their efforts to tackle the spread of fabricated content online.
Earlier this year, a manipulated video of the US House Speaker Nancy Pelosi was viewed over 1.4 million times on Facebook. It was later found that the video had been slowed to about 75% of its original speed, and that her voice had also been altered.
In the run-up to the presidential election in November 2020, platforms like Twitter have come under intense pressure to tackle the spread of such content.
In September, Facebook, Amazon and Microsoft teamed up for the joint Deepfake Detection Challenge (DFDC), with a budget of $10 million to develop technology that can detect videos manipulated by AI.
The main issue identified by the DFDC is that there is no large database of deepfakes, which means there is no industry benchmark for detecting manipulated content.
Just a few weeks later, Google released a dataset of 3,000 fabricated videos featuring 28 actors, in an attempt to bolster systems designed to detect AI-generated deepfakes.
Although these initiatives suggest that social media platforms are ramping up their efforts to address the issue of fake content, they also show that there is still no advanced method to detect deepfakes efficiently.
Twitter’s recent announcement doesn’t offer a new technical solution. In fact, the post includes a link to a form to complete “if you’d like to partner with us to develop solutions to detect synthetic and manipulated media.”
It is unclear, therefore, how the company will be able to efficiently detect fake content in the first place.
The feedback period for Twitter’s new policy on deepfakes will close on 27 November at 11:59 p.m. GMT. The company has said that it will make another announcement at least 30 days before the policy goes into effect.