Facebook’s fight against misleading media
Manipulated media and ‘deepfake’ videos have become a growing issue over the past few years, especially around elections in countries like the UK and US. Back in January, Facebook took its first step toward cracking down on the problem.
Now, Facebook promises to flag misleading and manipulated media that meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Simply put, if media is reported and confirmed to be false, users will be warned before watching it – though the policy doesn’t extend to parody or satire (including meme culture).
Though it might seem simpler to remove such content outright, Facebook says this could be counterproductive, because the same content can be found elsewhere online without any warning attached.
We’ve seen the policy in action when Facebook flagged a video shared by Trump’s social media director as ‘partly false’. The clip originally showed presidential candidate Joe Biden warning against re-electing Trump, but had been edited to misleadingly make it sound as if he endorsed the President.
The video is now overlaid with a “Partly False Information” banner, and users must click through the warning before choosing to watch it.
It might only be a small step – but social media companies are beginning to take more responsibility for the content on their platforms. Lessons had to be learned, given the detrimental impact fake content has already had.