Facebook's artificial intelligence to monitor live videos for illegal and offensive content
- Facebook working to automatically flag offensive material in live videos
- The firm has historically relied on users to report offensive content
- But the firm has revealed that it is increasingly turning to artificial intelligence
Facebook is working on automatically flagging offensive material in live video streams, building on a growing effort to use artificial intelligence to monitor content, said Joaquin Candela, the company's director of applied machine learning.
The social media company has been embroiled in a number of content moderation controversies this year, from facing international outcry after removing an iconic Vietnam War photo due to nudity, to allowing the spread of fake news on its site.
Facebook has historically relied mostly on users to report offensive posts, which are then checked by Facebook employees against company 'community standards.'
FAKE NEWS ON FACEBOOK
In a post this month, Mark Zuckerberg said Facebook has been working on the issue of misinformation for a long time, calling the problem complex both technically and philosophically.
'While the percentage of misinformation is relatively small, we have much more work ahead on our roadmap,' Zuckerberg said.
He outlined a series of steps that were already underway, including greater use of automation to 'detect what people will flag as false before they do it themselves.'
The measures include developing new tools to detect and classify 'misinformation' and to make it easier for users to report the material.
He said the company is also looking into the possibility of working with established fact-checking organizations to evaluate content, and into the feasibility of warning labels for stories flagged as false.
Decisions on especially thorny content issues that might require policy changes are made by top executives at the company.
Candela told reporters that Facebook increasingly was using artificial intelligence to find offensive material.
It is 'an algorithm that detects nudity, violence, or any of the things that are not according to our policies,' he said.
The company already had been working on using automation to flag extremist video content, as Reuters reported in June.
Now the automated system also is being tested on Facebook Live, the streaming video service for users to broadcast live video.
Using artificial intelligence to flag live video is still at the research stage, and has two challenges, Candela said.
'One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.'
Facebook said it also uses automation to process the tens of millions of reports it gets each week, to recognize duplicate reports and route the flagged content to reviewers with the appropriate subject matter expertise.
Chief Executive Officer Mark Zuckerberg in November said Facebook would turn to automation as part of a plan to identify fake news.
Ahead of the Nov. 8 U.S. election, Facebook users saw fake news reports erroneously alleging that Pope Francis endorsed Donald Trump and that a federal agent who had been investigating Democratic candidate Hillary Clinton was found dead.
However, determining whether a particular comment is hateful or bullying, for example, requires context, the company said.
Yann LeCun, Facebook's director of AI research, declined to comment on using AI to detect fake news, but said that, in general, news feed improvements raise questions about the tradeoffs between filtering and censorship, and between freedom of expression, decency and truthfulness.
'These are questions that go way beyond whether we can develop AI,' said LeCun. 'Tradeoffs that I'm not well placed to determine.'