Some of the web’s biggest destinations for watching videos have quietly started using automation to remove extremist content from their sites, Reuters reports.
Internet companies are under pressure from governments to eradicate violent propaganda from their sites and are eager to do so, and the move is a major step in that direction. Governments are making these demands as attacks by extremists proliferate, from Syria to Belgium and the United States.
YouTube and Facebook are among the sites deploying such systems, which aim to block or rapidly take down Islamic State videos and other similar material, Reuters reports.
The technology was originally used to identify and remove copyright-protected content on video sites. It relies on “hashes”, a type of unique digital fingerprint that internet companies automatically assign to specific videos, and rapidly removes any content whose fingerprint matches a hash of banned material. While such a system would not automatically block videos that have not been seen before, it would catch attempts to repost content already identified as unacceptable.
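The matching described above can be sketched in a few lines. This is a simplified illustration, not any company’s actual implementation: it uses a plain cryptographic hash (SHA-256) over the raw bytes, so only exact reposts match, whereas production systems typically use perceptual fingerprints that survive re-encoding. All names below are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints ("hashes") of previously banned videos.
banned_hashes: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Compute a unique digital fingerprint for a video's raw bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

def ban(video_bytes: bytes) -> None:
    """Record a video's fingerprint once it has been identified as unacceptable."""
    banned_hashes.add(fingerprint(video_bytes))

def is_repost(video_bytes: bytes) -> bool:
    """True if an upload exactly matches content already identified as banned."""
    return fingerprint(video_bytes) in banned_hashes

ban(b"previously removed extremist clip")
assert is_repost(b"previously removed extremist clip")      # exact repost is caught
assert not is_repost(b"a video the system has never seen")  # unseen content passes
```

As the article notes, this design catches reposts of known content but cannot judge material the system has never seen, which is why flagged uploads still go to human reviewers.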
Numerous people familiar with the technology told Reuters that posted videos could be checked against a database of banned content to identify new postings of, say, a beheading or a lecture inciting violence. The companies, however, would not confirm that they are using the method or discuss how it might be employed.
As internet companies continue to discuss the issue internally and with competitors and other interested parties, the use of the new technology is likely to be refined over time.
Internet companies including Alphabet Inc's YouTube, Twitter Inc, Facebook Inc and CloudFlare held a call in late April to discuss options, including a content-blocking system put forward by the private Counter Extremism Project, amid pressure from U.S. President Barack Obama and other U.S. and European leaders concerned about online radicalization.
The discussions underscored issues that are central but difficult to address: terrorism, free speech and the lines between government and corporate authority. The companies have typically been wary of outside intervention in how their sites should be policed, and none has at this point embraced the anti-extremist group's system.
“It’s a little bit different than copyright or child pornography, where things are very clearly illegal,” said Seamus Hughes, deputy director of George Washington University’s Program on Extremism.
Hughes said that extremist content exists on a spectrum, and different web companies draw the line in different places.
Internet companies rely primarily on users flagging content that violates their terms of service to identify inappropriate content on the web. Human editors then individually review the flagged material and delete postings found to be in violation.
Reuters quoted sources as saying that the companies now using automation are not discussing it publicly, out of concern that terrorists might learn how to manipulate their systems or that repressive regimes might insist the technology be used to censor opponents.
“There's no upside in these companies talking about it. Why would they brag about censorship?” said Matthew Prince, chief executive of content distribution company CloudFlare.
(Source: www.reuters.com)