
US Tech Firms Promise Terror Content Crackdown

YouTube, Facebook, Twitter and Microsoft Will Target Images and Videos
An image from a 2014 recruitment video released by the Islamist extremist group ISIS.

Facebook, Google, Microsoft and Twitter have promised to share information in order to better identify and remove terror-related videos and imagery posted to their online properties.


The move will involve the firms contributing to a shared database of digital fingerprints for images and videos that have been removed from Facebook, Twitter, Microsoft's services and Google's YouTube.

"Starting today, we commit to the creation of a shared industry database of 'hashes' - unique digital 'fingerprints' - for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services," the companies say in a shared statement issued Dec. 5. "By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online."

Each participating company will apply its own rules for what qualifies as "terrorist content." The companies also pledge that no personally identifiable information will be shared and say that the information will never be used to automatically remove any content.
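
The companies have not published the technical details of how the matching will work, but the basic flow the statement describes - hash removed content, share the hashes, check new uploads against them - can be sketched roughly as follows. This Python illustration is an assumption rather than the consortium's design: the plain SHA-256 fingerprint, the in-memory set standing in for the shared database and the function names are all hypothetical.

import hashlib

# Hypothetical stand-in for the shared industry database of hashes.
SHARED_HASHES = set()

def fingerprint(data: bytes) -> str:
    """Compute a simple 'digital fingerprint' for a file.

    A plain SHA-256 only matches byte-identical files; a production
    system would more likely use a perceptual hash that survives
    re-encoding or editing.
    """
    return hashlib.sha256(data).hexdigest()

def contribute(removed_file: bytes) -> None:
    """A participating company adds the hash of content it has removed."""
    SHARED_HASHES.add(fingerprint(removed_file))

def flag_for_review(uploaded_file: bytes) -> bool:
    """Check an upload against the shared hashes.

    Per the companies' statement, a match only flags the content for
    human review under each firm's own policies; nothing is removed
    automatically.
    """
    return fingerprint(uploaded_file) in SHARED_HASHES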

The four U.S. technology giants say they're looking to involve more firms in the effort.

While the companies say that the move is an attempt to balance users' privacy with eliminating "terrorist images or videos" from their services, they note that they remain subject to government requests, meaning the identities of users who post or disseminate such content could be shared with authorities. "Each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances," the statement notes.

Follows Efforts to Curtail Child Porn

Facebook tells the Guardian that the precise technological details of how the database will work have yet to be established.

But a similar project to battle child pornography is already in use. The Microsoft-backed service, called PhotoDNA, was developed by Hany Farid, chair of the computer science department at Dartmouth College. It's based on a stock library of millions of pornographic images of children maintained by the National Center for Missing and Exploited Children.

Numerous technology firms, including social networks and cloud providers, as well as governments and law enforcement agencies use the free service to help automatically track and remove such content, wherever it gets posted. The service is reportedly also effective at matching images even when they have been manipulated or cropped.
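
PhotoDNA's algorithm itself is proprietary and not described here, but the property mentioned above - matching images even after manipulation or cropping - is what distinguishes a perceptual hash from a cryptographic one. The sketch below uses a simple difference hash (dHash) as a generic stand-in rather than PhotoDNA, and assumes the Pillow imaging library is available; visually similar images produce hashes that differ in only a few bits, so near-duplicates can be found by comparing bit distances.

from PIL import Image  # Pillow, assumed available for this illustration

def dhash(image_path: str, size: int = 8) -> int:
    """Compute a simple difference hash (dHash) for an image."""
    # Shrink to a tiny grayscale grid so compression noise, resizing
    # and small edits largely disappear before hashing.
    img = Image.open(image_path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())

    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate image."""
    return bin(a ^ b).count("1")

# Two files might be treated as a likely match if, for example,
# hamming_distance(dhash("a.jpg"), dhash("b.jpg")) <= 5.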

Responding to the new announcement from Facebook, Google, Microsoft and Twitter, Farid tells the Guardian that he and the Counter Extremism Project, a not-for-profit organization, have been in discussions with Facebook and Microsoft since January to adapt PhotoDNA to battle extremist content.

"We are happy to see this development. It's long overdue," he tells the newspaper. But he questioned the apparent lack of third-party oversight over the program, how frequently and thoroughly the database of hashes would be updated and the effectiveness of not automatically removing flagged content from every service that signs up to the program, as PhotoDNA does.

"If it's removed from one site, it's removed everywhere," he tells the Guardian. "That's incredibly powerful. It's less powerful if it gets removed from Facebook and not from Twitter and YouTube."

Targeting Illegal Online Hate Speech

The four firms say the latest effort to battle extremist imagery and videos has come about via regular meetings with EU officials as part of the EU Internet Forum, which was launched 12 months ago to battle terrorist content and hate speech online. The next meeting for the forum is due to take place later this week.

The move also follows Facebook, Google, Microsoft and Twitter in March signing up to abide by an EU code of conduct on "illegal online hate speech" that they helped create. While the code of conduct isn't legally binding, the firms committed to reviewing the majority of valid takedown requests - relating to hatred or the promotion of violence - and removing the offending content from European view within 24 hours.

That effort was led by Czech politician Vera Jourova, the EU commissioner for justice, consumers and gender equality, who pointed to the terror attacks in Brussels in March 2016 and Paris in November 2015, saying they "have reminded us of the urgent need to address illegal online hate speech."

"Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racist use to spread violence and hatred," she said. "This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected."

White House Efforts

The move to remove terror-related imagery and videos from social networks also follows President Obama calling on Silicon Valley last year to help law enforcement agencies better monitor "the flow of extremist ideology" on their networks. Top White House officials met with Apple, Facebook, Microsoft and Twitter in January to explore better ways for combatting the online dissemination of terrorism-related content.

Despite such efforts, some politicians and legislators continue to publicly blame social networks for serving as virtual safe havens for terrorists and related ideologies (see UK Labels Facebook A Terrorist 'Haven'). Political critics, however, contend that turning technology giants into scapegoats is easier than admitting that domestic legislative efforts or a lack of funding for police or intelligence services might be contributing factors.


About the Author

Mathew J. Schwartz


Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014 - where he now serves as executive editor of DataBreachToday and oversees European news coverage - Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



