In what was bound to be only the second-biggest news story to come out of the U.K. on Monday, Prime Minister David Cameron announced a new program to filter pornographic content from Britons’ online searches. By the end of next year, Internet users will have to actively choose whether to allow porn on any device connected through their ISP. (Or, as one British site declares in a headline, “UK flicks switch on ‘I am a pervert’ web filters.”)
Cameron’s broad proposal applies to all pornography, and so encompasses material that falls in both the adult-consensual and underage-exploitative categories. These “family-friendly filters” are meant to protect children from seeing images that their parents don’t think they’re ready for—in his words, to crack down on porn’s “corroding influence” on kids. But in addition, “extreme pornography,” including scenes of simulated rape, will be outlawed altogether.
At the same time, many U.S.-based tech companies are specifically focusing their efforts on the other side of the child-porn equation: the exploitation of children in pornographic material.
According to the U.K. newspaper The Times earlier this month, Facebook, Microsoft, Google, Twitter, and several other companies are discussing how to standardize the way they deal with pornographic images featuring children. Previously, each company had its own set of methods and policies for combating child porn. But executives are considering creating a common database of “the worst of the worst” images to be blocked by all of their servers. The database would be maintained by Thorn: Digital Defenders of Children, a non-profit that grew out of a foundation launched in 2009 by Ashton Kutcher and Demi Moore.
Under this proposed system, the Thorn database wouldn’t include the actual offending images; it would just keep track of the images’ unique digital signatures. Then, when users uploaded new images to the servers, the companies could check to see whether the digital signatures of the new images matched those in the database. If they did, those images would be blocked.
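In rough outline, that check might work something like the sketch below. This is a simplified illustration, not any company’s actual code: the signature entries are invented, and an ordinary cryptographic hash stands in for the perceptual signatures a real system would use.

```python
import hashlib

# Hypothetical shared database of signatures of known illegal images.
# In the proposed system this would be maintained by Thorn and
# distributed to participating companies; these entries are invented.
KNOWN_SIGNATURES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def signature(image_bytes: bytes) -> str:
    # A cryptographic hash is used here for simplicity; a production
    # system would use a perceptual hash that survives resizing
    # and re-encoding (see the PhotoDNA discussion below).
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    # Block the upload if its signature matches a known image.
    return signature(image_bytes) in KNOWN_SIGNATURES
```

The crucial point is that only the fingerprints change hands; the offending images themselves are never stored or shared.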
A shared database might sound far-fetched, but according to the Guardian, Facebook has already been using a broadly similar method, a program called PhotoDNA, for about two years now. Just as Facebook can identify your unique mug in your friends’ photos with face-recognition technology, making the tagging process both faster and creepier, it can also identify images that are illegal. Twitter is set to adopt the program soon, too.
Though it has lately started to catch on among social media companies, PhotoDNA was originally developed for law enforcement agencies to help find images of missing or exploited children online. The New York Times’ Gadgetwise blog explained the technology in 2011, when Facebook announced the program:
PhotoDNA can currently search for about 10,000 images collected by the National Center for Missing & Exploited Children, which has amassed 48 million images and videos depicting child exploitation since 2002, including 13 million in 2010 alone.
…
PhotoDNA works by creating a ‘hash,’ or digital code, to represent a given image and find instances of it within large data sets, much as antivirus software does for malicious programs. However, PhotoDNA’s ‘robust hashes’ are able to find images even if they have been altered significantly. Tests on Microsoft properties showed it accurately identifies images 99.7 percent of the time and sets off a false alarm only once in every 2 billion images….
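PhotoDNA’s actual algorithm is proprietary, but the general idea behind a “robust hash” can be illustrated with a toy perceptual hash. The sketch below assumes the open-source Pillow imaging library; real systems are far more sophisticated.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink the image to an 8x8 grayscale grid, then record whether
    # each pixel is brighter than the average. Minor edits (resizing,
    # recompression, small crops) leave most of these bits unchanged.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def distance(h1: int, h2: int) -> int:
    # Hamming distance: how many bits differ between two hashes.
    # A small distance suggests the same underlying image.
    return bin(h1 ^ h2).count("1")
```

Because the hash encodes an image’s overall structure rather than its exact bytes, re-saving a picture at a different quality or resolution flips only a few bits, and a small Hamming distance still flags a likely match, where a cryptographic hash would miss it entirely.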
Meanwhile, Tumblr is exploring different ways to make porn a bit harder to (accidentally) stumble upon there, too, though it has already hit some snags along the way.
So will this latest rash of technological filtering make a real difference—either in the number of children who are exposed to images they shouldn’t see, or in the number of children who are themselves exploited?
It’s hard to say. If past examples in other categories of crime are any guide—say, identity theft, credit card fraud, or malware—technological advances in enforcing the rules will inevitably be met with further technological advances in breaking them.
Yes, curious kids can probably figure out how to change their parents’ ISP settings easily enough. (Stop the presses.) Far more worrisome is the fact that child-exploiting criminals will surely continue to find ways around digital blocks on their reprehensible trade. They can use proxy networks and anonymous browsers to slip past ISP-level filters, for instance. They will probably find a way around the digital-signature database, too, if they haven’t already, at which point Facebook, Twitter, and the other companies will have to develop yet another method to block them. And the technological arms race will continue.