Some technology firms are refusing to take online child sexual abuse seriously enough, the home secretary has said while announcing an extra £21.5m to help investigators who say they are facing a “constant uphill struggle” to track down offenders. Sajid Javid said he did not want to “name and shame” the companies because he wanted to give them a chance to respond, but if they did not, he would “not be afraid to take action” against them. Facebook was specifically named, however, by staff at the children’s charity NSPCC as one company that could do more, such as by detecting and flagging up accounts that send out multiple friend requests in suspicious circumstances.
Speaking at the headquarters of the NSPCC in London in what was trailed by the Home Office as a landmark speech, Javid said he had been impressed by the progress made by Facebook and other tech giants such as Google, Twitter and Microsoft in removing terrorist content, but now wanted them to make the same level of commitment to battling child sexual exploitation. “In recent years, there has been some good work in the area. But the reality is that the threat has evolved quicker than industry’s response, and industry has not kept up,” said Javid, who was speaking after the National Crime Agency (NCA) estimated that there are up to 80,000 people in the UK assessed as posing some kind of sexual threat to children online. “I am not just asking for change. I am demanding it. If technology companies do not take more measures to remove this type of content, then I won’t be afraid to take action. How far we legislate will be informed by the action and attitude that industry takes.” Along with the £21.5m – which will be delivered over the next 18 months and used to pursue the most hardened abusers – he announced the allocation of £2.6m for prevention work, which will be carried out by bodies including the Lucy Faithfull Foundation.
Javid, who will be convening a meeting of industry experts in the US in partnership with Microsoft to challenge companies to work together to create tools to detect online child grooming, said he wanted the industry to block child abuse material as soon as it was detected and to work with authorities to shut down live-streamed offending. Rob Jones, the NCA lead for tackling child sex abuse, told the Guardian that steps that could be taken included using new technology to prevent offenders from ever accessing images whose “hash values”, or photo DNA, had already marked them out as abusive. He added, however, that the highest priority for the NCA lay in investigating new images. Technological advances meant it should be possible to detect such images at the stage when they were being uploaded, he said, adding that Google had alluded to this on Monday. Shortly before the home secretary gave his speech, the search engine announced new artificial intelligence (AI) technology designed to help identify online child sexual abuse material and reduce human reviewers’ exposure to the content. The technology will be made available for free to NGOs and others.
On a third front, the use of anonymisation techniques by potential offenders, Jones said: “Those behaviours in any platform send up a red flag. We do get good reports from some sections of the industry, but the pace and scale need to be picked up.” Javid’s challenge to industry was welcomed by the NSPCC, which is calling for the creation of an independent regulator with the power to investigate and fine platforms that do not do enough to catch groomers. Andy Burrows, the charity’s associate head of child safety online, said it was clear that the largest sites with the most resources had the greatest problems. “We know in the first year after it became illegal to send a sexual message to a child that there were more than 3,000 offences and, where the site in question was named, that two-thirds of those offences took place on either Facebook, Snapchat or Instagram. I wouldn’t question the work they have done regarding taking down abusive imagery, but what the social networks, in particular, have singularly failed to do is tackle abuse at its source. We know groomers are using social networks to contact significant numbers of children at scale. But we also know that the technology is in place to proactively spot that behaviour. There are tell-tale signs – is someone making a disproportionate number of friend requests to children and young people? Are they being rejected in large numbers? Most importantly, is there no geographical or family connection?”
A Facebook spokesperson said: “The exploitation of children on the internet is a real challenge and one we take very seriously. It’s why Facebook works closely with child protection experts, the police and other technology companies to block and remove exploitative photos and videos, as well as to prevent grooming online. We agree with the home secretary that by continuing to work together in this way, we can make more progress, faster.” A Microsoft spokeswoman said: “Child sexual exploitation is a horrific crime and Microsoft works closely with others in the industry, government and civil society to help combat its spread online. Predators are constantly evolving their tactics, and that is why we work collaboratively with other companies … to create tools that protect children online and help bring perpetrators to justice.”
Credit: Ben Quinn for The Guardian, 3 September 2018.