Google’s revealed its plans to remove terror-related content from YouTube and decided the investment community should hear about it before the rest of us.
Details emerged in a post first published in the Financial Times and later popped up online, in which the company reveals a four-point plan.
As with Facebook’s anti-terror plan, AI isn’t front and centre, because Google says “a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.” The company says AI helped identify 50 per cent of the terror-related content it has removed in the past six months, but concedes it needs to “… devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content.”
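Google doesn’t say how those classifiers work, but the general technique is well understood: a supervised model, trained on human-labelled examples, that scores new uploads for likely policy violations. Below is a minimal sketch in Python with entirely invented training data and threshold; a production system would operate on video, audio and behavioural signals rather than a handful of text snippets, and, as Google’s own BBC example makes clear, would only triage content for human review rather than remove it outright.

```python
# Minimal sketch of a "content classifier" of the kind Google describes:
# a supervised model that scores uploads for policy-violating content.
# Training examples, labels and the threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = flagged by human reviewers, 0 = benign.
train_texts = [
    "graphic footage glorifying the attack",
    "join the fight, instructions inside",
    "BBC news report on yesterday's attack",
    "documentary on the history of extremism",
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(train_texts, train_labels)

# Score a new upload; anything above the threshold goes to human review.
# Context matters, as Google notes, so the model triages rather than removes.
score = classifier.predict_proba(["footage of the attack"])[0][1]
REVIEW_THRESHOLD = 0.5  # invented value
if score > REVIEW_THRESHOLD:
    print(f"queue for human review (score={score:.2f})")
```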
YouTube’s Trusted Flagger program, a community of volunteers who rate videos, will be expanded by recruiting and funding 50 more non-governmental organisations with expertise in matters like hate speech, self-harm, and terrorism, so that YouTube can benefit from more people capable of making “nuanced decisions about the line between violent propaganda and religious or newsworthy speech.”
Content that doesn’t breach Google’s guidelines, like “videos that contain inflammatory religious or supremacist content”, will be preceded by warnings, won’t allow comments, and won’t be eligible for monetisation. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” says Google’s general counsel Kent Walker.
Google’s also going to throw its advertising expertise at users by redirecting those deemed “potential Isis recruits”, based on content they seek online, “towards anti-terrorist videos that can change their minds about joining.” Walker says: “In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.”
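Walker doesn’t detail the mechanics, but the scheme matches what Google’s Jigsaw unit has described as the “Redirect Method”: curated search keywords trigger adverts pointing would-be recruits at counter-narrative playlists. The matching logic is conceptually simple; the sketch below uses invented keywords and a placeholder playlist URL, since the real targeting lists are curated by experts and not public.

```python
# Conceptual sketch of keyword-triggered redirection of the kind Walker
# describes. Keywords and the playlist URL are invented placeholders.
from typing import Optional

REDIRECT_KEYWORDS = {"join isis", "isis recruitment video"}  # invented examples
COUNTER_NARRATIVE_PLAYLIST = "https://youtube.com/playlist?list=EXAMPLE"  # placeholder

def ad_target_for_query(query: str) -> Optional[str]:
    """Return a counter-narrative ad destination if the query matches the campaign."""
    normalised = query.lower().strip()
    if any(keyword in normalised for keyword in REDIRECT_KEYWORDS):
        return COUNTER_NARRATIVE_PLAYLIST
    return None

print(ad_target_for_query("isis recruitment video with english subtitles"))
```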
Recent terrorist attacks in London, and UK Prime Minister Theresa May’s subsequent comment that the internet offers terrorists a “safe place to breed”, mean that all large internet companies need at least to be seen to be doing more. Governments worldwide are already confronting such companies over encryption, and the risk of further regulation is real if internet companies are seen to be abusing their social licence. Google may also be worried that investors see such regulation as a threat to revenue, hence the release to the FT.
Interestingly, all of the initiatives mentioned above concern YouTube alone: Google+ is apparently so unloved that web scum don’t bother posting their filth there.
Source: theregister.co.uk