Crisp CEO believes we can beat online hate speech, terrorist content, and fake news
Above: Employees of Crisp Thinking help social platforms monitor everything from hate speech to terrorist content.
Social media has become darker, uglier, and meaner in recent years. It may be wrecking our mental health, destroying our politics, and tearing at the social fabric.
Adam Hildreth, founder and CEO of social media risk management firm Crisp, believes it doesn’t have to be that way. Hildreth argues that the tools exist to save social media, or at the very least to beat down the worst elements that have made platforms like Facebook, Twitter, and YouTube so poisonous.
But the first step, he says, is recognizing the urgency and scope of the problem.
“We’ve seen a massive change over the past few years,” Hildreth said. “Social media was the Wild West. But because it’s now part of everyone’s everyday life, there has been a massive change. That said, I think this can be managed. And it’s so mainstream, we need to deal with these issues.”
Crisp was founded over a decade ago to help companies manage risk as they began to embrace user-generated content. More recently, the business expanded to begin working directly with social media platforms to help them deal with hate speech, racism, and terrorist-related content. Hildreth said he can’t disclose the names of his partners.
“What we saw in the last year was a surge in hate speech and terrorist content,” Hildreth said. “And the tools they were using weren’t going to work. It doesn’t matter how good you get at it, you still need teams of people involved.”
So what happened that drove this surge? Hildreth points to a couple of things. First, as noted above, social media has become so pervasive, so ingrained in our lives, that its potential impact has made it a richer target for groups looking to spread unsavory or misleading messages. In other words, the return on investment for hate groups and terrorists is strong.
The second is the rise of video.
“What video does is transform the message they want to get out there,” Hildreth said. “They’ve started really using the mechanics and tools of video in a sophisticated way.”
This move to images and video makes detection even more complicated for the platforms. It's one thing to monitor text for keywords, but quite another to analyze images and video.
“This makes a big difference when someone uploads user-generated content and they want to avoid detection,” he said. “When I upload video, I want to help Google find it in a positive way. So I add text and tags that highlight the content. But if they’re trying to avoid detection, they can overlap some positive imagery, positive soundtracks over things like beheading videos, that makes detection difficult.”
So Crisp introduced a new service called Capture.
Given that it’s impossible to have someone review every piece of content on a platform, Capture starts by looking externally at where conversations about bad content are happening. That includes scanning the dark web and chat rooms, and even creating accounts to be able to scan some messaging platforms. The system uses a combination of artificial intelligence and people to refine its scanning and profile-building.
Capture is looking for the places where people are talking about creating this kind of content, sharing tips, making it, and strategizing about how to spread it. By finding these people and this content externally, Crisp can narrow its search within the various platforms. Capture then looks for content that is seeing a swell of sharing, using profiles of people sharing and creating it, and takes that information back to the social platforms to identify bad actors and content.
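Hildreth doesn't detail how Capture measures a "swell of sharing," but the general idea — flagging content whose recent share volume spikes well above its earlier baseline — can be sketched roughly like this (the function name, data shape, and threshold are all illustrative assumptions, not Crisp's actual method):

```python
from collections import defaultdict

def flag_surging_content(share_events, threshold=3.0):
    """Flag content IDs whose share count in the most recent time window
    is at least `threshold` times their average over earlier windows.

    share_events: list of (content_id, window_index) tuples, where
    window_index is an integer time bucket (0 = oldest).
    This is a hypothetical sketch of surge detection, not Crisp's algorithm.
    """
    counts = defaultdict(lambda: defaultdict(int))
    max_win = 0
    for cid, win in share_events:
        counts[cid][win] += 1
        max_win = max(max_win, win)

    flagged = []
    for cid, per_win in counts.items():
        recent = per_win.get(max_win, 0)
        earlier = [per_win.get(w, 0) for w in range(max_win)]
        baseline = sum(earlier) / max(len(earlier), 1)
        # A nonzero baseline that recent sharing dwarfs suggests a surge.
        if baseline > 0 and recent / baseline >= threshold:
            flagged.append(cid)
    return flagged
```

In practice a system like the one Hildreth describes would combine a signal like this with the externally built profiles of known creators and spreaders, rather than relying on volume alone.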
“We know where this content is shared,” he said. “Our view is that if it’s not being shared, you don’t have to worry about it because it’s not being seen.”
The biggest area of concern thus far is terrorism-related content. The company says it finds about 200 pieces of new terrorism-related content every day. Often, that content is reported and taken down within minutes of it appearing on social platforms, Hildreth said.
But, much like cybersecurity battles, the fight against undesirable content is fast-moving and always evolving. As the good guys take a step forward, the bad guys counterpunch.
Video, again, is a good example of this. Images and sound alone are not enough to classify something as terrorist-related or fake news. One person can take a video clip and use it to explain an event, while another can use the same clip out of context, putting text and audio over it to distort its meaning.
“I’d say it’s a constant battle,” Hildreth said. “We’re slightly ahead of the curve at the moment. The faster we remove stuff, the more they’re trying to figure out how to get around it.”
Fake news also presents a profound challenge. Governments are applying growing pressure on social networking and media companies to address this issue. But Hildreth says change needs to be led by the platforms and their customers if it’s going to have a serious impact.
He believes it’s imperative for the platforms to sharpen and refine their terms of service regarding fake news, including better defining what that term means and what constitutes a violation. Crisp builds its service around those rules, and the clearer the lines are, the more effective it can be.
“We’ve started with the very blatant breaking terms of service,” Hildreth said. “That’s black and white. But we are very much led by our customers. There are some lines we draw. But we’re working with our customers to improve on this.”
The real motivation, Hildreth believes, is likely to come from the advertisers that are the lifeblood of these services. The growing backlash against social media and networking services is damaging their reputations. Advertisers obviously don’t want their messages appearing next to hate speech or terrorist videos. But if the social platforms are tagged more generally as cesspools of harassment and racism, advertisers could be forced to walk away to protect their brands.
“I think this will change, and it will be driven by some of the big brands,” he said. “If you look at the amount of money they spend, the brand experience is absolutely essential. They don’t want to be associated with terrorism and hate speech. They want a strong brand experience.”
Hildreth adds that the role of those advertisers, and how they choose to wield or not wield their influence, will be crucial in the coming months and years: “They’re the ones that fund the internet in one way or another. And they fund these social platforms.”