Can an Algorithm Solve Twitter’s Credibility Problem?

Published May 6, 2014   |   Adrian Chen

On October 29, 2012, when Hurricane Sandy made landfall, I was in my Brooklyn apartment, refreshing Twitter. The news on my timeline consisted mostly of grim dispatches from amateur storm spotters tracking Sandy’s march up the coast. By the time the storm reached New Jersey, sober reports of rising water levels and wind speeds, and pictures of flooding on the east side of Manhattan, gave way to apocalyptic photos that suggested the entire Eastern Seaboard had become a modern-day Atlantis. A shark swam in the streets of New Jersey. An enormous tidal wave crashed over the Statue of Liberty. A scuba diver navigated a flooded Brooklyn subway station less than a mile from my apartment. Of course, these photos were all fake. (The tidal wave was from the disaster flick “The Day After Tomorrow,” and only Jake Gyllenhaal on a boogie board would have made it less believable.)

The Twitter commons has a credibility problem, and, in the age of “big data,” all problems require an elegant, algorithmic solution. Last week, a group of researchers at the Qatar Computing Research Institute (Q.C.R.I.) and the Indraprastha Institute of Information Technology (I.I.I.T.), in Delhi, India, released what could be a partial fix. Tweetcred, a new extension for the Chrome browser, bills itself as a “real-time, web-based system to assess credibility of content on Twitter.” Once installed, Tweetcred appends a “credibility ranking” to every tweet in your feed when you view it on twitter.com. Each tweet’s rating, from one to seven, is represented by little blue starbursts next to the user’s name, almost like a Yelp rating. The program learns over time, and users can give tweets their own ratings to help it become more accurate.
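In rough outline, a system like this combines a handful of simple signals (whether the account is verified, how many followers it has, whether the tweet links anywhere, how breathless the punctuation is) into a one-to-seven score, then lets user ratings nudge the scoring over time. The toy Python sketch below illustrates that idea only; the features, weights, and update rule are invented for the example and are not Tweetcred’s actual model.

```python
# A toy sketch of a feedback-driven credibility scorer, in the spirit of what
# the article describes. Everything here (features, weights, update rule) is
# hypothetical, not Tweetcred's implementation.

from dataclasses import dataclass, field


@dataclass
class Tweet:
    text: str
    author_followers: int
    author_verified: bool
    has_url: bool
    retweets: int


@dataclass
class CredibilityScorer:
    # Hand-picked starting weights for a few simple signals (illustrative).
    weights: dict = field(default_factory=lambda: {
        "verified": 2.0,
        "followers": 1.5,
        "url": 1.0,
        "retweets": 1.0,
        "exclamations": -2.0,
    })
    learning_rate: float = 0.1

    def _features(self, tweet: Tweet) -> dict:
        # Normalize each raw signal into the 0-1 range.
        return {
            "verified": 1.0 if tweet.author_verified else 0.0,
            "followers": min(tweet.author_followers / 100_000, 1.0),
            "url": 1.0 if tweet.has_url else 0.0,
            "retweets": min(tweet.retweets / 1_000, 1.0),
            "exclamations": min(tweet.text.count("!") / 3, 1.0),
        }

    def score(self, tweet: Tweet) -> int:
        # Weighted sum of the features, clamped onto the one-to-seven scale.
        raw = sum(self.weights[k] * v for k, v in self._features(tweet).items())
        return max(1, min(7, round(1 + raw)))

    def add_feedback(self, tweet: Tweet, user_rating: int) -> None:
        # Nudge each weight toward the user's own one-to-seven rating.
        error = user_rating - self.score(tweet)
        for name, value in self._features(tweet).items():
            self.weights[name] += self.learning_rate * error * value


if __name__ == "__main__":
    scorer = CredibilityScorer()
    wave = Tweet(
        text="Huge wave crashing over the Statue of Liberty right now",
        author_followers=500_000,
        author_verified=True,
        has_url=True,
        retweets=40_000,
    )
    print("before feedback:", scorer.score(wave))  # high score for a fake photo
    scorer.add_feedback(wave, user_rating=1)       # a user flags it as bogus
    print("after feedback:", scorer.score(wave))   # the score drops
```

The feedback step is the part the researchers emphasize: a widely retweeted fake from a big, verified account can start out looking credible, and only user corrections pull its score back down.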
