Al Jazeera and QCRI launch platform that predicts traffic to news articles

QCRI/AJE press release: QCRI and Al Jazeera launch predictive web analytics platform for news

New platform developed by QCRI and Al Jazeera can predict visits to news articles by taking cues from social media

News organisations have vast archives of information, as well as a number of web analytics tools that help them allocate editorial resources to cover different news events and capitalise on that information. These tools allow editors and media managers to react to shifts in their audience's interest, but what has been lacking is a tool to help predict such shifts.

Qatar Computing Research Institute (QCRI) and Al Jazeera are announcing the launch of FAST (Forecast and Analytics of Social Media and Traffic), a platform that analyses in real-time the life cycle of news stories on the web and social media, and provides predictive analytics that gauge audience interest.

“The explosion of big data in the media domain has provided QCRI an excellent research opportunity to develop an innovative way to derive value from the information,” said Dr Ahmed Elmagarmid, Executive Director of QCRI. “Together with our valued partner, Al Jazeera, the QCRI team has developed a platform that will help shift the way media does business.”

“Al Jazeera English’s website thrives on good original content in news and features, dynamic ways of creativity through interactive and crowdsourcing methods, and up-to-date social media tools. We welcome working with QCRI in developing FAST, as it allows us to understand the consumption of news and what is expected to do well in driving traffic forward. Analytics that predict the future trend of a web story are a crucial component in understanding web traffic; this initiative is a component we welcome,” said Imad Musa, Head of Online for Al Jazeera English.

You can test the platform at http://fast.qcri.org/ and read the full press release at the QCRI website. The system is based on research described in the following paper:

Following the social media crowd to discover news stories

With Janette Lehmann (UPF), Mounia Lalmas (Yahoo!) and Ethan Zuckerman (MIT Civic Media), we developed an automatic method (pdf, blog post) that groups together all the users who tweet a particular news item, and later detects new content posted by them that is related to the original news item.

We call each such group a transient news crowd. The beauty of this approach, besides being fully automatic, is that there is no need to pre-define topics, and the crowd becomes available immediately, allowing journalists to cover news beats while following the shifting interests of their audiences.
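The core idea can be sketched in a few lines: collect the users who tweeted a story, then watch their later posts for term overlap with it. This is a deliberately minimal illustration with toy data; the function names, the overlap heuristic, and the example tweets are all invented here, and the published method is considerably more sophisticated.

```python
# Toy sketch of a "transient news crowd": the users who tweeted a story's
# URL form the crowd; their later tweets that share terms with the story
# are flagged as related follow-ups.
from collections import namedtuple

Tweet = namedtuple("Tweet", ["user", "text", "urls", "time"])

def build_crowd(tweets, story_url):
    """Users who tweeted the story form its transient crowd."""
    return {t.user for t in tweets if story_url in t.urls}

def follow_crowd(tweets, crowd, story_terms, after):
    """Later tweets by crowd members sharing terms with the story."""
    related = []
    for t in tweets:
        if t.user in crowd and t.time > after:
            if story_terms & set(t.text.lower().split()):
                related.append(t)
    return related

tweets = [
    Tweet("ana", "Floods hit the coast", {"http://news/1"}, 1),
    Tweet("bob", "Terrible floods today", {"http://news/1"}, 2),
    Tweet("ana", "Relief effort for floods begins", set(), 5),
    Tweet("eve", "Unrelated sports news", set(), 6),
]
crowd = build_crowd(tweets, "http://news/1")           # {'ana', 'bob'}
follow_ups = follow_crowd(tweets, crowd, {"floods", "coast"}, after=2)
```

Here the crowd is assembled purely from who shared the URL, with no topic defined in advance, which is the property the post highlights.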

Continue reading at crowdresearch.org »

Best paper award @ ISCRAM 2013

Congratulations to my colleagues M. Imran (QCRI), S. Elbassuoni (Beirut University), F. Diaz (Microsoft) and P. Meier (QCRI) for a best paper award at the ISCRAM conference. ISCRAM is the main international conference on systems for crisis response and management.

Our work, described in the two papers below (especially the first one), presents a method to extract information nuggets from tweets related to emergencies. For instance, beyond detecting that a tweet is about a donation, we can identify which item is being donated (e.g. clothes, money, etc.).
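To make the donation example concrete, here is a hedged sketch of what extracting such a nugget could look like. The regular expression and the item vocabulary below are illustrative stand-ins invented for this example; the papers use trained machine-learning models rather than hand-written rules.

```python
# Toy nugget extraction: beyond classifying a tweet as donation-related,
# pull out which known item is being donated.
import re

ITEM_VOCAB = {"clothes", "money", "food", "blankets", "water"}

def extract_donated_items(tweet):
    """Return known donation items mentioned in a donation tweet."""
    text = tweet.lower()
    # Only look for items if the tweet mentions donating/giving at all.
    if not re.search(r"\b(donat\w*|giving|sending)\b", text):
        return []
    return [w for w in re.findall(r"[a-z]+", text) if w in ITEM_VOCAB]

extract_donated_items("We are donating clothes and blankets to shelters")
# -> ['clothes', 'blankets']
```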

Official announcement at QCRI website.

News and Social Media (SNOW 2013 Keynote)

Slides from keynote at the Social News on the Web Workshop. Rio de Janeiro, Brazil, May 2013.

Nowhere to hide: The next manhunt will be crowdsourced

New Scientist 2914, 23 April 2013 (free registration required) describes our project Veri.ly:

... A big problem with theories floated on social media is that information can go viral simply because it is popular, whether or not it is true. Patrick Meier of the Qatar Computing Research Institute (QCRI) in Doha is building Verily, a system that allows users to submit verification requests for information they are interested in. Each request prompts a crowd of online workers to set off into their networks to figure it out. The system gathers evidence for and against the claim, though it won't pass judgement.

...By training machine learning algorithms on huge data sets, Meier is building up profiles of the classes of digital evidence that tend to be credible, and those that are not.

As an example, Meier points to a recent study of misinformation on Twitter after the 2010 Chilean earthquake. Carlos Castillo of the QCRI and colleagues showed that non-credible tweets tend to spark responses that question or rebuke them – a trait software can be trained to recognise. "Non-credible information propagates across the twittersphere leaving very specific ripples behind," says Meier. "You could absolutely start having a probability – a percentage chance that particular tweets are not credible."

Full article in New Scientist (free registration required) »
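The "specific ripples" quoted above suggest a simple feature: how often a tweet's replies question or rebuke it. The sketch below computes that one feature; the marker word list is invented for illustration, and a real system in the spirit of the Castillo et al. study would feed many such features into a trained classifier rather than use a single ratio.

```python
# Toy credibility signal: fraction of replies to a tweet that contain a
# questioning or rebuking marker. Non-credible tweets tend to score high.
DOUBT_MARKERS = {"really?", "fake", "source?", "hoax", "false", "debunked"}

def doubt_ratio(replies):
    """Fraction of replies containing a questioning/rebuking marker."""
    if not replies:
        return 0.0
    doubting = sum(
        1 for r in replies
        if any(marker in r.lower() for marker in DOUBT_MARKERS)
    )
    return doubting / len(replies)

replies = ["This is fake", "Source?", "Wow, scary", "Debunked already"]
doubt_ratio(replies)  # -> 0.75
```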
