INEX 2014 Tweet Contextualization Track

Latest news

2014/05/23: Release of the informativeness evaluation postponed to Monday 25 May
2014/05/15: Late runs will be accepted until 20 May
2014/05/09: Run submission deadline extended to 16 May

Overview

About 340 million tweets are posted every day. However, messages of 140 characters are rarely self-contained. The Tweet Contextualization task aims at automatically providing information that explains a tweet, in the form of a summary. This requires combining several types of processing, from information retrieval to multi-document summarization, including entity linking.

The track has been running since 2010, and results show that the best systems combine passage retrieval, sentence segmentation and scoring, named entity recognition, and part-of-speech analysis. Anaphora detection, content diversity measures, and sentence reordering can also help.

Evaluation considers both informativeness and readability. Informativeness is measured as a variant of the absolute log-difference between term frequencies in the reference text and in the proposed summary.
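For illustration, here is a minimal Python sketch of such a log-difference dissimilarity, assuming relative term frequencies and add-one smoothing inside the logarithm; the exact variant used in the official evaluation may differ, and the function and variable names are our own:

import math
from collections import Counter

def log_diff_dissimilarity(reference_tokens, summary_tokens):
    """Sketch of an absolute log-diff between two term frequency
    distributions; lower values mean a more informative summary."""
    ref = Counter(reference_tokens)
    summ = Counter(summary_tokens)
    ref_total = sum(ref.values())
    summ_total = sum(summ.values()) or 1  # avoid division by zero
    score = 0.0
    for term, f_ref in ref.items():
        p = f_ref / ref_total               # relative frequency in reference
        q = summ.get(term, 0) / summ_total  # relative frequency in summary
        # weighted absolute difference of log-scaled frequencies
        score += p * abs(math.log(1 + p) - math.log(1 + q))
    return score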

Use case in 2014

The 2014 task is a slight variant of previous editions and is complementary to CLEF RepLab. Using the same cleaned dump of Wikipedia as in 2013, the new use case is the following: given a tweet AND a related entity, the system must provide some context about the subject of the tweet from the perspective of the entity, in order to help the reader understand it, i.e. to answer questions of the form "why does this tweet concern the entity? should it be an alert?". As in previous editions, the general process combines information retrieval, multi-document summarization, and entity linking.

This context should take the form of a readable summary, not exceeding 500 words, composed of passages from the provided Wikipedia corpus.
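As a sketch of how the 500-word limit can be enforced when assembling a summary from ranked passages (the function and variable names are illustrative):

MAX_WORDS = 500  # word budget for a submitted summary

def build_summary(ranked_passages):
    """Greedily keep whole passages until the word budget is exhausted."""
    kept, words = [], 0
    for passage in ranked_passages:
        n = len(passage.split())
        if words + n > MAX_WORDS:
            break
        kept.append(passage)
        words += n
    return kept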

We regard as relevant summaries those extracts from Wikipedia that both provide context about the subject of the tweet and reflect the perspective of the related entity.

Topics for 2014

A small set of 240 tweets in English has been selected by the organizers from CLEF RepLab 2013, together with their related entities. These tweets have at least 80 characters and do not contain URLs, in order to focus on content analysis.

RepLab provides several annotations for tweets; we selected three of them: the category (4 distinct values), an entity name from Wikipedia (64 distinct values), and a manual topic label (235 distinct values). The entity name should be used as an entry point into Wikipedia or DBpedia and gives the contextual perspective. The usefulness of the topic labels for this automatic task remains an open question because of their variety.

The 2014 topics are now available on the document repository in XML or tab-separated text format. (Login information is available here.)

Document collection for 2013 and 2014

Since the tweets are from 2013, the document collection is the same as in 2013; it has been released at http://qa.termwatch.es/data. The password is available here for all INEX participants.

This document collection was rebuilt in 2013 from a dump of the English Wikipedia from November 2012. Since we target a plain XML corpus allowing easy extraction of plain-text answers, we removed all notes and bibliographic references, which are difficult to handle, and kept only non-empty Wikipedia pages (pages having at least one section).

The resulting documents are made of a title (title), an abstract (a), and sections (s). Each section has a sub-title (h). Abstracts and sections are made of paragraphs (p), and each paragraph can contain entities (t) that refer to Wikipedia pages. The resulting corpus therefore has this simple DTD:

<!ELEMENT xml (page)+>
<!ELEMENT page (ID, title, a, s*)>
<!ELEMENT ID (#PCDATA)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT a (p+)>
<!ELEMENT s (h, p+)>
<!ATTLIST s o CDATA #REQUIRED>
<!ELEMENT h (#PCDATA)>
<!ELEMENT p (#PCDATA | t)*>
<!ATTLIST p o CDATA #REQUIRED>
<!ELEMENT t (#PCDATA)>
<!ATTLIST t e CDATA #IMPLIED>
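Pages conforming to this DTD can be processed with any standard XML parser. The following Python sketch extracts the plain text and entity links of each paragraph (the corpus file name is hypothetical):

import xml.etree.ElementTree as ET

tree = ET.parse("corpus_part.xml")  # hypothetical corpus file
for page in tree.getroot().iter("page"):
    title = page.findtext("title")
    for p in page.iter("p"):
        text = "".join(p.itertext())  # flattens #PCDATA mixed with <t> elements
        # each <t> carries the entity text and, optionally, a target page in @e
        entities = [(t.text, t.get("e")) for t in p.findall("t")]
        print(title, p.get("o"), text, entities)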

Tools

A baseline XML-element retrieval system powered by Indri is available to participants online through a standard CGI interface. The index covers all words (no stop list, no stemming) and all XML tags. Participants who do not wish to build their own index can download this one or use it online. A Perl API is also available. More information here, or contact eric.sanjuan@univ-avignon.fr.
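As a sketch, such a CGI interface could be queried over HTTP as below; the endpoint URL and parameter names here are purely hypothetical, so check the online documentation or the Perl API for the actual interface:

import urllib.parse
import urllib.request

BASE_URL = "http://qa.termwatch.es/cgi-bin/search"  # hypothetical endpoint
params = urllib.parse.urlencode({"query": "Alfred Noble Prize", "results": 10})
with urllib.request.urlopen(BASE_URL + "?" + params) as response:
    print(response.read().decode("utf-8"))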

Evaluation

The summaries will be evaluated according to informativeness and readability, as described in the Overview above.

Result Submission

Participants can submit up to 3 runs. One of the 3 runs should be completely automatic. Manual runs are welcome, provided that any human intervention is clearly documented.

A submitted summary must contain only passages from the document collection (the November 2012 dump of English Wikipedia articles) and must have the following format:

<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 1>
<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 2>
<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 3>
...

where:

<tid> is the tweet (topic) identifier,
Q0 is a required constant,
<file> is the identifier of the Wikipedia page the passage is extracted from,
<rank> is the rank of the passage,
<rsv> is the retrieval status value (score) of the passage,
<run_id> is the identifier of the run.

For example:

167999582578552 Q0 3005204 1 0.9999 I10UniXRun1 The Alfred Noble Prize is an award presented by the combined engineering societies of the United States, given each year to a person not over thirty-five for a paper published in one of the journals of the participating societies.
167999582578552 Q0 3005204 2 0.9998 I10UniXRun1 The prize was established in 1929 in honor of Alfred Noble, Past President of the American Society of Civil Engineers.
167999582578552 Q0 3005204 3 0.9997 I10UniXRun1 It has no connection to the Nobel Prize, although the two are often confused due to their similar spellings.
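As a sketch, a run file in this format could be produced as follows (the run identifier and passage data are illustrative; passage texts must be copied verbatim from the corpus):

# (tweet id, wikipedia page id, score, passage text) -- illustrative data
passages = [
    ("167999582578552", "3005204", 0.9999,
     "The Alfred Noble Prize is an award presented by the combined "
     "engineering societies of the United States, ..."),
]

with open("MyRun1.txt", "w", encoding="utf-8") as out:
    for rank, (tid, page_id, rsv, text) in enumerate(passages, start=1):
        out.write("%s Q0 %s %d %.4f MyRun1 %s\n" % (tid, page_id, rank, rsv, text))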

2014 Schedule

Organizers (contact: contextweet@limsi.fr)

Patrice Bellot, LSIS - Aix-Marseille University
Josiane Mothe, IRIT, University of Toulouse
Véronique Moriceau, LIMSI-CNRS, University Paris-Sud
Eric SanJuan, LIA, University of Avignon
Xavier Tannier, LIMSI-CNRS, University Paris-Sud