INEX 2012 Tweet Contextualization Track

Latest News

July 27, 2012: All results released at http://qa.termwatch.es/data/ (login information here).

Overview

Participants are invited to take part in the INEX tweet contextualization task at CLEF 2012. The use case of this new task is the following: given a new tweet, the system must provide some context about the subject of the tweet, in order to help the reader understand it. This context should take the form of a readable summary, not exceeding 500 words, composed of passages from a provided Wikipedia corpus.
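As a rough illustration of the output constraint, the Python sketch below assembles a summary from passages already retrieved from the corpus while respecting the 500-word limit; the ranked_passages input is a hypothetical list of passage strings, best first.

def build_summary(ranked_passages, word_limit=500):
    """Concatenate passages in rank order without exceeding the word limit."""
    summary, used = [], 0
    for passage in ranked_passages:
        n_words = len(passage.split())
        if used + n_words > word_limit:
            break  # the next passage would push the summary past the 500-word limit
        summary.append(passage)
        used += n_words
    return " ".join(summary)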

The results of the evaluation campaign will be disseminated at the final workshop, which will be organized in conjunction with the CLEF 2012 conference, 17–20 September 2012 in Rome, Italy.

As in the 2011 QA@INEX track, the task to be performed by the participating groups is contextualizing tweets, i.e., answering questions of the form "what is this tweet about?" using a recent cleaned dump of Wikipedia. The general process involves:

We regard as relevant those passage segments that both:

Test data

About 1000 tweets in English were collected by the organizers from Twitter®. They were selected from informative accounts (for example, @CNN, @TennisTweets, @PeopleMag, @science...), in order to avoid purely personal tweets that could not be contextualized. Information such as the user name, tags or URLs will be provided. These tweets are now available in two formats:


A sample set of tweets is available:

The complete test set is available from here for all INEX 2012 active participants.

Document collection

The document collection has been rebuilt based on a recent dump of the English Wikipedia from November 2011. Since we target a plain XML corpus allowing easy extraction of plain-text answers, we removed all notes and bibliographic references, which are difficult to handle, and kept only non-empty Wikipedia pages (pages having at least one section).

The resulting documents are made of a title (title), an abstract (a) and sections (s). Each section has a sub-title (h). Abstracts and sections are made of paragraphs (p), and each paragraph can contain entities (t) that refer to other Wikipedia pages. The resulting corpus therefore follows this simple DTD:

<!ELEMENT xml (page)+>
<!ELEMENT page (ID, title, a, s*)>
<!ELEMENT ID (#PCDATA)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT a (p+)>
<!ELEMENT s (h, p+)>
<!ATTLIST s o CDATA #REQUIRED>
<!ELEMENT h (#PCDATA)>
<!ELEMENT p (#PCDATA | t)*>
<!ATTLIST p o CDATA #REQUIRED>
<!ELEMENT t (#PCDATA)>
<!ATTLIST t e CDATA #IMPLIED>
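As a rough sketch, pages following this DTD can be read with Python's standard xml.etree.ElementTree module; the file name below is a placeholder and whitespace handling is simplified.

import xml.etree.ElementTree as ET

# "corpus_part.xml" is a placeholder name for one file of the collection.
tree = ET.parse("corpus_part.xml")
for page in tree.getroot().iter("page"):
    page_id = page.findtext("ID")
    title = page.findtext("title")
    # itertext() also collects the text of <t> entity tags inside paragraphs.
    abstract = " ".join("".join(p.itertext()) for p in page.find("a").findall("p"))
    for section in page.findall("s"):
        subtitle = section.findtext("h")
        body = " ".join("".join(p.itertext()) for p in section.findall("p"))
        print(page_id, title, subtitle, len(body.split()), "words")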

This corpus is available from here for all INEX 2012 active participants in two file formats:

Baseline system

A baseline XML-element retrieval system powered by Indri is available online with a standard CGI interface. The index covers all words (no stop list, no stemming) and all XML tags. Participants who do not wish to build their own index can use this one, either by downloading it or by querying it online (more information here, or contact eric.sanjuan@univ-avignon.fr).

You can also query this baseline system in batch mode using the Perl APIs here for the 2011 and 2012 document collections. See their synopsis for more details.
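For scripted access from other languages, a hedged sketch of querying such a CGI interface from Python is shown below; the URL and parameter names are placeholders invented for illustration, not the documented interface (see the links above or contact the organizers for the real details).

import urllib.parse
import urllib.request

# Placeholder endpoint and parameter names; the real CGI URL and its query
# fields are documented on the baseline system's page linked above.
BASELINE_URL = "http://example.org/baseline.cgi"

def query_baseline(tweet_text, n_passages=10):
    """Send one tweet text as a query and return the raw response body."""
    params = urllib.parse.urlencode({"q": tweet_text, "n": n_passages})
    with urllib.request.urlopen(BASELINE_URL + "?" + params) as response:
        return response.read().decode("utf-8")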

Evaluation

The summaries will be evaluated according to informativeness (assessed by the organizers) and readability (assessed by the participants).

Result Submission

Participants can submit up to 3 runs. One run out of the 3 should be completely automatic: participants must use only the Wikipedia dump and possibly their own resources (even though the texts of tweets sometimes contain URLs, the Web must not be used as a resource). Manual runs are welcome provided that any human intervention is clearly documented. In any case, a participant cannot submit more than 6 runs in total.

A submitted summary will have the following format:

<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 1>
<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 2>
<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 3>
...

where:
tid is the tweet (topic) identifier,
Q0 is a required constant,
file is the name of the Wikipedia document the passage is extracted from,
rank is the rank of the passage,
rsv is the retrieval status value (score) of the passage,
run_id is the identifier of the run.
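For illustration, a small Python sketch that writes passages in this format is given below; the tweet identifier, file name, scores and run identifier are made-up placeholders.

# Hypothetical data: (tweet id, source Wikipedia file, retrieval score, passage text).
passages = [
    ("TWEET_ID", "WIKI_FILE", 0.92, "Text of passage 1 ..."),
    ("TWEET_ID", "WIKI_FILE", 0.87, "Text of passage 2 ..."),
]

with open("myrun.txt", "w", encoding="utf-8") as out:
    for rank, (tid, file_id, rsv, text) in enumerate(passages, start=1):
        # One line per passage: <tid> Q0 <file> <rank> <rsv> <run_id> <text of passage>
        out.write(f"{tid} Q0 {file_id} {rank} {rsv:.4f} myrun_01 {text}\n")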

Schedule

March 1, 2012: Online registration to the Labs opens.
March 26, 2012: Test set released here.
June 1 to June 15, 2012: Run submissions here (use QA track form).
June 30, 2012: Manual evaluation of readability by participants launched here (logins sent on July 3rd).
July 15, 2012: Results of informativeness evaluation by organizers released to participants here.
July 25, 2012: End of readability evaluation by participants.
July 27, 2012: Results of readability evaluation by participants released to participants here.
August 17, 2012: Submission of CLEF 2012 Working Notes papers.
August 24, 2012: Submission of CLEF 2012 Labs Overviews.
September 17-20, 2012: CLEF 2012 Conference.

Organizers (contact: contextweet@limsi.fr)

Patrice Bellot, LSIS - Aix-Marseille University
Josiane Mothe, IRIT, University of Toulouse
Véronique Moriceau, LIMSI-CNRS, University Paris-Sud
Eric SanJuan, LIA, University of Avignon
Xavier Tannier, LIMSI-CNRS, University Paris-Sud