INEX 2013 Tweet Contextualization Track

Latest news

2013/06/02: Readability evaluation results by organizers
2013/05/26: Informativity evaluation results by organizers
2013/05/01: Spanish subtask: corpus and topics released
2013/04/20: Run submission (English topics) deadline extended to May 1st
2013/03/01: 2013 Topics available on the document repository
2013/02/18: Corpus released

Overview

About 340 million tweets are written every day. However, messages limited to 140 characters are rarely self-contained. The Tweet Contextualization task aims at automatically providing context for a tweet, in the form of a summary that explains it. This requires combining multiple types of processing, from information retrieval to multi-document summarization, including entity linking.

The track has been running since 2010, and results show that the best systems combine passage retrieval, sentence segmentation and scoring, named entity recognition, and part-of-speech analysis. Anaphora detection, content diversity measures, and sentence reordering can also help.

Evaluation considers both informativeness and readability. Informativeness is measured as a variant of the absolute log-difference between term frequencies in the textual reference and in the proposed summary. The maximal informativeness scores obtained by participants from 19 different groups lie between 10% and 14%. There is also room for improving readability.
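
The official scores are computed with the organizers' evaluation toolkit; the Python sketch below only illustrates the log-difference idea on unigram frequencies. The weighting and smoothing choices here are illustrative assumptions, not the official formula.

import math
from collections import Counter

def log_diff_dissimilarity(reference_tokens, summary_tokens):
    """Illustrative (non-official) log-difference dissimilarity between the
    term frequency distributions of a reference text and a summary.
    Lower values mean the summary covers the reference vocabulary better."""
    ref = Counter(reference_tokens)
    summ = Counter(summary_tokens)
    ref_total = sum(ref.values())
    sum_total = sum(summ.values()) or 1          # avoid division by zero
    score = 0.0
    for term, freq in ref.items():
        p_ref = freq / ref_total                 # term probability in the reference
        p_sum = summ.get(term, 0) / sum_total    # term probability in the summary
        # each term is weighted by its probability in the reference
        score += p_ref * abs(math.log(1 + p_ref) - math.log(1 + p_sum))
    return score

# toy usage with unigrams; the official measure also uses bigrams and skip bigrams
print(log_diff_dissimilarity("the prize was established in 1929".split(),
                             "the prize honors alfred noble".split()))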

In 2013, the goal of the task and the evaluation metrics remain unchanged, but tweet diversity has been improved. More specifically, a significant share of tweets with hashtags has been included in the tweet set. Hashtags are the authors' own annotations of key terms in their tweets. In the two previous years, hashtags were underused, even though they are core components of tweets.

The results of the evaluation campaign will be disseminated at the final workshop which will be organized in conjunction with the CLEF 2013 conference.

Use case

As in 2012, the use case of this task is the following: given a new tweet, the system must provide some context about the subject of the tweet in order to help the reader understand it, i.e. answer questions of the form "what is this tweet about?" using a recent cleaned dump of the Wikipedia. The general process involves tweet analysis, passage retrieval over the Wikipedia corpus, and multi-document summarization.

This context should take the form of a readable summary, not exceeding 500 words, composed of passages from a provided Wikipedia corpus.

We regard as relevant summaries extracts from the Wikipedia that both

Topics

598 tweets in English have been collected by the organizers from Twitter(R). They were selected from informative accounts (for example, @CNN, @TennisTweets, @PeopleMag, @science...), in order to avoid purely personal tweets that could not be contextualized. Information such as the user name, tags or URLs is provided in JSON format. These tweets are available in a single XML file with three fields: topic, title and txt. The topic field carries the tweet id as an attribute, the title field gives the tweet text for those who do not want to deal with the JSON format, and the txt field contains the full JSON record with all tweet metadata:

"created_at":"Fri, 03 Feb 2012 09:10:20 +0000", "from_user":"XXX", "from_user_id":XXX, "from_user_id_str":"XXX", "from_user_name":"XXX", "geo":null, "id":XXX, "id_str":"XXX", "iso_language_code":"en", "metadata":{"result_type":"recent"}, "profile_image_url":"http://XXX", "profile_image_url_https":"https://XXX", "source":"<a href='http://XXX'>", "text":"blahblahblah", "to_user":null, "to_user_id":null, "to_user_id_str":null, "to_user_name":null

2013 topics are now available on the document repository (Login information available here).

Document collection

The document collection has been released at http://qa.termwatch.es/data. Password is available here for all INEX participants.

As in previous editions, the document collection has been rebuilt from a recent dump of the English Wikipedia (November 2012). Since we target a plain XML corpus allowing easy extraction of plain-text answers, we removed all notes and bibliographic references, which are difficult to handle, and kept only non-empty Wikipedia pages (pages having at least one section).

The resulting documents are made of a title (title), an abstract (a) and sections (s). Each section has a sub-title (h). Abstracts and sections are made of paragraphs (p), and each paragraph can contain entities (t) that refer to other Wikipedia pages. The resulting corpus therefore has this simple DTD:

<!ELEMENT xml (page)+>
<!ELEMENT page (ID, title, a, s*)>
<!ELEMENT ID (#PCDATA)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT a (p+)>
<!ELEMENT s (h, p+)>
<!ATTLIST s o CDATA #REQUIRED>
<!ELEMENT h (#PCDATA)>
<!ELEMENT p (#PCDATA | t)*>
<!ATTLIST p o CDATA #REQUIRED>
<!ELEMENT t (#PCDATA)>
<!ATTLIST t e CDATA #IMPLIED>
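
To make this structure concrete, here is a small Python sketch that walks one corpus file and collects the title, abstract and titled sections of each page. The file name is a placeholder; the element names follow the DTD above.

import xml.etree.ElementTree as ET

root = ET.parse("corpus_part.xml").getroot()   # placeholder file name
for page in root.iter("page"):
    page_id = page.findtext("ID")
    title = page.findtext("title")
    # abstract paragraphs; itertext() keeps the text inside entity tags <t>
    abstract = [" ".join(p.itertext()) for p in page.find("a").findall("p")]
    # sections: sub-title <h> followed by paragraphs <p>
    sections = [(s.findtext("h"), [" ".join(p.itertext()) for p in s.findall("p")])
                for s in page.findall("s")]
    print(page_id, title, len(abstract), len(sections))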

Tools

A baseline XML-element retrieval system powered by Indri is available to participants online through a standard CGI interface. The index covers all words (no stop list, no stemming) and all XML tags. Participants who do not wish to build their own index can use this one, either by downloading it or by querying it online. A Perl API is also available. More information here, or contact eric.sanjuan@univ-avignon.fr.

Evaluation

The summaries will be evaluated according to informativeness and readability (see the Results section below).

Result Submission

Participants can submit up to 3 runs. One of the 3 runs should be completely automatic: participants must use only the Wikipedia dump and possibly their own resources (even if the text of a tweet sometimes contains URLs, the Web must not be used as a resource). Manual runs are welcome, provided that any human intervention is clearly documented.

A submitted summary must contain only passages from the document collection (November 2012 Wikipedia dump of articles in English) and will have the following format:

<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 1>
<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 2>
<tid> Q0 <file> <rank> <rsv> <run_id> <text of passage 3>
...

where tid is the tweet (topic) id, Q0 is a literal constant, file is the identifier of the Wikipedia page from which the passage is extracted, rank is the rank of the passage in the summary, rsv is the retrieval status value (score) given by the system, run_id identifies the run, and the remainder of the line is the text of the passage. For example:

167999582578552 Q0 3005204 1 0.9999 I10UniXRun1 The Alfred Noble Prize is an award presented by the combined engineering societies of the United States, given each year to a person not over thirty-five for a paper published in one of the journals of the participating societies.
167999582578552 Q0 3005204 3 0.9997 I10UniXRun1 It has no connection to the Nobel Prize, although the two are often confused due to their similar spellings.
167999582578552 Q0 3005204 2 0.9998 I10UniXRun1 The prize was established in 1929 in honor of Alfred Noble, Past President of the American Society of Civil Engineers.
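
A minimal sketch of a helper that writes one run in this format while respecting the 500-word limit per summary; the function name and the data layout are illustrative assumptions.

def write_run(path, run_id, summaries):
    """summaries maps a tweet id to an ordered list of
    (file_id, rsv, passage_text) tuples extracted from the corpus."""
    with open(path, "w", encoding="utf-8") as out:
        for tid, passages in summaries.items():
            words = 0
            for rank, (file_id, rsv, text) in enumerate(passages, start=1):
                words += len(text.split())
                if words > 500:            # a summary must not exceed 500 words
                    break
                out.write(f"{tid} Q0 {file_id} {rank} {rsv:.4f} {run_id} {text}\n")

write_run("I10UniXRun1.txt", "I10UniXRun1",
          {"167999582578552": [("3005204", 0.9999, "The Alfred Noble Prize is ...")]})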

Results

Informativity

Informativity has been evaluated based on three overlapping references:

  1. prior set of relevant pages selected by organizers while building the 2013 topics (40 tweets, 380 passages, 11 523 tokens),
  2. pool selection of most relevant passages from participant submissions for tweets selected by organizers (45 tweets, 1 760 passages, 58 035 tokens),
  3. all relevant text merged together with an extra selection of relevant passages from a random pool of ten tweets (70 tweets, 2 378 passages, 77 043 tokens).

All references are available on the task data repository together with the evaluation toolkit. Runs are ranked by increasing divergence from the final reference (All.skip): lower scores indicate more informative summaries.
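
The divergence is computed over unigrams, bigrams and skip bigrams (the .uni, .bi and .skip columns below). The sketch that follows shows one way to extract those token sets, which can then be plugged into a dissimilarity such as the log-difference sketched in the Overview above; the skip distance of one intervening token is an illustrative choice, not the official setting.

from collections import Counter

def ngram_counts(tokens, kind):
    """Counts unigrams, contiguous bigrams, or skip bigrams over a token list."""
    if kind == "uni":
        return Counter(tokens)
    if kind == "bi":
        return Counter(zip(tokens, tokens[1:]))
    if kind == "skip":
        # contiguous bigrams plus bigrams skipping one token (assumed gap)
        return Counter(zip(tokens, tokens[1:])) + Counter(zip(tokens, tokens[2:]))
    raise ValueError(kind)

tokens = "the prize was established in 1929".split()
for kind in ("uni", "bi", "skip"):
    print(kind, ngram_counts(tokens, kind))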

Rank | Participant | Run | Manual | All.skip | All.bi | All.uni | Pool.skip | Pool.bi | Pool.uni | Prior.skip | Prior.bi | Prior.uni
1 | 199 | 256 | y | 0.8861 | 0.881 | 0.782 | 0.8752 | 0.87 | 0.7813 | 0.921 | 0.9134 | 0.7814
2 | 199 | 258 | n | 0.8943 | 0.8908 | 0.7939 | 0.8802 | 0.8766 | 0.7916 | 0.9288 | 0.9226 | 0.7985
3 | 182 | 275 | n | 0.8969 | 0.8924 | 0.8061 | 0.8789 | 0.8745 | 0.7941 | 0.9172 | 0.9106 | 0.7899
4 | 182 | 273 | n | 0.8973 | 0.8921 | 0.8004 | 0.8802 | 0.875 | 0.7923 | 0.9235 | 0.9155 | 0.7862
5 | 182 | 274 | n | 0.8974 | 0.8922 | 0.8009 | 0.8805 | 0.8751 | 0.7932 | 0.9234 | 0.9154 | 0.7872
6 | 199 | 257 | y | 0.8998 | 0.8969 | 0.7987 | 0.8916 | 0.8895 | 0.801 | 0.9341 | 0.928 | 0.7992
7 | 65 | 254 | n | 0.9242 | 0.9229 | 0.8331 | 0.9162 | 0.9159 | 0.8363 | 0.9473 | 0.943 | 0.8223
8 | 62 | 276 | n | 0.9301 | 0.927 | 0.8169 | 0.9333 | 0.9302 | 0.8285 | 0.9718 | 0.9678 | 0.8286
9 | 46 | 270 | n | 0.9397 | 0.9365 | 0.8481 | 0.9274 | 0.9246 | 0.8418 | 0.9686 | 0.9642 | 0.8529
10 | 46 | 267 | n | 0.9468 | 0.9444 | 0.8838 | 0.9389 | 0.9362 | 0.8802 | 0.9625 | 0.9596 | 0.883
11 | 46 | 271 | n | 0.95 | 0.9475 | 0.8569 | 0.9446 | 0.9421 | 0.8543 | 0.9793 | 0.9759 | 0.867
12 | 62 | 306 | n | 0.9575 | 0.954 | 0.8673 | 0.9545 | 0.9519 | 0.8739 | 0.9548 | 0.9486 | 0.8365
13 | 210 | 277 | n | 0.9662 | 0.9649 | 0.8995 | 0.9642 | 0.9626 | 0.9005 | 0.9792 | 0.9773 | 0.9102
14 | 129 | 261 | n | 0.967 | 0.9668 | 0.8639 | 0.9656 | 0.9659 | 0.8666 | 0.9888 | 0.9862 | 0.8687
15 | 129 | 259 | n | 0.9679 | 0.9673 | 0.8631 | 0.9668 | 0.9666 | 0.8658 | 0.989 | 0.987 | 0.8656
16 | 129 | 260 | n | 0.968 | 0.9677 | 0.8643 | 0.9686 | 0.9686 | 0.8679 | 0.9891 | 0.987 | 0.8672
17 | 128 | 262 | n | 0.9747 | 0.9734 | 0.8738 | 0.9736 | 0.9727 | 0.8775 | 0.9821 | 0.9788 | 0.8635
18 | 128 | 255 | n | 0.9783 | 0.9771 | 0.8817 | 0.9759 | 0.9748 | 0.8801 | 0.9938 | 0.9914 | 0.8941
19 | 138 | 265 | n | 0.9789 | 0.9781 | 0.8793 | 0.9751 | 0.9749 | 0.8821 | 0.9927 | 0.9904 | 0.8845
20 | 138 | 263 | n | 0.9793 | 0.9785 | 0.8796 | 0.9759 | 0.9754 | 0.8843 | 0.9925 | 0.9899 | 0.8856
21 | 138 | 264 | n | 0.9798 | 0.9791 | 0.879 | 0.9772 | 0.9769 | 0.8821 | 0.9926 | 0.9902 | 0.8827
22 | 275 | 266 | n | 0.9835 | 0.9824 | 0.9059 | 0.9865 | 0.9859 | 0.9132 | 0.9903 | 0.9877 | 0.8952
23 | 180 | 269 | n | 0.9999 | 0.9999 | 0.9965 | 0.9999 | 0.9999 | 0.9972 | 1 | 0.9999 | 0.9962
24 | 180 | 269 | y | 0.9999 | 0.9999 | 0.9981 | 0.9998 | 0.9998 | 0.9982 | 1 | 1 | 0.9981

Readability

Readability has been evaluated by the organizers over the ten tweets having the largest textual references (t-rels). For these tweets, summaries are expected to reach almost 500 words, since the reference is much larger. For each participant summary, we then counted the number of words (out of a maximum of 500) in passages that are:

  1. Relevant (T), i.e. clearly related to the tweet;
  2. Sound (A), i.e. with no unresolved references to earlier or later items in the discourse;
  3. Non redundant (R) with respect to previous passages;
  4. Syntactically correct (S).

Non-relevant passages were also counted as unsound, redundant and syntactically incorrect.

Ranking: runs are ranked according to the mean average score per summary over soundness, non-redundancy and syntactic correctness among relevant passages.
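
For instance, given per-summary word counts (out of 500) for each quality, the mean average used for ranking could be computed with a sketch like the following; the counts in the usage line are hypothetical.

def readability_score(summaries):
    """summaries: list of dicts with the number of words (out of 500) judged
    non redundant (R), sound (A) and syntactically correct (S) per summary.
    Returns the mean over summaries of the average of the three proportions."""
    per_summary = [(c["R"] + c["A"] + c["S"]) / (3 * 500) for c in summaries]
    return sum(per_summary) / len(per_summary)

# hypothetical counts for two evaluated summaries
print(readability_score([{"R": 320, "A": 350, "S": 360},
                         {"R": 200, "A": 210, "S": 220}]))
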
Rank | Mean Average | Relevancy (T) | Non redundancy (R) | Soundness (A) | Syntax (S) | Run
1 | 72.44% | 76.64% | 67.30% | 74.52% | 75.50% | 275
2 | 72.13% | 74.24% | 71.98% | 70.78% | 73.62% | 256
3 | 71.71% | 74.66% | 68.84% | 71.78% | 74.50% | 274
4 | 71.35% | 75.52% | 67.88% | 71.20% | 74.96% | 273
5 | 69.54% | 72.18% | 65.48% | 70.96% | 72.18% | 257
6 | 67.46% | 73.30% | 61.52% | 68.94% | 71.92% | 254
7 | 65.97% | 68.36% | 64.52% | 66.04% | 67.34% | 258
8 | 49.72% | 52.08% | 45.84% | 51.24% | 52.08% | 276
9 | 46.72% | 50.54% | 40.90% | 49.56% | 49.70% | 267
10 | 44.17% | 46.84% | 41.20% | 45.30% | 46.00% | 270
11 | 38.76% | 41.16% | 35.38% | 39.74% | 41.16% | 271
12 | 38.56% | 41.26% | 33.16% | 41.26% | 41.26% | 264
13 | 38.21% | 38.64% | 37.36% | 38.64% | 38.64% | 260
14 | 37.92% | 39.46% | 36.46% | 37.84% | 39.46% | 265
15 | 37.70% | 38.78% | 35.54% | 38.78% | 38.78% | 259
16 | 36.59% | 38.98% | 31.82% | 38.98% | 38.98% | 255
17 | 35.99% | 36.42% | 35.14% | 36.42% | 36.42% | 261
18 | 32.75% | 34.48% | 31.86% | 31.92% | 34.48% | 263
19 | 32.35% | 33.34% | 30.38% | 33.34% | 33.34% | 262
20 | 25.64% | 25.92% | 25.08% | 25.92% | 25.92% | 266
21 | 20.00% | 20.00% | 20.00% | 20.00% | 20.00% | 277
22 | 00.04% | 00.04% | 00.04% | 00.04% | 00.04% | 269

Pilot subtask in Spanish

An extra set of topics (tweet texts only) has been released in Spanish, to try a different language and a slightly different task. The topics in Spanish are opinionated personal tweets about music bands, cars and politics. They were manually selected from the CLEF RepLab 2013 test set, among tweets without an external URL and with at least 15 words. Contextualization should also help the reader understand the opinion polarity, allusions and humor.

The Spanish corpus is available here and the topics are here.

Special settings:

Schedule

Organizers (contact: contextweet@limsi.fr)

Patrice Bellot, LSIS - Aix-Marseille University
Josiane Mothe, IRIT, University of Toulouse
Véronique Moriceau, LIMSI-CNRS, University Paris-Sud
Eric SanJuan, LIA, University of Avignon
Xavier Tannier, LIMSI-CNRS, University Paris-Sud