What you can find on this page:

Announcements
General information and motivation of this lab
Important dates in 2025
Organizers and how to reach us

LongEval 2025 announcements!

You can find the final versions of the qrels and queries for the training collections here: (Train qrels and queries)
We have released the Training Collections for the WebRetrieval and SciRetrieval Tasks!

Registration is open here
April 2025: Test data release

Motivation

Most IR models are trained and evaluated on static data. However, (web) data is not static, and research shows that the effectiveness of IR models drops as the test data becomes more distant in time from the training data. LongEval differs from traditional IR shared tasks by giving IR researchers the possibility to focus on models that mitigate this performance drop over time. We envisage that this task will bring more attention from the NLP community to the problem of temporal generalisability of models, to what enables or prevents it, and to potential solutions and their limitations.

The CLEF LongEval Lab encourages participants to develop information retrieval systems that survive dynamic temporal text changes, introducing time as a new dimension for evaluating the performance of ranking models.

Compared with the LongEval 2023 and 2024 editions, in 2025 we enlarge the scope of LongEval to include both web-oriented retrieval and scientific article retrieval. There are two tasks, each focusing on one of these areas (web retrieval and scientific retrieval).

Both LongEval tasks use a sequence of datasets collected at different points in time, called "snapshots". The evaluation of IR systems will consider the drop in IR metrics computed on the various snapshots.
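As an illustration of what such a snapshot-based evaluation can look like, the sketch below computes the mean nDCG of a run on each snapshot with pytrec_eval and reports the relative drop with respect to the earliest snapshot. This is a minimal sketch under assumed file names and a simple relative-drop formula, not the official LongEval evaluation protocol.

    # A minimal sketch (not the official LongEval protocol): score each snapshot's
    # run against its qrels with pytrec_eval, then report the relative drop in
    # mean nDCG with respect to the earliest snapshot. The file names and the
    # relative-drop formula are illustrative assumptions.
    import pytrec_eval

    def load_qrels(path):
        # TREC qrels format: "qid 0 docid relevance"
        qrels = {}
        with open(path) as f:
            for line in f:
                qid, _, docid, rel = line.split()
                qrels.setdefault(qid, {})[docid] = int(rel)
        return qrels

    def load_run(path):
        # TREC run format: "qid Q0 docid rank score tag"
        run = {}
        with open(path) as f:
            for line in f:
                qid, _, docid, _, score, _ = line.split()
                run.setdefault(qid, {})[docid] = float(score)
        return run

    def mean_ndcg(qrels, run):
        # Average the per-query nDCG values returned by pytrec_eval.
        evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg"})
        per_query = evaluator.evaluate(run)
        return sum(q["ndcg"] for q in per_query.values()) / len(per_query)

    # Hypothetical snapshot identifiers, ordered from earliest to latest.
    snapshots = ["t0", "t1", "t2"]
    scores = {s: mean_ndcg(load_qrels(f"qrels.{s}.txt"), load_run(f"run.{s}.txt"))
              for s in snapshots}

    base = scores[snapshots[0]]
    for s in snapshots:
        drop = (base - scores[s]) / base  # relative nDCG drop w.r.t. first snapshot
        print(f"{s}: mean nDCG={scores[s]:.4f}  relative drop={drop:.2%}")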


Previous editions

LongEval 2025 is the third edition of this lab. Check our 2024 and 2023 websites for information about previous years.

Relevant Publications

To understand the details of the LongEval Lab, please check the following publications:
P. Galuscakova, R. Deveaud, G. Gonzalez-Saez, P. Mulhem, L. Goeuriot, F. Piroi, M. Popel: LongEval-Retrieval: French-English Dynamic Test Collection for Continuous Web Search Evaluation.
LongEval 2023 overview paper.
CEUR-WS LongEval 2023 workshop proceedings (scroll down in the document).
LongEval 2024 overview paper.
CEUR-WS LongEval 2024 workshop proceedings (scroll down in the document).

Important Dates

18 November 2024: Registration opens -- here
February 2025: Training data release
April 2025: Test data release
25 April 2025: Registration closes
30 May 2025: Submission of Participant Papers [CEUR-WS] via EasyChair (CLEF-imposed deadline)
27 June 2025: Notification on Participant Papers [CEUR-WS]
7 July 2025: Camera Ready Copy of Participant Papers and Extended Lab Overviews [CEUR-WS]
9-12 September 2025: CLEF Conference

Organizers

Florina Piroi - TU Wien & RSA, AT
Alaa El-Ebshihy - Research Studios Austria (RSA), AT
Tobias Fink - RSA, AT
Philippe Mulhem - University of Grenoble, FR
Philipp Schaer - TH-Köln, DE
David Iommi - RSA, AT
Jüri Keller - TH-Köln, DE
Petra Galuscakova - University of Stavanger, NO
Lorraine Goeuriot - University of Grenoble, FR
Gabriela Gonzalez-Saez - University of Grenoble, FR
Matteo Cancellieri - The Open University, UK
David Pride - The Open University, UK
Petr Knoth - The Open University, UK

Contact

LongEval organizers: longeval-ir-task@univ-grenoble-alpes.fr

Join our Slack channel for any questions.