11:15 - 11:25 | LongEval: Intro and Short Overview
11:25 - 11:45 | Analyzing the Effectiveness of Listwise Reranking with Positional Invariance on Temporal Generalizability (Soyoung Yoon, Jongyoon Kim and Seung-won Hwang)
11:45 - 12:05 | Leveraging Prior Relevance Signals in Web Search (Jüri Keller, Timo Breuer and Philipp Schaer)
12:05 - 12:25 | Team OpenWebSearch at CLEF 2024: LongEval (Daria Alexander, Maik Fröbe, Gijs Hendriksen, Ferdinand Schlatt, Matthias Hagen, Djoerd Hiemstra, Martin Potthast and Arjen P. de Vries)
12:25 - 12:45 | Team Galápagos Tortoise at LongEval 2024: Neural Re-Ranking and Rank Fusion for Temporal Stability (Marlene Gründel, Malte Weber, Johannes Franke and Jan Heinrich Merker)
On this page we present the CLEF 2024 shared task evaluating the temporal persistence of information retrieval (IR) systems and text classifiers. The task is motivated by recent research showing that the performance of these models drops as the test data becomes more distant in time from the training data. LongEval differs from traditional IR and classification shared tasks by focusing explicitly on evaluating models that mitigate this performance drop over time. We envisage that this task will bring more attention from the NLP community to the problem of temporal generalisability of models: what enables or prevents it, potential solutions, and their limitations.
The CLEF 2024 LongEval Lab encourages participants to develop temporal information retrieval systems and longitudinal text classifiers that withstand dynamic temporal text changes, introducing time as a new dimension of ranking model performance.
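As a rough illustration (not the official LongEval evaluation protocol), the degradation over time can be expressed as the relative difference between a system's score on a test split from the training period and its score on a temporally distant split. The Python sketch below uses hypothetical score values; the metric could be, for instance, nDCG for the retrieval task or macro-F1 for the classification task.

```python
# Minimal sketch of quantifying temporal performance drop.
# The function name and the example scores are illustrative assumptions,
# not part of the official LongEval evaluation.

def relative_drop(score_within: float, score_distant: float) -> float:
    """Relative drop between a within-time score and a temporally distant score."""
    return (score_within - score_distant) / score_within

if __name__ == "__main__":
    # Hypothetical values: 0.42 on the within-time split,
    # 0.35 on a split collected several months later.
    drop = relative_drop(score_within=0.42, score_distant=0.35)
    print(f"Relative performance drop: {drop:.1%}")  # -> 16.7%
```

A smaller relative drop indicates a system whose effectiveness persists better as the test data drifts away in time from the training data.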
For Task 1 (LongEval-Retrieval): longeval-ir-task@univ-grenoble-alpes.fr
For Task 2 (LongEval-Classification): Rabab Alkhalifa
Join our Slack channel for any questions.