diff --git a/2024/EvaLatin.md b/2024/EvaLatin.md
index 2789054..072bfd2 100755
--- a/2024/EvaLatin.md
+++ b/2024/EvaLatin.md
@@ -41,7 +41,7 @@ EvaLatin 2024 is organized by Rachele Sprugnoli, Federica Iurescia and Marco Pas
 Dependency parsing will be based on the Universal Dependencies framework. **No** specific training data will be released but participants will be free to make use of any (kind of) resource they consider useful for the task, including the Latin treebanks already available in the UD collection. In this regard, one of the challenges of this task will be to understand which treebank (or combination of treebanks) is the most suitable to deal with new test data. Test data will be both prose and poetic texts from different time periods.
 
 Even for the emotion polarity detection task, **no** training data will be released but the organizers will provide an annotation sample, a manually created polarity lexicon and annotation guidelines. Also in this task, participants will be free to pursue the approach they prefer, including unsupervised and/or cross-language ones (which promise to be the most efficient, given the lack of training data for Latin for this task). Test data will be poetic texts from different time periods.
 
 ### HOW TO PARTICIPATE
-Participants will be required to submit their runs and to provide a technical report that should include a brief description of their approach, focusing on the adopted algorithms, models and resources, a summary of their experiments, and an analysis of the obtained results. Technical reports will be included in the proceedings as short papers: the maximum length is 4 pages (excluding references) and they should follow the [LREC-COLING 2024 official format](https://lrec-coling-2024.org/authors-kit/). Reports will receive a light review (we will check for the correctness of the format, the exactness of results and ranking, and overall exposition).
+Participants will be required to submit their runs (using the [START submission page of the workshop](https://softconf.com/lrec-coling2024/lt4hala2024/)) and to provide a technical report that should include a brief description of their approach, focusing on the adopted algorithms, models and resources, a summary of their experiments, and an analysis of the obtained results. Technical reports will be included in the proceedings as short papers: the maximum length is 4 pages (excluding references) and they should follow the [LREC-COLING 2024 official format](https://lrec-coling-2024.org/authors-kit/). Reports will receive a light review (we will check for the correctness of the format, the exactness of results and ranking, and overall exposition). Participants are allowed to use any approach (e.g. from traditional machine learning algorithms to Large Language Models) and any resource (annotated and non-annotated data, embeddings): all approaches and resources are expected to be described in the systems' reports.