Hi there,
First of all, I’d like to sincerely thank you for your outstanding work on constructing and open-sourcing the dataset. The effort you put into creating it through crowdsourcing is truly appreciated, and it is a valuable contribution to the research community.
As part of my research, I am interested in analyzing the consistency between individual annotators. However, while the labels themselves have been made available, the original crowdsourcing data has not been released: the annotator IDs, the specific samples each annotator labeled, the labels they assigned, and timing information (both task acceptance and submission times). Access to this information would make it possible to calculate inter-annotator agreement and to analyze annotator behavior in more detail.
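For concreteness, here is a rough sketch of the kind of analysis I have in mind. It assumes a hypothetical file `annotations.csv` with `annotator_id`, `sample_id`, `label`, `accept_time`, and `submit_time` columns; those names are only placeholders for whatever format the raw data is actually in.

```python
import pandas as pd
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical raw-annotation file; column names are placeholders.
df = pd.read_csv("annotations.csv")

# One row per sample, one column per annotator, cells holding the assigned label.
pivot = df.pivot_table(index="sample_id", columns="annotator_id",
                       values="label", aggfunc="first")

# Pairwise Cohen's kappa over the samples that both annotators labeled.
for a, b in combinations(pivot.columns, 2):
    shared = pivot[[a, b]].dropna()
    if len(shared) < 2:
        continue
    kappa = cohen_kappa_score(shared[a], shared[b])
    print(f"{a} vs {b}: kappa = {kappa:.3f} over {len(shared)} shared samples")

# Per-task annotation time, if acceptance and submission timestamps are available.
df["duration_s"] = (pd.to_datetime(df["submit_time"])
                    - pd.to_datetime(df["accept_time"])).dt.total_seconds()
print(df.groupby("annotator_id")["duration_s"].describe())
```

A chance-corrected measure that tolerates missing overlaps (e.g. Krippendorff's alpha) would likely be the better final choice, but even the simple pairwise version above needs the per-annotator assignments rather than only the released labels.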
Would it be possible to release the original crowdsourcing files with these details included? Access to this data would greatly improve reproducibility and allow deeper insight into label quality and consistency.
Thank you again for your excellent work, and I look forward to your response!