
Low Confidence Scores for High IoU Segments in THUMOS14 Predictions #144

Open
Damon-Salvatore-liu opened this issue Dec 4, 2024 · 2 comments

Comments

@Damon-Salvatore-liu
[screenshot: prediction table for video_test_0000004]

Hello,

Thank you very much for your outstanding work!

I encountered an issue while working with your codebase on the THUMOS14 dataset. For example, in video_test_0000004, the predicted segment overlaps well with the ground-truth segment, achieving a high IoU (e.g., 0.616060), yet the confidence score for this prediction is unexpectedly low (e.g., 0.06995).

This issue is not limited to this video; I observed similar behavior in other videos as well.

Could you please provide insights into why this might happen? Have you analyzed this phenomenon before? I’d also like to invite the community to discuss potential reasons and solutions for this.

Thank you for your time and support!

@tzzcl
Collaborator

tzzcl commented Dec 8, 2024

A video can contain multiple ground-truth actions. What you see in the table is that a prediction may have low IoU with one specific action (e.g., [0.2, 1.1]), but high overlap with another ground-truth action in the same video.
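To illustrate the point above, a prediction should be matched against every ground-truth action in the video and judged by its best IoU, not the IoU with one particular row of the table. A minimal sketch, with made-up segment boundaries (only [0.2, 1.1] comes from the comment; everything else is hypothetical):

```python
# Hedged sketch: temporal IoU of one predicted segment against every
# ground-truth segment in the same video; keep the maximum.
# Boundaries below are illustrative, not taken from the actual table.

def temporal_iou(pred, gt):
    """1-D IoU between two [start, end] segments (in seconds)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical example: several ground-truth actions in one video.
gt_segments = [[0.2, 1.1], [5.0, 9.8], [12.3, 15.0]]
prediction = [5.4, 9.5]

ious = [temporal_iou(prediction, gt) for gt in gt_segments]
best_iou = max(ious)
# The prediction has zero IoU with [0.2, 1.1] but high IoU with [5.0, 9.8],
# so judging it only against [0.2, 1.1] would be misleading.
```

The same prediction can thus look poor against one ground-truth row while being an excellent match for another.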

@Damon-Salvatore-liu
Author

Thank you very much for your response.
However, you might have misunderstood my point. What I meant is that the predicted segments have relatively high IoU with the ground-truth labels, but their classification scores are relatively low (as shown in the red box). Is this normal? Intuitively, if a segment's IoU is high, its classification score should also be relatively high.
