Marking known/expected test failures and adding extra information #2483
Comments
Adding @nuclearcat and @padovan to the discussion as well.
Assigning to Helen for now. We can change the assignee after discussing how this should be resolved.
@spbnick JFYI, this is the ticket in which we are tracking related information.
@helen-fornazier @laura-nao
We want a mechanism (or support in logspec) that separates out all the errors/warnings and compares them against known issues, such as "No irq handler for vector" in this case. If only known issues are present and there are no other errors, mark the test status as "ignored" (or "pass"). See the sketch below.
^ Example
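A minimal sketch of the matching idea described above, assuming error lines have already been extracted from the log. The `KNOWN_ISSUES` table, the `classify` helper, and the exact status names are illustrative assumptions, not existing logspec API:

```python
import re

# Hypothetical per-platform known-issue patterns, e.g. for the
# grunt/zork Chromebooks mentioned in this issue.
KNOWN_ISSUES = {
    "grunt": [re.compile(r"No irq handler for vector")],
    "zork": [re.compile(r"No irq handler for vector")],
}

def classify(platform: str, error_lines: list[str]) -> str:
    """Return a test status for the error lines extracted from a log.

    - "pass": no errors were found at all
    - "ignored": every error matches a known issue for this platform
    - "fail": at least one error is not a known issue
    """
    if not error_lines:
        return "pass"
    patterns = KNOWN_ISSUES.get(platform, [])
    if all(any(p.search(line) for p in patterns) for line in error_lines):
        return "ignored"
    return "fail"

print(classify("grunt", ["do_IRQ: No irq handler for vector"]))     # ignored
print(classify("grunt", ["BUG: kernel NULL pointer dereference"]))  # fail
```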
Hello,
I'd like to discuss potential strategies for marking specific test failures as known or expected. This arises from some dmesg test failures observed on grunt and zork Chromebooks, where the `emerg` test case is failing due to "No irq handler for vector" errors being reported in the logs (as expected) - see e.g.: https://lava.collabora.dev/scheduler/job/13063812#results_468058486. This issue is known and harmless on these boards. I'm wondering if it would make sense to have a way to encode this information so that users can easily know about it and act accordingly (e.g. focus on other types of errors or filter out specific test failures if needed).

In the specific case of the dmesg test mentioned, I don't see a sane way to report the exact error log lines from the LAVA test case itself. So I guess this may require users to manually tag failures or provide additional information after they are reported.

I suspect there could be other instances where tests could benefit from manually added debug information (e.g. flaky tests); this issue is to discuss whether any actions should be taken to address these scenarios and explore potential solutions. A rough sketch of what such manual tagging could look like follows below.
cc: @nuclearcat
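A minimal sketch of the post-hoc manual tagging idea, assuming failures are stored as records that users can annotate after they are reported. The `TestFailure` record, the `tag_known_issue` helper, and the tag keys are hypothetical, not an existing KernelCI schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestFailure:
    test: str                # e.g. "dmesg.emerg"
    platform: str            # e.g. "grunt"
    log_url: str             # link to the test job results
    tags: dict[str, str] = field(default_factory=dict)

def tag_known_issue(failure: TestFailure, issue_ref: str, note: str) -> None:
    """Attach known-issue metadata so dashboards and users can
    de-emphasize or filter out this failure."""
    failure.tags["known-issue"] = issue_ref
    failure.tags["note"] = note

f = TestFailure(
    test="dmesg.emerg",
    platform="grunt",
    log_url="https://lava.collabora.dev/scheduler/job/13063812#results_468058486",
)
tag_known_issue(f, "#2483",
                "No irq handler for vector: known and harmless on grunt/zork")
print(f.tags)
```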