Refactor the test suite verifier tests #349
Conversation
962157c to 42f2f75
Well I dig it, a lot less code. Maybe the bash file needs to be run on `just test` too, because it could 'slip through' on local development but would get caught on CI, so not a huge deal.
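As a rough illustration of that wiring (everything below is an assumption, not the actual repo layout: the verifier script path and the Rust directory are hypothetical; `bound/kt` is taken from a path quoted later in this review), the `just test` recipe could chain the verifier ahead of the language test runs so a slipped test fails locally too:

```bash
#!/usr/bin/env bash
# Hypothetical body for a `just test` recipe (sketch only).
# Running the verifier first means a missing test name fails locally, not just on CI.
set -euo pipefail

./test/verify_test_cases.sh        # the bash verifier discussed in this PR (hypothetical path)
(cd bound/kt && mvn test)          # Kotlin tests
(cd bound/rs && cargo test)        # Rust tests (assumed directory)
```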
# Extract the test files from the JSON
jq -c '.test_files[] | select(type=="object")' "$json_file" | while read test_file; do
  # Extract language and file path
  language=$(echo "$test_file" | jq -r '.language')
`language` isn't used
# Loop through each test name
for test_name in $test_names; do
  # Check if the test name exists in the file content
  if ! grep -q "$test_name" <<< "$file_content"; then
`grep` is pragmatic and I'm okay with moving it forward, however I'd like to explore an alternative; @diehuxx let me know what you think.

Instead of `grep`, we use the given language's test runtime to execute a command which runs all of the tests and, given all tests pass, prints out the aggregate list of test cases. We then assert that list against our lists inside of test/unit_test_cases, and if tests are missing we print the names of the missing tests. The reason it's slightly more robust is that it relies on the actual test runner to aggregate the test cases which are executable and execute successfully, whereas the current approach could be gamed because it's reading text files. I think `grep` is pragmatic, but, for example, we could accidentally let one slip through if we comment it out. This would also localize the test suite to the given project: rather than relying on adding `"path": "bound/kt/src/test/kotlin/web5/sdk/dids/BearerDidTest.kt"` for every test file we add, the JSON files are merely lists and we rely on the target language's testing tools for test name aggregation, which is less maintenance overhead.
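A rough sketch of that comparison, assuming each file in test/unit_test_cases/ is a flat JSON array of test names and that the runner-aggregated names have already been written to a text file (both the file names and the JSON shape are assumptions; the next comment sketches how the runner output could be produced):

```bash
#!/usr/bin/env bash
# Sketch only: compare expected test names (from the JSON lists) against the names
# the language's test runner actually reports. Paths and JSON shape are assumptions.
set -euo pipefail

expected=$(jq -r '.[]' test/unit_test_cases/*.json | sort -u)  # assumes flat arrays of names
actual=$(sort -u aggregated_test_names.txt)                    # produced by the test runner

# Names that are expected but never reported by the runner
missing=$(comm -23 <(printf '%s\n' "$expected") <(printf '%s\n' "$actual"))
if [ -n "$missing" ]; then
  echo "Missing tests:"
  printf '%s\n' "$missing"
  exit 1
fi
echo "All expected test cases were reported by the test runner."
```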
After a quick tinkering: with `mvn` this is possible to do by generating a report via the Surefire plugin, and with `cargo` it looks like it's natively supported by the `cargo test` command with various CLI arguments. So the idea being, inside of each SDK implementation's directory we would write a script which executes the assertion for the given language. And I like @nitro-neal's idea of bubbling this functionality up into the Justfile regardless.
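For the Rust side, a sketch of what that per-SDK script could do (the crate directory and output file name are assumptions carried over from the sketch above). `cargo test -- --list` prints one `module::path::test_name: test` line per test case, so the bare names can be stripped out like this:

```bash
#!/usr/bin/env bash
# Sketch only: aggregate Rust test names via the test harness's native --list flag.
# Assumes the crate lives in bound/rs; cargo's own status messages go to stderr,
# so stdout is just the "path::to::name: test" lines plus a summary we filter out.
set -euo pipefail

cd bound/rs
cargo test -- --list \
  | grep ': test$' \
  | sed 's/: test$//' \
  | awk -F'::' '{print $NF}' \
  | sort -u > ../../aggregated_test_names.txt
```

The `mvn` side would do the equivalent by parsing the Surefire report, as suggested above.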
Added some commentary with a slightly different approach, lmk what you think @diehuxx
Having tests of tests is weird, so we're at least gonna separate our meta-tests from our tests.

The old way: use `AfterAll` in Kotlin and `lazy_static` in Rust to assert that we are running all the test names specified in tests/unit_test_cases/*.json.
The new way: use a bash script to assert that Rust and Kotlin are each running all the test names specified in tests/unit_test_cases/*.json.

The new way is simpler; we aren't fighting the testing philosophy of each programming language.
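For reference, a minimal sketch of what the "new way" script could look like, stitched together from the fragments quoted earlier in this review. The index file name and the `tests` field are assumptions; the `path` field comes from the review discussion.

```bash
#!/usr/bin/env bash
# Sketch only: grep-based verifier assembled from the snippets quoted above.
set -euo pipefail

json_file="test/unit_test_cases.json"   # hypothetical index of test files

# Extract the test files from the JSON
jq -c '.test_files[] | select(type=="object")' "$json_file" | while read -r test_file; do
  path=$(echo "$test_file" | jq -r '.path')
  test_names=$(echo "$test_file" | jq -r '.tests[]')   # assumed field with expected names
  file_content=$(cat "$path")

  # Loop through each test name
  for test_name in $test_names; do
    # Check if the test name exists in the file content
    if ! grep -q "$test_name" <<< "$file_content"; then
      echo "Missing test '$test_name' in $path"
      exit 1
    fi
  done
done
```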