Improve the robustness of the test suite #98
Leaving this as a note.
Thanks for raising this, fish. I'm assuming you're referring to this, right? https://nodejs.org/api/test.html At first glance, the API does look similar to our existing use of Jest - it even seems to support mocking nicely, and so on. Happy to discuss later in the year - removing dependencies would help, as long as it doesn't make support for the platforms we're targeting harder. We ran into some problems earlier in development when targeting Node over browsers, and there are a number of new runtimes we probably want to be explicit about supporting / not supporting when making this decision.
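For reference, a minimal sketch of what that could look like with the built-in `node:test` runner - the test names and values here are illustrative, not taken from our suite:

```js
// Node's built-in test runner plus the core assert module.
// Run with: node --test
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('adding works sanely with decimals', () => {
  // assert.strictEqual(0.2 + 0.1, 0.3) would fail due to floating point,
  // so a tolerance check is used instead.
  assert.ok(Math.abs((0.2 + 0.1) - 0.3) < 1e-5);
});

test('mocking is supported natively', (t) => {
  const logger = { log: () => {} };
  // t.mock.method wraps an existing method with a tracked mock.
  t.mock.method(logger, 'log');
  logger.log('hello');
  assert.strictEqual(logger.log.mock.callCount(), 1);
});
```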
Renamed this issue. Currently, the biggest problem we have is our test suite being brittle - in that a change to one input (e.g. global average grid intensity) causes 20-something tests to fail. This requires the expected values for each test to be recalculated and updated so that tests pass again. In a perfect world, our tests would be robust enough to gracefully handle a change like this without chucking the toys out of the pram.
hi @fershad - one simple approach used on other projects is to agree a sensible range of variation from a given number, and then rather than testing for a returned value to be EXACTLY that figure, use a range. Jest has some native support like so, for checking to a given number of digits after the decimal point:

```js
test('adding works sanely with decimals', () => {
  expect(0.2 + 0.1).toBe(0.3); // Fails!
});

test('adding works sanely with decimals', () => {
  expect(0.2 + 0.1).toBeCloseTo(0.3, 5); // passes
});
```

See the Jest docs on [`toBeCloseTo`](https://jestjs.io/docs/expect#tobeclosetonumber-numdigits) for more.

Another approach is to try using a custom matcher to sanity check that a number is within, say, a 10% range either way of what we expect. An example custom matcher in Jest for checking a figure is between two numbers is linked below:

https://gist.github.com/marcelblijleven/70058042eb2054f43c18a24b8516a79e

It's not a huge leap to have a matcher based on the one above that checks a value is within a sensible margin of error, by passing in a single number, then checking the returned result is between 90% and 110% of it.

Worth a try?
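For illustration, here's a minimal sketch of such a matcher - the name `toBeWithinMarginOf` and the 10% default are assumptions for the example, not taken from the gist:

```js
// Hypothetical custom matcher: passes when `received` falls within
// `margin` (a fraction, e.g. 0.1 for 10%) either side of `expected`.
expect.extend({
  toBeWithinMarginOf(received, expected, margin = 0.1) {
    const lower = expected * (1 - margin);
    const upper = expected * (1 + margin);
    const pass =
      received >= Math.min(lower, upper) && received <= Math.max(lower, upper);
    return {
      pass,
      message: () =>
        `expected ${received} ${pass ? 'not ' : ''}to be within ` +
        `${margin * 100}% of ${expected} (${lower} to ${upper})`,
    };
  },
});

test('result is within 10% of the expected figure', () => {
  expect(0.118).toBeWithinMarginOf(0.123); // passes: within ±10%
});
```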
This test-only change:

- Tests various permutations of `perByte()`
- Uses [`toBeCloseTo()`][0] (see [issue thegreenwebfoundation#98][1])

[0]: https://jestjs.io/docs/expect#tobeclosetonumber-numdigits
[1]: thegreenwebfoundation#98
Attempted to solve this for the
@EvanHahn I'll check the PR over the weekend when I'm home from travel. Thank you very much for picking up this issue.
Sounds good. I'd love to help further with this issue. If useful, feel free to message me on the CAT Slack or by emailing [email protected].
Sorry @EvanHahn, this one slipped off my radar while traveling. I've just left some comments. 🙏🏽
@EvanHahn I've opened a new PR #156 and started some of the work of tidying up/updating the sustainable web design model tests. I've updated the
@fershad Yes, will do. I expect to have something by the end of next week.
What is the status of the
Relatedly, the
I just opened #157 to clean up the
@EvanHahn thanks for this. Both of these implementations were before my time, but I'll comment on them & pull in others who might be able to add more context.
I believe that these methods are specific to SiteSpeed, and are from the very early days of this library. I think they are still used by SiteSpeed itself, but @soulgalore should be able to provide more context.
You're talking about methods like
The tests we have for checking input and output in our models calculate figures to high levels of precision, using numbers that are themselves long strings of digits where it's not obvious where they come from. Most of the time, we check whether a number exactly matches the one we have given.
This contributes to our tests being brittle, and makes them hard to compare with the related spreadsheets we use to explain how the different models work.
For this issue we want a documented, easier to understand approach for our tests to follow, to make it easier to debug, easier to explain what is happening at various stages in the models, and easier to explain how one set of results from one model might differ from another.
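As a hypothetical illustration of the pattern described above (the function call and constants are invented for the example, not taken from the codebase):

```js
// Brittle: a change to any upstream input (e.g. average grid intensity)
// invalidates this exact, unexplained constant.
expect(co2.perByte(1000000)).toBe(0.28959234861);

// More robust: assert to a sensible precision, so small upstream
// changes don't cascade into dozens of failing tests.
expect(co2.perByte(1000000)).toBeCloseTo(0.29, 2);
```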