Unit Tests #25
Comments
I've found unit tests often onerous to maintain, but if you know of an easy-to-set-up unit testing framework, it could be useful as we expand support to other devices.
Any potentially onerous scenarios you can picture? I am mostly worried about regressions. Currently I am thinking of unit testing the Device classes, mocking out the API classes with the golden result for the info dict. One potential trickiness I see is that we use a single class for all AWS devices, but the golden responses are per model; it most likely translates to one test class per model. Python comes with a stock unit testing framework that works pretty well. I have also used pytest; both are relatively easy to set up if I remember correctly.
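As a rough illustration of the "one test class per model" idea, here is a sketch using Python's stock unittest with a fake standing in for the API class. The import path, the DeviceAws constructor, the device_info method, and the golden dict fields are all assumptions for illustration, not blueair_api's actual interface.

```python
# Sketch only: class names, constructor shape, and golden fields are assumed.
import unittest
from unittest import IsolatedAsyncioTestCase

from blueair_api import DeviceAws  # assumed import path

# One golden "info" payload per supported model, captured from real devices.
GOLDEN_H35I = {"model": "h35i", "fan_speed": 2, "standby": False}
GOLDEN_PROTECT = {"model": "protect", "fan_speed": 1, "standby": True}


class FakeAwsApi:
    """Stands in for the real API class; serves a canned golden info dict."""

    def __init__(self, golden):
        self.golden = golden

    async def device_info(self, uuid):  # assumed method name and signature
        return dict(self.golden)


class DeviceAwsTestBase(IsolatedAsyncioTestCase):
    golden = None  # each per-model subclass sets its golden response

    async def asyncSetUp(self):
        if self.golden is None:
            self.skipTest("base class defines no golden response")
        # Assumed constructor shape; adjust to the real DeviceAws signature.
        self.device = DeviceAws(FakeAwsApi(self.golden), uuid="test-uuid")
        await self.device.refresh()

    async def test_attributes_match_golden(self):
        self.assertEqual(self.device.fan_speed, self.golden["fan_speed"])
        self.assertEqual(self.device.standby, self.golden["standby"])


class TestH35i(DeviceAwsTestBase):
    golden = GOLDEN_H35I


class TestProtect(DeviceAwsTestBase):
    golden = GOLDEN_PROTECT


if __name__ == "__main__":
    unittest.main()
```

The per-model subclasses inherit the assertions from the base class, so supporting a new device in the tests is mostly a matter of dropping in its golden response.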
My experience with unit tests has often been that people mock too much, so a failing test doesn't represent an error, just a change, and they end up changing the test without really exercising the integration level. Do you have experience with any integration-level frameworks in Python? The Blueair API has been very stable, so if we could mock out the API level only and focus on assertions against real/mock API calls, that would be great and feel a lot more useful than adding class-level tests that just verify themselves, if that makes sense?
There is a Blueair server written in Node.js, but it seems to cover only the non-AWS API, and it is also a bit of an overkill. We don't have the devices, nor the servers, so we have to use a mock (or a fake, whichever name we call it). Faking at the API service level seems easiest because that is closest to our logic. If we fake the API server responses, and split the testing of authentication out of the device API (since they are orthogonal axes of functionality), then calling these integration tests or unit tests makes little difference in terms of implementation (as there is no need to manage SUT setup/teardown).

I just looked it up, and python-kasa seems to use pytest (e.g. https://github.com/python-kasa/python-kasa/blob/master/tests/test_bulb.py).

As a first step, I would like the tests in blueair_api to exercise all of the DeviceAws (and Device) attributes and async setters, and assert their local cache update logic as well as the refresh logic. This will ensure we are protected against major regressions as we add more device support to the shared DeviceAws class. Let me draft something up quickly to share.

On the ha_blueair side, a particular test case I had in mind is asserting the behavior of the fan's set_percentage: for each percentage integer, it should translate to exactly the desired integer fan speed. This can be done either as a test for a percentage-to-speed utility function (maybe easier), or at the data coordinator level (which seems more complex and is better left until we study how HA does tests).
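For the set_percentage idea, a self-contained sketch of the percentage-to-speed test could look like the following. The helper name, the 0-3 speed range, and the boundary expectations are assumptions for illustration; the real mapping lives in ha_blueair.

```python
# Sketch of asserting that every percentage maps to exactly one valid speed.
import math

import pytest

SPEED_COUNT = 3  # assumed: speeds 1..3, with 0 meaning off


def percentage_to_speed(percentage: int) -> int:
    """Map a 0-100 percentage to an integer fan speed (0 = off)."""
    if percentage <= 0:
        return 0
    return max(1, math.ceil(percentage / 100 * SPEED_COUNT))


@pytest.mark.parametrize("percentage", range(0, 101))
def test_every_percentage_maps_to_a_valid_speed(percentage):
    assert percentage_to_speed(percentage) in range(0, SPEED_COUNT + 1)


@pytest.mark.parametrize(
    "percentage,expected",
    [(0, 0), (1, 1), (33, 1), (34, 2), (67, 3), (100, 3)],
)
def test_selected_percentage_boundaries(percentage, expected):
    assert percentage_to_speed(percentage) == expected
```

Testing the utility function directly keeps the assertion exhaustive (all 101 percentages) without having to stand up a data coordinator.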
Now that a unit test framework has been added to blueair_api, maybe before starting ha_blueair's unit test framework we should look into consolidating the non-AWS and AWS code paths, and under that banner add coverage for the non-AWS path before diving into testing for ha_blueair. A slight motivation: the fact that blueair_api has unit tests while ha_blueair does not may encourage consolidating more logic into blueair_api in the longer run. So it is perhaps a good idea to keep the status quo for a bit, as an experiment to verify this theory.
Before we add more unit tests, I'd rather focus on adding more devices to the AWS device tests. I don't think it is worth adding tests to ha_blueair; that energy would be better spent making an official (core) HA component, since the main reason I haven't bothered is that we'd have to have tests. Adding to core is, in my experience, a bit slow, but it would vastly increase the reliability of the integration, since it would be refactored as core makes breaking changes, which account for most of the changes I generally have to make. When adding to core, the maintainer team will want us to start small with one or two platforms at once; it is likely best to start with fan (and sensors, if they let us have two at once).
Yes, moving the component into HA core would be nice (large cost, large return); augmenting the blueair_api package toward a CLI tool could also be cool (like python-kasa) -- the immediate value is unclear, but the effort is also low.

I am a bit confused about your rationale regarding tests and moving ha_blueair into core -- does HA core not require test coverage? Or do you mean we would be riding core's test infra (in which case we can simply port the test cases over)? Is there a guideline doc you have read about the policies regarding tests (which seem to be the most effective tool for reliability)?

If we are not adding unit tests to ha_blueair, then to cover the non-trivial logic like the percentage to fan speed conversion, we may shift it into blueair_api. The consequence of doing so would be leaving ha_blueair very thin, doing only the attribute-to-entity translation, for example. Unfortunately, in my opinion, even something that thin can still break for individual entities, because each entity has its own source code execution path. I don't have a good idea how to deal with this.
Yeah, I meant riding core's test infra. If we can imitate core's testing framework without a lot of work, sure.
Re: #22 (comment)
stub.py is an integration test. I have my local copy that I run as well.
The problem is that the system under test for stub.py requires the actual devices. As a result, a developer cannot easily assert no regression without already owning the device: you probably cannot really test whether a change breaks the H35i, and I cannot test whether my changes cause regressions on other devices.
I wonder if we should start adding unit tests using the golden AWS (and non-AWS) responses as the mock device info. Doing so would also force us to start collecting the golden info responses into source code, which may be more organized than keeping them as part of the issues.
If this sounds good to you, I can start setting up some dev infra for unit testing. But the bigger effort is to track down the golden info responses.
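If we do collect the golden responses into source code, one low-effort layout is a directory of JSON fixtures that the tests parametrize over. A minimal sketch, assuming a tests/fixtures/aws/ layout and a "model" key that may not match the real payloads:

```python
# Sketch: golden responses kept as JSON files in the repo, one per model.
import json
import pathlib

import pytest

FIXTURE_DIR = pathlib.Path(__file__).parent / "fixtures" / "aws"  # assumed layout


def load_golden(name: str) -> dict:
    """Load one captured device info response, e.g. 'h35i.json'."""
    return json.loads((FIXTURE_DIR / name).read_text())


@pytest.mark.parametrize("path", sorted(FIXTURE_DIR.glob("*.json")))
def test_golden_fixture_is_well_formed(path):
    golden = load_golden(path.name)
    # Every captured response should at least identify its model.
    assert "model" in golden  # assumed key; adjust to the real payload shape
```

The same load_golden helper could then feed the per-model device tests, so each new device contribution is just one more JSON file.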