Add coverage test for Spike ColorSensor #1
Some additional background about this repository until this is documented in any shape or form:
Fun stuff, thanks. My idea at the moment is to add this in, starting without verification of the results. I was hoping to verify the hub light color in this script too, but the reported HSV values are too far off.
That sounds like a good plan. We could probably order the tests somehow so that these types of tests always run first.
Yeah, these need to be calibrated better, especially the read method for external lights. |
It would also be good to test everything with and without keyword arguments. Sometimes we document the wrong name.
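For example, a minimal sketch of such a check (the `surface` keyword is taken from the published `ColorSensor.color` documentation; the port is an assumption):

```python
from pybricks.pupdevices import ColorSensor
from pybricks.parameters import Port

sensor = ColorSensor(Port.A)

# Call the same method positionally and with the documented keyword name.
# If the docs name the argument incorrectly, the keyword call raises TypeError.
color_positional = sensor.color(True)
color_keyword = sensor.color(surface=True)

# Back-to-back readings of a static target should normally agree.
assert color_positional == color_keyword, "Expected {0}, got {1}.".format(
    color_positional, color_keyword)
```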
Would be nice to have the documentation verified by tests. Well, let me get started...
Right, and feel free to test out of bounds values as well. For example, in the display I tested negative pixel indexes. Depending on what it is, it may pass or raise an exception, but either one is a good test to prove that it does not crash.
It's still early days, so if you intend to work on this, chances are you will be defining the standards :) So yeah, feel free to investigate and see what you find practical.
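For instance, a minimal sketch of such a display test, assuming the published `PrimeHub` display API:

```python
from pybricks.hubs import PrimeHub

hub = PrimeHub()

# A negative pixel index may be accepted or raise an exception.
# Either outcome is acceptable; the point is that the firmware must not crash.
try:
    hub.display.pixel(-1, 0, 100)
    print("Negative index accepted.")
except Exception as e:
    print("Negative index raised", type(e))
```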
That is a nice positive response. So out-of-bounds tests are in ;-)
script to test Spike ColorSensor for pybricks/pybricks-coverage#1

- verify that the published documentation is accepted
- no verification at the moment of the results returned by the sensor
- not all wrong parameters **are** refused
Maybe a stupid idea, but... Would it be doable (useful?) to generate tests from docstrings that document the syntax and arguments for devices? I am thinking of tests to verify that the docs are compatible with the firmware, and vice versa.
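As a rough illustration, CPython's `doctest` module runs the examples embedded in docstrings. MicroPython has no `doctest`, so such a check would have to run on a PC against the documented signatures; `hsv_to_color` below is purely hypothetical:

```python
import doctest

def hsv_to_color(h, s, v):
    """Map an HSV reading to a named color.

    >>> hsv_to_color(0, 100, 100)
    'RED'
    """
    return "RED" if h < 30 or h > 330 else "OTHER"

if __name__ == "__main__":
    # Runs every docstring example in this module and reports any mismatch.
    doctest.testmod()
```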
Some updates: I've been thinking about how to write these tests and added a few here. After some consideration, these are the guidelines I used. Input is welcome.

**Succeed silently, fail loudly**

This can be done with:

```python
# ultrasonic_sensor is an already initialized UltrasonicSensor.
distance = ultrasonic_sensor.distance()
assert distance > 100, "Expected > 100 mm, got {0}.".format(distance)
```

This explains what failed, and also gives the line number. I've found this to be more helpful than testing for an expected-output mismatch like the tests MicroPython has: there you still have to 1) find the expected output and 2) manually search for where it failed.

If we should get an error as part of a test, we can just catch it and assert that it has the expected type:

```python
try:
    a = 1 / 0
    raise ValueError  # Never reached: 1/0 already raised ZeroDivisionError.
except Exception as e:
    expected = ZeroDivisionError
    assert type(e) == expected, "Expected {0} but got {1}".format(expected, type(e))
```

**Keep things simple**

Avoid auxiliary functions when they are there just to make test output "nice" or "efficient". If we find that some functions are really helpful, we could add them to the firmware so you could import them.

**Keep things short**

Keep tests very short and to the point. We can automate the process of running a bunch of small scripts.

**No numbering schemes**

Numbered test names suggest that the order matters. If order is important, then those two tests should be one test. Each test does its own initialization, if any. When running tests automatically, we can make sure to run previously failed tests first.
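Put together, a complete test following these guidelines can be as small as this (`ColorSensor` API and port per the published docs; the expected range is an assumption about the test rig):

```python
from pybricks.pupdevices import ColorSensor
from pybricks.parameters import Port

# Each test does its own initialization.
sensor = ColorSensor(Port.A)

# Succeed silently, fail loudly.
reflection = sensor.reflection()
assert 0 <= reflection <= 100, "Expected 0-100%, got {0}.".format(reflection)
```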
Then on to the actual tests. I've been thinking we could build small modules on which we can run several tests. Here's "Hardware Module 1" used in the tests I linked to above: [photo of Hardware Module 1]. There may also be a single large test that combines all modules, but smaller modules make it easier for people to try this out themselves. Ideally, all of them can be built simultaneously from a limited number of sets.
@BertLindeman, does this make sense? I'd be curious if this could work for your tests as well, after some adaptation. As you've seen, we'll probably keep tests in the main repo instead of here.
Sure, less output if all is OK. Less clutter.
Hard for me not to enhance things, but I agree.
No need for numbering if the tests themselves are short.
Agreed, many small tests can easily be automated so that they all run.
Makes good sense, @laurensvalk. I will take a look at your examples; so much better than my coding. A bit related: would it be possible to automate testing the examples in the docs, so there is no need to build separate tests for those?
I've been wondering about that too. We'd probably need to revise the examples to make sure they could run on the test setups. This could be worth the effort though.
I think so. The first problem a new user encounters is an example not working. I wonder: would you want to see which firmware version a test has been run with?
I think we'd mainly just try to run all tests whenever we prepare a beta or full release. Then if everything passes, we press the button to release it.
By taking the examples out of the docs by hand? I hope not.
Not sure what you mean. We'd leave the examples in the docs. I just meant that we don't need each test to create a report with a version number, etc. The tests either pass or they don't. We can open issues for things that are broken, and go ahead with a release if we are happy enough with the results.
The examples would go as Python source onto a hub, to verify the syntax and whether they work.
OK.
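As a first step, a PC-side sketch like this could at least catch syntax errors before anything is sent to a hub (the `examples` folder name is hypothetical):

```python
import os

for name in sorted(os.listdir("examples")):
    if not name.endswith(".py"):
        continue
    path = os.path.join("examples", name)
    with open(path) as f:
        source = f.read()
    # compile() raises SyntaxError on invalid code; actually running the
    # script on a hub would then verify runtime behavior as well.
    compile(source, path, "exec")
    print("compiled OK:", path)
```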
Yes.
OK, Laurens. Thanks.
@laurensvalk, from the photo above I can almost build "Hardware Module 1".
Thank you @laurensvalk
FYI, the ports were changed in pybricks/pybricks-micropython@b4b450d
Indeed a nice change. I was using a Technic hub to test with, as it was close at hand.
@laurensvalk Got
@laurensvalk
Copied from support issue #224
I still don't know how to simply refer to an issue in another repo.
Will give it a go for the Spike ColorSensor.