Add coverage test for Spike ColorSensor #1

Open
BertLindeman opened this issue Jan 20, 2021 · 27 comments

Comments

@BertLindeman
Contributor

@laurensvalk
Copied from support issue #224
I still don't know how to simply refer to an issue in another repo.

Any hint on where I might look first?

Thanks for the help! It would be really helpful to have one script per sensor, which tests all its methods, with all possible arguments. Like this, but for sensors.

The SPIKE sensors would be a good place to start. Perhaps mounted side by side, with one motor (itself not the subject of these tests) waving something colored in front of the sensors to verify that they give the expected values. Simpler is better, so that at least the test itself does not fail mechanically.

Let's continue this conversation in the coverage repo.

Will give it a go for the Spike color sensor.

@laurensvalk
Member

Some additional background about this repository until this is documented in any shape or form:

  • The most important folder is /scripts, containing all the Pybricks 3.0 scripts to be tested. There's only one for now.
  • The /ev3 folder contains an EV3 project for a robot that can operate all the buttons and the USB cable on a SPIKE Prime Hub. This is to facilitate automated testing of firmware installation, and turning the SPIKE Prime hub on and off. The design of this EV3 robot is not documented yet.
  • The /mailbox folder contains some tools to communicate with the EV3 brick.
  • The /main.py is a script on the PC that talks to the EV3 as well as the SPIKE Prime Hub, to sequentially run everything in /scripts.

@BertLindeman
Contributor Author

Fun stuff, thanks.
(I used to do QA on mainframe security, so some background is available)

My idea at the moment is to add a spike_color_sensor.py program in /scripts that "does" all documented functions and parameters.

Starting without verification of the results, so just a syntax check first.
A result check can be added once an initial script has been accepted.

I was hoping to also verify the hub light color in this script, but the reported HSV values are too far off.
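
A minimal sketch of what such a syntax-only exercise script in /scripts could look like, assuming the Pybricks 3.0 ColorSensor API; the port and the exact method list are assumptions, and no results are verified yet:

from pybricks.pupdevices import ColorSensor
from pybricks.parameters import Port

# Exercise the documented methods; no result verification yet.
sensor = ColorSensor(Port.A)

print(sensor.color())
print(sensor.color(surface=False))
print(sensor.reflection())
print(sensor.ambient())
print(sensor.hsv())
sensor.lights.on(50)
sensor.lights.off()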

@laurensvalk
Member

My idea at the moment is to add a spike_color_sensor.py program in /scripts that "does" all documented functions and parameters.

That sounds like a good plan. We could probably order the tests somehow so that these types of tests always run first.

@laurensvalk
Member

I was hoping to also verify the hub light color in this script, but the reported HSV values are too far off.

Yeah, these need to be calibrated better, especially the read method for external lights.
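
As an illustration only, a hub-light check with a generous tolerance could look roughly like this; the port, timing, and tolerance are assumptions, and the uncalibrated readings may still make it fail:

from pybricks.hubs import PrimeHub
from pybricks.pupdevices import ColorSensor
from pybricks.parameters import Color, Port
from pybricks.tools import wait

hub = PrimeHub()
sensor = ColorSensor(Port.A)

# Point the sensor at the hub light and read the ambient color.
hub.light.on(Color.RED)
wait(500)
hsv = sensor.hsv(surface=False)

# Red has hue 0; allow for wrap-around at 360 degrees and a wide tolerance.
error = min(hsv.h, 360 - hsv.h)
assert error <= 30, "Expected hue near 0, got {0}".format(hsv.h)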

@laurensvalk
Member

It would also be good to test everything with and without keyword arguments. Sometimes we document the wrong name.

@BertLindeman
Contributor Author

It would also be good to test everything with and without keyword arguments. Sometimes we document the wrong name.

Would be nice to have the documentation verified by tests.
I intend to test the default, and all possible values where possible, or otherwise only the low and high limits and something in between.

Is raise BaseException a valid signal if this test fails?

Well, let me get started....
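
A rough sketch of the positional/keyword and limit checks described above; the port and the chosen values are assumptions, and the keyword names should follow the documentation:

from pybricks.pupdevices import ColorSensor
from pybricks.parameters import Port

sensor = ColorSensor(Port.A)

# Same call with and without the documented keyword name.
print(sensor.color(True))
print(sensor.color(surface=True))

# Low limit, high limit, and something in between for the light brightness.
sensor.lights.on(0)
sensor.lights.on(50)
sensor.lights.on(100)
sensor.lights.off()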

@laurensvalk
Member

Right, and feel free to test out-of-bounds values as well. For example, for the display I tested negative pixel indexes.

Depending on what it is, it may pass or raise an exception, but either one is a good test to prove that it does not crash.

Is raise BaseException a valid signal if this test fails?

It's still early days, so if you intend to work on this, chances are you will be defining the standards :) So yeah, feel free to investigate and see what you find practical.
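
For example, an out-of-bounds check on the display could look roughly like this (a sketch only; whether a negative index passes or raises is exactly what the test records):

from pybricks.hubs import PrimeHub

hub = PrimeHub()

# Either outcome is acceptable, as long as the hub does not crash.
try:
    hub.display.pixel(-1, -1, 100)
    print("Negative pixel index was accepted")
except Exception as e:
    print("Negative pixel index raised", type(e))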

@BertLindeman
Contributor Author

That is a nice positive response.

So out-of-bounds tests are in ;-)

BertLindeman referenced this issue in BertLindeman/pybricks-coverage Jan 21, 2021
script to test Spike ColorSensor for pybricks/pybricks-coverage/#1

verify that published documentation is accepted 

No verification at the moment on the results returned by the sensor.
Not all wrong parameters **are** refused.
@BertLindeman
Contributor Author

Maybe stupid idea, but . . .

Would it be doable (useful?) to generate tests from docstrings that document the syntax and arguments for devices?

I am thinking of tests to verify that the doc is compatible with the firmware, and vice versa.
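
On CPython this is what the standard doctest module does; a minimal sketch follows. The helper name is hypothetical, and doctest is not available in MicroPython on the hub, so something like this would have to run on the PC, or the docstring examples would have to be extracted into scripts first.

import doctest

# Hypothetical helper, just to illustrate docstring-driven tests.
def reflection_to_percent(raw, maximum=1023):
    """Convert a raw reading to a percentage.

    >>> reflection_to_percent(1023)
    100.0
    >>> reflection_to_percent(0)
    0.0
    """
    return 100.0 * raw / maximum

# Collect and run every example embedded in the docstrings of this module.
doctest.testmod()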

laurensvalk pushed a commit that referenced this issue Mar 12, 2021
script to test Spike ColorSensor for pybricks/pybricks-coverage/#1

verify that published documentation is accepted 

No verification at the moment on the results returned by the sensor.
Not all wrong parameters **are** refused.
@laurensvalk
Member

Some updates:

I've been thinking about how to write these tests and added a few here. After some consideration, these are the guidelines I used. Input is welcome.

Succeed silently, fail loudly

This can be done with assert. For example:

distance = ultrasonic_sensor.distance()
assert distance > 100, "Expected > 100 mm, got {0}.".format(distance)

This explains what failed, and also gives the line number. I've found this to be more helpful than testing for an expected-output mismatch like the tests MicroPython has. With those, you still have to 1) find the expected output and 2) manually search for where it failed.

If we should get an error as part of a test, we can just catch it, and assert that it has the expected type.

try:
    # This line should raise ZeroDivisionError. If it does not, the
    # explicit raise below makes the assert fail with the wrong type.
    a = 1/0
    raise ValueError
except Exception as e:
    expected = ZeroDivisionError
    assert type(e) == expected, "Expected {0} but got {1}".format(expected, type(e))

Keep things simple

Avoid auxiliary functions when they are there just to make test output "nice" or "efficient".

If we find that some functions are really helpful, we could add them to the firmware so you could import them from pybricks.util or something.

Keep things short

Keep tests very short and to the point. We can automate the process of running a bunch of small scripts with pybricksdev.
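
A sketch of what that PC-side driver could look like, assuming the pybricksdev CLI's run ble subcommand and a /scripts folder of small tests:

import subprocess
from pathlib import Path

# Run every small test script over Bluetooth, one after the other.
for script in sorted(Path("scripts").glob("*.py")):
    print("Running", script)
    subprocess.run(["pybricksdev", "run", "ble", str(script)], check=True)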

No numbering schemes

Names like t03_color_surface_False, t04_color_surface_WRONG can be somewhat hard to keep updated and expand.

If order is important, then those two tests should be one test. Each test does its own initialization, if any. When running tests automatically, we can make sure to run previously failed tests first.

@laurensvalk
Member

laurensvalk commented Mar 31, 2021

Then on to the actual tests. I've been thinking we could build small modules on which we can run several tests. Here's "Hardware Module 1" used in the tests I linked to above:

[Photo: Hardware Module 1]

There may also be a single large test that combines all modules, but smaller modules make it easier for people to try this out themselves. Ideally, all of them can be built simultaneously from a limited number of sets.

@laurensvalk
Member

laurensvalk commented Mar 31, 2021

@BertLindeman, does this make sense? I'd be curious if this could work for your tests as well, after some adaptation.

As you've seen, we'll probably keep tests in the main repo instead of here.

@BertLindeman
Contributor Author

Succeed silently, fail loudly

Sure, less output if everything is OK. Less clutter.

Keep things simple

Avoid auxiliary functions when they are there just to make test output "nice" or "efficient".

If we find that some functions are really helpful, we could add them to the firmware so you could import them from pybricks.util or something.

Keep things short

Keep tests very short and to the point. We can automate the process of running a bunch of small scripts with pybricksdev.

Hard for me not to enhance, but I agree.

No numbering schemes

Names like t03_color_surface_False, t04_color_surface_WRONG can be somewhat hard to keep updated and expand.

If order is important, then those two tests should be one test. Each test does its own initialization, if any. When running tests automatically, we can make sure to run previously failed tests first.

No need for numbering if the tests themselves are short.

There may also be a single large test that combines all modules, but smaller modules make it easier for people to try this out themselves. Ideally, all of them can be built simultaneously from a limited number of sets.

Agreed, many small tests are easily automated so they can all be run,
and the small tests are easily run, debugged, and updated separately by hand.

@BertLindeman, does this make sense? I'd be curious if this could work for your tests as well, after some adaptation.

As you've seen, we'll probably keep tests in the main repo instead of here.

Makes good sense, @laurensvalk.

Will take a look at your examples, so much better than my coding.
I had a hardware module with a ring of all expected colors, but I did not get a consistent, solid result on them. Probably environmental light is also involved, although I also tried it in a completely dark room.
(See, it's hard for me not to enhance 😁)

A bit related: is there a possibility to automate testing the examples in the doc, so there is no need to build tests for that?

@laurensvalk
Member

A bit related: is there a possibility to automate testing the examples in the doc, so there is no need to build tests for that?

I've been wondering about that too.

We'd probably need to revise the examples to make sure they could run on the test setups. This could be worth the effort though.
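
A sketch of how that could be driven from the PC, assuming the documentation examples live as separate .py files in a folder like the one below (the path is hypothetical):

import subprocess
from pathlib import Path

# Hypothetical location of the documentation example scripts.
EXAMPLES = Path("pybricks-api/doc/examples")

failed = []
for example in sorted(EXAMPLES.glob("**/*.py")):
    print("Running", example)
    result = subprocess.run(["pybricksdev", "run", "ble", str(example)])
    if result.returncode != 0:
        failed.append(example)

print("Failed examples:", failed)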

@BertLindeman
Contributor Author

A bit related: is there a possibility to automate testing the examples in the doc, so there is no need to build tests for that?

I've been wondering about that too.

We'd probably need to revise the examples to make sure they could run on the test setups. This could be worth the effort though.

I think so. The first problem a new user encounters is an example that does not work.
It is important to prevent that.

I wonder: would you want to see what firmware version a test has been run with?

@laurensvalk
Member

I wonder: would you want to see what firmware version a test has been run with?

I think we'd mainly just try to run all tests whenever we prepare a beta or full release. Then if everything passes, we press the button to release it.

@BertLindeman
Contributor Author

I think we'd mainly just try to run all tests whenever we prepare a beta or full release. Then if everything passes, we press the button to release it.

By taking the examples out of the doc by hand? I hope not.
Although it's not a huge number of examples...

@laurensvalk
Member

Not sure what you mean. We'd leave the examples in the docs.

I just meant that we don't need each test to create a report with a version number, etc.

The tests either pass or they don't. We can open issues for things that are broken, and go ahead with a release if we are happy enough with the results.

@BertLindeman
Contributor Author

Not sure what you mean. We'd leave the examples in the docs.

The examples go as Python source into a hub to verify the syntax and whether they work.
Are the examples on GitHub separate files? I just don't know.

I just meant that we don't need each test to create a report with a version number, etc.

The tests either pass or they don't. We can open issues for things that are broken, and go ahead with a release if we are happy enough with the results.

OK.

@laurensvalk
Member

Are the examples on GitHub separate files?

Yes.

@BertLindeman
Contributor Author

OK, Laurens. Thanks.

@BertLindeman
Contributor Author

@laurensvalk from the photo above I can almost build "Hardware Module 1".
I am only missing the fixture that goes onto the motor.
Do you have building instructions lying around? Or a 3D model I can use?
No real problem if not, I'll do trial and error 😄

@laurensvalk
Member

[Photos: step-by-step building instructions for Hardware Module 1]

Cat not included.

@BertLindeman
Contributor Author

Thank you @laurensvalk
I saw the text before the photos, hilarious.

@laurensvalk
Member

FYI, the ports were changed in pybricks/pybricks-micropython@b4b450d

@BertLindeman
Contributor Author

FYI, the ports were changed in pybricks/pybricks-micropython@b4b450d

Indeed a nice change. I was using a technic hub to test with, as it was close at hand.
Thanks for the heads-up.

@BertLindeman
Contributor Author

@laurensvalk Got Hardware Module 1 built. I have mounted the hub onto the machine for stability.
The test setup is on a pile of books next to my chair for comfort.
[Photo: the assembled test setup on a pile of books]
