
Tester is not working #16

Closed
douglaskastle opened this issue Mar 26, 2020 · 19 comments

Labels
bug Something isn't working

Comments

@douglaskastle
Member

@myselfhimself

OK, I decided to test the tester. I introduced an error that should cause every test to fail. However, I get green everywhere, both on Travis and on GitHub Actions:

https://github.com/douglaskastle/blender-addon-tester/runs/537749667?check_suite_focus=true
https://travis-ci.org/github/douglaskastle/blender-addon-tester/builds/667444567

Looking at the tests themselves, they are failing; it is just that the failure is not percolating out.

This was the whole point of the original work I did. The error happens in Python inside Blender. That error has to get out of that Python, out of Blender, and out of the wrapper Python to the CI tool.

I have not taken your flow apart yet, but I suspect the Blender-to-top-level-Python handoff is where this is not taking place.

I did this inside the pypi-test branch, as it was the active one I was on. It has all of today's work from the macosc-support branch merged into it.

tests/test_version.py::test_versionID_pass FAILED                        [ 50%]
=================================== FAILURES ===================================
_____________________________ test_versionID_pass ______________________________
bpy_module = 'fake_addon'
    def test_versionID_pass(bpy_module):
        expect_version = (0, 1, 1)
        return_version = get_version(bpy_module)
>       assert expect_version == return_version
E       assert (0, 1, 1) == (0, 0, 1)
E         At index 1 diff: 1 != 0
E         Full diff:
E         - (0, 0, 1)
E         ?     ^
E         + (0, 1, 1)
E         ?     ^
tests/test_version.py:13: AssertionError
=========================== short test summary info ============================
FAILED tests/test_version.py::test_versionID_pass - assert (0, 1, 1) == (0, 0...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============================== 1 failed in 0.05s ===============================
douglaskastle added the bug label Mar 26, 2020
@douglaskastle
Member Author

This command should exit with status 1:

python test_addon_blender.py ${BLENDER_VERSION} fake_addon

I suspect that BAT.test_blender_addon(....) is passing the value up, but we then have to call sys.exit() with it.
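
A minimal sketch of what I mean for test_addon_blender.py, assuming the package imports as blender_addon_tester and that test_blender_addon() returns a non-zero value when the pytest run fails (the argument order here is illustrative):

# Sketch only: propagate the test result to the CI job.
import sys
import blender_addon_tester as BAT

if __name__ == "__main__":
    blender_version = sys.argv[1]   # e.g. "2.82"
    addon = sys.argv[2]             # e.g. "fake_addon"
    exit_code = BAT.test_blender_addon(addon, blender_version)
    sys.exit(exit_code)             # non-zero exit fails the CI step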

@douglaskastle
Member Author

OK, I made an edit that seems to catch the errors correctly for Travis:
https://travis-ci.org/github/douglaskastle/blender-addon-tester/builds/667457887
I'm unsure why the GitHub Actions runs are still passing:
https://github.com/douglaskastle/blender-addon-tester/actions/runs/64138088

9f13749

@douglaskastle
Member Author

OK, GitHub Actions is running a different set of tests than Travis:

advanced_tests/test_version_advanced.py::test_versionID_pass PASSED [ 66%]

I don't want drift like this. Let's harmonize the testing; it is making debugging harder than it needs to be.

@douglaskastle
Member Author

I added a change to make the advanced tests fail; the failure is not being passed up.

I'm not sure whether rebuild_pipreinstall_and_testexamples.sh is passing the exit failure up to the GitHub Actions CI.

I'm not too happy with a shell script in general; maybe rewrite it altogether in Python:

#17

One of my stated goals was:

Where possible, I try to script in Python only.

This reduces language creep, and sh also doesn't work in a basic Windows cmd shell.
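
A rough sketch of what a Python replacement could look like (the step commands are placeholders), so that the first failing step's exit code is passed straight up to the CI job:

# Sketch only: run each step and stop on the first failure,
# propagating its exit code. The commands listed are placeholders.
import subprocess
import sys

steps = [
    [sys.executable, "-m", "pip", "install", "-e", "."],
    [sys.executable, "test_addon_blender.py", "2.82", "fake_addon"],
]

for step in steps:
    result = subprocess.run(step)
    if result.returncode != 0:
        sys.exit(result.returncode)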

@myselfhimself
Collaborator

Agreed.
Also, all three Python test scripts run in a row and overwrite a single coverage.xml, which is picked up as a single artifact in GitHub Actions. We could put the BLENDER_VERSION and platform keys in the coverage file name and have both CIs store the files properly: via your bash upload for Travis CI, and maybe via additional artifacts on GH Actions.
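
For instance (just a sketch, assuming it runs after coverage.xml has been written and that the Blender version is known to the wrapper):

# Sketch only: keep one coverage report per Blender version and platform
# instead of overwriting a single coverage.xml.
import os
import sys

def rename_coverage(blender_version):
    src = "coverage.xml"
    if os.path.exists(src):
        dst = f"coverage-{blender_version}-{sys.platform}.xml"
        os.replace(src, dst)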

@douglaskastle
Member Author

For coverage, I have found that some code can only be hit by certain versions of Blender, and other versions hit different parts. 2.79 still has Blender Render support, for example; 2.80 does not, so 2.80 can never hit that code. When coverage is run, I would like to see all the code that can be hit actually being hit.

What do the 3 scripts achieve, apart from being a proof of concept for things like alternate test directories?

I would prefer that it be only one script, one test handler, unless there is a compelling argument for more. And as a result I would like to keep one coverage.xml per run; again, if there is a reason to do otherwise, I am all ears.

@myselfhimself
Collaborator

The 3 test scripts do:
a. test run from a .zip addon
b. test run from an addon directory
c. coverage toggling
d. custom Blender load-tests file injection

The test_advanced test does a + c (coverage enabled) + d; we could keep this one.
Testing a .zip archive (a) is closer to the final situation, where a user installs an addon from a zip file.
Having pre-zipped archive compatibility helps me test a self-packaged addon before releasing it... but non-zipped is also OK.

They don't do:

  • test run from a single-Python-file uncompressed addon
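
For reference, both the zipped and unzipped cases go through the same entry point; a sketch, where the argument order may differ from the actual signature:

# Sketch only: the same call covers a pre-zipped addon and an addon directory.
import blender_addon_tester as BAT

BAT.test_blender_addon("fake_addon.zip", "2.82")  # (a) pre-zipped addon
BAT.test_blender_addon("fake_addon", "2.82")      # (b) addon directory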

douglaskastle added 10 commits that referenced this issue Mar 29, 2020
@douglaskastle
Member Author

OK, I recommend that the three tests themselves be run explicitly inside the yml.

Please look at this yml for a working technique; it might still need to be refined:

https://github.com/douglaskastle/blender-addon-tester/blob/c04f56eeb010909cbde8b25872081e880de215fe/.github/workflows/test-fake-addon-example-from-local-wheel.yml

@douglaskastle
Member Author

Note: GitHub Actions is very annoying here. If one of the runs fails, it cancels all the other current and future runs. So if an early test run has a failure, which might be limited to just that Blender version or operating system, you can't see whether all the other cases are successful; they are don't-knows.

Example here:

https://github.com/douglaskastle/blender-addon-tester/actions/runs/65554988

I can't seem to find a working continue-on-error flag.

@myselfhimself
Collaborator

myselfhimself commented Mar 29, 2020 via email

douglaskastle added 3 commits that referenced this issue Mar 29, 2020
@douglaskastle
Member Author

> Hello, good idea. Here is a related flag: https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepscontinue-on-error See you

No, that doesn't work; it drops the error and marks the run as a pass, which is exactly what you don't want:

https://github.com/douglaskastle/blender-addon-tester/runs/543790142?check_suite_focus=true

@douglaskastle
Member Author

I have identified all the open problems with this issue. The re-runs on nightly builds on Windows were a) masked by the flow and b) contained an error, which the flow caught (yay) and which is now fixed.

Leaving this open for a few days for comment, otherwise I am closing it.

@myselfhimself
Collaborator

Thank you for fixing the tester and the Windows CI flow, and for your feedback on the unsatisfactory continue-on-error GitHub Actions option.

Setting all files and directories to 777 is not so nice for now. Is that needed for Windows only?

@douglaskastle
Member Author

> Setting all files and directories to 777 is not so nice for now. Is that needed for Windows only?

I believe it is only for Windows, and it only appears on the nightlies, so it might be a new thing. It appears to affect only the Python executable itself, but I haven't investigated it further.

I checked in what I did to get it unblocked. It's a bit of a hammer, but since the whole directory is removed afterwards, there is no lasting damage on Ubuntu or macOS that I am aware of.

@myselfhimself
Collaborator

myselfhimself commented Mar 30, 2020 via email

@douglaskastle
Member Author

> If so, I would propose adding an if statement for Windows only. Having too many executable things could be an increased risk of privilege escalation on Unix OSes (sorry for being a bit paranoid).

That check-in was a patch fix until I rattled out the real problem. It turns out the Python executable's permissions were being explicitly set, without taking its current state into consideration:

os.chmod(python, stat.S_IXOTH | stat.S_IXGRP | stat.S_IXUSR)

which leaves that file in an execute-only state. It is no longer writable, which means it is not deletable. I rewrote it to this:

os.chmod(python, os.stat(python).st_mode | stat.S_IXOTH | stat.S_IXGRP | stat.S_IXUSR)

This is the correct fix. I don't know why Windows and 2.83 are the only combination to show this problem.

@myselfhimself
Collaborator

myselfhimself commented Mar 31, 2020 via email

@douglaskastle
Member Author

Great, I think the blockers that this issue was created for have been addressed.
