
feat/fix: Add Additional Testing Files and Fix the Coverage Menu for Execexam #38

Merged (70 commits) on Nov 11, 2024

Conversation

@dyga01 (Collaborator) commented Oct 23, 2024

  1. Pull Request: Add additional testing files and fix the coverage menu for execexam

  2. Aidan Dyga (@dyga01), Hannah Brown (@hannahb09), Coltin Colucci (@Coltin2121)

  3. Additional and Improved Tests and Fuzzing #4 and Bug: Accurate Coverage Checks #35

  4. test, enhancement

  5. This pull request adds a large number of test files and test functions to execexam; most of the tests were written with Pytest. We also fixed the coverage menu so that it reports accurate coverage and omits files that do not have tests from the coverage report (see the illustrative sketch after this list). Overall, this pull request mainly aims to add initial test cases and fix the coverage menu.

  6. Coverage of the test cases in our branch currently passes the required threshold. However, this number is expected to drop as our tests are integrated with the rest of the updated tool. Once most of the feature PRs have been merged, we will work to improve the coverage again.

  7. The tests have been run on macOS. It would be great to have a Linux user and a Windows user test them out.

  8. Here is the output of running the following commands on macOS.

a. poetry run task test
[screenshot: test output, Oct 23, 2024, 2:30 PM]

b. poetry run task coverage
[screenshot: coverage output, Oct 23, 2024, 2:31 PM]
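
For readers unfamiliar with how the omission mentioned in item 5 works: coverage.py excludes any file matching its omit patterns from the report. The sketch below is illustrative only, using coverage.py's Python API with hypothetical patterns; the actual change in this PR lives in the project's coverage/pytest configuration rather than in code like this.

# Illustrative sketch only: how coverage.py's omit patterns exclude files from
# a report. The patterns here are hypothetical; execexam's real settings are
# declared in its project configuration, not built programmatically like this.
from coverage import Coverage

cov = Coverage(branch=True, omit=["*/__init__.py", "tests/*"])  # hypothetical omit patterns
cov.start()
# ... import and exercise the code under test here ...
cov.stop()
cov.save()
percent = cov.report(show_missing=True)  # prints a Stmts/Miss/Branch/Cover table
print(f"Total coverage: {percent:.2f}%")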

rebekahrudd and others added 30 commits September 12, 2024 16:17
@dyga01 (Collaborator, Author) commented Oct 30, 2024

@dyga01 did you check with @Chezka109 about this issue? My preference would have been to keep each of the PRs separate and to merge them separately. Why this approach?

@gkapfham Yes, I briefly spoke with @Chezka109. I thought the plan was to merge her branch into our branch in the fork so that there are no merge conflicts when we merge her PR and then our PR into the main execexam tool. We also needed to ensure that the two PRs work together. What do you think?

@Chezka109 (Collaborator) commented:

@gkapfham, I just tested whether the coverage in this branch properly updates the coverage badge, and I can now say with 100% confidence that it works.

When a user runs the coverage command, poetry run task coverage, the badges update automatically.

@PCain02 (Collaborator) commented Oct 30, 2024

Hi, I am currently reviewing this PR on Windows, since I don't think we have had a Windows tester yet.

@PCain02 (Collaborator) commented Oct 30, 2024

On Windows I am running into one test failure. In test_main.py, the subprocess does not return the exit code being asserted: FAILED tests/test_main.py::test_run_with_missing_test - AssertionError: assert 1 == 4. I believe this is (yet another) Windows encoding error. I have included all of the output I got with poetry run task coverage; the result was the same with poetry run task test. Honestly, I think we could cut test_main.py and tackle it again later so this PR can get merged, because the rest of the content is good.

Here is the error

E           AssertionError: assert 1 == 4
E            +  where 1 = CompletedProcess(args=['poetry', 'run', 'execexam', '.', './tests/test_question_one.py', '--report', 'trace', '--repor...ncodeError: \'charmap\' codec can\'t encode character \'\\u2718\' in position\r\n0: character maps to <undefined>\r\n').returncode

tests\test_main.py:60: AssertionError

Here is the full terminal output containing the error

================================================================================================== test session starts ===================================================================================================
platform win32 -- Python 3.12.1, pytest-8.3.3, pluggy-1.5.0
Using --randomly-seed=1966499522
rootdir: C:\Users\Palla\Documents\GitHub\execexam
configfile: pytest.ini
plugins: anyio-4.6.0, hypothesis-6.112.2, clarity-1.0.1, cov-4.1.0, json-report-1.5.0, metadata-3.1.1, randomly-3.15.0
collected 58 items

tests\test_pytest_plugin.py .....
tests\test_util.py ......
tests\test_debug.py .......
tests\test_convert.py .
tests\test_main.py F   
tests\test_extract.py ...............
tests\test_advise.py ........
tests\test_display.py ......
tests\test_enumerations.py .........

======================================================================================================== FAILURES ========================================================================================================
_______________________________________________________________________________________________ test_run_with_missing_test _______________________________________________________________________________________________

cwd = 'C:\\Users\\Palla\\Documents\\GitHub\\execexam'

    def test_run_with_missing_test(cwd):
        """Test the run command with default options."""
        # Create a temporary directory
        with tempfile.TemporaryDirectory() as temp_dir: 
            test_one = Path(temp_dir) / "test_one"
            test_one.mkdir()

            # Run the CLI command in a subprocess
            result = subprocess.run(
                [
                    "poetry",
                    "run",
                    "execexam",
                    ".",
                    "./tests/test_question_one.py",
                    "--report",
                    "trace",
                    "--report",
                    "status",
                    "--report",
                    "failure",
                    "--report",
                    "code",
                    "--report",
                    "setup",
                    # "--advice-method", "apiserver",
                    # "--advice-model", "anthropic/claude-3-haiku-20240307",
                    # "--advice-server", "https://execexamadviser.fly.dev/",
                    # "--report", "advice",
                    "--fancy",
                    "--debug",
                ],
                cwd=cwd,  # Change working directory to the root of the project
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                check=False,
            )

>           assert (
                result.returncode == EXPECTED_EXIT_CODE_FILE_NOT_FOUND
            )  # confirms that the file was not found
E           AssertionError: assert 1 == 4
E            +  where 1 = CompletedProcess(args=['poetry', 'run', 'execexam', '.', './tests/test_question_one.py', '--report', 'trace', '--repor...ncodeError: \'charmap\' codec can\'t encode character \'\\u2718\' in position\r\n0: character maps to <undefined>\r\n').returncode

tests\test_main.py:60: AssertionError

---------- coverage: platform win32, python 3.12.1-final-0 -----------
Name                        Stmts   Miss Branch BrPart  Cover   Missing
-----------------------------------------------------------------------
execexam\advise.py             65     35     20      3    41%   25, 38-42, 84-89, 108-113, 123-128, 146-257
execexam\convert.py             7      0      2      0   100%
execexam\debug.py              22      0      4      0   100%
execexam\display.py            54     20     16      1    70%   35-83, 162-168
execexam\enumerations.py       17      0      0      0   100%
execexam\extract.py           220    137    125      1    32%   89->80, 193-383, 387-398, 403-412
execexam\main.py              113     35     26      8    63%   44-45, 107, 115-116, 161-179, 228->233, 267-305, 329-367, 381->399, 417
execexam\pytest_plugin.py      88     74     30      0    12%   35-43, 50-83, 91, 101-104, 114-117, 125-199, 212-247
execexam\util.py               14      0     10      0   100%
-----------------------------------------------------------------------
TOTAL                         600    301    233     13    45%

1 empty file skipped.
Coverage JSON written to file coverage.json

FAIL Required test coverage of 50% not reached. Total coverage: 44.78%
================================================================================================ short test summary info ================================================================================================= 
FAILED tests/test_main.py::test_run_with_missing_test - AssertionError: assert 1 == 4
============================================================================================= 1 failed, 57 passed in 16.79s ==============================================================================================
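
For context on the assertion above: the child process crashes with a UnicodeEncodeError before execexam can return its normal exit code, so the test observes 1 instead of the expected 4. Below is a minimal, illustrative repro of the encoding failure (not part of the test suite), assuming a Windows console that falls back to the legacy code page.

# The HEAVY BALLOT X character that execexam prints cannot be represented in
# cp1252, so encoding it with the legacy Windows codec raises the same error
# that appears in the traceback above.
"\u2718".encode("cp1252")
# UnicodeEncodeError: 'charmap' codec can't encode character '\u2718' in
# position 0: character maps to <undefined>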

@PCain02 (Collaborator) commented Oct 30, 2024

I am on Windows 10, if that makes a difference. I see that the Windows environment passed in GitHub Actions, so I am curious what the root cause of this may be. I will also note that I made sure I was on the correct branch.

@hannahb09 (Collaborator) commented:

I just reran both of the commands on my Linux computer and everything was fine; no errors.

@gkapfham (Collaborator) commented:

Hello @hannahb09 and @Coltin2121 and @dyga01, please note that there is a report that this test suite does not run correctly on Windows. Can you please investigate this issue?

@dyga01 (Collaborator, Author) commented Oct 31, 2024

@gkapfham I believe none of us are using the Windows OS, so it might be better if we could work with someone on that OS to resolve this issue.

@PCain02 (Collaborator) commented Nov 7, 2024

I made some edits to test_main.py and now the coverage report is working on Windows 10. The issues were with the Unicode handling and the Poetry environment.
[screenshot: coverage report passing on Windows 10]

Please retest that these changes work on Linux and macOS.
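
For reference, here is a sketch of the kind of adjustment that addresses this class of failure. It is an assumption about the general approach (forcing UTF-8 in the child process and decoding its output leniently), not a description of the exact edit in this commit.

import os
import subprocess

# Force the child interpreter to use UTF-8 regardless of the Windows code page.
env = os.environ.copy()
env["PYTHONIOENCODING"] = "utf-8"
env["PYTHONUTF8"] = "1"

result = subprocess.run(
    ["poetry", "run", "execexam", ".", "./tests/test_question_one.py", "--report", "trace"],
    capture_output=True,
    encoding="utf-8",   # decode the captured output as UTF-8 on the parent side
    errors="replace",   # never crash the test on an undecodable byte
    env=env,
    check=False,
)
print(result.returncode)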

@AlishChhetri (Collaborator) commented:

@hannahb09, @dyga01, @Coltin2121, please review the recent updates to test_main.py by @PCain02. The changes resolved issues with Unicode and the Poetry environment, enabling the coverage report to run successfully on Windows. Please confirm that these adjustments do not impact functionality on Linux and macOS.

@dyga01 (Collaborator, Author) commented Nov 7, 2024

@AlishChhetri I just ran these changes on macOS and they appear to be working.

@AlishChhetri (Collaborator) commented:

@dyga01, once a Linux user verifies that these changes work, we’ll have confirmed functionality across all three major operating systems. I’ll approve the PR following that round of testing.

@CalebKendra (Collaborator) commented:

Here is the output from running the commands on Ubuntu:

caleb@:execexam (coverage_menu_fixes)$ poetry run task coverage
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.11.5, pytest-8.3.2, pluggy-1.5.0
Using --randomly-seed=826929214
rootdir: /home/caleb/Allegheny/CMP203/execexam
configfile: pytest.ini
plugins: anyio-4.4.0, json-report-1.5.0, cov-4.1.0, clarity-1.0.1, metadata-3.1.1, randomly-3.15.0, hypothesis-6.112.0
collected 58 items                                                                                                                                                                                                

tests/test_display.py ......
tests/test_util.py ......
tests/test_enumerations.py .........
tests/test_convert.py .
tests/test_debug.py .......
tests/test_extract.py ...............
tests/test_pytest_plugin.py .....
tests/test_main.py .
tests/test_advise.py ........

---------- coverage: platform linux, python 3.11.5-final-0 -----------
Name                        Stmts   Miss Branch BrPart  Cover   Missing
-----------------------------------------------------------------------
execexam/advise.py             64     34     20      3    42%   25, 38-42, 84-89, 108-113, 123-128, 145-254
execexam/convert.py             7      0      2      0   100%
execexam/debug.py              22      0      4      0   100%
execexam/display.py            54     20     18      2    69%   35-83, 119->exit, 162-168
execexam/enumerations.py       17      0      0      0   100%
execexam/extract.py            80      0     29      1    99%   89->80
execexam/main.py              112     37     26      8    62%   44-45, 107, 115-116, 161-179, 228->233, 267-305, 329-365, 380-383
execexam/pytest_plugin.py      88     74     30      0    12%   35-43, 50-83, 91, 101-104, 114-117, 125-199, 212-247
execexam/util.py               14      0     10      0   100%
-----------------------------------------------------------------------
TOTAL                         458    165    139     14    62%

1 empty file skipped.
Coverage JSON written to file coverage.json

Required test coverage of 50% reached. Total coverage: 61.64%

=============================================================================================== 58 passed in 17.90s ===============================================================================================
caleb@:execexam (coverage_menu_fixes)$ poetry run task test
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.11.5, pytest-8.3.2, pluggy-1.5.0 -- /home/caleb/.cache/pypoetry/virtualenvs/execexam-fnP_q_NH-py3.11/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.11.5', 'Platform': 'Linux-6.8.0-48-generic-x86_64-with-glibc2.35', 'Packages': {'pytest': '8.3.2', 'pluggy': '1.5.0'}, 'Plugins': {'anyio': '4.4.0', 'json-report': '1.5.0', 'cov': '4.1.0', 'clarity': '1.0.1', 'metadata': '3.1.1', 'randomly': '3.15.0', 'hypothesis': '6.112.0'}}
Using --randomly-seed=4222188027
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/home/caleb/Allegheny/CMP203/execexam/.hypothesis/examples'))
rootdir: /home/caleb/Allegheny/CMP203/execexam
configfile: pytest.ini
plugins: anyio-4.4.0, json-report-1.5.0, cov-4.1.0, clarity-1.0.1, metadata-3.1.1, randomly-3.15.0, hypothesis-6.112.0
collected 58 items                                                                                                                                                                                                

tests/test_extract.py::test_is_failing_test_details_empty_with_empty_string PASSED
tests/test_extract.py::test_no_labels PASSED
tests/test_extract.py::test_extract_details_hypothesis PASSED
tests/test_extract.py::test_extract_test_run_details PASSED
tests/test_extract.py::test_extract_failing_test_details PASSED
tests/test_extract.py::test_extract_test_output_with_label PASSED
tests/test_extract.py::test_multiple_labels PASSED
tests/test_extract.py::test_extract_test_assertion_details PASSED
tests/test_extract.py::test_is_failing_test_details_empty_with_non_empty_string PASSED
tests/test_extract.py::test_extract_test_assertion_details_list PASSED
tests/test_extract.py::test_is_failing_test_details_empty_with_newline PASSED
tests/test_extract.py::test_extract_details PASSED
tests/test_extract.py::test_extract_test_output_without_label PASSED
tests/test_extract.py::test_extract_test_assertions_details PASSED
tests/test_extract.py::test_single_label PASSED
tests/test_debug.py::test_has_debugging_messages PASSED
tests/test_debug.py::test_add_message PASSED
tests/test_debug.py::test_clear_messages PASSED
tests/test_debug.py::test_debug_function PASSED
tests/test_debug.py::test_enum_values PASSED
tests/test_debug.py::test_get_debugging_messages PASSED
tests/test_debug.py::test_messages_list_initially_empty PASSED
tests/test_convert.py::test_path_to_string PASSED
tests/test_enumerations.py::test_advice_method_enum_values PASSED
tests/test_enumerations.py::test_report_type_enum_access_by_name PASSED
tests/test_enumerations.py::test_report_type_enum_values PASSED
tests/test_enumerations.py::test_advice_method_enum_access_by_name PASSED
tests/test_enumerations.py::test_report_type_enum_invalid_name PASSED
tests/test_enumerations.py::test_theme_enum_values PASSED
tests/test_enumerations.py::test_theme_enum_invalid_name PASSED
tests/test_enumerations.py::test_theme_enum_access_by_name PASSED
tests/test_enumerations.py::test_advice_method_enum_invalid_name PASSED
tests/test_pytest_plugin.py::test_pytest_assertion_pass_no_assertions PASSED
tests/test_pytest_plugin.py::test_pytest_assertion_pass PASSED
tests/test_pytest_plugin.py::test_pytest_assertion_pass_no_report PASSED
tests/test_pytest_plugin.py::test_pytest_runtest_logreport_not_call PASSED
tests/test_pytest_plugin.py::test_pytest_runtest_logreport PASSED
tests/test_display.py::test_display_content_plain_text PASSED
tests/test_display.py::test_display_content PASSED
tests/test_display.py::test_get_display_return_code PASSED
tests/test_display.py::test_display_advice PASSED
tests/test_display.py::test_make_colon_separated_string PASSED
tests/test_display.py::test_display_content_no_newline PASSED
tests/test_advise.py::test_check_internet_connection_timeout PASSED
tests/test_advise.py::test_check_internet_connection_different_dns[dns_server0] PASSED
tests/test_advise.py::test_check_internet_connection_failure PASSED
tests/test_advise.py::test_check_internet_connection_different_dns[dns_server1] PASSED
tests/test_advise.py::test_check_internet_connection_success PASSED
tests/test_advise.py::test_check_internet_connection_different_dns[dns_server2] PASSED
tests/test_advise.py::test_validate_url PASSED
tests/test_advise.py::test_check_internet_connection_different_dns[dns_server3] PASSED
tests/test_util.py::test_determine_execexam_return_code_other PASSED
tests/test_util.py::test_determine_execexam_return_code_no_tests_collected PASSED
tests/test_util.py::test_determine_execexam_return_code_interrupted PASSED
tests/test_util.py::test_determine_execexam_return_code_tests_failed PASSED
tests/test_util.py::test_determine_execexam_return_code_internal_error PASSED
tests/test_util.py::test_determine_execexam_return_code_usage_error PASSED
tests/test_main.py::test_run_with_missing_test PASSED

=============================================================================================== 58 passed in 9.88s ================================================================================================
caleb@:execexam (coverage_menu_fixes)$ 

@CalebKendra (Collaborator) left a review comment:

LGTM. The test functionality works on Ubuntu here.

@AlishChhetri (Collaborator) left a review comment:

LGTM, thank you to Caleb for testing out this feature on Linux (Ubuntu)!

@PCain02 (Collaborator) left a review comment:

I am glad the changes I made seem to have worked for everyone else. All the other work looks good to me!

@rebekahrudd removed their request for review, November 7, 2024 22:57
@rebekahrudd (Collaborator) commented:

The command poetry run task coverage returns could not find task "coverage". I have not been able to figure out how to troubleshoot it, so I have removed myself as a reviewer.
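
One way to troubleshoot this, assuming the project registers its tasks with taskipy (which is what poetry run task invokes): list the task names declared in pyproject.toml's [tool.taskipy.tasks] table. If coverage is not listed, the checked-out branch is missing the task definition.

# Troubleshooting sketch: print the taskipy tasks that `poetry run task <name>`
# will accept by reading pyproject.toml. Requires Python 3.11+ for tomllib
# (use the third-party tomli package on older versions).
import tomllib

with open("pyproject.toml", "rb") as handle:
    pyproject = tomllib.load(handle)

tasks = pyproject.get("tool", {}).get("taskipy", {}).get("tasks", {})
for name, command in tasks.items():
    print(f"{name}: {command}")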

@hannahb09 (Collaborator) commented:

I know I am someone who worked on the PR, but with the updates from Pallas the commands do work on my Linux computer.

@AlishChhetri merged commit 07aaa50 into GatorEducator:main on Nov 11, 2024
3 checks passed
Labels: infrastructure (CI/CD configuration), test (test cases or test running)