
Network policies tests #232

Merged · 8 commits · Jan 23, 2024

Conversation

kooomix (Contributor) commented Jan 23, 2024

Type

Tests


Description

  • The PR primarily focuses on enhancing and refactoring the network policy tests.
  • More detailed logging has been added to the test steps for better traceability.
  • The validation of expected network neighbors and generated network policies has been refactored into a single method.
  • Backend validation for expected network neighbors and generated network policies has been added.
  • Implemented deletion flow validation in the network policy tests.
  • A new test configuration for network_policy_known_servers_cache has been added.
  • New methods for validating backend results and workload deletion have been added in the base network policy.
  • A new API endpoint for known servers cache has been added in the backend API.
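
The description above mentions consolidating the validation of expected network neighbors and generated network policies into a single method. As a rough illustration of that refactor (the function name, parameters, and error format below are hypothetical stand-ins, not the PR's actual code), one way to sketch it:

```python
# Hypothetical sketch: one method that validates both expected lists,
# collecting all mismatches before failing. Names are illustrative only.

def validate_expected_lists(namespace, expected_neighbors, actual_neighbors,
                            expected_policies, actual_policies):
    """Validate network neighbors and generated network policies in one place."""
    errors = []
    if len(actual_neighbors) != len(expected_neighbors):
        errors.append(
            f"{namespace}: neighbors count mismatch: "
            f"actual {len(actual_neighbors)}, expected {len(expected_neighbors)}")
    if len(actual_policies) != len(expected_policies):
        errors.append(
            f"{namespace}: policies count mismatch: "
            f"actual {len(actual_policies)}, expected {len(expected_policies)}")
    # Fail once, with every mismatch in a single message.
    assert not errors, "; ".join(errors)
```

Collecting mismatches before asserting lets a single test run report both list problems at once instead of stopping at the first.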

Changes walkthrough

Relevant files
Tests
network_policy.py
Refactor and enhance network policy tests                                               

tests_scripts/helm/network_policy.py

  • Added more detailed logging to the test steps.
  • Updated the test plan steps in the docstrings.
  • Refactored the validation of expected network neighbors and generated
    network policies into a single method.
  • Added backend validation for expected network neighbors and generated
    network policies.
  • Implemented deletion flow validation.
+110/-114
network_policy_tests.py
Add new test configuration for known servers cache                             

configurations/system/tests_cases/network_policy_tests.py

  • Added a new test configuration for
    network_policy_known_servers_cache.
+9/-0     
base_network_policy.py
Add new validation methods in base network policy                               

tests_scripts/helm/base_network_policy.py

  • Added new methods for validating backend results and workload
    deletion.
+172/-7 
Enhancement
backend_api.py
Add new API endpoint for known servers cache                                         

infrastructure/backend_api.py

  • Added a new API endpoint for known servers cache.
+68/-0   
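
The walkthrough above notes a new backend API endpoint for the known servers cache. A minimal sketch of what such a GET wrapper might look like, using only the standard library (the endpoint path and function names here are assumptions for illustration, not the backend's real API):

```python
# Illustrative sketch of a "known servers cache" endpoint wrapper.
# The path below is an assumption, not the actual backend route.
import json
import urllib.request

API_KNOWN_SERVERS_CACHE = "/api/v1/networkpolicies/knownserverscache"  # assumed path

def build_known_servers_cache_url(base_url: str) -> str:
    """Join the base URL and the (assumed) endpoint path, tolerating a trailing slash."""
    return base_url.rstrip("/") + API_KNOWN_SERVERS_CACHE

def get_known_servers_cache(base_url: str):
    """GET the known-servers cache and return the decoded JSON body."""
    with urllib.request.urlopen(build_known_servers_cache_url(base_url), timeout=30) as r:
        if r.status != 200:
            raise Exception(f"known servers cache request failed (code: {r.status})")
        return json.loads(r.read().decode())
```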

✨ Usage guide:

Overview:
The describe tool scans the PR code changes and generates a description for the PR: title, type, summary, walkthrough, and labels. The tool can be triggered automatically every time a new PR is opened, or invoked manually by commenting on a PR.

When commenting, to edit configurations related to the describe tool (pr_description section), use the following template:

/describe --pr_description.some_config1=... --pr_description.some_config2=...

With a configuration file, use the following template:

[pr_description]
some_config1=...
some_config2=...
Enabling/disabling automation
  • When you first install the app, the default mode for the describe tool is:
pr_commands = ["/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true", ...]

meaning the describe tool will run automatically on every PR, will keep the original title, and will add the original user description above the generated description.

  • Markers are an alternative way to control the generated description, giving maximal control to the user. If you set:
pr_commands = ["/describe --pr_description.use_description_markers=true", ...]

the tool will replace every marker of the form pr_agent:marker_name in the PR description with the relevant content, where marker_name is one of the following:

  • type: the PR type.
  • summary: the PR summary.
  • walkthrough: the PR walkthrough.

Note that when markers are enabled, if the original PR description does not contain any markers, the tool will not alter the description at all.

Custom labels

The default labels of the describe tool are quite generic: [Bug fix, Tests, Enhancement, Documentation, Other].

If you specify custom labels in the repo's labels page or via configuration file, you can get tailored labels for your use cases.
Examples for custom labels:

  • Main topic:performance - pr_agent:The main topic of this PR is performance
  • New endpoint - pr_agent:A new endpoint was added in this PR
  • SQL query - pr_agent:A new SQL query was added in this PR
  • Dockerfile changes - pr_agent:The PR contains changes in the Dockerfile
  • ...

The list above is eclectic, and aims to give an idea of different possibilities. Define custom labels that are relevant for your repo and use cases.
Note that labels are not mutually exclusive, so you can add multiple label categories.
Make sure to provide a proper title, and a detailed, well-phrased description for each label, so the tool will know when to suggest it.

Utilizing extra instructions

The describe tool can be configured with extra instructions, to guide the model to a feedback tailored to the needs of your project.

Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Notice that the general structure of the description is fixed, and cannot be changed. Extra instructions can change the content or style of each sub-section of the PR description.

Examples for extra instructions:

[pr_description] 
extra_instructions="""
- The PR title should be in the format: '<PR type>: <title>'
- The title should be short and concise (up to 10 words)
- ...
"""

Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

More PR-Agent commands

To invoke the PR-Agent, add a comment using one of the following commands:

  • /review: Request a review of your Pull Request.
  • /describe: Update the PR title and description based on the contents of the PR.
  • /improve [--extended]: Suggest code improvements. Extended mode provides a higher quality feedback.
  • /ask <QUESTION>: Ask a question about the PR.
  • /update_changelog: Update the changelog based on the PR's contents.
  • /add_docs 💎: Generate docstring for new components introduced in the PR.
  • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
  • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

See the tools guide for more details.
To list the possible configuration parameters, add a /config comment.

See the describe usage page for a comprehensive guide on using this tool.

PR Description updated to latest commit (ef4eaeb)

codiumai-pr-agent-free bot commented Jan 23, 2024

PR Analysis

(review updated until commit ef4eaeb)

  • 🎯 Main theme: Refactoring and enhancing network policy tests
  • 📝 PR summary: This PR focuses on enhancing and refactoring the network policy tests. It includes more detailed logging, refactoring of validation methods, backend validation, deletion flow validation, and the addition of a new test configuration for network_policy_known_servers_cache.
  • 📌 Type of PR: Tests
  • 🧪 Relevant tests added: Yes
  • ⏱️ Estimated effort to review [1-5]: 3, because the PR involves changes in multiple test files and introduces new functionalities which need to be thoroughly reviewed.
  • 🔒 Security concerns: No security concerns found

PR Feedback

💡 General suggestions: The PR is well-structured and focuses on enhancing the test coverage for network policies. The addition of more detailed logging and refactoring of validation methods is a good improvement. However, it would be beneficial to add more comments in the code to explain the logic, especially for complex test scenarios.

🤖 Code feedback:
relevant file: tests_scripts/helm/network_policy.py
suggestion:

Consider using constants or configuration for hardcoded values like timeouts and sleep durations. This would make it easier to manage these values and make the code more maintainable. [important]

relevant line: TestUtil.sleep(3 * int(update_period_in_seconds), "wait for node-agent update period", "info")
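
To illustrate the suggestion above, here is a minimal sketch of lifting the hardcoded multiplier and sleep reason into module-level constants. The constant names and the helper function are hypothetical, not the PR's actual code:

```python
# Sketch: named constants instead of magic numbers in test waits.
# Names and values are illustrative assumptions.

UPDATE_PERIOD_MULTIPLIER = 3  # how many node-agent update periods to wait
DEFAULT_SLEEP_REASON = "wait for node-agent update period"

def wait_seconds(update_period_in_seconds) -> int:
    """Compute the total wait from the configured multiplier."""
    return UPDATE_PERIOD_MULTIPLIER * int(update_period_in_seconds)

# e.g. TestUtil.sleep(wait_seconds(update_period), DEFAULT_SLEEP_REASON, "info")
```

Centralizing the multiplier means a single edit changes the wait behavior of every test that uses it.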

relevant file: tests_scripts/helm/network_policy.py
suggestion:

It would be better to handle exceptions in the test methods. This will help in identifying issues when a test fails. [medium]

relevant line: self.validate_expected_network_neighbors_and_generated_network_policies_lists(namespace=namespace, expected_network_neighbors_list=expected_network_neighbors_list, expected_generated_network_policy_list=expected_generated_network_policy_list)
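
One way to act on this suggestion is a small wrapper that re-raises any validation failure with the step name and arguments attached, so failures are self-describing. The wrapper below is a hypothetical sketch, not the PR's code:

```python
# Sketch: wrap a validation step so a failure carries test context.
# run_validation and the step names are illustrative assumptions.

def run_validation(step_name, validation_fn, **kwargs):
    """Run a validation callable, attaching step context to any failure."""
    try:
        return validation_fn(**kwargs)
    except Exception as e:
        raise AssertionError(
            f"step '{step_name}' failed with kwargs {sorted(kwargs)}: {e}") from e
```

Usage would look like `run_validation("validate neighbors", self.validate_..., namespace=namespace, ...)`, keeping the test body readable while preserving the original traceback via `from e`.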

relevant file: tests_scripts/helm/network_policy.py
suggestion:

Consider breaking down large test methods into smaller ones. This will improve readability and maintainability of the code. [medium]

relevant line: def start(self):
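
The shape of that refactor can be sketched as follows: `start()` becomes a short orchestrator that delegates to named step methods. The class and step names below are illustrative, not the PR's actual test class:

```python
# Sketch: a monolithic start() split into named step methods.
# Class and step names are illustrative assumptions.

class NetworkPolicyTest:
    def __init__(self):
        self.steps_run = []

    def start(self):
        """Orchestrate the test as a sequence of small, named steps."""
        self._install_helm_chart()
        self._apply_workloads()
        self._validate_generated_policies()
        return self.steps_run

    def _install_helm_chart(self):
        self.steps_run.append("install helm chart")

    def _apply_workloads(self):
        self.steps_run.append("apply workloads")

    def _validate_generated_policies(self):
        self.steps_run.append("validate generated network policies")
```

Each step can then be logged, retried, or reused independently, and the orchestrator doubles as a readable test plan.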

relevant file: tests_scripts/helm/network_policy.py
suggestion:

It would be better to use more descriptive log messages. This will help in better understanding the test steps and easier debugging. [medium]

relevant line: Logger.logger.info('1. Install armo helm-chart')


✨ Usage guide:

Overview:
The review tool scans the PR code changes, and generates a PR review. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.
When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:

/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...

With a configuration file, use the following template:

[pr_reviewer]
some_config1=...
some_config2=...
Utilizing extra instructions

The review tool can be configured with extra instructions, which can be used to guide the model to feedback tailored to the needs of your project.

Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.

Examples for extra instructions:

[pr_reviewer] # /review #
extra_instructions="""
In the code feedback section, emphasize the following:
- Does the code logic cover relevant edge cases?
- Is the code logic clear and easy to understand?
- Is the code logic efficient?
...
"""

Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

How to enable/disable automation
  • When you first install the PR-Agent app, the default mode for the review tool is:
pr_commands = ["/review", ...]

meaning the review tool will run automatically on every PR, with the default configuration.
Edit this field to enable or disable the tool, or to change the configurations it uses.

About the 'Code feedback' section

The review tool provides several types of feedback, one of which is code suggestions.
If you are interested only in code suggestions, it is recommended to use the improve feature instead, since it is dedicated solely to code suggestions and usually gives better results.
Use the review tool if you want more comprehensive feedback, which includes code suggestions as well.

Auto-labels

The review tool can auto-generate two specific types of labels for a PR:

  • a possible security issue label, that detects possible security issues (enable_review_labels_security flag)
  • a Review effort [1-5]: x label, where x is the estimated effort to review the PR (enable_review_labels_effort flag)
Extra sub-tools

The review tool provides a collection of possible feedbacks about a PR.
It is recommended to review the possible options and choose the ones relevant for your use case.
Some of the features that are disabled by default are quite useful and should be considered for enabling. For example:
require_score_review, require_soc2_review, enable_review_labels_effort, and more.

See the review usage page for a comprehensive guide on using this tool.

Persistent review updated to latest commit ef4eaeb


PR Code Suggestions

Suggestions                                                                                                                                                         
maintainability
Improve readability of error messages by reducing the amount of data printed.

In the method validate_expected_backend_workloads_list, there is a potential issue with the assert statement. If the lengths of workloads_list and expected_workloads_list are not equal, the error message will print the entire lists. If these lists are large, this could lead to a very long and hard-to-read error message. Consider printing only the lengths of the lists, or a subset of the lists, in the error message.

tests_scripts/helm/base_network_policy.py [224]

-assert len(workloads_list) == len(expected_workloads_list), f"workloads_list length is not equal to expected_workloads_list length, actual: len:{len(workloads_list)}, expected: len:{len(expected_workloads_list)}; actual results: {workloads_list}, expected results: {expected_workloads_list}"
+assert len(workloads_list) == len(expected_workloads_list), f"workloads_list length is not equal to expected_workloads_list length, actual: len:{len(workloads_list)}, expected: len:{len(expected_workloads_list)}"
 
best practice
Improve error handling by failing fast when an exception occurs.             

In the method validate_expected_backend_generated_network_policy_list, the errors list
is used to collect exceptions and then checked at the end of the method. However, the
method continues to execute even after an exception is caught. Consider failing fast and
raising an exception immediately when an error occurs, instead of collecting errors and
checking at the end. This can make debugging easier and prevent unnecessary code
execution.

tests_scripts/helm/base_network_policy.py [236-257]

-errors = []
 for i in range(0, len(expected_network_policy_list)):
     ...
     except Exception as e:
-        errors.append(e)
-        continue
-...
-assert len(errors) == 0, f"Errors in validate_expected_backend_generated_network_policy_list: {errors}"
+        raise Exception(f"Error in validate_expected_backend_generated_network_policy_list: {e}")
 
Replace assert statements with exceptions for error checking in production code.

In the method get_network_policies_generate, the assert statements are used to check
the response from the API call. However, using assert statements for error checking in
production code is not recommended because they can be globally disabled with the -O and
-OO command line switches. Consider raising exceptions instead.

infrastructure/backend_api.py [2020-2028]

-assert len(response) > 0, "network policies generate response is empty '%s' (code: %d, message: %s)" % (self.customer, r.status_code, r.text)
+if len(response) <= 0:
+    raise Exception("network policies generate response is empty '%s' (code: %d, message: %s)" % (self.customer, r.status_code, r.text))
 ...
-assert np is not None, "no 'new' NetworkPolicy '%s' (code: %d, message: %s)" % (self.customer, r.status_code, r.text)
+if np is None:
+    raise Exception("no 'new' NetworkPolicy '%s' (code: %d, message: %s)" % (self.customer, r.status_code, r.text))
 ...
-assert graph is not None, "No 'graph' '%s' (code: %d, message: %s)" % (self.customer, r.status_code, r.text)
+if graph is None:
+    raise Exception("No 'graph' '%s' (code: %d, message: %s)" % (self.customer, r.status_code, r.text))
 
possible issue
Add checks to ensure function parameters are not None and contain the required keys before using them.

In the method validate_network_policy_spec, the expected_network_policy_spec and
actual_network_policy_spec parameters are used directly without checking if they are
None or if they have the required keys. This could lead to a KeyError or TypeError
if the parameters are not as expected. Consider adding checks to ensure the parameters are
not None and contain the required keys before using them.

tests_scripts/helm/base_network_policy.py [350-358]

-if 'Ingress' in expected_network_policy_spec['policyTypes']:
-    ...
-if 'Egress' in expected_network_policy_spec['policyTypes']:
-    ...
+if expected_network_policy_spec and 'policyTypes' in expected_network_policy_spec:
+    if 'Ingress' in expected_network_policy_spec['policyTypes']:
+        ...
+    if 'Egress' in expected_network_policy_spec['policyTypes']:
+        ...
 

✨ Usage guide:

Overview:
The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:

/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...

With a configuration file, use the following template:

[pr_code_suggestions]
some_config1=...
some_config2=...
Enabling/disabling automation

When you first install the app, the default mode for the improve tool is:

pr_commands = ["/improve --pr_code_suggestions.summarize=true", ...]

meaning the improve tool will run automatically on every PR, with summarization enabled. Delete this line to disable the tool from running automatically.

Utilizing extra instructions

Extra instructions are very important for the improve tool, since they enable you to guide the model to suggestions that are more relevant to the specific needs of the project.

Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

Examples for extra instructions:

[pr_code_suggestions] # /improve #
extra_instructions="""
Emphasize the following aspects:
- Does the code logic cover relevant edge cases?
- Is the code logic clear and easy to understand?
- Is the code logic efficient?
...
"""

Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

A note on code suggestions quality
  • While current AI models for code are getting better and better (e.g., GPT-4), they are not flawless. Not all suggestions will be perfect, and a user should not accept all of them automatically.
  • Suggestions are not meant to be simplistic. Instead, they aim to give deep feedback and raise questions, ideas, and thoughts for the user, who can then apply their judgment, experience, and understanding of the code base.
  • It is recommended to use the extra_instructions field to guide the model to suggestions that are more relevant to the specific needs of the project.
  • The best quality will be obtained by using /improve --extended mode.

See the improve usage page for a more comprehensive guide on using this tool.

@kooomix kooomix merged commit 2cb2a15 into master Jan 23, 2024
3 checks passed
@Bezbran Bezbran deleted the network_policies_tests branch May 1, 2024 10:14