Proposal of Static Code Analysis Subgroup for new Acceptance Criteria #73
base: master
Conversation
give examples for code duplication figures, specify code smell levels
MODULE_ACCEPTANCE_CRITERIA.MD
Outdated
@@ -20,7 +20,7 @@ Please see [Before Development](MODULE_EVALUATION_TEMPLATE#before-development) f
 1. Module adheres to Community Code of Conduct
 1. Module license is compatible with the FOLIO Project
 1. Module can be included in existing community build processes
-1. Module has robust testing that can run with existing community testing processes
+1. Module has robust and fully automated testing procedure supplied and documented by the development team
How can / should the development team provide such a testing procedure?
What does fully automated mean in this context? Does it mean that automated tests within the module build must provide 100% coverage?
No. 80% code coverage. "Fully automated" is supposed to mean that the tests are automatically carried out upon each Pull Request.
i.e. that they are not "manual".
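As a purely illustrative sketch of what "tests automatically carried out upon each Pull Request" could look like for a Maven-based backend module (the workflow name, Java version and build command here are assumptions, not FOLIO policy):

```yaml
# Hypothetical workflow; the file would live under .github/workflows/ in the module repo.
name: module-tests
on:
  pull_request:        # run automatically on every PR, i.e. no manual step

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # Runs the module's own test suite; a coverage gate (e.g. the 80%
      # figure mentioned above) would be enforced by the build itself.
      - run: mvn --batch-mode verify
```

A frontend module would substitute its own build tooling (e.g. a `yarn test` step) for the final command.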
MODULE_ACCEPTANCE_CRITERIA.MD
Outdated
@@ -58,8 +58,13 @@ Please see [Before Development](MODULE_EVALUATION_TEMPLATE#before-development) f
   * _This is not applicable to libraries_
 * Must not depend on a FOLIO library that has not been approved through the TCR process
 * Gracefully handles the absence of third party systems or related configuration. (3, 5, 12)
-* Sonarqube hasn't identified any security issues, major code smells or excessive (>3%) duplication (6); and any disabled or intentionally ignored rules/recommendations are reasonably justified.
+* An automated code scanner hasn't identified any security issues, major code smells or excessive (>3%) duplication (6); and any disabled or intentionally ignored rules/recommendations are reasonably justified.
How will this automated code scan be run? (currently, Sonar is run by the centralised build process)
If I'm understanding the expanded explanation below, we are stating that any module that uses a language not compatible with Sonar is effectively forced to implement a build within github actions, because there is no other mechanism that can reasonably be used during builds
How will evidence of the results be supplied to the TC? (currently the results are published to sonar cloud)
> If I'm understanding the expanded explanation below, we are stating that any module that uses a language not compatible with Sonar is effectively forced to implement a build within github actions, because there is no other mechanism that can reasonably be used during builds
That would be my understanding as well, and I guess would just be the overhead of introducing a new language that's not Sonar-compatible. I'm not clear how feasible that is though, given I think we have a shared build process across modules? So it would have to account for multiple options based on language or something.
> we are stating that any module that uses a language not compatible with Sonar is effectively forced to implement a build within github actions, because there is no other mechanism that can reasonably be used during builds
effectively: yes
> How will evidence of the results be supplied to the TC?
It suffices to give the "metrics" (=characteristic numbers) as mentioned. Those might be supplied in a YAML file, but does not need to be. See the examples on the page Examples+of+Code+Duplication... (we might need to give more examples)
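As a hedged illustration only: such a metrics file has no agreed format, but it might look something like the following (the file name, keys, values and the rule identifier are all hypothetical examples, not an agreed FOLIO convention):

```yaml
# code-analysis-metrics.yml -- hypothetical format for supplying metrics to the TC
scanner: sonarcloud              # or any other automated code scanner
security_issues: 0
major_code_smells: 0
duplicated_lines_density: 1.8    # percent of duplicated lines
ignored_rules:
  - rule: java:S1192             # example rule id; a justification is required
    justification: "String literals repeated in generated code"
```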
> I guess would just be the overhead of introducing a new language that's not Sonar-compatible.
As far as I recall, the Grails/Groovy modules are not Sonar-compatible. Therefore I changed this sentence: "Sonarqube" -> "An automated code scanner". In the group discussions I think we agreed that it is in the development teams' responsibility to implement automated test (also for the non Sonar-compatible modules).
> I'm not clear how feasible that is though, given I think we have a shared build process across modules? So it would have to account for multiple options based on language or something
That was going to be my next question
I don't know if the back end modules are using github actions
If they are, then I imagine at the very least, the team will need to implement a custom build definition / extensions to use a different static code analysis tool
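For a Grails/Groovy module that Sonar cannot scan, such a custom build definition might (as a sketch, not a vetted setup) run an alternative analyzer such as CodeNarc; the Gradle task names below assume the `codenarc` plugin is applied in the module build:

```yaml
# Hypothetical GitHub Actions workflow for a module in a non Sonar-compatible language.
name: static-analysis
on:
  pull_request:

jobs:
  codenarc:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '11'
      # CodeNarc is one possible alternative scanner for Groovy code; the
      # tasks below exist only if the Gradle codenarc plugin is applied.
      - run: ./gradlew codenarcMain codenarcTest
```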
> In the group discussions I think we agreed that it is in the development teams' responsibility to implement automated test

When I read *automated test*, I think of code that demonstrates / checks the behaviour of the code under test, e.g. the module implementation.
I don't think of static analysis as a test; rather it is a separate activity to determine some metrics / characteristics of the code, usually to inform discussions about the general quality of the code (rather than whether it behaves as intended).
@inkuss In this context, does *automated test* refer to static code analysis (e.g. duplication, security issues etc.) or code that checks / demonstrates behaviour?
> I don't know if the back end modules are using github actions
I think @julianladisch has more details here.
> I don't think of static analysis as a test, rather it is a separate activity to determine some metrics / characteristics of the code
I agree. Maybe "test" is the wrong word here.
"automated tests" refers solely to static code analysis here.
I assumed unit tests and integration tests are part of static code analysis - maybe a wrong assumption?
In the TC we agreed that unit tests and integration tests are not subject of the work of this subgroup.
MODULE_ACCEPTANCE_CRITERIA.MD
Outdated
@@ -68,8 +73,9 @@ Please see [Before Development](MODULE_EVALUATION_TEMPLATE#before-development) f
 Note: Frontend criteria apply to both modules and shared libraries.

 * If provided, End-to-end tests must be written in an [officially supported technology](https://wiki.folio.org/display/TC/Officially+Supported+Technologies)[^1] (3, 4)
-  * _note: while it's strongly recommended that modules implement integration tests, it's not a requirement_
+  * _note: If the new module works together with existing modules, the team must provide a result of an integration test._
What is the motivation for making this mandatory if a module has any interaction with another module?
Given this is within the front end section of the criteria, in practice this means that every front end module will have to implement some set of end to end automated tests (as there is no provision for integration tests for the front end modules IIRC)
Is that a reasonable understanding of this expectation?
Yes. Julian and I think that it is fair to write integration tests (mandatory) if the module is not stand-alone. If it is not a requirement, it does not even need to be mentioned in the acceptance criteria, we think.
> Julian and I think that it is fair to write integration tests (mandatory) if the module is not stand-alone

What is meant by *integration test* in this context?
My definition of that term in this context is an automated test that demonstrates multiple modules interacting with each other.
In FOLIO I believe that varies based upon the type of module:
- For back-end modules, Karate based tests in the shared repository
- For front-end modules, end-to-end tests (Jest IIRC) that exercise the UI in a browser, in the shared repository
> If it is not a requirement, it does not even need to be mentioned in the acceptance criteria, we think
The original criteria was included because we wanted to make sure that if developers chose to write that style of test, then it should be written in the already agreed technologies (and not another tool)
> My definition of that term in this context is an automated test that demonstrates multiple modules interacting with each other

Yes, that fits what I have in mind.
The scope of these changes go beyond what I understood to be the scope of this working group
What was the group's motivation for expanding the scope to also change criteria related to automated testing?
Please correct me if I've misunderstood or misremembered the working group's scope
I agree with this objection. Julian and I also think that the scope of this working group is unclear (or at least unknown to us). We need to clarify the scope. This can be done either on Monday in the TC meeting or on Tuesday in the next group meeting (assuming a greater participation than last time...). Which of these two options do you consider more appropriate?
Reminder that after the changes to this file are agreed to, the same should be made to the TEMPLATE file.
MODULE_ACCEPTANCE_CRITERIA.MD
Outdated
* See [Rule Customization](https://dev.folio.org/guides/code-analysis/#rule-customization) details.
* The code scanner may be Sonar (via Jenkins), GitHub Actions or other. The testing algorithm must be set up, maintained and documented by the development team. The tests need to be reproducible by the Community.
I understand the intent of

> The testing algorithm must be set up, maintained and documented by the development team.

is to handle new languages not covered by existing tooling. But the statement goes too far, since this criterion applies to all new modules, and any module in an existing language, i.e. Java, would not need anyone, least of all this development team, to "set up, maintain and document" Sonar. Right? The same concern applies to the "values" section edit above.
According to the wiki, you are both members of the sub-group. Is that correct?
If the members of the sub-group are not confident they understand the scope of the group, this should be discussed in a whole TC meeting, because that is where the TC can agree what the scope is and communicate that to the sub-group.
MODULE_ACCEPTANCE_CRITERIA.MD
Outdated
* See [Rule Customization](https://dev.folio.org/guides/code-analysis/#rule-customization) details.
* The code scanner may be Sonar (via Jenkins), GitHub Actions or other. The testing algorithm must be set up, maintained and documented by the development team. The tests need to be reproducible by the Community.
> testing algorithms

What is meant by *testing algorithm* in this context?
I'm concerned that these changes are mixing together automated testing and static code analysis
Personally, I consider those to be separate activities with different purposes
> According to the wiki, you are both members of the sub-group. Is that correct?
Yes.
I asked in the group chat if the other sub-group members have a better understanding of the scope of this group, in particular concerning changing the acceptance criteria for automated tests. If we should not find agreement there, I agree that this should be a topic of a Monday TC meeting.
I think that Marc has a point, we use SonarQube as a tool to identify code smell, bugs and potential security issues. These are the domain of static code analysis.
SonarQube is also used to enforce a standard for code coverage as reported by the coverage tools that are employed for the specific language. This is not, strictly speaking, part of the static code analysis.
I do think this is valid, but it was a nuance that was not apparent to myself or the other members (at least as far as our discussions went).
Given that, we should most likely revisit the scope in council to nail down specifically what this group is formed to address.
Drop 3% criterion on duplication
@@ -58,7 +58,7 @@ Please see [Before Development](MODULE_EVALUATION_TEMPLATE#before-development) f
   * _This is not applicable to libraries_
 * Must not depend on a FOLIO library that has not been approved through the TCR process
 * Gracefully handles the absence of third party systems or related configuration. (3, 5, 12)
-* An automated code scanner hasn't identified any security issues, major code smells or excessive (>3%) duplication (6); and any disabled or intentionally ignored rules/recommendations are reasonably justified.
+* An automated code scanner hasn't identified any security issues, major code smells or excessive duplication (6); and any disabled or intentionally ignored rules/recommendations are reasonably justified.
> excessive duplication (6)
Unless 6% duplication is intended to be a tool ignorant policy, this statement still includes policy specific to Sonar
> (6)

refers to the value "6. Module is secure".
* See [Rule Customization](https://dev.folio.org/guides/code-analysis/#rule-customization) details.
* The code scanner may be Sonar (via Jenkins), GitHub Actions or other. The results of the code scanning need to be reproducible by the Community.
My understanding is that GitHub Actions is a build tool rather than a static code analyser.
If that is the case, then it is similar to Jenkins rather than Sonar.
At the moment, this sentence suggests to me that GitHub Actions is an alternative to Sonar rather than an alternative to Jenkins.
Reminder to bump "version 2.0 (ratified 2022-06-10)".
changed requirements for code analysis: code duplication, code smells, security issues, unit tests and integration tests.