chore: general code cleanup (#776)
* refactor: replace deprecated NewCommand

* refactor(clusterconfig): remove deprecated Region when Openstack

* refactor: SchemeGroupVersion is deprecated

* refactor: AddToScheme was moved to SchemeBuilder

* refactor: solve variable name collision

* chore: fix grammar

* refactor: error capitalized

* refactor: renames OCMFailureCountThreshold to FailureCountThreshold

* refactor: renames OCMFailureCountThreshold to FailureCountThreshold

* refactor: renames InsightsRecommendationCollector to Collector

* refactor: renames InsightsRecommendationCollector to Collector

* chore: fix golang comments

* docs: fix typo on README.md

* docs: fix typo on README.md

* refactor: remove rand.Seed (https://go-review.googlesource.com/c/go/+/443058)

* chore: remove golang comment

* chore: disable depguard lint

* refactor: replace context.TODO with context.Background
Ricardo Lüders authored Jul 14, 2023
1 parent 37673b1 commit fce828f
Showing 32 changed files with 74 additions and 77 deletions.
2 changes: 1 addition & 1 deletion .golangci.yml
@@ -66,7 +66,6 @@ linters:
enable:
- bodyclose
- deadcode
-#- depguard
- dogsled
- dupl
- errcheck
@@ -102,6 +101,7 @@

# don't enable:
# - asciicheck
+# - depguard
# - scopelint
# - gochecknoglobals
# - gocognit
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -30,7 +30,7 @@ the body of the commit should describe the why.
```
Add the gitgooks command
-This add githooks to the project, in order to run tests
+This adds githooks to the project, in order to run tests
and liting and prevent bad commits.
```

2 changes: 1 addition & 1 deletion STYLEGUIDE.md
@@ -3,7 +3,7 @@
## Directory and File Names

- Use lowercase dash-separated names for all files (to avoid git issues with case-insensitive file systems)
-- Exceptions are files which have their own naming conventions (eg Dockerfile, Makefile, README)
+- Exceptions are files which have their own naming conventions (e.g. Dockerfile, Makefile, README)
- Automation scripts should be stored into `dotdirs` (eg .githooks, .openshiftci)

## Go
4 changes: 2 additions & 2 deletions cmd/changelog/main.go
@@ -61,7 +61,7 @@ const gitHubPath = "https://github.com/openshift/insights-operator"
// API reference: https://docs.github.com/en/rest/reference/pulls#get-a-pull-request
const gitHubAPIFormat = "https://api.github.com/repos/%s/%s/pulls/%s" // owner, repo, pull-number

-// OpenShift release version helper type
+// ReleaseVersion OpenShift release version helper type
type ReleaseVersion struct {
Major int
Minor int
@@ -163,7 +163,7 @@ func readCHANGELOG() map[ReleaseVersion]MarkdownReleaseBlock {
if match := miscSectionRegExp.FindStringSubmatch(versionSection); len(match) > 0 {
releaseBlock.misc = match[1]
}
-// We want only found versions - e.g master is not considered as a version
+// We want only found versions - e.g. master is not considered as a version
if version != (ReleaseVersion{}) {
releaseBlocks[version] = releaseBlock
}
2 changes: 1 addition & 1 deletion cmd/gendoc/main.go
@@ -318,7 +318,7 @@ func execExampleMethod(methodFullPackage, methodPackage, methodName string) (str
return string(output), err
}

-// createRandom creates a random non existing file name in current folder
+// createRandom creates a random non-existing file name in current folder
func createRandom() string {
var f string
for {
14 changes: 7 additions & 7 deletions docs/arch.md
@@ -2,7 +2,7 @@

The main goal of the Insights Operator is to periodically gather anonymized data from the OCP cluster (mostly Kubernetes/OpenShift APIs and control plane components) and upload it to `console.redhat.com` for Insights analysis.

-Insights Operator does not manage any application. As usual with operator applications, most of the code is structured in the `pkg` package and `pkg/controller/operator.go` hosts the operator controller. Typically operator controllers read configuration and start some periodical tasks.
+Insights Operator does not manage any application. As usual with operator applications, most of the code is structured in the `pkg` package and `pkg/controller/operator.go` hosts the operator controller. Typically, operator controllers read configuration and start some periodical tasks.

## How the Insights operator reads configuration
The Insights Operator's configuration is a combination of the file [config/pod.yaml](../config/pod.yaml)(basically default configuration hardcoded in the image) and configuration stored in the `support` secret in the `openshift-config` namespace. The secret doesn't exist by default, but when it does, it overrides default settings which IO reads from the `config/pod.yaml`.
@@ -100,27 +100,27 @@ There are these main tasks scheduled:

## Gathering the data

-Insights operator defines three types of gatherers (see below). Each of them must implement the [Interface](../pkg/gatherers/interface.go#L11) and they are initialized by calling `gather.CreateAllGatherers` in `operator.go`. The actual gathering is triggered in `Run` method in `pkg/controller/periodic/periodic.go`, but not every gatherer is triggered every time ( for example, see the [CustomPeriodGatherer type](../pkg/gatherers/interface.go#L21)).
+Insights operator defines three types of gatherers (see below). Each of them must implement the [Interface,](../pkg/gatherers/interface.go#L11) and they are initialized by calling `gather.CreateAllGatherers` in `operator.go`. The actual gathering is triggered in `Run` method in `pkg/controller/periodic/periodic.go`, but not every gatherer is triggered every time ( for example, see the [CustomPeriodGatherer type](../pkg/gatherers/interface.go#L21)).

Each gatherer includes one or more gathering functions. Gathering functions are defined as a map, where the key is the name of the function and the value is the [GatheringClosure type](../pkg/gatherers/interface.go#L34). They are executed concurrently in the `HandleTasksConcurrently` function in `pkg/gather/task_processing.go`.
One of the attributes of the `GatheringClosure` type is the function that returns the values: `([]record.Record, []error)`. The slice of the records is the result of gathering function. The actual data is in the `Item` attribute of the `Record`. This `Item` is of type `Marshalable` (see the interface in the [record.go](../pkg/record/record.go)) and there are two JSON marshallers used to serialize the data - `JSONMarshaller` and `ResourceMarshaller` which allows you to save few bytes by omitting the `managedFields` during the serialization.
Errors, warnings or panics that occurred during given gathering function are logged in the "metadata" part of the Insights operator archive. See [sample archive example](../docs/insights-archive-sample/insigths-operator/gathers.json)
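The gathering-function registration described above (a map of function name to closure, each returning records plus errors) can be sketched in a few lines. The types below are simplified stand-ins, not the real definitions from `pkg/record` and `pkg/gatherers`:

```go
package main

import "fmt"

// Record is an illustrative stand-in for the operator's record.Record;
// the actual Item field is a Marshalable, not a bare interface{}.
type Record struct {
	Name string
	Item interface{}
}

// GatheringClosure is an illustrative stand-in for the type in
// pkg/gatherers/interface.go.
type GatheringClosure struct {
	Run func() ([]Record, []error)
}

// gatherSampleConfig is a hypothetical gathering function: it returns the
// records it collected plus any errors encountered along the way.
func gatherSampleConfig() ([]Record, []error) {
	return []Record{{Name: "config/sample", Item: map[string]string{"key": "value"}}}, nil
}

func main() {
	// Gathering functions are registered in a map keyed by function name.
	functions := map[string]GatheringClosure{
		"sample_config": {Run: gatherSampleConfig},
	}
	// The operator executes these concurrently via HandleTasksConcurrently;
	// a serial loop keeps this sketch short.
	for name, fn := range functions {
		records, errs := fn.Run()
		fmt.Printf("%s: %d records, %d errors\n", name, len(records), len(errs))
	}
}
```

Returning errors alongside records is what lets failures land in the archive's "metadata" section instead of aborting the whole gather.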

### Clusterconfig gatherer

-Defined in [clusterconfig_gatherer.go](../pkg/gatherers/clusterconfig/clusterconfig_gatherer.go). This gatherer is ran regularly (2h by default) and gathers various data related to cluster config (see [gathered-data doc](../docs/gathered-data.md) for more details).
+Defined in [clusterconfig_gatherer.go](../pkg/gatherers/clusterconfig/clusterconfig_gatherer.go). This gatherer is run regularly (2h by default) and gathers various data related to cluster config (see [gathered-data doc](../docs/gathered-data.md) for more details).

The data from this gatherer is stored under `/config` directory in the archive.

### Workloads gatherer

-Defined in [workloads_gatherer.go](../pkg/gatherers/workloads/workloads_gatherer.go). This gatherer only runs every 12 hours and the interval is not configurable. This is done because running the gatherer more often would significantly increase data in the archive, that is assumed will not change very often. There is only one gathering function in this gatherer and it gathers workload fingerprint data (SHA of the images, fingerprints of namespaces as number of pods in namespace, fingerprints of containers as first command and first argument).
+Defined in [workloads_gatherer.go](../pkg/gatherers/workloads/workloads_gatherer.go). This gatherer only runs every 12 hours and the interval is not configurable. This is done because running the gatherer more often would significantly increase data in the archive, that is assumed will not change very often. There is only one gathering function in this gatherer, and it gathers workload fingerprint data (SHA of the images, fingerprints of namespaces as number of pods in namespace, fingerprints of containers as first command and first argument).

The data from this gatherer is stored in the `/config/workload_info.json` file in the archive, but please note that not every archive contains this data.

### Conditional gatherer

-Defined in [conditional_gatherer.go](../pkg/gatherers/conditional/conditional_gatherer.go). This gatherer is ran regularly (2h by default), but it only gathers some data when a corresponding condition is met. The conditions and corresponding gathering functions are defined in an external service (https://console.redhat.com/api/gathering/gathering_rules). A typical example of a condition is when an alert is firing. This also means that this gatherer relies on the availability of Prometheus metrics and alerts.
+Defined in [conditional_gatherer.go](../pkg/gatherers/conditional/conditional_gatherer.go). This gatherer is run regularly (2h by default), but it only gathers some data when a corresponding condition is met. The conditions and corresponding gathering functions are defined in an external service (https://console.redhat.com/api/gathering/gathering_rules). A typical example of a condition is when an alert is firing. This also means that this gatherer relies on the availability of Prometheus metrics and alerts.

The data from this gatherer is stored under the `/conditional` directory in the archive.

@@ -166,7 +166,7 @@ health_statuses_insights{metric="total"} 2
### Scheduling and running of Uploader
The `operator.go` starts background task defined in `pkg/insights/insightsuploader/insightsuploader.go`. The insights uploader periodically checks if there is any data to upload. If no data is found, the uploader continues with next cycle.
-The uploader triggers the `wait.Until` function, which waits until the configuration changes or it is time to upload. After start of the operator, there is some waiting time before the very first upload. This time is defined by `initialDelay`. If no error occurred while sending the POST request, then the next uploader check is defined as `wait.Jitter(interval, 1.2)`, where interval is the gathering interval.
+The uploader triggers the `wait.Until` function, which waits until the configuration changes, or it is time to upload. After start of the operator, there is some waiting time before the very first upload. This time is defined by `initialDelay`. If no error occurred while sending the POST request, then the next uploader check is defined as `wait.Jitter(interval, 1.2)`, where interval is the gathering interval.

## How Uploader authenticates to console.redhat.com
The HTTP communication with the external service (e.g uploading the Insights archive or downloading the Insights analysis) is defined in the [insightsclient package](../pkg/insights/insightsclient/). The HTTP transport is encrypted with TLS (see the `clientTransport()` function defined in the `pkg/insights/insightsclient/insightsclient.go`. This function (and the `prepareRequest` function) uses `pkg/authorizer/clusterauthorizer.go` to respect the proxy settings and to authorize (i.e add the authorization header with respective token value) the requests. The user defined certificates in the `/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt` are taken into account (see the cluster wide proxy setting in the [OCP documentation](https://docs.openshift.com/container-platform/latest/networking/enable-cluster-wide-proxy.html)).
@@ -379,4 +379,4 @@ Such a client is used in [GatherMachineSet](pkg/gather/clusterconfig/clusterconf
## Configuring what to gather
-[Insights API](https://github.com/openshift/api/blob/master/config/v1alpha1/types_insights.go) and the corresponding `insightsdatagather.config.openshift.io` CRD allow you to specify a list of `disabledGatherers` to disable specific gatherers or disable data gathering altogether by setting the value to `all`.
+[Insights API](https://github.com/openshift/api/blob/master/config/v1alpha1/types_insights.go) and the corresponding `insightsdatagather.config.openshift.io` CRD allow you to specify a list of `disabledGatherers` to disable specific gatherers or disable data gathering altogether by setting the value to `all`.
6 changes: 3 additions & 3 deletions docs/conditional-gatherer/README.md
@@ -1,7 +1,7 @@
# Conditional Gatherer

Conditional gatherer is a special gatherer which uses a set of rules describing which gathering functions to activate.
-The conditional rules are defined in the [insights-operator-gathering-conditions GitHub repository](https://github.com/RedHatInsights/insights-operator-gathering-conditions). This content is consumed and exposed by the [insights-operator-gathering-conditions-service](https://github.com/RedHatInsights/insights-operator-gathering-conditions-service). A new version of the `insights-operator-gathering-conditions` requires a new release version of the `insights-operator-gathering-conditions-service`.
+The conditional rules are defined in the [insights-operator-gathering-conditions GitHub repository](https://github.com/RedHatInsights/insights-operator-gathering-conditions). This content is consumed and exposed by the [insights-operator-gathering-conditions-service](https://github.com/RedHatInsights/insights-operator-gathering-conditions-service). A new version of the `insights-operator-gathering-conditions` requires a new release version of the `insights-operator-gathering-conditions-service`.
The Insights Operator connects to this service and consumes the conditional rules from it. The connection endpoint is defined in the [pod.yaml config file](../../config/pod.yaml) in the `conditionalGathererEndpoint` attribute (the value can be overriden in the `support` secret). Authentication is required and the `pull-secret` token is used for this purpose.

## Validation of the conditional rules
@@ -10,7 +10,7 @@ The Insights Operator internally validates the conditional rules JSON against th

The following are some examples of validation failures (which will show up in the log):

-Non-existing gathering fumction:
+Non-existing gathering function:
```
E0808 16:29:51.864716 241084 parsing.go:22] skipping a rule because of an error: unable to create params for conditional.GatheringFunctionName: containers_log {[{alert_is_firing 0xc0004639c0 <nil>}] map[]}
```
@@ -31,7 +31,7 @@ Failed to parse the provided cluster version:
E0809 10:02:16.383643 37430 conditional_gatherer.go:140] error checking conditions for a gathering rule: Could not parse Range "4-11.12": Could not parse version "4-11.12" in "4-11.12": No Major.Minor.Patch elements found
```

-One of the common conditions type (see below) is the `alert_is_firing`. This condition depends on availability of Prometheus alerts - i.e connection to the in-cluster Prometheus instance. If the connection is not available, this may manifests in the log as follows for example:
+One of the common conditions type (see below) is the `alert_is_firing`. This condition depends on availability of Prometheus alerts - i.e. connection to the in-cluster Prometheus instance. If the connection is not available, this may manifest in the log as follows for example:

```
E0809 11:56:48.491346 46838 conditional_gatherer.go:226] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
4 changes: 2 additions & 2 deletions pkg/anonymization/anonymizer.go
@@ -403,7 +403,7 @@ func (anonymizer *Anonymizer) ObfuscateIP(ipStr string) string {
return ipStr
}

-// We could use something like https://github.com/yl2chen/cidranger but we shouldn't typically have many networks
+// We could use something like https://github.com/yl2chen/cidranger, but we shouldn't typically have many networks,
// so it's fine to just iterate over them
for i := range anonymizer.networks {
networkInfo := &anonymizer.networks[i]
@@ -473,7 +473,7 @@ func (anonymizer *Anonymizer) StoreTranslationTable() *corev1.Secret {
return result
}

-// ResetTranslationTable resets the translation table, so that the translation table of multiple gathers wont mix toghater.
+// ResetTranslationTable resets the translation table, so that the translation table of multiple gathers won't mix together.
func (anonymizer *Anonymizer) ResetTranslationTable() {
anonymizer.translationTable = make(map[string]string)
}
12 changes: 5 additions & 7 deletions pkg/cmd/start/start.go
@@ -3,7 +3,6 @@ package start
import (
"context"
"fmt"
-"math/rand"
"os"
"time"

@@ -31,7 +30,7 @@ const (
pbAcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
)

-// NewOperator create the commad for running the Insights Operator.
+// NewOperator create the command for running the Insights Operator.
func NewOperator() *cobra.Command {
operator := &controller.Operator{
Controller: config.Controller{
@@ -57,12 +56,12 @@
Short: "Start the operator",
Run: runOperator(operator, cfg),
}
-cmd.Flags().AddFlagSet(cfg.NewCommand().Flags())
+cmd.Flags().AddFlagSet(cfg.NewCommandWithContext(context.Background()).Flags())

return cmd
}

-// NewGather create the commad for running the a single gather.
+// NewGather create the command for running a single gather.
func NewGather() *cobra.Command {
operator := &controller.GatherJob{
Controller: config.Controller{
@@ -77,7 +76,7 @@
Short: "Does a single gather, without uploading it",
Run: runGather(operator, cfg),
}
-cmd.Flags().AddFlagSet(cfg.NewCommand().Flags())
+cmd.Flags().AddFlagSet(cfg.NewCommandWithContext(context.Background()).Flags())

return cmd
}
@@ -196,8 +195,7 @@ func runGather(operator *controller.GatherJob, cfg *controllercmd.ControllerComm
// Boilerplate for running an operator and handling command line arguments.
func runOperator(operator *controller.Operator, cfg *controllercmd.ControllerCommandConfig) func(cmd *cobra.Command, args []string) {
return func(cmd *cobra.Command, args []string) {
-// boiler plate for the "normal" command
-rand.Seed(time.Now().UTC().UnixNano())
+// boilerplate for the "normal" command
defer serviceability.BehaviorOnPanic(os.Getenv("OPENSHIFT_ON_PANIC"), version.Get())()
defer serviceability.Profile(os.Getenv("OPENSHIFT_PROFILE")).Stop()
serviceability.StartProfiler()
2 changes: 1 addition & 1 deletion pkg/config/configobserver/secretconfigobserver_test.go
@@ -219,7 +219,7 @@ func Test_ConfigObserver_ConfigChanged(t *testing.T) {
if err != nil {
t.Fatalf("Unexpected error %s", err)
}
-// Check if the event arrived on the channel
+// Check if the event arrived at the channel
if len(configCh) != 1 {
t.Fatalf("Config channel has more/less then 1 event on a singal config change. len(configCh)==%d", len(configCh))
}
2 changes: 1 addition & 1 deletion pkg/config/mock_configurator.go
@@ -11,7 +11,7 @@ type MockSecretConfigurator struct {
Conf *Controller
}

-// NewMockConfigurator constructs a new MockConfigurator with default config values
+// NewMockSecretConfigurator constructs a new MockConfigurator with default config values
func NewMockSecretConfigurator(conf *Controller) *MockSecretConfigurator {
if conf == nil {
conf = &Controller{}
4 changes: 2 additions & 2 deletions pkg/controller/operator.go
@@ -39,7 +39,7 @@ import (
"github.com/openshift/insights-operator/pkg/recorder/diskrecorder"
)

-// Operator is the type responsible for controlling the start up of the Insights Operator
+// Operator is the type responsible for controlling the start-up of the Insights Operator
type Operator struct {
config.Controller
}
@@ -201,7 +201,7 @@ func (s *Operator) Run(ctx context.Context, controller *controllercmd.Controller
go periodicGather.PeriodicPrune(ctx)
}

-// check we can read IO container status and we are not in crash loop
+// check we can read IO container status, and we are not in crash loop
initialCheckTimeout := s.Controller.Interval / 24
initialCheckInterval := 20 * time.Second
baseInitialDelay := s.Controller.Interval / 12
2 changes: 1 addition & 1 deletion pkg/controller/periodic/periodic_test.go
@@ -41,7 +41,7 @@ func Test_Controller_CustomPeriodGatherer(t *testing.T) {
mockRecorder.Reset()

c.Gather()
-// 5 gatherers + metadata (one is low priority and we need to wait 999 hours to get it
+// 5 gatherers + metadata (one is low priority, and we need to wait 999 hours to get it
assert.Len(t, mockRecorder.Records, 6)
mockRecorder.Reset()
}
4 changes: 2 additions & 2 deletions pkg/controller/status/conditions.go
@@ -73,7 +73,7 @@ func (c *conditions) setCondition(conditionType configv1.ClusterStatusConditionT
status configv1.ConditionStatus, reason, message string) {
originalCondition, ok := c.entryMap[conditionType]
transitionTime := metav1.Now()
-// if condition is defined and there is not new status then don't update transition time
+// if condition is defined and there is no new status then don't update transition time
if ok && originalCondition.Status == status {
transitionTime = originalCondition.LastTransitionTime
}
@@ -105,7 +105,7 @@ func (c *conditions) findCondition(condition configv1.ClusterStatusConditionType
}

// entries returns a sorted list of status conditions from the mapped values.
-// The list is sorted by by type ClusterStatusConditionType to ensure consistent ordering for deep equal checks.
+// The list is sorted by type ClusterStatusConditionType to ensure consistent ordering for deep equal checks.
func (c *conditions) entries() []configv1.ClusterOperatorStatusCondition {
var res []configv1.ClusterOperatorStatusCondition
for _, v := range c.entryMap {