
chore: remove references to jvm-build-service #1478

Open
tnevrlka wants to merge 1 commit into main from remove-jvm-build-service

Conversation

tnevrlka (Contributor)

Description

  • jvm-build-service is currently not used by anyone
  • It is going to undergo major breaking changes, so it is being removed for now until those changes are done
  • The current tests will most likely be made obsolete by the rewrite

Issue ticket number and link

STONEBLD-3015

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

N/A

Checklist:

  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added a meaningful description with the JIRA/GitHub issue key (if applicable), for example HASSuiteDescribe("STONE-123456789 devfile source") (see the sketch after this checklist)
  • I have updated labels (if needed)
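
As context for the description convention above, here is a minimal sketch of how a describe wrapper like HASSuiteDescribe can keep the JIRA/GitHub issue key in the suite text, assuming Ginkgo v2; the actual helper in e2e-tests may be implemented differently:

```go
package e2e_test // hypothetical package name

import (
	"strings"

	. "github.com/onsi/ginkgo/v2"
)

// HASSuiteDescribe sketches a Describe wrapper that keeps the issue key
// (e.g. "STONE-123456789") in the suite text and also attaches it as a
// Ginkgo label, so failures stay traceable to the ticket. Illustrative
// only; the real helper in e2e-tests may differ.
func HASSuiteDescribe(text string, body func()) bool {
	key := strings.SplitN(text, " ", 2)[0] // leading issue key
	return Describe("[HAS-suite] "+text, Label(key), body)
}

var _ = HASSuiteDescribe("STONE-123456789 devfile source", func() {
	It("builds from a devfile source", func() {
		// test body elided
	})
})
```

Attaching the key as a label also lets you filter runs by ticket with Ginkgo v2's --label-filter flag.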

@tnevrlka (Contributor, Author)

/retest

@tnevrlka force-pushed the remove-jvm-build-service branch from 8449f48 to c993e0d on December 13, 2024 12:32

openshift-ci bot commented Dec 16, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: mmorhun
Once this PR has been reviewed and has the lgtm label, please assign flacatus for approval. For more information, see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tnevrlka (Contributor, Author)

/retest

1 similar comment
@tnevrlka (Contributor, Author)

/retest

@tnevrlka force-pushed the remove-jvm-build-service branch from c993e0d to f8af801 on December 19, 2024 16:04
@tnevrlka (Contributor, Author)

/retest

@tnevrlka force-pushed the remove-jvm-build-service branch from f8af801 to a18243a on January 6, 2025 14:21
@tnevrlka force-pushed the remove-jvm-build-service branch from a18243a to 1f206c4 on January 8, 2025 12:30
@tnevrlka (Contributor, Author)

tnevrlka commented Jan 8, 2025

/retest

1 similar comment
@tnevrlka (Contributor, Author)

/retest

@konflux-ci-qe-bot

@tnevrlka: The following test has failed; say /retest to rerun failed tests.

| PipelineRun Name | Status | Rerun command | Build Log | Test Log |
| --- | --- | --- | --- | --- |
| konflux-e2e-qgzhp | Failed | /retest | View Pipeline Log | View Test Logs |

Inspecting Test Artifacts

To inspect your test artifacts, follow these steps:

  1. Install ORAS (see the ORAS installation guide).
  2. Download the artifacts with the following commands:

```
mkdir -p oras-artifacts
cd oras-artifacts
oras pull quay.io/konflux-test-storage/konflux-team/e2e-tests:konflux-e2e-qgzhp
```
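
If you'd rather script the download than use the CLI, below is a minimal sketch using the oras-go v2 library; the repository and tag are the ones from the commands above, and it assumes the quay.io repository is publicly readable (no auth client configured):

```go
// Pull the e2e test artifacts with the oras-go v2 library instead of the
// oras CLI. A minimal sketch; error handling is condensed.
package main

import (
	"context"
	"fmt"
	"log"

	"oras.land/oras-go/v2"
	"oras.land/oras-go/v2/content/file"
	"oras.land/oras-go/v2/registry/remote"
)

func main() {
	ctx := context.Background()

	// Source: the artifact repository referenced in the comment above.
	repo, err := remote.NewRepository("quay.io/konflux-test-storage/konflux-team/e2e-tests")
	if err != nil {
		log.Fatal(err)
	}

	// Destination: a local directory, mirroring `mkdir -p oras-artifacts`.
	store, err := file.New("oras-artifacts")
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	// Copy the tagged artifact from the registry to the local store,
	// the library equivalent of `oras pull <repo>:<tag>`.
	tag := "konflux-e2e-qgzhp"
	desc, err := oras.Copy(ctx, repo, tag, store, tag, oras.DefaultCopyOptions)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", desc.Digest)
}
```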

Test results analysis

🚨 Failed to provision a cluster, see the log for more details:

INFO: Log in to your Red Hat account...
INFO: Configure AWS Credentials...
WARN: The current version (1.2.47) is not up to date with latest rosa cli released version (1.2.49).
WARN: It is recommended that you update to the latest version.
INFO: Logged in as 'konflux-ci-418295695583' on 'https://api.openshift.com'
INFO: Create ROSA with HCP cluster...
WARN: The current version (1.2.47) is not up to date with latest rosa cli released version (1.2.49).
WARN: It is recommended that you update to the latest version.
INFO: Creating cluster 'kx-e637a5907e'
INFO: To view a list of clusters and their status, run 'rosa list clusters'
INFO: Cluster 'kx-e637a5907e' has been created.
INFO: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.

Name: kx-e637a5907e
Domain Prefix: kx-e637a5907e
Display Name: kx-e637a5907e
ID: 2g9bq6rueaihpl4flc6db16f0v6hisqd
External ID: 6f019906-4d78-433a-9e06-0af2f319fc3e
Control Plane: ROSA Service Hosted
OpenShift Version: 4.15.42
Channel Group: stable
DNS: Not ready
AWS Account: 418295695583
AWS Billing Account: 418295695583
API URL:
Console URL:
Region: us-east-1
Availability:
  • Control Plane: MultiAZ
  • Data Plane: SingleAZ
Nodes:
  • Compute (desired): 3
  • Compute (current): 0
Network:
  • Type: OVNKubernetes
  • Service CIDR: 172.30.0.0/16
  • Machine CIDR: 10.0.0.0/16
  • Pod CIDR: 10.128.0.0/14
  • Host Prefix: /23
  • Subnets: subnet-05b9daa0609597f68, subnet-04cf6376374bf9e09
EC2 Metadata Http Tokens: optional
Role (STS) ARN: arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Installer-Role
Support Role ARN: arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Support-Role
Instance IAM Roles:
  • Worker: arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Worker-Role
Operator IAM Roles:
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-kms-provider
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-cloud-network-config-controller-cloud-credent
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-image-registry-installer-cloud-credentials
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-ingress-operator-cloud-credentials
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-cluster-csi-drivers-ebs-cloud-credentials
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-kube-controller-manager
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-capa-controller-manager
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-control-plane-operator
Managed Policies: Yes
State: waiting (Waiting for user action)
Private: No
Delete Protection: Disabled
Created: Jan 13 2025 12:46:23 UTC
User Workload Monitoring: Enabled
Details Page: https://console.redhat.com/openshift/details/s/2rZhDIQLZPQxZLeOuJjSNI4kgZN
OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/2du11g36ejmoo4624pofphlrgf4r9tf3 (Managed)
Etcd Encryption: Disabled
Audit Log Forwarding: Disabled
External Authentication: Disabled
Zero Egress: Disabled

INFO: Preparing to create operator roles.
INFO: Operator Roles already exists
INFO: Preparing to create OIDC Provider.
INFO: OIDC provider already exists
INFO: To determine when your cluster is Ready, run 'rosa describe cluster -c kx-e637a5907e'.
INFO: To watch your cluster installation logs, run 'rosa logs install -c kx-e637a5907e --watch'.
INFO: Track the progress of the cluster creation...
WARN: The current version (1.2.47) is not up to date with latest rosa cli released version (1.2.49).
WARN: It is recommended that you update to the latest version.
WARN: Region flag will be removed from this command in future versions
INFO: Cluster 'kx-e637a5907e' is in waiting state waiting for installation to begin. Logs will show up within 5 minutes
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-e637a5907e Version
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Waiting for hosted control plane to be healthy
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Ignition server deployment not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Configuration passes validation
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e HostedCluster is supported by operator configuration
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Release image is valid
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e ValidAWSIdentityProvider StatusUnknown
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Reconciliation active on resource
2025-01-13 12:50:42 +0000 UTC hostedclusters kx-e637a5907e HostedCluster is at expected version
2025-01-13 12:50:43 +0000 UTC hostedclusters kx-e637a5907e Required platform credentials are found
2025-01-13 12:50:43 +0000 UTC hostedclusters kx-e637a5907e failed to get referenced secret ocm-production-2g9bq6rueaihpl4flc6db16f0v6hisqd/cluster-api-cert: Secret "cluster-api-cert" not found
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-e637a5907e Version
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the HCP
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the HCP
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the HCP
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the HCP
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the HCP
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Reconciliation active on resource
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the HCP
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Condition not found in the CVO.
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Ignition server deployment not found
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Configuration passes validation
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e HostedCluster is supported by operator configuration
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Waiting for hosted control plane to be healthy
2025-01-13 12:50:39 +0000 UTC hostedclusters kx-e637a5907e Release image is valid
2025-01-13 12:50:42 +0000 UTC hostedclusters kx-e637a5907e HostedCluster is at expected version
2025-01-13 12:50:43 +0000 UTC hostedclusters kx-e637a5907e Required platform credentials are found
2025-01-13 12:52:18 +0000 UTC hostedclusters kx-e637a5907e Reconciliation completed successfully
2025-01-13 12:52:18 +0000 UTC hostedclusters kx-e637a5907e OIDC configuration is valid
2025-01-13 12:52:24 +0000 UTC hostedclusters kx-e637a5907e WebIdentityErr
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e All is well
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e lookup api.kx-e637a5907e.6wae.p3.openshiftapps.com on 172.30.0.10:53: no such host
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e capi-provider deployment has 1 unavailable replicas
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e Configuration passes validation
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e AWS KMS is not configured
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e Waiting for etcd to reach quorum
2025-01-13 12:52:28 +0000 UTC hostedclusters kx-e637a5907e Kube APIServer deployment not found
2025-01-13 12:53:59 +0000 UTC hostedclusters kx-e637a5907e All is well
2025-01-13 12:54:43 +0000 UTC hostedclusters kx-e637a5907e EtcdAvailable QuorumAvailable
2025-01-13 12:56:36 +0000 UTC hostedclusters kx-e637a5907e Kube APIServer deployment is available
2025-01-13 12:56:58 +0000 UTC hostedclusters kx-e637a5907e All is well
2025-01-13 12:59:37 +0000 UTC hostedclusters kx-e637a5907e All is well
2025-01-13 12:59:48 +0000 UTC hostedclusters kx-e637a5907e The hosted control plane is available
INFO: Cluster 'kx-e637a5907e' is now ready
INFO: ROSA with HCP cluster is ready, create a cluster admin account for accessing the cluster
WARN: The current version (1.2.47) is not up to date with latest rosa cli released version (1.2.49).
WARN: It is recommended that you update to the latest version.
INFO: Storing login command...
INFO: Check if it's able to login to OCP cluster...
Retried 1 times...
Retried 2 times...
INFO: Check if apiserver is ready...
[INFO] Checking cluster operators' status...
[INFO] Attempt 1/10
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console 4.15.42 True False False 3m58s
csi-snapshot-controller 4.15.42 True False False 11m
dns 4.15.42 True False False 3m51s
image-registry 4.15.42 True False False 3m26s
ingress False True True 14s The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: LoadBalancerReady=False (SyncLoadBalancerFailed: The service-controller component is reporting SyncLoadBalancerFailed events like: Error syncing load balancer: failed to ensure load balancer: error creating load balancer target group: "TooManyTargetGroups: The maximum number of target groups has been reached\n\tstatus code: 400, request id: 7182babb-3322-43e7-8c0f-c2393fb71001"...
insights 4.15.42 True False False 4m33s
kube-apiserver 4.15.42 True False False 11m
kube-controller-manager 4.15.42 True False False 11m
kube-scheduler 4.15.42 True False False 11m
kube-storage-version-migrator 4.15.42 True False False 4m28s
monitoring 4.15.42 True False False 2m49s
network 4.15.42 True False False 10m
node-tuning 4.15.42 True False False 5m18s
openshift-apiserver 4.15.42 True False False 11m
openshift-controller-manager 4.15.42 True False False 11m
openshift-samples 4.15.42 True False False 3m30s
operator-lifecycle-manager 4.15.42 True False False 11m
operator-lifecycle-manager-catalog 4.15.42 True False False 11m
operator-lifecycle-manager-packageserver 4.15.42 True False False 11m
service-ca 4.15.42 True False False 4m28s
storage 4.15.42 True False False 5m52s
[INFO] Cluster operators are accessible.
[INFO] Waiting for cluster operators to be in 'Progressing=false' state...

