
🌱 add Github action to capture logs and pod descriptions when e2e fails #1510

Conversation

camilamacedo86
Contributor

Description

Reviewer Checklist

  • API Go Documentation
  • Tests: Unit Tests (and E2E Tests, if appropriate)
  • Comprehensive Commit Messages
  • Links to related GitHub Issue(s)

@camilamacedo86 camilamacedo86 requested a review from a team as a code owner December 3, 2024 18:19

netlify bot commented Dec 3, 2024

Deploy Preview for olmv1 ready!

Name Link
🔨 Latest commit fedeae3
🔍 Latest deploy log https://app.netlify.com/sites/olmv1/deploys/674f4bb8e945bb0008b67f7e
😎 Deploy Preview https://deploy-preview-1510--olmv1.netlify.app

@everettraven
Contributor

Just a note that our e2e action should already have logic to upload artifacts upon failure. For example, https://github.com/operator-framework/operator-controller/actions/runs/11746095340

The artifacts are gathered on test failure by:

// getArtifactsOutput gets all the artifacts from the test run and saves them to the artifact path.
// Currently it saves:
// - clusterextensions
// - pods logs
// - deployments
// - catalogsources
func getArtifactsOutput(t *testing.T) {
    basePath := env.GetString("ARTIFACT_PATH", "")
    if basePath == "" {
        return
    }

    kubeClient, err := kubeclient.NewForConfig(cfg)
    require.NoError(t, err)

    // sanitize the artifact name for use as a directory name
    testName := strings.ReplaceAll(strings.ToLower(t.Name()), " ", "-")
    // Get the test description and sanitize it for use as a directory name
    artifactPath := filepath.Join(basePath, artifactName, fmt.Sprint(time.Now().UnixNano()), testName)
    // Create the full artifact path
    err = os.MkdirAll(artifactPath, 0755)
    require.NoError(t, err)

    // Get all namespaces
    namespaces := corev1.NamespaceList{}
    if err := c.List(context.Background(), &namespaces); err != nil {
        fmt.Printf("Failed to list namespaces: %v", err)
    }

    // get all cluster extensions save them to the artifact path.
    clusterExtensions := ocv1.ClusterExtensionList{}
    if err := c.List(context.Background(), &clusterExtensions, client.InNamespace("")); err != nil {
        fmt.Printf("Failed to list cluster extensions: %v", err)
    }
    for _, clusterExtension := range clusterExtensions.Items {
        // Save cluster extension to artifact path
        clusterExtensionYaml, err := yaml.Marshal(clusterExtension)
        if err != nil {
            fmt.Printf("Failed to marshal cluster extension: %v", err)
            continue
        }
        if err := os.WriteFile(filepath.Join(artifactPath, clusterExtension.Name+"-clusterextension.yaml"), clusterExtensionYaml, 0600); err != nil {
            fmt.Printf("Failed to write cluster extension to file: %v", err)
        }
    }

    // get all catalogsources save them to the artifact path.
    catalogsources := catalogd.ClusterCatalogList{}
    if err := c.List(context.Background(), &catalogsources, client.InNamespace("")); err != nil {
        fmt.Printf("Failed to list catalogsources: %v", err)
    }
    for _, catalogsource := range catalogsources.Items {
        // Save catalogsource to artifact path
        catalogsourceYaml, err := yaml.Marshal(catalogsource)
        if err != nil {
            fmt.Printf("Failed to marshal catalogsource: %v", err)
            continue
        }
        if err := os.WriteFile(filepath.Join(artifactPath, catalogsource.Name+"-catalogsource.yaml"), catalogsourceYaml, 0600); err != nil {
            fmt.Printf("Failed to write catalogsource to file: %v", err)
        }
    }

    for _, namespace := range namespaces.Items {
        // let's ignore kube-* namespaces.
        if strings.Contains(namespace.Name, "kube-") {
            continue
        }

        namespacedArtifactPath := filepath.Join(artifactPath, namespace.Name)
        if err := os.Mkdir(namespacedArtifactPath, 0755); err != nil {
            fmt.Printf("Failed to create namespaced artifact path: %v", err)
            continue
        }

        // get all deployments in the namespace and save them to the artifact path.
        deployments := appsv1.DeploymentList{}
        if err := c.List(context.Background(), &deployments, client.InNamespace(namespace.Name)); err != nil {
            fmt.Printf("Failed to list deployments %v in namespace: %q", err, namespace.Name)
            continue
        }
        for _, deployment := range deployments.Items {
            // Save deployment to artifact path
            deploymentYaml, err := yaml.Marshal(deployment)
            if err != nil {
                fmt.Printf("Failed to marshal deployment: %v", err)
                continue
            }
            if err := os.WriteFile(filepath.Join(namespacedArtifactPath, deployment.Name+"-deployment.yaml"), deploymentYaml, 0600); err != nil {
                fmt.Printf("Failed to write deployment to file: %v", err)
            }
        }

        // Get logs from all pods in all namespaces
        pods := corev1.PodList{}
        if err := c.List(context.Background(), &pods, client.InNamespace(namespace.Name)); err != nil {
            fmt.Printf("Failed to list pods %v in namespace: %q", err, namespace.Name)
        }
        for _, pod := range pods.Items {
            if pod.Status.Phase != corev1.PodRunning && pod.Status.Phase != corev1.PodSucceeded && pod.Status.Phase != corev1.PodFailed {
                continue
            }
            for _, container := range pod.Spec.Containers {
                logs, err := kubeClient.CoreV1().Pods(namespace.Name).GetLogs(pod.Name, &corev1.PodLogOptions{Container: container.Name}).Stream(context.Background())
                if err != nil {
                    fmt.Printf("Failed to get logs for pod %q in namespace %q: %v", pod.Name, namespace.Name, err)
                    continue
                }
                defer logs.Close()

                outFile, err := os.Create(filepath.Join(namespacedArtifactPath, pod.Name+"-"+container.Name+"-logs.txt"))
                if err != nil {
                    fmt.Printf("Failed to create file for pod %q in namespace %q: %v", pod.Name, namespace.Name, err)
                    continue
                }
                defer outFile.Close()

                if _, err := io.Copy(outFile, logs); err != nil {
                    fmt.Printf("Failed to copy logs for pod %q in namespace %q: %v", pod.Name, namespace.Name, err)
                    continue
                }
            }
        }
    }
}
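For context, here is a minimal sketch of one way such a helper can be wired to run only when a test fails, using t.Cleanup and t.Failed from the standard testing package. The test name and the registration pattern are illustrative assumptions, not necessarily how operator-controller's e2e suite invokes it:

package e2e

import "testing"

// Hypothetical wiring (assumption): register getArtifactsOutput so it
// runs after the test completes, but only if the test actually failed.
func TestClusterExtensionInstall(t *testing.T) {
    t.Cleanup(func() {
        if t.Failed() {
            getArtifactsOutput(t)
        }
    })
    // ... test body that exercises the cluster ...
}

With ARTIFACT_PATH set in the CI environment, getArtifactsOutput then writes the YAML dumps and pod logs under that directory, which the workflow can upload as a build artifact.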

@bentito bentito changed the title 🌱 add function to add capture logs and pod descriptions when fail 🌱 add Github action to capture logs and pod descriptions when e2e fails Dec 3, 2024

codecov bot commented Dec 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 74.68%. Comparing base (e51c0c2) to head (fedeae3).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1510   +/-   ##
=======================================
  Coverage   74.68%   74.68%           
=======================================
  Files          42       42           
  Lines        3271     3271           
=======================================
  Hits         2443     2443           
  Misses        652      652           
  Partials      176      176           
Flag   Coverage Δ
e2e    52.06% <ø> (-0.10%) ⬇️
unit   57.99% <ø> (ø)

Flags with carried forward coverage won't be shown.


@camilamacedo86
Contributor Author

Hi @everettraven

Just a note that our e2e action should already have logic to upload artifacts upon failure. For example, https://github.com/operator-framework/operator-controller/actions/runs/11746095340

I see that, but IMHO it is not as straightforward or helpful as looking at the logs directly in the action. I do not see any harm in adding it, but if you folks see any reason not to, please let me know.

@joelanford
Member

IMO, we should not have two different ways of doing this. If we can improve on what's there, great! If we need to replace what we already have with an alternate method, let's make sure we pull in the folks who were involved with the original implementation so that we account for everything that was originally considered. For example, was there some specific reason that we did NOT use a log-based approach?

@camilamacedo86
Contributor Author

Fair enough.
Closing this one as not accepted.
