Backup gets created but volumes are empty and the error is Could not connect to minio container! #8600

Open
revoltez opened this issue Jan 12, 2025 · 3 comments

@revoltez

What steps did you take and what happened:

I have the following setup:

  • a minikube cluster
  • a MinIO container started with:
docker run -d --name minio -p 9000:9000  -p 9001:9001 -e "MINIO_ROOT_USER=minioadmin" -e "MINIO_ROOT_PASSWORD=minioadmin" minio/minio server /data --console-address "0.0.0.0:9001"

I created a credentials file, credentials-velero, which contains the username and password:

[default]
aws_access_key_id=minioadmin
aws_secret_access_key=minioadmin

I installed Velero with this command:

velero install \                        
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket mybucket \
  --secret-file ./credentials-velero \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://host.minikube.internal:9000

Note: I needed to create mybucket in MinIO first, otherwise it fails.
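For example, one way to pre-create the bucket is with the MinIO client mc (illustrative sketch; the alias name is arbitrary, and creating the bucket through the MinIO console works just as well):

mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/mybucket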

When I created a backup for a namespace that contains persistent volumes using this command:

velero backup create mongodb-namespace \
  --include-namespaces mongo \
  --snapshot-volumes \
  --wait

it gave this output:

Backup request "mongodb-namespace" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.

Backup completed with status: PartiallyFailed. You may check for more information using the commands `velero backup describe mongodb-namespace` and `velero backup logs mongodb-namespace`.

The result of describing the backup is the following:

Phase:  PartiallyFailed (run `velero backup logs mongodb-namespace` for more information)

Warnings:  <error getting warnings: Get "http://host.minikube.internal:9000/mybucket/backups/mongodb-namespace/mongodb-namespace-results.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minioadmin%2F20250112%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20250112T191907Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=3d9a5e3b8e0fdb404bb6a5369120842bad44b74b25e3180c16b3937720699f25": dial tcp: lookup host.minikube.internal: no such host>

Errors:  <error getting errors: Get "http://host.minikube.internal:9000/mybucket/backups/mongodb-namespace/mongodb-namespace-results.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minioadmin%2F20250112%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20250112T191907Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=3d9a5e3b8e0fdb404bb6a5369120842bad44b74b25e3180c16b3937720699f25": dial tcp: lookup host.minikube.internal: no such host>

Namespaces:
  Included:  mongo
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  true
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2025-01-12 20:16:59 +0100 CET
Completed:  2025-01-12 20:17:00 +0100 CET

Expiration:  2025-02-11 20:16:59 +0100 CET

Total items to be backed up:  13
Items backed up:              13

Backup Volumes:
  <error getting backup volume info: Get "http://host.minikube.internal:9000/mybucket/backups/mongodb-namespace/mongodb-namespace-volumeinfo.json.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minioadmin%2F20250112%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20250112T191907Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e2384d7839d3121c0be1736633f788408df86824a64227d8ca7ffb4009e1d63f": dial tcp: lookup host.minikube.internal: no such host>

HooksAttempted:  0
HooksFailed:     0

The error says there is no such host, but what's strange is that when I open the MinIO console I can see that the backup was created, and when I restore it using this command: velero restore create mongodb-restore --from-backup mongodb-namespace

the namespace is indeed restored and everything works, except that the database volumes are empty.

What did you expect to happen:
The volumes should be restored along with their data.

Environment:
velero version:

Client:
        Version: v1.15.1
        Git commit: 32499fc287815058802c1bc46ef620799cca7392
Server:
        Version: v1.15.1
@blackpiglet added the Area/Storage/Minio label on Jan 13, 2025
@blackpiglet
Contributor

dial tcp: lookup host.minikube.internal: no such host
I think this was caused by the DNS record being served by the kube-DNS service inside the k8s environment, while the Velero CLI runs outside of it.
I suggest you either run the Velero CLI inside the k8s environment with something like the following:
kubectl -n velero exec -it [velero-server-pod-name] -- velero backup describe ....
Or you can try to make that DNS record also available where you run the Velero CLI.
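For example, an entry like the following in /etc/hosts on the machine running the Velero CLI (illustrative; the IP must match your minikube network, 192.168.49.1 being a common default for the docker driver):

# /etc/hosts on the machine running the velero CLI
192.168.49.1   host.minikube.internal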

For the backup PartiallyFailed issue, please collect the Velero debug bundle to help investigate. The CLI is velero debug.
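A minimal invocation would look something like this (the --backup flag scopes the bundle to a specific backup):

velero debug --backup mongodb-namespace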

@blackpiglet self-assigned this on Jan 13, 2025
@revoltez
Author

revoltez commented Jan 13, 2025

@blackpiglet I added 192.168.49.1 host.minikube.internal to /etc/hosts and the "no such host" error was gone.

However, there is a new error complaining about not having an applicable snapshotter. This is the error I caught when running velero backup logs mongodb-namespace:

time="2025-01-13T08:20:48Z" level=info msg="Summary for skipped PVs: [{\"name\":\"pvc-835001c5-6345-4f4a-b651-8cd7cea92ebf\",\"reasons\":[{\"approach\":\"podvolume\",\"reason\":\"opted out due to annotation in pod mongodb-0\"},{\"approach\":\"volumeSnapshot\",\"reason\":\"no applicable volumesnapshotter found\"}]}]" backup=velero/mongodb-namespace logSource="pkg/backup/backup.go:545"
time="2025-01-13T08:20:48Z" level=info msg="Backed up a total of 20 items" backup=velero/mongodb-namespace logSource="pkg/backup/backup.go:549" progress=

The result of velero backup describe mongodb-namespace is:

Errors:
  Velero:     <none>
  Cluster:   resource: /persistentvolumes name: /pvc-835001c5-6345-4f4a-b651-8cd7cea92ebf message: /Error getting volume snapshotter for volume snapshot location error: /rpc error: code = Unknown desc = missing region in aws configuration
  Namespaces: <none>

which is also complaining about not specifying the region.
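For reference, the region for the native-snapshot path is read from the VolumeSnapshotLocation, which can be set at install time with --snapshot-location-config region=<region> or via a manifest. A minimal sketch (illustrative only; the location name and region value are placeholders, and this is not presented as a working fix for MinIO):

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  config:
    region: minio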

PS: I tried enabling the volumesnapshots & csi-hostpath-driver addons in minikube, with the same results.

This is the result of running velero debug:
bundle-2025-01-13-09-37-26.tar.gz

@msfrucht
Contributor

You need to add --features=EnableCSI to the Velero install command to enable CSI support. See https://velero.io/docs/main/csi/#installing-velero-with-csi-support
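For example, re-running the install command from earlier in this issue with the feature flag added would look roughly like this (untested sketch):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket mybucket \
  --secret-file ./credentials-velero \
  --features=EnableCSI \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://host.minikube.internal:9000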
