In my cluster I've been experimenting with adding and removing nodes. I'm not 100% sure, but I believe I drained all nodes before deleting them from the cluster. When I deployed a new workload with some volumes, I noticed I could no longer use the CLI to fetch any node or volume information in that namespace. In #229 (comment) I describe the issues I was getting.
To finally get it working again, I had to go into the etcd cluster and forcefully remove every key that had the volume ID in its key or its value. I used the API to fetch which volumes were not attached to any host and removed them one by one until the CLI was responding to my requests again. There might still be some residue left on some nodes, though.
Below I've included the API response I used to forcefully remove the volumes from the state. Note that none of the detached volumes listed here were showing up in Kubernetes; the only references to them were in the Ondat API.
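To find which volumes the API reports as unattached, the response can be filtered with jq. This is a hedged sketch: the field names (`id`, `attachedOn`) are assumptions about the Ondat API response shape, and the payload is inlined here for illustration.

```shell
# Hypothetical: filter an Ondat volumes payload down to the IDs of
# volumes not attached to any node. Field names are assumptions.
volumes_json='[{"id":"vol-1","attachedOn":"node-a"},{"id":"vol-2","attachedOn":""}]'
echo "$volumes_json" | jq -r '.[] | select(.attachedOn == "" or .attachedOn == null) | .id'
# prints: vol-2
```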
I used the following bash script to remove the keys in the etcd cluster:
```bash
#!/bin/bash

# Check if an argument is provided
if [ $# -eq 0 ]; then
  echo "Usage: $0 <search_string>"
  exit 1
fi

# Store the search string
search_string=$1

# Set the etcd endpoint
endpoint="https://storageos-etcd-1.storageos-etcd.storageos:2379"

# Get all keys containing the search string
matching_keys=$(etcdctl --endpoints=$endpoint get "" --prefix --keys-only | grep "${search_string}")

# Print the matching keys and their values
echo "Matching keys and values:"
for key in $matching_keys; do
  value=$(etcdctl --endpoints=$endpoint get "$key" --print-value-only)
  echo -e "${key}\n${value}"
done

# Get all keys and values, then check if the value contains the search string
all_keys=$(etcdctl --endpoints=$endpoint get "" --prefix --keys-only)
for key in $all_keys; do
  value=$(etcdctl --endpoints=$endpoint get "$key" --print-value-only)
  if [[ "$value" == *"${search_string}"* ]]; then
    matching_keys+=$'\n'"$key"
    echo -e "${key}\n${value}"
  fi
done

# Delete all matching keys
for key in $matching_keys; do
  echo "Deleting $key"
  etcdctl --endpoints=$endpoint del "$key"
done

echo "All keys with '${search_string}' in key or value have been deleted."
```
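The script's value match relies on bash's glob test (`[[ "$value" == *"$search"* ]]`), which is true whenever the search string occurs anywhere in the value. A minimal self-contained check of that behaviour, with an illustrative value:

```shell
#!/bin/bash
# Demonstrates the substring glob test the deletion script uses to decide
# whether an etcd value references the volume ID. Data is inlined here;
# the value's JSON shape is illustrative, not the actual Ondat key format.
search_string="ba9cdf88"
value='{"master":{"volumeID":"ba9cdf88"},"attached":false}'
if [[ "$value" == *"${search_string}"* ]]; then
  echo "value matches"
fi
# prints: value matches
```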
I suppose there should be some sort of data validation process that checks whether all nodes and volumes are still a valid part of the cluster; anything that isn't should be removed from the state.
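One way such a check could work is to diff the node names Kubernetes knows about against the node names in the Ondat state; anything present only on the Ondat side is stale. A sketch with inlined lists (in practice they would come from `kubectl get nodes` and the Ondat API):

```shell
#!/bin/bash
# Hypothetical consistency check: node names in the Ondat state that no
# longer exist in Kubernetes. The two lists are inlined for illustration.
k8s_nodes=$'node-a\nnode-b'
ondat_nodes=$'node-a\nnode-b\nnode-c'   # node-c was deleted from the cluster
# comm -13 keeps lines unique to the second (sorted) input: the stale nodes.
comm -13 <(printf '%s\n' "$k8s_nodes" | sort) <(printf '%s\n' "$ondat_nodes" | sort)
# prints: node-c
```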