This repository has been archived by the owner on Jun 6, 2024. It is now read-only.

Snapshot volumes #171

Merged: 83 commits, May 30, 2024
Commits
b7260b6
create a backup of the snapshot for the PV's
Heavybullets8 May 27, 2024
d92fa39
mkdir
Heavybullets8 May 27, 2024
1c27a97
use a different replace
Heavybullets8 May 27, 2024
d1f293a
attempt restore of snapshots
Heavybullets8 May 27, 2024
0cdef6a
Refactor ZFS snapshot receive command to force overwrite destination
Heavybullets8 May 27, 2024
e15c388
Refactor ZFS snapshot receive command to force overwrite destination
Heavybullets8 May 27, 2024
fcc0ff5
Refactor ZFS snapshot receive command to improve restore process
Heavybullets8 May 27, 2024
195f1dd
Refactor ZFS snapshot receive command to improve restore process
Heavybullets8 May 27, 2024
5ee0b65
Refactor ZFS snapshot receive command to improve restore process
Heavybullets8 May 27, 2024
4e7680a
Refactor ZFS snapshot receive command to improve restore process
Heavybullets8 May 27, 2024
93eb265
Refactor ZFS snapshot receive command to improve restore process
Heavybullets8 May 27, 2024
c97a5a6
remove any existing snapshots on restoration
Heavybullets8 May 27, 2024
409e237
Refactor ZFS snapshot destroy command to handle null characters in sn…
Heavybullets8 May 27, 2024
71afb87
Refactor ZFS snapshot destroy command to handle null characters in sn…
Heavybullets8 May 27, 2024
6585c22
create parent dataset if needed
Heavybullets8 May 27, 2024
d4192e2
restore ix_volumes as well
Heavybullets8 May 27, 2024
a1d147e
use config for backup
Heavybullets8 May 27, 2024
19669ea
define the snapshot name for ix_volume
Heavybullets8 May 27, 2024
4fb54f7
fix method calling
Heavybullets8 May 27, 2024
e34a20d
pass parameter
Heavybullets8 May 27, 2024
35b1b24
Refactor ZFS snapshot restore process for improved efficiency
Heavybullets8 May 27, 2024
f67c75c
correct getting ix volumes dataset from backup parser
Heavybullets8 May 27, 2024
e2c6721
Refactor backup_fetch.py to handle config values as strings
Heavybullets8 May 27, 2024
da3884f
add more debugging logs
Heavybullets8 May 27, 2024
aa30dbe
improve logs slightly,
Heavybullets8 May 27, 2024
cf6bc1f
try just dropping the table each time
Heavybullets8 May 28, 2024
da3d825
drop all objects instead?
Heavybullets8 May 28, 2024
9c73d69
dont restore cnpg databases for testing
Heavybullets8 May 28, 2024
3300658
try killing active connections
Heavybullets8 May 28, 2024
f8e00f2
Refactor restore.py to improve restore command handling
Heavybullets8 May 28, 2024
1ee9329
revert but keep binary read
Heavybullets8 May 28, 2024
0e942a6
check to see if snapshots exist in backup before declaring we will us…
Heavybullets8 May 28, 2024
ab32cf4
refactor snapshot backup
Heavybullets8 May 29, 2024
429bc38
Refactor ZFSSnapshotManager to handle exceptions when getting refer size
Heavybullets8 May 29, 2024
7c868e9
update refer logic
Heavybullets8 May 29, 2024
45eb7b0
try using a tab character to split instead
Heavybullets8 May 29, 2024
98d891c
Refactor ZFSCache to improve snapshot refer size handling
Heavybullets8 May 29, 2024
de2278c
Refactor backup and restore code to improve cleanup process
Heavybullets8 May 29, 2024
041d45a
Refactor backup_manager.py to improve cleanup process and retention h…
Heavybullets8 May 29, 2024
737a5eb
Refactor delete_snapshots method in BackupManager to handle dangling …
Heavybullets8 May 29, 2024
99203a6
use a property for all datasets instead of a function
Heavybullets8 May 29, 2024
b0d1add
Refactor snapshot deletion logic in BackupManager to handle dangling …
Heavybullets8 May 29, 2024
7457acd
reference correct parent list
Heavybullets8 May 29, 2024
09a640e
delete dangling snapshots AFTER we delete the full backups
Heavybullets8 May 29, 2024
2691de0
Refactor backup and restore code to handle dangling snapshots
Heavybullets8 May 29, 2024
ab0fe91
Refactor export cleanup process and retention handling
Heavybullets8 May 29, 2024
d6d8ef5
refactor restore message and volume handling
Heavybullets8 May 29, 2024
51836ce
Refactor backup and restore code to create ZFS dataset for backups
Heavybullets8 May 29, 2024
6a4bf69
Refactor restore_base.py to improve dataset creation and logging
Heavybullets8 May 29, 2024
cc3881d
Refactor restore_base.py to restore CRDs for specified applications
Heavybullets8 May 29, 2024
c07a7cf
Refactor restore_base.py logging to debug level for CRD restoration
Heavybullets8 May 29, 2024
920a67b
Refactor restore_base.py logging to debug level for CRD restoration
Heavybullets8 May 29, 2024
5551e0d
Refactor restore_base.py to set mountpoint to legacy for dataset paths
Heavybullets8 May 29, 2024
6e3682c
Refactor restore_base.py to set mountpoint to legacy for dataset paths
Heavybullets8 May 29, 2024
3d3a0bd
Refactor restore_base.py to use logger for rolling back snapshot
Heavybullets8 May 29, 2024
965b6e3
remove extra indent for secrets
Heavybullets8 May 29, 2024
67747e3
update config
Heavybullets8 May 30, 2024
f51b7fa
attempt new method
Heavybullets8 May 30, 2024
bd9448e
use config parser with allow_no_value
Heavybullets8 May 30, 2024
f2a7c72
Update config with missing sections and options from default config
Heavybullets8 May 30, 2024
98402df
add new options to backup section
Heavybullets8 May 30, 2024
c3cf732
Refactor update_config.py to exclude [databases] section when writing…
Heavybullets8 May 30, 2024
969b0a2
Refactor update_config.py to update missing sections and options from…
Heavybullets8 May 30, 2024
7b25590
make config updater a bit more general
Heavybullets8 May 30, 2024
dde6ad9
Refactor update_config.py to remove sections not in default config
Heavybullets8 May 30, 2024
5d27615
Refactor update_config.py to remove unused sections and options
Heavybullets8 May 30, 2024
facf25e
Refactor update_config.py to remove unused sections and options
Heavybullets8 May 30, 2024
59ecc7e
switch to using configobj
Heavybullets8 May 30, 2024
a145891
convert path obj to strings
Heavybullets8 May 30, 2024
bc011bd
debugging print statements
Heavybullets8 May 30, 2024
e11b191
back up
Heavybullets8 May 30, 2024
da5aafa
Refactor update_config.py to write missing sections and options from …
Heavybullets8 May 30, 2024
5193fa3
Refactor update_config.py to handle missing sections and options from…
Heavybullets8 May 30, 2024
301585c
Refactor update_config.py to write missing sections and options from …
Heavybullets8 May 30, 2024
b8dfc0e
Refactor update_config.py to preserve comments and ensure new lines b…
Heavybullets8 May 30, 2024
4fda946
Refactor update_config.py to handle missing sections and options from…
Heavybullets8 May 30, 2024
3ad08a1
change default config slightly
Heavybullets8 May 30, 2024
7b4723b
update config.
Heavybullets8 May 30, 2024
50e2d23
Refactor backup.py to support configurable snapshot streaming size
Heavybullets8 May 30, 2024
b72d7f2
Refactor max_stream_size in default.config.ini to use shorthand notation
Heavybullets8 May 30, 2024
a3be86d
add some debugging
Heavybullets8 May 30, 2024
2f787a4
Refactor backup.py to support configurable snapshot streaming size
Heavybullets8 May 30, 2024
20739a2
Refactor backup.py to use snapshot name directly in backup process
Heavybullets8 May 30, 2024
6 changes: 6 additions & 0 deletions .default.config.ini
@@ -54,7 +54,13 @@ ignore=
## true/false options ##
export_enabled=true
full_backup_enabled=true
backup_snapshot_streams=false

## String options ##
# Uncomment the following line to specify a custom dataset location for backups
# custom_dataset_location=

# Maximum size of a backup stream, be careful when setting this higher
# Especially considering PV's for plex, sonarr, radarr, etc. can be quite large
# Example: max_stream_size=10G, max_stream_size=20K, max_stream_size=1T
max_stream_size=1G
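
For reference, the max_stream_size shorthand above is parsed with binary (1024-based) units by the new _size_str_to_bytes helper added to backup.py further down in this diff. A minimal standalone sketch of that conversion, using the example values from the config comments:

size_units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def size_str_to_bytes(size_str: str) -> int:
    # Convert shorthand like "1G" or "20K" to a byte count; plain integers pass through unchanged.
    if size_str[-1] in size_units:
        return int(float(size_str[:-1]) * size_units[size_str[-1]])
    return int(size_str)

print(size_str_to_bytes("1G"))   # 1073741824
print(size_str_to_bytes("20K"))  # 20480
print(size_str_to_bytes("1T"))   # 1099511627776
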
148 changes: 104 additions & 44 deletions functions/backup_restore/backup/backup.py
@@ -1,4 +1,5 @@
from datetime import datetime, timezone
from configobj import ConfigObj
from pathlib import Path
from collections import defaultdict

@@ -49,21 +50,36 @@ def __init__(self, backup_dir: Path, retention_number: int = 15):

self.backup_dataset_parent = self.backup_dir.relative_to("/mnt")
self.backup_dataset = str(self.backup_dataset_parent)
self._create_backup_dataset(self.backup_dataset)
self._create_backup_dataset()

self.chart_collection = APIChartCollection()
self.all_chart_names = self.chart_collection.all_chart_names
self.all_release_names = self.chart_collection.all_release_names

self.kube_pvc_fetcher = KubePVCFetcher()

def _create_backup_dataset(self, dataset):
# Read configuration settings
config_file_path = str(Path(__file__).parent.parent.parent.parent / 'config.ini')
config = ConfigObj(config_file_path, encoding='utf-8', list_values=False)

self.backup_snapshot_streams = config['BACKUP'].as_bool('backup_snapshot_streams')
self.max_stream_size_str = config['BACKUP'].get('max_stream_size', '10G')
self.max_stream_size_bytes = self._size_str_to_bytes(self.max_stream_size_str)

def _create_backup_dataset(self):
"""
Create a ZFS dataset for backups if it doesn't already exist.
Create a ZFS dataset for backups.
"""
if not self.lifecycle_manager.dataset_exists(dataset):
if not self.lifecycle_manager.create_dataset(dataset):
raise RuntimeError(f"Failed to create backup dataset: {dataset}")
if not self.lifecycle_manager.dataset_exists(self.backup_dataset):
if not self.lifecycle_manager.create_dataset(
self.backup_dataset,
options={
"atime": "off",
"compression": "zstd-19",
"recordsize": "1M"
}
):
raise RuntimeError(f"Failed to create backup dataset: {self.backup_dataset}")

def backup_all(self):
"""
@@ -144,14 +160,78 @@ def backup_all(self):

dataset_paths = self.kube_pvc_fetcher.get_volume_paths_by_namespace(f"ix-{app_name}")
if dataset_paths:
self.logger.info(f"Backing up {app_name} PVCs...")
snapshot_errors = self.snapshot_manager.create_snapshots(self.snapshot_name, dataset_paths, self.retention_number)
if snapshot_errors:
failures[app_name].extend(snapshot_errors)
for dataset_path in dataset_paths:
pvc_name = dataset_path.split('/')[-1]
self.logger.info(f"Snapshotting PVC: {pvc_name}...")

# Check to see if dataset exists
if not self.lifecycle_manager.dataset_exists(dataset_path):
error_msg = f"Dataset {dataset_path} does not exist."
self.logger.error(error_msg)
failures[app_name].append(error_msg)
continue

# Create the snapshot for the current dataset
snapshot_result = self.snapshot_manager.create_snapshot(self.snapshot_name, dataset_path)
if not snapshot_result["success"]:
failures[app_name].append(snapshot_result["message"])
continue

self.logger.debug(f"backup_snapshot_streams: {self.backup_snapshot_streams}")
self.logger.debug(f"max_stream_size_str: {self.max_stream_size_str}")
self.logger.debug(f"max_stream_size_bytes: {self.max_stream_size_bytes}")

if self.backup_snapshot_streams:
snapshot = f"{dataset_path}@{self.snapshot_name}"
snapshot_refer_size = self.snapshot_manager.get_snapshot_refer_size(snapshot)
self.logger.debug(f"snapshot_refer_size: {snapshot_refer_size}")

if snapshot_refer_size <= self.max_stream_size_bytes:
# Send the snapshot to the backup directory
self.logger.info(f"Sending PV snapshot stream to backup file...")
snapshot = f"{dataset_path}@{self.snapshot_name}"
backup_path = app_backup_dir / "snapshots" / f"{snapshot.replace('/', '%%')}.zfs"
backup_path.parent.mkdir(parents=True, exist_ok=True)
send_result = self.snapshot_manager.zfs_send(snapshot, backup_path, compress=True)
if not send_result["success"]:
failures[app_name].append(send_result["message"])
else:
self.logger.warning(f"Snapshot refer size {snapshot_refer_size} exceeds the maximum configured size {self.max_stream_size_bytes}")
else:
self.logger.debug("Backup snapshot streams are disabled in the configuration.")

# Handle ix_volumes_dataset separately
if chart_info.ix_volumes_dataset:
snapshot = chart_info.ix_volumes_dataset + "@" + self.snapshot_name
if self.backup_snapshot_streams:
snapshot_refer_size = self.snapshot_manager.get_snapshot_refer_size(snapshot)
self.logger.debug(f"ix_volumes_dataset snapshot_refer_size: {snapshot_refer_size}")

if snapshot_refer_size <= self.max_stream_size_bytes:
self.logger.info(f"Sending ix_volumes snapshot stream to backup file...")
backup_path = app_backup_dir / "snapshots" / f"{snapshot.replace('/', '%%')}.zfs"
backup_path.parent.mkdir(parents=True, exist_ok=True)
send_result = self.snapshot_manager.zfs_send(snapshot, backup_path, compress=True)
if not send_result["success"]:
failures[app_name].append(send_result["message"])
else:
self.logger.warning(f"ix_volumes_dataset snapshot refer size {snapshot_refer_size} exceeds the maximum configured size {self.max_stream_size_bytes}")
else:
self.logger.debug("Backup snapshot streams are disabled in the configuration.")

self._create_backup_snapshot()
self._log_failures(failures)
self._cleanup_old_backups()

def _size_str_to_bytes(self, size_str):
size_units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
try:
if size_str[-1] in size_units:
return int(float(size_str[:-1]) * size_units[size_str[-1]])
else:
return int(size_str)
except ValueError:
self.logger.error(f"Invalid size string: {size_str}")
return 0

def _log_failures(self, failures):
"""
@@ -175,34 +255,14 @@ def _create_backup_snapshot(self):
Create a snapshot of the backup dataset after all backups are completed.
"""
self.logger.info(f"\nCreating snapshot for backup: {self.backup_dataset}")
if self.snapshot_manager.create_snapshots(self.snapshot_name, [self.backup_dataset], self.retention_number):
self.logger.error("Failed to create snapshot for backup dataset.")
else:
self.logger.info("Snapshot created successfully for backup dataset.")
snapshot_result = self.snapshot_manager.create_snapshot(self.snapshot_name, self.backup_dataset)

def _cleanup_old_backups(self):
"""
Cleanup old backups and their associated snapshots if the number of backups exceeds the retention limit.
"""
backup_datasets = sorted(
(ds for ds in self.lifecycle_manager.list_datasets() if ds.startswith(f"{self.backup_dataset_parent}/HeavyScript--")),
key=lambda ds: datetime.strptime(ds.replace(f"{self.backup_dataset_parent}/HeavyScript--", ""), '%Y-%m-%d_%H:%M:%S')
)

if len(backup_datasets) > self.retention_number:
for old_backup_dataset in backup_datasets[:-self.retention_number]:
snapshot_name = old_backup_dataset.split("/")[-1]
self.logger.info(f"Deleting oldest backup due to retention limit: {snapshot_name}")
try:
self.lifecycle_manager.delete_dataset(old_backup_dataset)
self.logger.debug(f"Removed old backup: {old_backup_dataset}")
except Exception as e:
self.logger.error(f"Failed to delete old backup dataset {old_backup_dataset}: {e}", exc_info=True)

self.logger.debug(f"Deleting snapshots for: {snapshot_name}")
snapshot_errors = self.snapshot_manager.delete_snapshots(snapshot_name)
if snapshot_errors:
self.logger.error(f"Failed to delete snapshots for {snapshot_name}: {snapshot_errors}")
if snapshot_result.get("success"):
self.logger.info("Snapshot created successfully for backup dataset.")
else:
self.logger.error("Failed to create snapshot for backup dataset.")
for error in snapshot_result.get("errors", []):
self.logger.error(error)

def _backup_application_datasets(self):
"""
@@ -212,12 +272,12 @@ def _backup_application_datasets(self):
- applications_dataset (str): The root dataset under which Kubernetes operates.
"""
datasets_to_ignore = KubeUtils().to_ignore_datasets_on_backup(self.kubeconfig.dataset)
all_datasets = self.lifecycle_manager.list_datasets()

datasets_to_backup = [ds for ds in all_datasets if ds.startswith(self.kubeconfig.dataset) and ds not in datasets_to_ignore]
datasets_to_backup = [ds for ds in self.lifecycle_manager.datasets if ds.startswith(self.kubeconfig.dataset) and ds not in datasets_to_ignore]
self.logger.debug(f"Snapshotting datasets: {datasets_to_backup}")

snapshot_errors = self.snapshot_manager.create_snapshots(self.snapshot_name, datasets_to_backup, self.retention_number)
if snapshot_errors:
self.logger.error(f"Failed to create snapshots for application datasets: {snapshot_errors}")

for dataset in datasets_to_backup:
# Create snapshot for each dataset
snapshot_result = self.snapshot_manager.create_snapshot(self.snapshot_name, dataset)
if not snapshot_result.get("success"):
self.logger.error(f"Failed to create snapshot for dataset {dataset}: {snapshot_result['message']}")
19 changes: 0 additions & 19 deletions functions/backup_restore/backup/export_.py
@@ -66,25 +66,6 @@ def export(self):
self._convert_json_to_yaml(chart_info_dir / 'values.json')

self.logger.info("Chart information export completed.")
self._cleanup_old_exports()

def _cleanup_old_exports(self):
"""
Cleanup old exports if the number of exports exceeds the retention limit.
"""
export_dirs = sorted(
(d for d in self.export_dir.iterdir() if d.is_dir() and d.name.startswith("Export--")),
key=lambda d: datetime.strptime(d.name.replace("Export--", ""), '%Y-%m-%d_%H:%M:%S')
)

if len(export_dirs) > self.retention_number:
for old_export_dir in export_dirs[:-self.retention_number]:
self.logger.info(f"Deleting oldest export due to retention limit: {old_export_dir.name}")
try:
shutil.rmtree(old_export_dir)
self.logger.debug(f"Removed old export: {old_export_dir}")
except Exception as e:
self.logger.error(f"Failed to delete old export directory {old_export_dir}: {e}", exc_info=True)

def _convert_json_to_yaml(self, json_file: Path):
"""
85 changes: 22 additions & 63 deletions functions/backup_restore/backup_manager.py
@@ -4,14 +4,12 @@
from base_manager import BaseManager
from backup.backup import Backup
from backup.export_ import ChartInfoExporter
from zfs.snapshot import ZFSSnapshotManager
from utils.logger import get_logger

class BackupManager(BaseManager):
def __init__(self, backup_abs_path: Path):
super().__init__(backup_abs_path)
self.logger = get_logger()
self.snapshot_manager = ZFSSnapshotManager()
self.logger.info(f"BackupManager initialized for {self.backup_abs_path}")

def backup_all(self, retention=None):
@@ -20,44 +18,27 @@ def backup_all(self, retention=None):
backup = Backup(self.backup_abs_path)
backup.backup_all()
self.logger.info("Backup completed successfully")
self.cleanup_dangling_snapshots()
if retention is not None:
self.delete_old_backups(retention)
self.cleanup_dangling_snapshots()

def export_chart_info(self, retention=None):
"""Export chart information with optional retention."""
self.logger.info("Starting chart information export")
exporter = ChartInfoExporter(self.backup_abs_path)
exporter.export()
self.logger.info("Chart information export completed successfully")
self.cleanup_dangling_snapshots()
if retention is not None:
self.delete_old_exports(retention)

def delete_backup_by_name(self, backup_name: str):
"""Delete a specific backup by name."""
self.logger.info(f"Attempting to delete backup: {backup_name}")
full_backups, export_dirs = self.list_backups()

for backup in full_backups:
if backup.endswith(backup_name):
self.logger.info(f"Deleting full backup: {backup}")
self.lifecycle_manager.delete_dataset(backup)
self.snapshot_manager.delete_snapshots(backup_name)
self.logger.info(f"Deleted full backup: {backup} and associated snapshots")
self.cleanup_dangling_snapshots()
return True

for export in export_dirs:
if export.name == backup_name:
self.logger.info(f"Deleting export: {export}")
shutil.rmtree(export)
self.logger.info(f"Deleted export: {export}")
self.cleanup_dangling_snapshots()
return True

self.logger.info(f"Backup {backup_name} not found")
return False
result = self.delete_backup(backup_name)
if result:
self.logger.info(f"Deleted backup: {backup_name}")
else:
self.logger.info(f"Backup {backup_name} not found")

def delete_backup_by_index(self, backup_index: int):
"""Delete a specific backup by index."""
@@ -67,30 +48,20 @@ def delete_backup_by_index(self, backup_index: int):

if 0 <= backup_index < len(all_backups):
backup = all_backups[backup_index]
if backup in full_backups:
backup_name = Path(backup).name
self.logger.info(f"Deleting full backup: {backup_name}")
self.lifecycle_manager.delete_dataset(backup)
self.snapshot_manager.delete_snapshots(backup_name)
self.logger.info(f"Deleted full backup: {backup_name} and associated snapshots")
elif backup in export_dirs:
self.logger.info(f"Deleting export: {backup.name}")
shutil.rmtree(backup)
self.logger.info(f"Deleted export: {backup.name}")
self.cleanup_dangling_snapshots()
return True

self.logger.info(f"Invalid backup index: {backup_index}")
return False
backup_name = Path(backup).name
self.logger.info(f"Deleting backup: {backup_name}")
self.delete_backup(backup_name)
self.logger.info(f"Deleted backup: {backup_name}")
else:
self.logger.info(f"Invalid backup index: {backup_index}")

def interactive_delete_backup(self):
"""Offer an interactive selection to delete backups."""
self.logger.info("Starting interactive backup deletion")
selected_backup = self.interactive_select_backup()
if selected_backup:
all_backups = self.list_backups()[0] + self.list_backups()[1]
backup_index = all_backups.index(selected_backup)
self.delete_backup_by_index(backup_index)
backup_name = Path(selected_backup).name
self.delete_backup_by_name(backup_name)

def display_backups(self):
"""Display all backups without deleting them."""
@@ -118,31 +89,19 @@ def cleanup_dangling_snapshots(self):
full_backups, _ = self.list_backups()
full_backup_names = {Path(backup).name for backup in full_backups}

all_snapshots = self.snapshot_manager.list_snapshots()
pattern = re.compile(r'HeavyScript--\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2}')
deleted_snapshots = set()

for snapshot in all_snapshots:
for snapshot in self.snapshot_manager.snapshots:
match = pattern.search(snapshot)
if match:
snapshot_name = match.group()
if snapshot_name not in full_backup_names and snapshot_name not in deleted_snapshots:
self.logger.info(f"Deleting dangling snapshot: {snapshot_name}")
self.snapshot_manager.delete_snapshots(snapshot_name)
self.logger.info(f"Deleted snapshot: {snapshot_name}")
deleted_snapshots.add(snapshot_name)

def delete_old_backups(self, retention):
"""Delete backups that exceed the retention limit."""
self.logger.debug(f"Deleting old backups exceeding retention: {retention}")
full_backups, _ = self.list_backups()
if len(full_backups) > retention:
for backup in full_backups[retention:]:
backup_name = Path(backup).name
self.logger.info(f"Deleting old backup: {backup_name}")
self.lifecycle_manager.delete_dataset(backup)
self.snapshot_manager.delete_snapshots(backup_name)
self.logger.info(f"Deleted old backup: {backup_name} and associated snapshots")
if snapshot_name not in full_backup_names:
self.logger.info(f"Deleting dangling snapshot: {snapshot}")
delete_result = self.snapshot_manager.delete_snapshot(snapshot)
if delete_result["success"]:
self.logger.info(f"Deleted snapshot: {snapshot}")
else:
self.logger.error(f"Failed to delete snapshot {snapshot}: {delete_result['message']}")

def delete_old_exports(self, retention):
"""Delete exports that exceed the retention limit."""
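
The dangling-snapshot cleanup above keys on the HeavyScript--<timestamp> portion of each snapshot name. A small sketch of how that regex extracts the name from a full snapshot identifier; the dataset path here is a made-up example:

import re

pattern = re.compile(r'HeavyScript--\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2}')  # same pattern as cleanup_dangling_snapshots

snapshot = "tank/ix-applications/releases/myapp/volumes/pvc-1234@HeavyScript--2024-05-30_14:00:00"
match = pattern.search(snapshot)
if match:
    snapshot_name = match.group()  # "HeavyScript--2024-05-30_14:00:00"
    # If this name does not correspond to a surviving full backup,
    # the snapshot is removed via delete_snapshot(snapshot).
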