
Cherry-pick Enhancement: Add gpshrink to support elastic scaling. With some editions by me. (#32) #16

Merged: 1 commit into OPENGPDB_STABLE from shrink on Nov 18, 2024

Conversation

diPhantxm
Contributor

In order to support gpshrink, similar to gpexpand, we first support "alter table <tablename> shrink table to <segnum>" to redistribute data across a specific number of segments (a minimal example of this syntax is sketched right after the two-stage list below).

The gpshrink implementation, similar to gpexpand, is mainly divided into two stages:

  1. Collect the tables that need to be shrunk and write them into gpshrink.status_detail.
  2. Perform data redistribution on the tables that need to be shrunk, and delete the corresponding segments from gp_segment_configuration.
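
For illustration, here is a minimal, hypothetical sketch of the new syntax. Only the "alter table <tablename> shrink table to <segnum>" form comes from this PR; the table name, columns, distribution key, and segment count are invented for the example.

    -- Hypothetical example table; the name, columns, and distribution key
    -- are illustrative only and not part of this PR.
    CREATE TABLE sales (id int, amount numeric) DISTRIBUTED BY (id);

    -- New syntax added by this PR: redistribute the table's data so that it
    -- occupies only 2 segments (the redistribution step of stage 2).
    ALTER TABLE sales SHRINK TABLE TO 2;

    -- gp_segment_configuration is the Greenplum catalog of segments; once
    -- gpshrink finishes, the removed segments no longer appear here.
    SELECT dbid, content, role, hostname FROM gp_segment_configuration;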

Here are some reminders before you submit the pull request

  • Add tests for the change
  • Document changes
  • Communicate in the mailing list if needed
  • Pass make installcheck
  • Review a PR in return to support the community

Enhancement: Add gpshrink to support elastic scaling. With some editions by me. (#32)

reshke merged commit 2f8184d into OPENGPDB_STABLE on Nov 18, 2024
4 of 5 checks passed
reshke deleted the shrink branch on January 29, 2025 at 17:41