Fork Sync with main repository #139
Open: github-actions wants to merge 152 commits into ILoveAlpine:master from minio:master
readParts requires that both part.N and part.N.meta files be present. This change fixes how the error returned to the upper layers is picked from the majority of drives when an UploadPart operation has failed.
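As a rough illustration of the direction of that fix (not MinIO's actual quorum logic), picking the error to surface from per-drive results can be done by counting occurrences and returning the most common one; the function below is a hypothetical sketch:

```go
// reduceErrs returns the most frequent error among per-drive results,
// so a single odd drive does not decide what the upper layers see.
// Minimal sketch; MinIO's real error-reduction logic is more involved.
func reduceErrs(errs []error) error {
	counts := make(map[string]int)
	byMsg := make(map[string]error)
	for _, err := range errs {
		if err == nil {
			continue
		}
		counts[err.Error()]++
		byMsg[err.Error()] = err
	}
	var best error
	bestCount := 0
	for msg, n := range counts {
		if n > bestCount {
			best, bestCount = byMsg[msg], n
		}
	}
	return best // nil if all drives succeeded
}
```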
Rebalance metadata is only good to have; if it cannot be loaded when starting MinIO for some reason, we can ignore it, move on, and let the user start a rebalance again if needed.
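A minimal sketch of that best-effort startup behavior, where `loadRebalanceMeta` and `pool.rebalMeta` are hypothetical names standing in for the real code (assumes the standard `log` package is imported):

```go
// Best-effort load at startup: log and continue rather than failing
// server start. Names here are hypothetical, not MinIO's actual API.
if meta, err := loadRebalanceMeta(ctx); err != nil {
	log.Printf("rebalance metadata unavailable (%v); ignoring, start rebalance again if needed", err)
} else {
	pool.rebalMeta = meta
}
```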
Items are saved in per-target batches and committed to the queue store when the batch is full. Also, batched items are committed to the queue store periodically, based on the configured commit_timeout (default 30s). Bonus: compress queue-store multi-writes.
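The size-or-timeout batching pattern described above can be sketched as follows; the type, field, and `commit` callback are hypothetical, not MinIO's actual queue-store API:

```go
package example

import "time"

// batcher accumulates items for one target and commits them to the
// queue store when the batch is full or when commit_timeout elapses.
type batcher struct {
	items  []string
	max    int
	commit func(batch []string) // e.g. a compressed multi-write
}

// run drains ch, flushing on batch-full or on every ticker fire.
func (b *batcher) run(ch <-chan string, commitTimeout time.Duration) {
	t := time.NewTicker(commitTimeout) // default 30s
	defer t.Stop()
	for {
		select {
		case it, ok := <-ch:
			if !ok {
				b.flush()
				return
			}
			b.items = append(b.items, it)
			if len(b.items) >= b.max {
				b.flush()
			}
		case <-t.C:
			b.flush() // periodic commit even when the batch isn't full
		}
	}
}

func (b *batcher) flush() {
	if len(b.items) == 0 {
		return
	}
	b.commit(b.items)
	b.items = b.items[:0]
}
```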
Disable body recording for:
* admin inspect
* admin metrics
* profiling download

Also, if the recorded body is > 10MB, drop it.
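The size cap amounts to something like this (a sketch; the actual trace-recording code differs):

```go
const maxRecordedBody = 10 << 20 // 10 MiB

// keepBody decides whether a recorded request/response body is retained.
func keepBody(body []byte) []byte {
	if len(body) > maxRecordedBody {
		return nil // recorded body > 10MB: drop it
	}
	return body
}
```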
The current implementation retries forever until the log buffer is full and we start dropping events. This PR lets you set a limit after which we give up on existing audit/logger batches and proceed to process the new ones. Bonus:
- do not grow buffers beyond the batchSize value
- do not leak the ticker if the worker returns
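A sketch of the bounded-retry loop with proper ticker cleanup; `send`, `maxRetries`, and `retryEvery` are hypothetical names, and the snippet assumes `time` is imported:

```go
// worker sends batches with a bounded number of retries, then gives up
// on the batch so newer ones can proceed.
func worker(batches <-chan []byte, send func([]byte) error, maxRetries int, retryEvery time.Duration) {
	t := time.NewTicker(retryEvery)
	defer t.Stop() // do not leak the ticker when the worker returns
	for batch := range batches {
		for attempt := 0; send(batch) != nil; attempt++ {
			if attempt >= maxRetries {
				break // give up on this batch
			}
			<-t.C // wait before retrying
		}
	}
}
```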
AFAICT we send a canceled context to unlock (and thereby releaseAll), which causes the network calls to fail. Instead, use a background context with a 30s timeout.
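In Go terms the fix looks like detaching from the possibly-canceled caller context, roughly (the `releaseAll` call is a stand-in for the actual unlock path):

```go
// Detach from the caller's (possibly canceled) context so the unlock
// RPCs can still go over the network, but bound them with a timeout.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
releaseAll(ctx)
```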
This cache is honored only when `prefix=""` in the ListMultipartUploads() operation. This is mainly to satisfy applications like Alluxio for their underfs implementation and tests. Replaces #20181.
Don't hard-error on nonexistent LDAP entries; instead of logging them, report them via `mc`. Signed-off-by: Shubhendu Ram Tripathi <[email protected]>
postUpload() incorrectly saves the actual size as '-1'; we should save the correct size when possible. Bonus: fix the PutObjectPart() write locker; instead of holding a lock before we read the client stream, hold it only when we need to commit the parts.
Download static cURL into release Docker image for all supported architectures (#20424). Currently, the static cURL executable is only downloaded for the `amd64` architecture. However, `arm64` and `ppc64le` variants are also [available](https://github.com/moparisthebest/static-curl/releases/tag/v8.7.1).
Signed-off-by: Shubhendu Ram Tripathi <[email protected]>
When encryption and compression are both enabled, the server avoids compressing the data for no apparent reason. This commit enables it and updates the unit tests.
Currently, it is not possible to remove a tier if it is not accessible or contains some data; add a force flag to make the removal succeed in that case.
- PutObject() for multi-pool setups was holding large region locks unnecessarily. This affects almost all slow clients and lengthy uploads.
- Re-arrange locks for CompleteMultipart and PutObject to be close to rename().
Limit allocations from badly formed documents. Closes #20430.
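One common way to limit allocations from untrusted input (a sketch of the general technique, not necessarily the exact change in #20430) is to cap any length claimed by the document before allocating:

```go
const maxPrealloc = 1 << 20 // cap for any length claimed by the input

// allocFor sizes a buffer from an untrusted length field without letting
// a badly formed document force a huge allocation.
func allocFor(claimed int) []byte {
	if claimed < 0 || claimed > maxPrealloc {
		claimed = maxPrealloc // grow via append later if truly needed
	}
	return make([]byte, 0, claimed)
}
```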
Windows has been moved to community support only, remaining source-compile friendly.
A CheckParts call can take time to verify 10k parts of a single object on a single drive. To avoid the internal one-minute deadline of the single-handler RPC, this commit switches to a streaming RPC instead.
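Conceptually, the change replaces one large request/response with incremental results, which in Go often looks like streaming over a channel. This is a generic sketch, not MinIO's grid RPC API; `verifyPart` stands in for the real per-part check, and `context` is assumed imported:

```go
// checkPartsStream verifies parts one at a time and streams each result,
// so no single response needs to beat a one-minute handler deadline.
func checkPartsStream(ctx context.Context, parts []string, out chan<- error) {
	defer close(out)
	for _, p := range parts {
		select {
		case out <- verifyPart(p):
		case <-ctx.Done():
			return
		}
	}
}
```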
Bump golang.org/x/crypto in /docs/debugging/s3-verify (#20757). Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.23.0 to 0.31.0.
- [Commits](golang/crypto@v0.23.0...v0.31.0)
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
We do not need to hold the read locks at the higher layer before reading the body; instead, hold the read locks properly at the time of renamePart(), to protect against racy part overwrites competing with a concurrent completeMultipart().
Bump golang.org/x/crypto in /docs/debugging/inspect (#20760). Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.23.0 to 0.31.0.
- [Commits](golang/crypto@v0.23.0...v0.31.0)
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](golang/crypto@v0.29.0...v0.31.0)
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
… (#20776) If one object has many parts where all parts are readable but some parts are missing from some drives, the object can sometimes be un-healable, which is wrong. This commit avoids reading from drives that have a missing, corrupted, or outdated xl.meta. It also checks whether any part is unreadable, to avoid healing in that case.
Fixes #20781:
```
λ aws --endpoint-url http://127.0.0.1:9001 s3api list-parts --bucket testbucket --key test.testcompress --upload-id "ZDM0YzUwM2YtZWM1Zi00NWI2LTgxMzYtZTIwMGE3Yjc0Y2Y1LjYyMzgyMmFhLWU2N2QtNGUyYS04NDE1LWUzZDFlZmJmMWUyZHgxNzM0NjI1MjgyMDkyNzY4MDAw"
{
    "Parts": [
        {
            "PartNumber": 1,
            "LastModified": "2024-12-19T16:47:04.334000+00:00",
            "ETag": "\"7025f242f56479e06c435c0b500cdbb2\"",
            "Size": 2002
        }
    ],
    "ChecksumAlgorithm": "",
    "Initiator": {
        "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4",
        "DisplayName": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
    },
    "Owner": {
        "DisplayName": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4",
        "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
    },
    "StorageClass": "STANDARD"
}
```
(for whatever reason the python script generated a 2002 byte file ;)
Add workaround for a potential profiling crash. Using admin traces could crash the server (or, more likely, the handler) due to an upstream divide-by-zero: felixge/fgprof#34. Ensure the profile always runs for 100ms before stopping, so the sample count isn't 0 (default sample rate is ~10ms/sample, but allow for CPU starvation).
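The workaround amounts to enforcing a minimum runtime before stopping the profile, roughly like this (`startCPUProfile` is a hypothetical stand-in for the fgprof start call; assumes `time` is imported):

```go
start := time.Now()
stop := startCPUProfile()
// ... serve the trace/profile request ...
// Make sure at least 100ms elapse so the sample count cannot be zero
// (default sample rate ~10ms/sample; the slack allows for CPU starvation).
if d := time.Since(start); d < 100*time.Millisecond {
	time.Sleep(100*time.Millisecond - d)
}
stop()
```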
It is possible that a delete marker was received on the old pool while a decommission move was in progress. This PR allows a decommission retry to ensure these delete markers are moved to the new pool so that the decommission can complete. Fixes #20819
Signed-off-by: Andreas Auernhammer <[email protected]>
Before #20575, files could pick up indices from unrelated files if no index was added. This would result in these files not being consistent across a set. When loading, search for the compression indicators and check if they are within the problematic date range, and clean up any parts that have an index but shouldn't. The test validates that the signature matches the one in files stored without an index. Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
When compression is enabled, the final object size is not calculated. In that case, we need to make sure the provided buffer is always larger than the shard size; bitrot always calculates the hash of blocks at the shard size, except for the last block.
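The buffer-sizing rule can be expressed as a small guard; a sketch only, with hypothetical names:

```go
// bitrot hashes full shard-size blocks (only the last block may be
// shorter), so the staging buffer must hold at least one full shard.
func shardBuffer(buf []byte, shardSize int64) []byte {
	if int64(len(buf)) < shardSize {
		return make([]byte, shardSize)
	}
	return buf[:shardSize]
}
```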
Earlier, cluster and bucket metrics were both named `minio_usage_last_activity_nano_seconds`. The bucket-level metric is now named `minio_bucket_usage_last_activity_nano_seconds`. Signed-off-by: Shubhendu Ram Tripathi <[email protected]>
ListBuckets() would list buckets without quorum; this PR fixes that behavior.
Backport of AIStor PR 247. Adds support for full object checksums as described here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html. New checksum types are fully supported. Mint tests from minio/minio-go#2026 now pass. Includes fixes from #20743 for mint tests. Adds checksums as validation for object content. Fixes #20845, #20849. Fixes checksum replication (downstream PR 250).
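For context, a full object checksum covers the entire object stream, as opposed to a composite checksum of per-part checksums. A minimal, self-contained Go illustration of that semantics using only the standard library (not MinIO's implementation):

```go
package example

import (
	"hash/crc32"
	"io"
)

// fullObjectCRC32C hashes the whole object stream, illustrating the
// "full object" checksum type (vs. per-part composite checksums).
func fullObjectCRC32C(r io.Reader) (uint32, error) {
	h := crc32.New(crc32.MakeTable(crc32.Castagnoli))
	if _, err := io.Copy(h, r); err != nil {
		return 0, err
	}
	return h.Sum32(), nil
}
```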