
Fork Sync with main repository #139

Open
wants to merge 152 commits into base: master

Conversation

@github-actions github-actions bot commented Sep 6, 2024

Fork Sync with main repository

krisis and others added 30 commits September 6, 2024 03:51
readParts requires that both part.N and part.N.meta files be present.
This change fixes how the error returned to the upper layers is picked
from the majority of drives when an UploadPart operation has failed.
Rebalance metadata is only good to have; if it cannot
be loaded when starting MinIO for some reason, we can
ignore it, move on, and let the user start a rebalance
again if needed.
The items will be saved per target batch and will
be committed to the queue store when the batch is full.

Also, periodically commit the batched items to the queue store
based on the configured commit_timeout; the default is 30s.

Bonus: compress queue store multi writes
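
A minimal sketch of the batching pattern described above; the `Store` interface, `Item` type, and function names here are illustrative stand-ins, not MinIO's actual queue-store API:

```go
// Sketch: save items per batch and commit to the queue store when the
// batch is full, or when the commit_timeout ticker fires (default 30s).
package queue

import "time"

type Item []byte

// Store stands in for the queue store; PutMultiple commits a whole batch.
type Store interface {
	PutMultiple(items []Item) error
}

func batchWorker(in <-chan Item, store Store, batchSize int, commitTimeout time.Duration) {
	batch := make([]Item, 0, batchSize)
	ticker := time.NewTicker(commitTimeout)
	defer ticker.Stop()

	commit := func() {
		if len(batch) == 0 {
			return
		}
		_ = store.PutMultiple(batch) // error handling elided in this sketch
		batch = batch[:0]
	}

	for {
		select {
		case it, ok := <-in:
			if !ok {
				commit() // flush whatever is left, then exit
				return
			}
			batch = append(batch, it)
			if len(batch) >= batchSize { // batch full: commit now
				commit()
			}
		case <-ticker.C: // periodic commit based on commit_timeout
			commit()
		}
	}
}
```
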
Disable body recording for...

* admin inspect
* admin metrics
* profiling download

Also, if the recorded body is > 10MB, drop it.
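
A rough sketch of the body-size cap mentioned above; `cappedRecorder` is a hypothetical wrapper, not the actual tracing type:

```go
// Sketch: record a body while it is read, but drop the recording
// entirely once it exceeds the 10 MiB cap described above.
package trace

import (
	"bytes"
	"io"
)

const maxRecordedBody = 10 << 20 // 10 MiB

type cappedRecorder struct {
	r       io.Reader
	buf     bytes.Buffer
	dropped bool
}

func (c *cappedRecorder) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	if n > 0 && !c.dropped {
		c.buf.Write(p[:n])
		if c.buf.Len() > maxRecordedBody {
			c.buf.Reset() // over the cap: drop the recording
			c.dropped = true
		}
	}
	return n, err
}

// Body returns the recorded bytes, or nil if the cap was exceeded.
func (c *cappedRecorder) Body() []byte {
	if c.dropped {
		return nil
	}
	return c.buf.Bytes()
}
```
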
The current implementation retries forever until our
log buffer is full, and we start dropping events.

This PR allows you to set a deadline after which we
give up on existing audit/logger batches and proceed
to process the new ones.

Bonus:
 - do not blow up buffers beyond batchSize value
 - do not leak the ticker if the worker returns
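
A sketch of both ideas, assuming a channel-based worker; the actual logger target implementation differs, and the names here are illustrative:

```go
// Sketch: give up on handing off an existing batch after a configurable
// deadline, and stop the flush ticker when the worker returns.
package logger

import (
	"context"
	"time"
)

// sendBatch blocks at most giveUp before dropping the batch so the
// worker can proceed to process new ones.
func sendBatch(ctx context.Context, out chan<- []byte, batch []byte, giveUp time.Duration) bool {
	ctx, cancel := context.WithTimeout(ctx, giveUp)
	defer cancel()
	select {
	case out <- batch:
		return true
	case <-ctx.Done(): // deadline hit: drop this batch
		return false
	}
}

func worker(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop() // the "do not leak the ticker" bonus above
	for {
		select {
		case <-ticker.C:
			// flush the current batch here...
		case <-ctx.Done():
			return
		}
	}
}
```
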
AFAICT we send a canceled context to unlock (and thereby releaseAll). This will cause network calls to fail.

Instead, use a background context and add a 30s timeout.
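
In Go terms, the fix looks roughly like this; the `Unlocker` interface is a stand-in for the distributed locker, not MinIO's real type:

```go
// Sketch: the release path must not inherit the caller's (possibly
// canceled) context, or every unlock RPC fails; detach and bound it.
package locks

import (
	"context"
	"time"
)

type Unlocker interface {
	Unlock(ctx context.Context) error
}

func releaseAll(l Unlocker) error {
	// Fresh background context with its own 30s deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	return l.Unlock(ctx)
}
```
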
This cache will be honored only when `prefix=""` while
performing the ListMultipartUploads() operation.

This is mainly to satisfy applications like Alluxio
for their underfs implementation and tests.

replaces #20181
Don't hard-error for nonexistent LDAP entries; instead of logging them,
report them via `mc`.

Signed-off-by: Shubhendu Ram Tripathi <[email protected]>
postUpload() incorrectly saves the actual size as '-1';
we should save the correct size when possible.

Bonus: fix the PutObjectPart() write locker; instead
of holding a lock before we read the client stream,
we should hold it only when we need to commit the parts.
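
A simplified sketch of that lock-scope change; MinIO streams parts to disk rather than buffering them whole, so this only illustrates the ordering:

```go
// Sketch: do the slow client I/O without the lock, and take the write
// lock only around the commit of the part.
package multipart

import (
	"io"
	"sync"
)

var partsMu sync.Mutex // stands in for the per-object write locker

func putObjectPart(r io.Reader, commit func(data []byte) error) error {
	data, err := io.ReadAll(r) // read the client stream unlocked
	if err != nil {
		return err
	}
	partsMu.Lock() // lock held only while committing the part
	defer partsMu.Unlock()
	return commit(data)
}
```
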
Download static cURL into release Docker image for all supported architectures (#20424)

Currently, the static cURL executable is only downloaded for the `amd64` architecture. However, `arm64` and `ppc64le` variants are also [available](https://github.com/moparisthebest/static-curl/releases/tag/v8.7.1).
When encryption and compression are both enabled, the
server avoids compressing the data for no apparent reason.

This commit enables it and updates the unit tests.
Currently, it is not possible to remove a tier if it is not accessible
or contains some data. Add a force flag to make the removal succeed
in that case.
- PutObject() for multi-pooled setups was holding large
  region locks, which was not necessary. This affects
  almost all slowpoke clients and lengthy uploads.

- Re-arrange locks for CompleteMultipart and PutObject
  to be close to rename()
Closes #20430

Limit allocations from badly formed documents.
Windows is now community-supported only, and builds are
source-compile friendly.
vadmeste and others added 30 commits December 12, 2024 11:20
A CheckParts call can take time to verify 10k parts of a single object on a single drive.
To avoid the internal deadline of one minute in the single-handler RPC, this commit
switches to a streaming RPC instead.
Bump golang.org/x/crypto in /docs/debugging/s3-verify (#20757)

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.23.0 to 0.31.0.
- [Commits](golang/crypto@v0.23.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
We do not need to hold the read locks at the higher
layer before reading the body; instead, hold the read
locks properly at the time of renamePart(), to protect
against racy part overwrites competing with a concurrent
completeMultipart().
Bump golang.org/x/crypto in /docs/debugging/inspect (#20760)

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.23.0 to 0.31.0.
- [Commits](golang/crypto@v0.23.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](golang/crypto@v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
… (#20776)

If one object has many parts where all parts are readable but some parts
are missing from some drives, this object can sometimes be un-healable,
which is wrong.

This commit avoids reading from drives that have a missing, corrupted, or
outdated xl.meta. It also checks whether any part is unreadable, to avoid
healing in that case.
Fixes #20781:

```
λ aws --endpoint-url http://127.0.0.1:9001 s3api list-parts --bucket testbucket --key test.testcompress --upload-id "ZDM0YzUwM2YtZWM1Zi00NWI2LTgxMzYtZTIwMGE3Yjc0Y2Y1LjYyMzgyMmFhLWU2N2QtNGUyYS04NDE1LWUzZDFlZmJmMWUyZHgxNzM0NjI1MjgyMDkyNzY4MDAw"
{
    "Parts": [
        {
            "PartNumber": 1,
            "LastModified": "2024-12-19T16:47:04.334000+00:00",
            "ETag": "\"7025f242f56479e06c435c0b500cdbb2\"",
            "Size": 2002
        }
    ],
    "ChecksumAlgorithm": "",
    "Initiator": {
        "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4",
        "DisplayName": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
    },
    "Owner": {
        "DisplayName": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4",
        "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
    },
    "StorageClass": "STANDARD"
}
```

(for whatever reason, the Python script generated a 2002-byte file ;)
Add workaround for potential profiling crash

Using admin traces could potentially crash the server (or, more likely, the handler) due to an upstream divide-by-zero: felixge/fgprof#34

Ensure the profile always runs at least 100ms before stopping, so the sample count isn't 0 (the default sample rate is ~10ms/sample, but allow for CPU starvation).
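
A sketch of that minimum-duration guard using the standard runtime/pprof API; this is not MinIO's actual profiler plumbing, just the idea:

```go
// Sketch: pad short (or CPU-starved) profile runs out to 100ms so the
// sample count is never 0, avoiding the upstream divide-by-zero.
package profiling

import (
	"bytes"
	"runtime/pprof"
	"time"
)

const minProfileTime = 100 * time.Millisecond

func captureCPUProfile(d time.Duration) ([]byte, error) {
	var buf bytes.Buffer
	start := time.Now()
	if err := pprof.StartCPUProfile(&buf); err != nil {
		return nil, err
	}
	time.Sleep(d)
	if elapsed := time.Since(start); elapsed < minProfileTime {
		time.Sleep(minProfileTime - elapsed) // enforce the 100ms floor
	}
	pprof.StopCPUProfile()
	return buf.Bytes(), nil
}
```
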
It is possible that a delete marker was received on the old pool while a
decommission move was in progress. This PR allows the decommission retry
to ensure these delete markers are moved to the new pool so that
decommissioning can be completed.

Fixes #20819
Before #20575, files could pick up indices 
from unrelated files if no index was added.

This would result in these files not being consistent across a set.

When loading, search for the compression indicators and check if they 
are within the problematic date range, and clean up any parts that have 
an index but shouldn't.

The test validates that the signature matches the one in files stored without an index.

Bumps xlMetaVersion, so this check doesn't have to be made for future versions.
When compression is enabled, the final object size is not calculated. In
that case, we need to make sure that the provided buffer is always
larger than the shard size; bitrot will always calculate the hash of
shard-size blocks, except for the last block.
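
Roughly, with illustrative values (MinIO's actual block and shard-size computation lives in its erasure-coding layer and may differ):

```go
// Sketch: bitrot hashes full shard-size blocks (only the last block may
// be shorter), so any staging buffer must cover at least one shard.
package erasure

const blockSize = 1 << 20 // illustrative 1 MiB erasure block

func shardSize(dataBlocks int) int {
	return (blockSize + dataBlocks - 1) / dataBlocks // ceiling division
}

func ensureBufferSize(buf []byte, dataBlocks int) []byte {
	if ss := shardSize(dataBlocks); cap(buf) < ss {
		return make([]byte, ss) // grow: must hold one full shard
	}
	return buf[:cap(buf)]
}
```
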
Earlier, cluster and bucket metrics were both named
`minio_usage_last_activity_nano_seconds`.

The bucket-level metric is now named
`minio_bucket_usage_last_activity_nano_seconds`

Signed-off-by: Shubhendu Ram Tripathi <[email protected]>
ListBuckets() would result in listing buckets
without quorum; this PR fixes that behavior.
Backport of AIStor PR 247.

Add support for full object checksums as described here:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

New checksum types are fully supported. Mint tests from minio/minio-go#2026 are now passing.

Includes fixes from #20743 for mint tests.

Add support for using checksums to validate object content. Fixes #20845 #20849

Fixes checksum replication (downstream PR 250)
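
For reference, a full-object checksum of the kind this feature validates can be computed with the standard library alone; S3 encodes checksums as base64 of the big-endian digest bytes (CRC32C shown here):

```go
// Sketch: compute an S3-style full-object CRC32C checksum.
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

func fullObjectCRC32C(data []byte) string {
	sum := crc32.Checksum(data, crc32.MakeTable(crc32.Castagnoli))
	var b [4]byte
	binary.BigEndian.PutUint32(b[:], sum)
	return base64.StdEncoding.EncodeToString(b[:])
}

func main() {
	fmt.Println(fullObjectCRC32C([]byte("hello, world")))
}
```
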