SN starts refusing to save some objects after MaxObjectSize increase #2910

Open
cthulhu-rider opened this issue Aug 8, 2024 · 1 comment
Labels
bug, I4 No visible changes, neofs-storage, S4 Routine, U2 Seriously planned
Milestone
v0.44.0

Comments

cthulhu-rider (Contributor) commented on Aug 8, 2024

The `MaxObjectSize` network setting limits the payload size of physically stored NeoFS objects; storage nodes deny bigger ones. The default value is 64 MiB, and it may be increased by the IR.

Expected Behavior

Objects not bigger than the current setting are always accepted.

Current Behavior

Objects with payload bigger than the start-time limit are denied even after the setting is increased at runtime.

Possible Solution

Listen for network config changes and tune the server accordingly (a sketch of the idea follows).
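
A rough sketch of that idea, with hypothetical names (`onNetConfigUpdate`, `recvLimit` and the framing overhead are illustrative, not taken from the node's code): keep the limit in an atomic value that every refresh of the cached network config may raise. Note that stock grpc-go fixes `MaxRecvMsgSize` when the server is constructed, so actually applying the new value needs either a change on the gRPC side or rebuilding the server.

```go
package main

import (
	"log"
	"sync/atomic"
)

// recvLimit holds the gRPC receive limit currently in force.
var recvLimit atomic.Int64

// onNetConfigUpdate is a hypothetical hook invoked whenever the node
// refreshes its cached network settings.
func onNetConfigUpdate(maxObjectSize uint64) {
	const msgOverhead = 1 << 20 // illustrative allowance for request framing
	newLimit := int64(maxObjectSize) + msgOverhead

	if old := recvLimit.Load(); newLimit > old {
		recvLimit.Store(newLimit)
		// Stock grpc-go fixes MaxRecvMsgSize at server construction, so
		// this stored value still has to be pushed into the server
		// (via a patched option or by rebuilding the server).
		log.Printf("raising gRPC receive limit: %d -> %d", old, newLimit)
	}
}

func main() {
	onNetConfigUpdate(64 << 20)  // initial L1
	onNetConfigUpdate(256 << 20) // increased to some L2 > L1 by the IR
}
```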

Steps to Reproduce (for bugs)

See the test. Briefly:

  1. start a network with object payload limit L1 = 64 MiB
  2. increase the limit to some L2 > L1
  3. try to store an object with payload size greater than L1 (but not greater than L2)

Context

// TODO(@cthulhu-rider): max object size setting may change in general,
// but server configuration is static now
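
For reference, a minimal sketch of the static setup that TODO describes, assuming hypothetical helper names (`fetchMaxObjectSize`, the overhead constant) rather than the node's real code: the receive limit is derived from `MaxObjectSize` exactly once, when the gRPC server is built, so a later increase of the setting never reaches the running server.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

// fetchMaxObjectSize stands in for reading the MaxObjectSize network
// setting from the Netmap contract at startup; 64 MiB is the default
// mentioned above.
func fetchMaxObjectSize() uint64 { return 64 << 20 }

func main() {
	// Computed once and never revisited, which is the bug's root cause.
	const msgOverhead = 1 << 20 // illustrative allowance for headers/framing
	limit := int(fetchMaxObjectSize()) + msgOverhead

	srv := grpc.NewServer(grpc.MaxRecvMsgSize(limit))

	lis, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	// Requests whose serialized size exceeds `limit` are rejected by
	// grpc-go with a ResourceExhausted status before reaching any handler.
	log.Fatal(srv.Serve(lis))
}
```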

Regression

no

Your Environment

cthulhu-rider added the bug and neofs-storage labels on Aug 8, 2024
roman-khimov added this to the v0.43.0 milestone on Aug 8, 2024
roman-khimov added the U2 Seriously planned, S4 Routine, and I4 No visible changes labels on Aug 8, 2024
cthulhu-rider added a commit that referenced this issue Aug 8, 2024
Since the storage node serves the `ObjectService.Replicate` RPC, the gRPC server
must be able to accept the biggest allowed object. Previously, the node
calculated the global message limit for the gRPC server once on startup.
With this behavior, when the `MaxObjectSize` network setting was increased,
the node stopped accepting object writes larger than the previous limit.
This manifested itself as a denial of the replication service.

From now on, the storage node updates the maximum received gRPC message size
(if needed) on each refresh of the `MaxObjectSize` setting cache and via the
Netmap contract polling done once per minute.

Refs #2910.

Signed-off-by: Leonard Lyubich <[email protected]>
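
A rough sketch of the polling half of that change (the `netmapSource` interface and function names are assumptions, not the node's actual API): re-read the setting on a one-minute ticker and raise the stored limit only when it actually grew.

```go
package server

import (
	"context"
	"log"
	"sync/atomic"
	"time"
)

// netmapSource abstracts whatever component reads MaxObjectSize from the
// Netmap contract; the interface is hypothetical.
type netmapSource interface {
	MaxObjectSize(ctx context.Context) (uint64, error)
}

// pollMaxObjectSize refreshes the stored limit once per minute, mirroring
// the "polling done once per minute" described in the commit.
func pollMaxObjectSize(ctx context.Context, src netmapSource, limit *atomic.Uint64) {
	t := time.NewTicker(time.Minute)
	defer t.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			size, err := src.MaxObjectSize(ctx)
			if err != nil {
				log.Printf("read MaxObjectSize: %v", err)
				continue
			}
			// Update only "if needed": already-allowed objects stay
			// allowed, so the limit is only ever raised here.
			if cur := limit.Load(); size > cur {
				limit.Store(size)
				log.Printf("MaxObjectSize raised: %d -> %d", cur, size)
			}
		}
	}
}
```

The other trigger mentioned in the commit, refreshing on every `MaxObjectSize` cache update, would feed the same limit holder.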
roman-khimov modified the milestones: v0.43.0 → v0.44.0 on Aug 12, 2024
roman-khimov (Member) commented:

Please upstream your grpc changes and let's see how it goes there.
