node/object: Optimize memory allocs for objects with known payload size
Previously, storage nodes slicing a user data stream into NeoFS objects did not take the payload size into account even when it was specified in the request. Since any NeoFS cluster has a system parameter limiting the maximum size of a physically stored object, the node always allocated a buffer equal to that value. For "large" objects this was acceptable and even optimal, but the smaller the amount of data being sliced, the more redundant memory was allocated.

According to the protocol, the size of the stored data may be unknown, so the node cannot always rely on knowing the size of the streamed data. However, when this parameter is specified in the request, nothing prevents the node from using it to allocate a buffer that is just large enough. In particular, this significantly reduces memory consumption during mass streaming of "small" data.

The updated version of the NeoFS SDK library supports a previously known amount of data out of the box: all the application needs to do is set the `SetPayloadSize` slicing option. It is hard to imagine a simpler approach.

There is one nuance, though. According to the NeoFS API protocol, a zero header field is an explicit declaration of an object without a payload, while an unspecified size is passed as all bits set in the field. For a long time, however, NeoFS nodes treated any value, including zero, as a dynamic size. Since zero may also mean a "forgotten" field, we cannot suddenly start interpreting it as an object without a payload. Therefore, to keep the update safe, only non-zero values other than the maximum possible one are treated as a size set by the user.

Overall, this optimization does not cover all cases, but for some of them it significantly increases performance and efficiency.

Refs #2686.

Signed-off-by: Leonard Lyubich <[email protected]>
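
For illustration only, a minimal sketch of the buffer-sizing rule described above; the helper name `pickPayloadBufferSize` and the constant values are hypothetical and not taken from the actual node code:

```go
package main

import (
	"fmt"
	"math"
)

// pickPayloadBufferSize illustrates the rule from this commit: the header's
// payload size field is trusted only when it is non-zero and not the
// "unknown" sentinel (all bits set). Otherwise the node falls back to the
// cluster-wide maximum object size, as before.
func pickPayloadBufferSize(headerPayloadSize, maxObjectSize uint64) uint64 {
	switch {
	case headerPayloadSize == 0:
		// Zero may be a "forgotten" field rather than an explicit empty
		// payload, so keep the old conservative behavior.
		return maxObjectSize
	case headerPayloadSize == math.MaxUint64:
		// All bits set: the size is explicitly unknown.
		return maxObjectSize
	case headerPayloadSize > maxObjectSize:
		// Larger than one physical object: the stream will be sliced,
		// so a buffer of the maximum object size is still needed.
		return maxObjectSize
	default:
		// The size is known and small enough: allocate exactly that much.
		return headerPayloadSize
	}
}

func main() {
	const maxObjectSize = 64 << 20 // e.g. a 64 MiB cluster limit

	fmt.Println(pickPayloadBufferSize(4<<10, maxObjectSize))          // 4096: small known payload
	fmt.Println(pickPayloadBufferSize(0, maxObjectSize))              // zero treated as unknown
	fmt.Println(pickPayloadBufferSize(math.MaxUint64, maxObjectSize)) // explicitly unknown
}
```

On the client side, as noted above, setting the SDK's `SetPayloadSize` slicing option is enough to propagate the known size to the node.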