Hello.
Our project uses zip4j v2.11.5.
We encountered an issue when using the library to create split zip files.
We have noticed a bug where, if the size of the zip header metadata is greater than the split part size (for instance, 816 KB > 512 KB), the API is unable to write the headers into the final zip part and loops indefinitely until the JVM throws an OutOfMemoryError.
We can work around this bug by increasing the split part size, but our client asks for a more robust solution, as they plan to process very large quantities of data and need a scalable approach.
Is it technically possible to fix this behaviour in the library?
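For reference, a minimal sketch of the kind of call that triggers the behaviour (paths and sizes here are illustrative, not our production code; the split length of 512 KB is smaller than the ~816 KB of header metadata in our failing case):

```java
import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.model.ZipParameters;

import java.io.File;

public class SplitZipRepro {
    public static void main(String[] args) throws Exception {
        // Hypothetical input folder; with enough entries, the zip header
        // metadata alone can exceed the configured split part size.
        File input = new File("data/large-dataset");

        try (ZipFile zipFile = new ZipFile("output/archive.zip")) {
            // Split length of 512 KB (512 * 1024 bytes). When the headers
            // no longer fit in the final part, the call never returns and
            // the JVM eventually runs out of memory.
            zipFile.createSplitZipFileFromFolder(
                    input, new ZipParameters(), true, 512 * 1024);
        }
    }
}
```

Increasing the last argument (the split length) makes the problem disappear for a given dataset, which is the workaround mentioned above, but the threshold depends on the number of entries, so it does not scale.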