Hello. Since ownership of the original crate was transferred in April, a number of performance regressions have been introduced into the codebase. One such regression (#231) makes the crate thousands of times slower at opening zip files than before. This issue was reported more than two months ago, and the new maintainer has not responded since. Given the quality of the added code that caused the issue, I am not confident the new code received the review it should have. I have the following questions for @Pr0methean:
What criteria do you have for accepting new code into this crate? How do you ensure bad actors are not submitting malicious code? Do you consider order-of-magnitude performance regressions a DoS attack surface?
How are you going to resolve the performance issues introduced by the recent updates?
What vision do you have for the future of this library? What features do you have planned?
I intend to maintain the crate to the extent I'm able, but I really need someone else to start committing time as well.
There are too many possible configurations of feature flags and use cases (e.g. large files, many files, deeply nested directories, ZIP32, ZIP64, obscure ZIP features) for automated performance testing to be feasible. Even if I had data on what specific use cases were most popular, it'd be a lot of tests to write for a one-man hobby project that's also had plenty of bugs and feature requests.
The criteria I use to evaluate PRs are:
Does this change provide a missing feature, fix a bug and add a regression test, or obviously improve performance?
Are there ways to improve it that are obvious to me, but that haven't been done?
Do the existing unit tests and fuzz tests pass, and still test what they're designed to test?
If new functionality is added, is it tested?
When logic is changed, is it in compliance with the APPNOTE?
Do all the changes make sense to me?
I'm still working on overhauling the automation of functional testing (see the afl branch) so that pull requests that pass all tests can be merged without timing out in the merge queue. With a vacation from work coming up, I hope to have more time to devote to this project, but that likely won't be enough in the long run.