I know the crate is specifically for multi-threaded encoding/decoding.
I have managed to get sub-millisecond encoding per image for my use case of encoding hundreds of small PNG files concurrently, and I would like to use mtpng to have low-level control over indexed PNGs with transparency.
However, server-side I do not want to use many threads for each request. Server throughput matters more than time per request, so I think encoding on the current thread would be the best way to do this.
I have looked at the code to see how easy it would be to make rayon a default (optional) dependency, so that consumers could opt out with default-features = false. However, I don't understand the code well enough to remove the multithreading part in encoder.rs.
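For what it's worth, here is a generic sketch of the kind of feature gating I have in mind, where the rayon-dependent path lives behind an optional "rayon" feature and falls back to a serial loop on the current thread. None of these names come from mtpng; `compress_chunks` and `compress_one` are made up purely to illustrate the pattern:

```rust
// Generic sketch of gating rayon behind an optional Cargo feature.
// These functions are hypothetical, not mtpng's actual internals.

#[cfg(feature = "rayon")]
fn compress_chunks(chunks: &mut [Vec<u8>]) {
    use rayon::prelude::*;
    // Feature enabled: chunks are processed in parallel on the pool.
    chunks.par_iter_mut().for_each(compress_one);
}

#[cfg(not(feature = "rayon"))]
fn compress_chunks(chunks: &mut [Vec<u8>]) {
    // Feature disabled: the same work runs serially on the current thread.
    chunks.iter_mut().for_each(compress_one);
}

fn compress_one(_chunk: &mut Vec<u8>) {
    // Placeholder for the per-chunk filtering + deflate work.
}
```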
Also, I'm not even sure there would be a significant performance gain over:
let pool = rayon::ThreadPoolBuilder::new().num_threads(1).build().unwrap();
(except that creating a thread pool per request seems like a bad idea)
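If the explicit-pool route turns out to be good enough, one way to avoid that per-request cost would be to build a single one-thread pool once and reuse it for every request. A minimal sketch, assuming std::sync::OnceLock is available (Rust 1.70+); how the pool then gets handed to the encoder is left out here:

```rust
use std::sync::OnceLock;
use rayon::ThreadPool;

// Build a one-thread rayon pool once and reuse it for every request,
// instead of constructing a new pool per request.
fn single_thread_pool() -> &'static ThreadPool {
    static POOL: OnceLock<ThreadPool> = OnceLock::new();
    POOL.get_or_init(|| {
        rayon::ThreadPoolBuilder::new()
            .num_threads(1)
            .build()
            .expect("failed to build rayon thread pool")
    })
}
```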
I'd like to get feedback on this; it could also be useful for the WASM issue #13.
Hmmmmmm, well honestly it should be possible to gate this behind a platform check or feature flag, and it may indeed be useful to have a common API between the threaded and non-threaded versions. I'll keep this in mind for my upcoming refactor. :)
I'm planning to move the deflate compression portion out to a separate library, so that's a great chance to do some cleanup on the guts while I'm moving it around. :D