API overhaul #26
Merged
Conversation
scottlamb force-pushed the pr-api branch 6 times, most recently from f04a8c3 to 3b0871f on June 12, 2021 01:26
BitRead::read_unary1 is faster than count_zeros. Add test for overflow.
* make the bit reader type take a `BufRead` rather than a slice so we don't have to keep a buffered copy of the RBSP.
* reduce "stuttering" by taking the module name out of the struct name.
* use a trait so there are fewer type bounds for callers to deal with.
* take a name in all `BitReader` operations. This will improve error messages and trace logs/`println` debugging.
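The trait-based, named-operation design described above might look something like this minimal sketch. All names here (`BitRead`, `SliceBitReader`, `BitError`) are invented for illustration and are not necessarily the crate's actual API; the point is that every read carries the name of the syntax element being parsed, so failures can report it.

```rust
/// Illustrative error type: carries the name of the field being read.
#[derive(Debug, PartialEq)]
struct BitError {
    name: &'static str,
}

/// Hypothetical trait in the spirit described above: every operation
/// takes a `name` so errors and trace logs can identify the field.
trait BitRead {
    fn read_bool(&mut self, name: &'static str) -> Result<bool, BitError>;
    fn read_u8(&mut self, bits: u32, name: &'static str) -> Result<u8, BitError>;
}

/// Minimal MSB-first reader over a byte slice, for illustration only.
struct SliceBitReader<'a> {
    data: &'a [u8],
    pos: usize, // absolute bit position
}

impl<'a> BitRead for SliceBitReader<'a> {
    fn read_bool(&mut self, name: &'static str) -> Result<bool, BitError> {
        let byte = *self.data.get(self.pos / 8).ok_or(BitError { name })?;
        let bit = (byte >> (7 - (self.pos % 8))) & 1;
        self.pos += 1;
        Ok(bit == 1)
    }

    fn read_u8(&mut self, bits: u32, name: &'static str) -> Result<u8, BitError> {
        let mut v = 0u8;
        for _ in 0..bits {
            v = (v << 1) | (self.read_bool(name)? as u8);
        }
        Ok(v)
    }
}

fn main() {
    let mut r = SliceBitReader { data: &[0b1010_0001], pos: 0 };
    assert_eq!(r.read_bool("forbidden_zero_bit"), Ok(true));
    assert_eq!(r.read_u8(2, "nal_ref_idc"), Ok(0b01));
    // A truncated input yields an error that names the failing field:
    let mut short = SliceBitReader { data: &[], pos: 0 };
    assert_eq!(
        short.read_bool("nal_unit_type"),
        Err(BitError { name: "nal_unit_type" })
    );
}
```

Generic callers can take `impl BitRead` (or `R: BitRead`) instead of a concrete reader type, which is the "fewer type bounds" win the commit message mentions.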
My goal is to establish a good baseline for the performance impact of upcoming API changes.
* include a complete usage of NalSwitch, RbspDecoder, and the NAL parsers. The slice header parse will stop decoding RBSP after it's gotten a full slice header.
* parse in one push, in 184-byte pushes (as in MPEG-TS), and in 1440-byte pushes (roughly typical for RTP). RTP doesn't even use Annex B, but it still needs RBSP decoding and NAL parsing.
I haven't removed RbspDecoder or adjusted decode_nal yet, and the code is a little ugly as a result. Seems to work though.
It now more closely matches NalAccumulator's interface. Getting ready to plug that in.
No more need to deal with a user context, Box<RefCell<>>, or separate traits for the various handlers. In the simplest case, just a closure will do. This is essentially performance-neutral by itself. It allows RBSP parsing to be lazy though, and after the next commit that will pretty significantly speed up the case where slice NALs are processed in a single push.
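The closure-based handler style described above can be illustrated with a standalone sketch. This `Accumulator` type and its methods are invented for this example (they are not the crate's `NalAccumulator` API); the point is that the closure captures whatever state the caller needs, so no user-context parameter or `Box<RefCell<>>` is required.

```rust
/// Hypothetical push-style accumulator: buffers bytes of one NAL unit
/// and hands the completed unit to a caller-supplied closure.
struct Accumulator<F: FnMut(&[u8])> {
    buf: Vec<u8>,
    on_nal: F,
}

impl<F: FnMut(&[u8])> Accumulator<F> {
    fn new(on_nal: F) -> Self {
        Accumulator { buf: Vec::new(), on_nal }
    }

    /// Append a chunk of the current NAL unit (data may arrive in pieces).
    fn push(&mut self, data: &[u8]) {
        self.buf.extend_from_slice(data);
    }

    /// Mark the end of the NAL unit and invoke the handler.
    fn end(&mut self) {
        (self.on_nal)(&self.buf);
        self.buf.clear();
    }
}

fn main() {
    let mut count = 0;
    {
        // The closure mutably captures `count`: no separate user-context
        // struct, no shared-ownership wrappers.
        let mut acc = Accumulator::new(|nal: &[u8]| {
            count += 1;
            assert_eq!(nal, &[0x67, 0x42]);
        });
        acc.push(&[0x67]); // NAL arrives split across two pushes...
        acc.push(&[0x42]);
        acc.end(); // ...and is delivered whole.
    }
    assert_eq!(count, 1);
}
```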
* remove `RbspDecoder` in favor of `ByteReader`.
* return `ErrorKind::InvalidData` on illegal byte sequences. This interface is straightforward now with the `std::io` interface; I held off until removing the `RbspDecoder` interface because doing it there was ugly.
* re-implement `decode_nal` on top of `ByteReader`. Behavior change: it now strips the NAL header byte, which I think makes it easier to use. Also replace the unit test with a doctest to better explain what it does.
* don't look as far ahead for the next zero byte. This speeds things up in general, but particularly in the case where a push has a full slice NAL and we're only interested in the header.
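For context on what RBSP decoding does here: H.264 inserts an emulation-prevention byte `0x03` after every `0x00 0x00` pair within a NAL unit so the payload can't be mistaken for an Annex B start code; decoding strips those bytes back out. The following is a minimal standalone sketch of that transformation, not the crate's `ByteReader` implementation (which streams lazily rather than allocating a `Vec`).

```rust
/// Remove H.264 emulation-prevention bytes: each 0x00 0x00 0x03
/// sequence in the encoded payload decodes to 0x00 0x00.
fn decode_rbsp(nal_payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(nal_payload.len());
    let mut zeros = 0;
    for &b in nal_payload {
        if zeros >= 2 && b == 0x03 {
            // Drop the emulation-prevention byte; the zero run is over.
            zeros = 0;
            continue;
        }
        if b == 0x00 {
            zeros += 1;
        } else {
            zeros = 0;
        }
        out.push(b);
    }
    out
}

fn main() {
    // 00 00 03 01 decodes to 00 00 01 (the 0x03 is stripped).
    assert_eq!(decode_rbsp(&[0x00, 0x00, 0x03, 0x01]), vec![0x00, 0x00, 0x01]);
    // Data without the escape sequence passes through unchanged.
    assert_eq!(decode_rbsp(&[0x40, 0x01, 0x02]), vec![0x40, 0x01, 0x02]);
}
```

A streaming version of this (exposed through `std::io::Read`/`BufRead`, as the commit messages describe) avoids materializing the output buffer at all when the caller only wants a prefix such as a slice header.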
* return an error rather than panic for an unimplemented B frame
* set limits on lengths passed to `Vec::with_capacity`
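The `Vec::with_capacity` hardening mentioned above can be sketched as follows. The helper name and the cap value are illustrative, not taken from the crate; the idea is that a length field read from an untrusted bitstream should not be trusted to size an up-front allocation.

```rust
/// Illustrative cap on pre-allocation; the real limit would be chosen
/// per use site based on what's plausible for that field.
const MAX_PREALLOC: usize = 1 << 16;

/// Pre-allocate at most MAX_PREALLOC elements. The Vec can still grow
/// beyond the cap if pushes actually happen; we just refuse to trust
/// a bitstream-supplied length for the initial allocation.
fn vec_with_limited_capacity<T>(requested: usize) -> Vec<T> {
    Vec::with_capacity(requested.min(MAX_PREALLOC))
}

fn main() {
    // A corrupt length field no longer triggers a huge (or aborting)
    // allocation:
    let v: Vec<u8> = vec_with_limited_capacity(usize::MAX);
    assert!(v.is_empty());
    // Reasonable requests are honored as usual:
    let w: Vec<u8> = vec_with_limited_capacity(10);
    assert!(w.capacity() >= 10);
}
```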
for #4
I think this is a lot more straightforward to use. It's also faster. Looking at the benchmarks, throughputs in GiB/s: