Playback positions and audio device timing information. #279
Is there any way to count samples being played on an output stream, which I guess would be a direct mapping to the timing information? I'd be interested in working on this feature, since I need it for a music visualizer I am building (need A/V sync for visualizations). What kind of API did you have in mind? |
I don’t have a concrete idea yet, but I think GStreamer’s timer APIs can be used as a good reference. |
I'll take a look at it and try implementing something similar here. I'm pretty new to Rust, so a review once I'm done would be greatly appreciated. |
Just a heads up that #301 is a large change to the interface; you may need to do a big rebase later. If you have any questions about the code, don't hesitate to ask here! |
In general, it would be very useful to have a way to correlate samples with time, and to be able to tell (or at least estimate) when input samples were recorded by the ADCs, when the audio callback was called, and when output samples will be played by the DACs, against a common clock. This allows syncing input streams (and output streams) with each other, and playing latency compensation tricks in interactive applications such as MIDI-controlled virtual instruments. |
After taking a look at #301, I think I will wait for that work to be complete before I start working on this. I'm not sure if this would work on all platforms, but it seems to me that every time a particular API produces/requests some additional samples, we could increment a new clock. One question I had is how accurate this clock would be. My assumption is that the best we can do is increment the clock every time the underlying API produces/requests samples, so its resolution depends on the number of samples per callback (e.g. at 44100 Hz with a buffer size of 64, we would update the clock roughly every 1.5 ms). Depending on the use case, this could be fast enough. I have 3 questions regarding this:
@HadrienG2 Do you have an idea of how we would make this work? It seems like input and output streams (in the new non-blocking API) are quite separate. The GStreamer API does this through the help of pipelines, but I'm not sure cpal in its current form will support this. Perhaps you can build something on top of whatever interface we provide for this ticket? @ishitatsuyuki In general, does my approach make sense? |
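To make the counter-based approach above concrete, here is a minimal sketch (names and structure are my own, not part of cpal): a clock that is advanced from the audio callback by the number of frames the backend just produced/requested, and converted to stream time on demand. Its resolution is one buffer, as discussed.

```rust
/// Sketch of a counter-based stream clock: advance it from the audio
/// callback each time the backend produces/requests frames, and derive
/// the stream time from the accumulated frame count.
struct StreamClock {
    sample_rate: u32,
    frames_elapsed: u64, // total frames exchanged with the backend so far
}

impl StreamClock {
    fn new(sample_rate: u32) -> Self {
        Self { sample_rate, frames_elapsed: 0 }
    }

    /// Called from the audio callback with the number of frames in this wakeup.
    fn advance(&mut self, frames: u64) {
        self.frames_elapsed += frames;
    }

    /// Stream time in nanoseconds. Resolution is one buffer: at 44100 Hz
    /// with 64-frame buffers, the clock ticks roughly every 1.45 ms.
    fn now_nanos(&self) -> u64 {
        self.frames_elapsed * 1_000_000_000 / self.sample_rate as u64
    }
}
```

Note this clock only tracks how much audio has been handed to the backend, not when it reaches the DAC; as the next comment points out, those need separate handling.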
First I want to note that sample buffering and playback positions should be handled separately - callbacks suffer from scheduler drift, for example, and obviously not all APIs provide very small buffer sizes. The strategy would basically be to ask the underlying API directly for timestamps (snd_pcm_delay in ALSA), or, if it doesn't provide them, to calculate based on the system clock and sync with the audio clock periodically. Additional measures might be needed to reduce synchronization overhead. |
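For illustration, here is a sketch of the arithmetic behind the delay-based strategy (the function name is hypothetical; the actual ALSA call, `snd_pcm_delay`, reports for playback the distance in frames between the application's write pointer and the sample currently at the DAC):

```rust
use std::time::{Duration, Instant};

/// Given the backend-reported delay in frames (what ALSA's `snd_pcm_delay`
/// returns for a playback stream), estimate when the next sample we write
/// will actually be heard, relative to a system-clock reading taken at the
/// same moment the delay was queried.
fn next_sample_presentation(now: Instant, delay_frames: u64, sample_rate: u32) -> Instant {
    let nanos = delay_frames * 1_000_000_000 / sample_rate as u64;
    now + Duration::from_nanos(nanos)
}
```

A real implementation would re-query the delay periodically rather than on every sample, to keep synchronization overhead down, as suggested above.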
@Wojtechnology One API which I think does audio timing pretty well, and can be used as inspiration, is JACK. It provides two facilities that are both very useful: clock and transport. Regarding clock:
Regarding transport:
As @ishitatsuyuki pointed out, the two concepts shouldn't necessarily be associated with the same API/callback. In JACK's case, they aren't. When trying to map these concepts into CPAL, two issues may appear:
Personally, I think a minimal integration could be:
|
I'm going to close this now that #397 has landed and #408 has been opened. There was some discussion related to transport APIs in this issue, but I think discussion on the scope of such an API might be best done in another issue (my intuition is that this could be built downstream on top of the new timestamp API anyway). |
I'm new to this library but very interested in its wonderful features. In particular, I really need the ability to retrieve the playback position. I have not read all of the related issues yet, but let me ask a question: can library users already get playback position information, or is it yet to be implemented? |
Okay, I've read all the related issues and understood. The playback time is provided in the second argument of the callback functions, as part of `cpal::OutputCallbackInfo`. |
I'm trying to use cpal to make a simple synth, and getting the time associated with sample n in the buffer my output callback receives is very difficult. The only way I could find is to use an unsafe mutable static: write the sample rate before anything happens, then read it in the callback, which doesn't seem nice. |
@TonalidadeHidrica I resorted to just putting it in a const, but even then the timestamps I get are `StreamInstant`s, and everything I try to do with them is private: can't create one, can't get it as nanoseconds, can't do anything apart from adding and subtracting.

```rust
fn run<T: Sample>(data: &mut [T], info: &cpal::OutputCallbackInfo) {
    let buf_start = info.timestamp().playback;
    for (i, sample) in data.iter_mut().enumerate() {
        // multiply before dividing: `(i as u32) / SAMPLE_RATE` would
        // truncate to zero for every sample in the buffer
        let t = buf_start
            .add(Duration::new(0, (i as u32) * NANOSECONDS_IN_SECOND / SAMPLE_RATE))
            .unwrap();
        // this gets me, I think, the time as a StreamInstant... but that's
        // completely unusable without doing some black magic to cast it
        // into a duration
    }
}
```

Granted, I could keep state and use the first one I get to use |
Why wouldn't you just close over the sample rate when constructing the callback? |
Thanks for your answers, I ended up closing over it. |
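For anyone landing here later, the suggestion above can be sketched like this (the callback shape is simplified and not cpal's exact signature, which also receives an `OutputCallbackInfo`): construct the data callback as a `move` closure so it captures the sample rate, making the `static mut` workaround unnecessary.

```rust
use std::time::Duration;

/// Build an output callback that captures the sample rate by value,
/// instead of reading it from a mutable static inside the callback.
fn make_callback(sample_rate: u32) -> impl FnMut(&mut [f32]) {
    move |data: &mut [f32]| {
        for (i, sample) in data.iter_mut().enumerate() {
            // time offset of sample `i` within this buffer
            let _t = Duration::from_nanos(i as u64 * 1_000_000_000 / sample_rate as u64);
            *sample = 0.0; // a real synth would render using `_t` here
        }
    }
}
```

The closure can then be handed to the stream builder, and each capture is per-stream rather than global mutable state.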
We should expose the timing information because it's useful for A/V sync, or simply because most APIs provide it.
(It's also useful because audio clocks are not precisely the same as CPU clocks, and because underruns can alter the timing in ways applications should be able to detect.)
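The underrun-detection idea mentioned here can be sketched as follows (a simplified illustration, not cpal's API): if the backend's reported playback timestamp advances by more than the frames we actually delivered can account for, samples were dropped.

```rust
/// Sketch of discontinuity detection: compare how far the backend's
/// playback timestamp advanced between two callbacks against the time
/// that the delivered frames should have covered.
fn detect_discontinuity(
    prev_timestamp_nanos: u64,
    curr_timestamp_nanos: u64,
    frames_delivered: u64,
    sample_rate: u32,
    tolerance_nanos: u64, // slack for jitter in the backend's reporting
) -> bool {
    let expected = frames_delivered * 1_000_000_000 / sample_rate as u64;
    let actual = curr_timestamp_nanos.saturating_sub(prev_timestamp_nanos);
    actual > expected + tolerance_nanos
}
```

The tolerance exists because audio and CPU clocks drift slightly relative to each other, so an exact comparison would produce false positives.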