Assorted comments on RTCAudioPlayoutStats
#742
It appears this was merged fairly recently in #682. Was it presented to the WG? I don't recall any WG discussion on it, nor could I locate any mention of it in minutes. Perhaps it shouldn't have been merged?
This was not presented to the WG; it was presented at the bi-weekly stats meeting. In your opinion, what changes do or do not require WG interaction? We've had bi-weekly stats editors meetings for years (since the inception of the getStats API), but recently there have been a couple of questions around WG involvement. I don't want to give off the impression that we're going behind the WG's back. Should WG presentation be required any time we add a new stats type? Any time we make "larger" additions? Or do you want to get involved in the bi-weeklies as well / individual metrics?
We should clarify what to do if there are multiple playout paths. Having multiple IDs is one option, but if this is as niche a use case as I think it is, I would prefer that the playout be represented as a single object.
Note that RTCAudioPlayoutStats is only applicable to playout paths representing an audio device. I don't imagine many applications play out the same audio track on multiple devices simultaneously. If your use case is piping it to an AudioContext, I think WebAudio-centric metrics may be more suitable than RTCPeerConnection.getStats(). The playout stats are not all-encompassing of every possible playout path.
What other use case is there, playing out on an AudioContext? I think we should remove this: having statistics about output is the responsibility of the objects responsible for rendering. This API isn't capable of representing all use cases, niche or not (which is not a strong argument anyway; we can't assume use cases as API designers, it just needs to work), and is not flexible enough for authors to understand quality in common scenarios, as previously said. Authors who want to understand the quality of the rendering will ask the rendering object whether rendering is going well; it's simpler to do and better in every aspect.
The particular metrics exposed here (samples inserted and removed in order to do rate adaptation) are operations that, in our current implementation, are performed solely within the WebRTC module. Where would you propose exposing them, and what (conceptual) mechanism would you propose to get the information from the WebRTC module to where the information is exposed?
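To make the rate-adaptation metrics concrete, here is a hypothetical sketch (the helper function is illustrative, not part of any API; the field names `synthesizedSamplesDuration` and `totalSamplesDuration` are taken from the RTCAudioPlayoutStats draft) of summarizing how much of the played-out audio was synthesized rather than decoded:

```javascript
// Sketch: estimate the fraction of played-out audio that was synthesized
// (inserted during rate adaptation / concealment) rather than decoded.
// `playout` is assumed to be a "media-playout" entry from an
// RTCPeerConnection.getStats() report; field names follow the spec draft.
function synthesizedRatio(playout) {
  if (!playout.totalSamplesDuration) return 0; // avoid divide-by-zero
  return playout.synthesizedSamplesDuration / playout.totalSamplesDuration;
}

// Mock stats entry standing in for a real report:
const mock = {
  type: "media-playout",
  kind: "audio",
  synthesizedSamplesDuration: 0.25, // seconds of inserted audio
  totalSamplesDuration: 10,         // seconds played out in total
};
console.log(synthesizedRatio(mock)); // 0.025
```

In a real application the entry would come from iterating a live getStats() report rather than a mock object.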
Even multiple HTMLMediaElement objects are considered the same playout path from WebRTC's perspective. In the RTCAudioPlayoutStats implementation that shipped in Chrome M111 (currently Stable), there is always a single playout path, and its samples are counted after mixing, reflecting what is sent to the device. But it would also be possible for each inbound-rtp to have its own playout path if mixing is not implemented in WebRTC.
Yes. In Chromium you'll have an N:1 relationship between inbound-rtps and media-playout when you do this.
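The N:1 relationship described above can be sketched by grouping inbound-rtp entries by their playoutId reference. The helper below is illustrative, not a standard API; it assumes stats entries shaped like those in an RTCPeerConnection.getStats() report:

```javascript
// Sketch: group audio inbound-rtp entries by the media-playout entry they
// reference via playoutId (N inbound-rtps -> 1 media-playout in Chromium).
function groupByPlayout(statsEntries) {
  const groups = new Map();
  for (const s of statsEntries) {
    if (s.type === "inbound-rtp" && s.kind === "audio" && s.playoutId) {
      if (!groups.has(s.playoutId)) groups.set(s.playoutId, []);
      groups.get(s.playoutId).push(s.id);
    }
  }
  return groups;
}

// Mock report: two inbound-rtps sharing one playout path.
const report = [
  { id: "RTCInboundRTP_1", type: "inbound-rtp", kind: "audio", playoutId: "AP" },
  { id: "RTCInboundRTP_2", type: "inbound-rtp", kind: "audio", playoutId: "AP" },
  { id: "AP", type: "media-playout", kind: "audio" },
];
console.log(groupByPlayout(report).get("AP")); // -> ["RTCInboundRTP_1", "RTCInboundRTP_2"]
```

With per-stream playout paths (no mixing in WebRTC), the same code would instead yield one map entry per inbound-rtp.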
RTCAudioPlayoutStats is not applicable if you use WebAudio. And if you do, it sounds like WebAudio already solves your problem and getStats() should not get involved.
Those meetings aren't open to members, and therefore do not represent the view of the WG. My understanding of the role of editors includes discretion to craft and merge editorial PRs, maybe PRs for normative bug fixes where WG opinion can be inferred from earlier discussion, and PRs for issues that have been discussed in the WG that the WG agrees to proceed with. It's been best practice for a while in most specs this WG owns to present new features and even substantive normative fixes to the WG in its monthly online interim meeting to ensure that proper WG discussion has happened.
This document, now governed by the 2 November 2021 W3C Process Document, is auto-published by the chairs to TR, which is the canonical version the public is pointed to, in spite of its disclaimer:
I've emphasized "Working Group intends to include" to highlight that this document purports to represent the current will of the WG, not its editors.
Yes. Other WGs that have a more live document model usually pay the cost of a more rigorous PR process.
If they're normative, yes. This has been happening in the past AFAIK.
Adding one more person to those meetings won't solve things. For disclosure, I was unaware of this recurring meeting until last year, and I haven't been at them. All other WebRTC specs are covered by our weekly WebRTC editors' meeting, which I attend regularly.
So the same JS might get a stats object in some implementations but not others? We try not to expose implementation details like this, because it's bad for web compat.
Filed #746 regarding the process and WG questions.
@henbos if we're not backing this out, can we add a note that clarifies that these stats lack consensus? This would seem necessary to satisfy "Possible to merge PRs that may lack consensus, if a note is attached indicating controversy."
We can add a note for sure. Something like "Feature at risk. This metric currently lacks consensus."?
This issue was discussed in the minutes of the WebRTC March 2023 meeting – 21 March 2023 (Issue #742 Assorted comments on RTCAudioPlayoutStats)
playoutId is frequently not implementable: there can be more than one playout path, or none, because a MediaStreamTrack can be sent to multiple AudioContexts and/or HTMLMediaElements for rendering. There can also be things that look like playout paths but are not, e.g., piping an inbound MediaStreamTrack to an AudioContext and just sending the data to an AnalyserNode or an AudioWorkletNode to compute some metrics, or simply recording the output (e.g. using a Web Worker and WebCodecs, or whatever else). An AudioContext can also be connected to another AudioContext that is on a different device; the actual output can be on the first, the second, or both. This is another case where playoutId cannot be implemented. All the other metrics are reimplementations of the Web Audio API's Audio Render Capacity and latency metrics.
Only application authors know the shape of their audio output path, and therefore only they can determine a quality metric, based on numbers provided by the object they actually use to render audio.
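For comparison, a WebAudio-centric approach could summarize render-capacity samples instead of reading getStats(). The helper below is a hypothetical sketch: `summarizeLoad` is not a standard API, and only the `averageLoad`/`peakLoad` field names come from the Web Audio API's AudioRenderCapacity proposal; the browser hookup is shown in comments:

```javascript
// Sketch: summarize render-load samples collected from an AudioContext.
// In a browser, samples would be gathered roughly like this (per the
// AudioRenderCapacity proposal in the Web Audio API):
//   const samples = [];
//   ctx.renderCapacity.addEventListener("update", (e) => samples.push(e));
//   ctx.renderCapacity.start({ updateInterval: 1 });
function summarizeLoad(samples) {
  if (samples.length === 0) return { avg: 0, peak: 0 };
  let sum = 0;
  let peak = 0;
  for (const s of samples) {
    sum += s.averageLoad;                // mean render load per interval
    peak = Math.max(peak, s.peakLoad);   // worst single render quantum
  }
  return { avg: sum / samples.length, peak };
}

// Mock objects standing in for AudioRenderCapacityEvent samples:
const events = [
  { averageLoad: 0.2, peakLoad: 0.5 },
  { averageLoad: 0.4, peakLoad: 0.9 },
];
console.log(summarizeLoad(events)); // avg ~0.3, peak 0.9
```

This keeps quality measurement on the rendering object itself, which is the position argued above.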