- From: Jan-Ivar Bruaroey via GitHub <sysbot+gh@w3.org>
- Date: Thu, 24 Apr 2025 13:11:48 +0000
- To: public-webrtc-logs@w3.org
> This would map essentially 1:1 with the implementation used, and that can already pretty easily be inferred (e.g. via https://www.w3.org/TR/webrtc-stats/#dom-rtcoutboundrtpstreamstats-encoderimplementation or platform+https://www.w3.org/TR/webrtc-stats/#dom-rtcoutboundrtpstreamstats-powerefficientencoder)

Those don't seem like great examples, as they're blocked on [exposing hardware is allowed](https://www.w3.org/TR/webrtc-stats/#dfn-exposing-hardware-is-allowed), unless we're suggesting adding that requirement here? What are the other examples? (See the sketch at the end of this comment.)

> My assumption is that a single measurement frequency would be used for all encoders, that this frequency value would be fixed for a given UA instance, and probably for a given UA across all devices it runs on (say a specific version of Chrome). It would be good to clarify this, otherwise I could see potential additional threats.

Agreed. Clarifying these assumptions in the guidance can only help.

> Can we let the implementor's guideline just be along the lines of what has been said above, e.g. "the frequency should be as high as possible as long as the performance impact can be kept negligible"? I don't see a reason to change the frequency based on codec type, only by implementation performance overhead. For a given implementation, though, I don't see a reason to change the frequency; detailing that this should be fixed for a given UA seems fine to me.

Doesn't tying it too tightly to performance make it another performance metric? I like the part that it should not vary by codec.
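For reference, a minimal sketch of the inference path mentioned in the first quote above, assuming `pc` is an RTCPeerConnection with an active video sender. Both members are gated on the "exposing hardware is allowed" check, so they may simply be absent:

```js
// Sketch only: reading the encoder members off RTCOutboundRtpStreamStats.
// Assumes `pc` is an RTCPeerConnection that is currently sending video.
// encoderImplementation and powerEfficientEncoder are only populated when
// "exposing hardware is allowed"; otherwise they are omitted.
async function logEncoderStats(pc) {
  const report = await pc.getStats();
  for (const stats of report.values()) {
    if (stats.type === "outbound-rtp" && stats.kind === "video") {
      // e.g. "libvpx", or undefined if not exposed
      console.log("encoderImplementation:", stats.encoderImplementation);
      // true for hardware-backed encoders, or undefined if not exposed
      console.log("powerEfficientEncoder:", stats.powerEfficientEncoder);
    }
  }
}
```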
-- 
GitHub Notification of comment by jan-ivar
Please view or discuss this issue at https://github.com/w3c/webrtc-stats/pull/794#issuecomment-2827584416 using your GitHub account

-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Thursday, 24 April 2025 13:11:49 UTC