- From: Erik Språng via GitHub <sysbot+gh@w3.org>
- Date: Thu, 24 Apr 2025 12:20:27 +0000
- To: public-webrtc-logs@w3.org
> The fact that some encoders may or may not expose PSNR would be new information exposed to the web and could be used for fingerprinting.

This would map essentially 1:1 with the implementation used, and that can already be inferred fairly easily (e.g. via https://www.w3.org/TR/webrtc-stats/#dom-rtcoutboundrtpstreamstats-encoderimplementation, or platform + https://www.w3.org/TR/webrtc-stats/#dom-rtcoutboundrtpstreamstats-powerefficientencoder, not to mention information from WebCodecs, WebGPU, parsing data from an encoded transform, etc.). So while it might be a new "bit", it doesn't actually provide any new information, imo.

> Also, this PR gives no implementor's guideline. My assumption is that a single measurement frequency would be used for all encoders, this frequency value would be fixed for a given UA instance, and probably for a given UA across all devices it runs on (say a specific version of Chrome). It would be good to clarify this, otherwise I could see potential additional threats.

Can we let the implementor's guideline simply follow what has been said above, e.g. "the frequency should be as high as possible as long as the performance impact can be kept negligible"? I don't see a reason to vary the frequency based on codec type, only based on the performance overhead of the implementation. For a given implementation, the frequency should not change; detailing that it should be fixed for a given UA seems fine to me.

--
GitHub Notification of comment by sprangerik
Please view or discuss this issue at https://github.com/w3c/webrtc-stats/pull/794#issuecomment-2827416655 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
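As a rough sketch of the inference mentioned above: the implementation behind a video sender is already visible in the stats returned by `RTCPeerConnection.getStats()`, via the `encoderImplementation` and `powerEfficientEncoder` members of `outbound-rtp` stats. The helper below is hypothetical (not from the PR); it only assumes a Map-like `RTCStatsReport` as defined in webrtc-stats.

```javascript
// Hypothetical sketch: read the encoder implementation bits that
// webrtc-stats already exposes on outbound-rtp video stats, which is
// why a PSNR-support flag would add little new fingerprinting surface.
function inferEncoderInfo(statsReport) {
  // statsReport is Map-like (id -> stats object), as returned by
  // RTCPeerConnection.getStats().
  for (const stats of statsReport.values()) {
    if (stats.type === "outbound-rtp" && stats.kind === "video") {
      return {
        implementation: stats.encoderImplementation, // e.g. "libvpx"
        powerEfficient: stats.powerEfficientEncoder,
      };
    }
  }
  return null; // no outbound video stream in this report
}
```

In a page, this would be called with `await pc.getStats()`; combined with platform information, the result narrows down the encoder in use without any PSNR-related field being present.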
Received on Thursday, 24 April 2025 12:20:28 UTC