Score for "passing in all browsers" on Interop 2022 dashboard #3005
Comments
Some more info on this: we are pulling the stats on these scores from a CSV created by the results-analysis code, which scores each focus area with a 0–1000 score for each browser and each date interval. Here is the CSV for experimental runs for Interop 2022.

The difficulty here comes in determining which tests/subtests pass in every browser. Since the current CSV format only has a score for each focus area, we can't derive this information: we can't see exactly which tests/subtests are failing, so we can't determine where failures and passes overlap.

It also depends on how detailed we need to be. Do we want to see the progression of this statistic over time? If we only care about the current state, it might be possible to calculate this separately from the current CSV by looking at the latest run. However, my suggestion would be to calculate this over time like all the other scores, adding new columns to the CSV for each focus area, similar to a separate browser score.

Current CSV format (showing only the aspect-ratio focus area as an example):

New column:
This would require a change to the results-analysis code, a UI update on wpt.fyi, and a change to the CSV parsing code on the client side of wpt.fyi. This is a manageable change if it is deemed worth implementing.
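As a rough illustration of the proposed column (not the actual results-analysis implementation — the function name, data shapes, and sample data below are all hypothetical), a "passes in all browsers" score for one focus area could be derived from per-test pass data like this:

```python
# Hypothetical sketch: compute a 0-1000 "passes in all browsers" score
# for a single focus area from per-browser, per-test pass/fail data.
# The real results-analysis code uses its own data formats and scoring.

def interop_score(results, browsers, scale=1000):
    """results: {browser: {test_name: passed (bool)}} for one focus area.
    Returns a 0..scale score counting tests that pass in every browser."""
    # Union of all tests seen in any browser's run.
    all_tests = set()
    for browser in browsers:
        all_tests |= set(results[browser])
    if not all_tests:
        return 0
    # A test counts only if every browser reports it as passing.
    passing_everywhere = [
        t for t in all_tests
        if all(results[b].get(t, False) for b in browsers)
    ]
    return round(scale * len(passing_everywhere) / len(all_tests))

# Illustrative data only: three browsers, three tests, one common pass.
results = {
    "chrome": {"a.html": True, "b.html": True, "c.html": False},
    "firefox": {"a.html": True, "b.html": False, "c.html": True},
    "safari": {"a.html": True, "b.html": True, "c.html": False},
}
print(interop_score(results, ["chrome", "firefox", "safari"]))  # 333
```

Under this sketch, the new column would simply carry one such score per focus area per date interval, alongside the existing per-browser columns.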
Taking your example and clicking on Aspect Ratio: if I add to the query, I get the results above. Now let's look at something more obvious/informative: Subgrid. Aspect-ratio starts to be more usable, but subgrid is very hard to use in a reliable way.
Basically, the combined score will always be lower than (or equal to) the lowest individual browser score.
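A quick sketch of why this holds (with made-up test names): the set of tests passing in all browsers is the intersection of the per-browser passing sets, and an intersection can never be larger than its smallest member.

```python
# Toy illustration with hypothetical data: the tests passing in every
# browser are the intersection of the per-browser passing sets, so the
# combined count is bounded by the smallest per-browser count.
passing = {
    "chrome": {"t1", "t2", "t3", "t4"},
    "firefox": {"t1", "t2", "t5"},
    "safari": {"t1", "t3", "t4"},
}
common = set.intersection(*passing.values())
print(len(common))                             # 1 (only "t1")
print(min(len(s) for s in passing.values()))   # 3 (firefox and safari)
assert len(common) <= min(len(s) for s in passing.values())
```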
Should be closed, as this is now implemented.
Very right, this score is now available in production as of today 🙂
This was suggested by @karlcow in web-platform-tests/interop#198 and discussed in web-platform-tests/interop#209.
@DanielRyanSmith can you look into what it would take to implement this? I suspect that https://github.com/web-platform-tests/results-analysis would need to be updated, as we can't get this from the existing data files.