JSR score does not depend on test coverage to claim compatibility #219
Comments
Personally, I would not be in favor of including code coverage in the score. Though it's sometimes helpful, I think in many cases it's not a great metric to focus on (there are plenty of discussions of this online; suffice it to say that there's a wide range of opinions on the subject). If you like it as a metric, you can put it as a badge in the README. I do think that including "has automated tests" in the score would be good, but it might be too difficult to detect automatically.
Agreed that that would be valuable. See #179 (Preview the transpiled build so I can verify runtime compatibility before publishing) and #164 (Feature request: add more Browser Compatibility package settings).
JSR is a chance for a new start. This registry has great potential and it pushes quality. However, there's currently nothing to assure that these packages actually work. I think that's a miss on this particular subject, for the moment. Navigating GitHub to find out which tag works, then trying x, y, and z until something works in our project, would still be the case even with JSR's efforts. A lot of people like JS and NPM. But for everyone else, the reason that always comes up is the same.
The 1.0 release would benefit from at least having coverage for the symbols exported by the module. Furthermore, do it for every tag: mark/block/auto-yank the installation of tags that don't work as intended, based on the test line coverage of the exported symbols. Yes, you can use GitHub Actions to only publish working tags to NPM today, but JSR is a new chance to do better than NPM, like Deno does over Node.
You are correct in saying that the score can be artificially inflated by users doing bad things.
I think, however, that this is fine. The JSR score is not a bulletproof "definitely great" label; it's more about whether the package author has put in some effort to document the public API. Yes, they can cheat, but we expect most people won't. Adding significant hurdles to adoption, namely requiring you to upload a bunch of coverage files in addition to your code, would make JSR unusable for most people. This is not something we can do.

It is reasonable to try to ascertain whether the user runs CI, and weigh this into the score. If you have ideas of how this could be accomplished in a simple way, let's discuss that in a different issue :)

I think the core of the issue you are trying to solve is: how do I report incorrect/misleading information, get it corrected, and make sure other users are aware? This is a great question, and one we do not yet have an answer to. Should we have a "thumbs up" / "thumbs down" system? A comments system on packages? A way to request a review of the claims by a moderator? If you have concrete ideas here, let's discuss in a separate issue.
A thumbs up / thumbs down system would be awesome, similar to what big "social" websites do. A huge problem in package managers, especially on the JavaScript side, is: is this version even good? The thumbs up system would be even better per version tag, and with comments. Additionally, category tags would allow good discoverability and community comparison. Although, I really think the best idea I had is code coverage for the symbols visible under the JSR.io "Symbols" tab that are exported by your main modules.
We discussed this briefly during today's meeting and we do not see a compelling way of doing this (see the discussion in this issue). If you think you have a good idea for how this could be done, please open a new issue.
Problem
I think that the number one question relevant to a package/dependency is: does it actually work? Code coverage can tell us this. That's particularly relevant with regard to NPM packages' compatibility with Deno. Every time someone asked whether some NPM package would work, the response amounted to "try it and see".
Not ideal, to say the least. A JSR score where code coverage determines up to 50% of the total could even highlight all of this to the Deno Team automatically.
Furthermore, someone could falsely inflate the score of their package just by declaring compatibility that is not real.
The best solution (I think)
The JSR score should depend on code coverage, up to 50% of it, at least via a second "hardcore score". Alternatively, I would still love to be able to set my profile to some user-enabled "hardcore score" view mode where the JSR score is 50% dependent on the code coverage percentage.
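A minimal sketch of what such a weighting could look like, assuming both the existing score and the coverage percentage are normalized to 0-100 (the function name and exact weighting are hypothetical, not part of JSR):

```ts
// Hypothetical "hardcore score": code coverage accounts for half of
// the total, the existing JSR score for the other half.
// Both inputs are assumed to be in the 0-100 range.
function hardcoreScore(jsrScore: number, coveragePct: number): number {
  return 0.5 * jsrScore + 0.5 * coveragePct;
}

console.log(hardcoreScore(90, 40)); // 65
```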
Deno has awesome code coverage generation and handles the `lcov` format. This would assure, I think, that non-Deno users can generate platform-agnostic code coverage reportable to JSR. Maybe this part is not relevant; I am not an expert in code coverage formats. The compatibility claims must be backed by tests in CI, like GitHub Actions.
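For context, coverage collection and `lcov` export already work like this with today's Deno CLI (the directory and file names are arbitrary):

```sh
# Run the test suite and collect coverage into a profile directory.
deno test --coverage=cov_profile

# Export the collected coverage in the platform-agnostic lcov format.
deno coverage cov_profile --lcov --output=cov_profile.lcov
```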
For browsers, make it individual per browser, using headless versions in CI. Then test non-browser runtimes too.
Why?
For instance, JavaScript's `Set` composition methods are currently only available in Chrome 122+, and not in Firefox/Node/Deno. Test-backed compatibility per browser and per server-side runtime is a must.
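To illustrate why per-runtime test runs matter, here is a sketch that feature-detects one of these methods (`Set.prototype.intersection`); the cast is only needed while TypeScript's standard lib does not yet declare it:

```ts
// Set composition methods (union, intersection, difference, ...) ship
// in Chrome 122+ but not yet in Firefox, Node, or Deno at the time of
// writing, so the same test passes or fails depending on the runtime.
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);

if ("intersection" in Set.prototype) {
  const result = (a as any).intersection(b) as Set<number>;
  console.log([...result]); // [2, 3]
} else {
  console.log("Set composition methods are not supported here");
}
```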
The command that could do the trick
Maybe something like:
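A hypothetical invocation; none of these flags exist in today's Deno CLI, they only sketch the idea:

```sh
# Hypothetical flags, for illustration only: run the test suite against
# several runtimes (headless where applicable) and collect coverage for
# each one, so that compatibility claims are backed by real test runs.
deno test --coverage=cov_profile \
  --runtimes=deno,node,chrome,firefox \
  --headless
```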
These options would drive the runtimes headlessly to check whether everything works as intended.
Hosted HTML coverage output
The `deno coverage --html` command can generate nice HTML files that show which lines of code are covered. These should be hosted on JSR. Alternatively or additionally, JSR has a "Symbols" tab; it would be nice if it also displayed, at least, whether the code lines of the exported functions are well covered.
The easiest solution (meh)
In two parts:
Example of badge:

Currently, it's already kind of possible to show badges, but only via the README of the package.

Example: https://jsr.io/@badrap/valita
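A coverage badge in a README is plain Markdown; for example, a static shields.io badge (the percentage and link target are placeholders):

```md
[![coverage](https://img.shields.io/badge/coverage-95%25-brightgreen)](https://example.com/coverage-report)
```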
The presence of a README also boosts the score. Should a README that includes a code coverage badge boost it to the same extent as one that doesn't? I think not.