JSR score does not depend on test coverage to claim compatibility #219

Closed
ericlery opened this issue Mar 6, 2024 · 5 comments

Comments

@ericlery

ericlery commented Mar 6, 2024

Problem

I think that the number one question that would be relevant to a package/dependency is:

"Does this even work"?
— Some lazy and experienced software developer

Code coverage can tell us this. That's particularly relevant with regard to NPM packages' compatibility with Deno. Every time someone asked whether some NPM package would work, the response has been:

"Clone the repo and launch the tests to see by yourself, then report by opening an issue"

Not ideal, to say the least. A JSR score where code coverage accounts for up to 50% of the total could even surface all of this to the Deno Team automatically.

Furthermore, someone could falsely increase the score of their package by just declaring compatibility that is not real.


The best solution (I think)

The JSR score should depend, up to 50%, on code coverage, at least via a second “hardcore score”. Alternatively, I would still love to be able to set my profile to some user-enabled “hardcore score” view mode where the JSR score is 50% dependent on the code coverage percentage.

Deno has awesome code coverage generation and handles the lcov format. This would ensure, I think, that non-Deno users can generate platform-agnostic coverage reports for JSR. Maybe this part is not relevant; I am not an expert in code coverage formats.
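For reference, a minimal sketch of producing such a report with the Deno CLI (the profile directory name is arbitrary):

```sh
# Run the test suite and record coverage into a profile directory
deno test --coverage=cov_profile

# Convert the profile into a platform-agnostic lcov report
deno coverage cov_profile --lcov --output=coverage.lcov
```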

The compatibility claims must be backed by tests in the CI, like GitHub Actions.

For browsers, make it individual per browser, using headless versions in the CI. Then test non-browser runtimes too.

Why?

For instance, JavaScript's Set composition methods are currently only available in Chrome 122+, and not in Firefox/Node/Deno.

Test-backed compatibility per browser and per server-side runtime is a must.
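As a sketch, a per-runtime compatibility test for this could be as small as the following (written as a Deno-style test; assumes a TypeScript lib recent enough to type the Set composition methods):

```ts
// Minimal sketch: fails on runtimes that don't ship the Set
// composition methods (union, intersection, difference, ...).
Deno.test("Set.prototype.union is available and correct", () => {
  const a = new Set([1, 2]);
  const b = new Set([2, 3]);
  // Throws a TypeError on runtimes without Set composition methods.
  const union = a.union(b);
  if (union.size !== 3) {
    throw new Error(`expected 3 elements, got ${union.size}`);
  }
});
```

Running the same suite headlessly in each browser and server-side runtime would back every compatibility claim with evidence.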

The command that could do the trick

Maybe something like:

jsr compat [--all, --all-browsers, --all-non-browsers, --chrome, --firefox, --safari, --node, --deno, --bun, --cf-workers]

These options would drive the runtimes headlessly to check whether everything works as it should.

Hosted HTML coverage output

The command deno coverage --html can generate nice HTML files that show which lines of code are covered. They should be hosted on JSR.
[Image: code coverage view as HTML]

Alternatively or additionally, JSR has a “Symbols” tab; it would be nice if it also displayed, at least, whether the exported functions' lines of code are well covered.
[Image: example of how symbol coverage could look]


The easiest solution (meh)

In two parts:

  1. Don't count compatibility claims in the score, or way less.
  2. Show some percentage of code coverage as a badge on the package page.

Example of badge:
[Image: badge example]

Currently, it's already kind of possible to show badges, but only via the README of the package.
Example: https://jsr.io/@badrap/valita
[Image: badges in the valita README]
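For example, a coverage badge from a third-party service can be embedded in a README today (OWNER/REPO are placeholders, and the service is just one option among many):

```md
[![coverage](https://img.shields.io/codecov/c/github/OWNER/REPO)](https://app.codecov.io/gh/OWNER/REPO)
```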

The presence of a README also boosts the score. Should it count to the same extent whether or not the README includes a code coverage badge? I think not.

github-project-automation bot moved this to Needs Triage in JSR on Mar 6, 2024
@bradenmacdonald

Personally, I would not be in favor of including code coverage in the score - though it's sometimes helpful, I think in many cases it's not a great metric to focus on (there are plenty of discussions of this online; suffice it to say that there's a wide range of opinions on the subject). If you like it as a metric, you can put it as a badge in the README.

I do think that including "has automated tests" into the score would be good, but it might be too difficult to detect automatically.

Test-backed compatibility per browser and per server-side runtime is a must.

Agreed that that would be valuable. See #179 ("Preview the transpiled build so I can verify runtime compatibility before publishing") and #164 ("Feature request: add more Browser Compatibility package settings"), which mentions the URLPattern example, similar to your Set composition methods example.

@ericlery
Author

ericlery commented Mar 7, 2024

Personally, I would not be in favor of including code coverage in the score - though it's sometimes helpful, I think in many cases it's not a great metric to focus on

JSR is a chance for a new start. This registry has great potential and it pushes quality. However, there's nothing to assure that these packages actually work. I think it's a miss on this particular subject, for the moment.

Navigating GitHub to figure out which tag works, then trying x, y, and z until something works in our project, would still be the reality even with JSR's efforts.

A lot of people like JS and NPM. But for everyone else, the reason that always comes up is:

“It doesn't work; I spend all my time finding which dependencies work with the others instead of coding. I would have been much further along if I had just done it myself.”

The 1.0 would benefit from at least having coverage for the symbols exported by the module. Furthermore, do it for every tag; mark/block/auto-yank the installation of tags that don't work as intended, based on the test line coverage of the exported symbols. Yes, you can use GitHub Actions today to publish only working tags to NPM, but JSR is a new chance to do better than NPM, like Deno does over Node.
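As a sketch, such a gate is already possible with GitHub Actions (this assumes JSR's OIDC-based tokenless publishing is set up for the package; the workflow details are illustrative):

```yaml
# Hypothetical workflow: publish a tag to JSR only if the tests pass.
name: publish
on:
  push:
    tags: ["*"]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write # required for JSR's OIDC authentication
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v2
      - run: deno test    # the gate: a failing suite aborts the publish
      - run: deno publish
```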

@lucacasonato
Member

You are correct in saying that the score can be artificially inflated by users doing bad things:

  • adding (nearly) empty doc comments to all their exported symbols
  • adding a description consisting of just punctuation
  • adding incorrect runtime compat indicators

I think, however, that this is fine. The JSR score is not a bulletproof "definitely great" label - it's more about whether the package author has put in some effort to document the public API. Yes, they can cheat - but we expect most people won't.

Adding significant hurdles to adoption, namely the requirement for you to upload a bunch of coverage files in addition to your code, would make JSR unusable by most people. This is not something we can do.

It is reasonable to try to ascertain whether the user runs CI, and weigh this into the score. If you have ideas of how this could be accomplished in a simple way, let's discuss that in a different issue :)

I think the core of the issue you are trying to solve is: "how do I report incorrect/misleading information, get it corrected, and make sure other users are aware?" This is a great question, and one we do not yet have an answer to - should we have a "thumbs up" / "thumbs down" system? A comments system on packages? A way to request a review of the claims by a moderator? If you have concrete ideas here, let's discuss in a separate issue.

@ericlery
Author

ericlery commented Apr 1, 2024

I think the core of the issue you are trying to solve is: "how do I report incorrect/misleading information, get it corrected, and make sure other users are aware?" This is a great question, and one we do not yet have an answer to - should we have a "thumbs up" / "thumbs down" system? A comments system on packages? A way to request a review of the claims by a moderator? If you have concrete ideas here, let's discuss in a separate issue.

A thumbs up / thumbs down system would be awesome, similar to big “social” websites. A huge problem in package managers, especially on the JavaScript side, is: is this version even good? The thumbs up system would be even better per version tag, and with comments. Additionally, category tags would allow good discoverability and community comparison.

That said, I really think the best idea I had is code coverage for the symbols visible under the JSR.io “Symbols” tab, i.e. the ones exported by your main modules, often mod.ts. Not everything needs to be tested, but the library API must be. A tested public API is a very good indicator of quality. To encourage a quick and easy score, the current system is good. But to go further, for those who want it, a badge system would add additional scoring and would signal quality to those who want to rely on the best dependencies.

crowlKats removed the status in JSR on Jan 17, 2025
@lucacasonato
Member

We discussed this briefly during today's meeting, and we do not see a compelling way of doing this (see the discussion in this issue). If you think you have a good idea of how this could be done, please open a new issue.

github-project-automation bot moved this to Done in JSR on May 1, 2025