Use hatch scripts for testing #472


Open
jessegrabowski opened this issue May 7, 2025 · 5 comments
Labels: enhancements (New feature or request) · good first issue (Good for newcomers) · help wanted (Extra attention is needed) · maintenance

Comments

@jessegrabowski (Member)

Suggested by @theorashid here, copying his comment:

Also we could use hatch scripts for the testing e.g. https://github.com/JaxGaussianProcesses/GPJax/blob/main/pyproject.toml#L93-L112

Context: In #370, we switched from using setuptools to hatch to build our sdist and wheels, so there's more we can do to take advantage of that new tool.

@jessegrabowski added the enhancements, good first issue, help wanted, and maintenance labels on May 7, 2025
@jessegrabowski (Member, Author)

@maresb or @theorashid, could you add a bit more context to this one? I'm not familiar with the hatch ecosystem at all.

@maresb (Collaborator) commented May 8, 2025

I believe @theorashid is suggesting that we use Hatch to define the canonical form of all the testing and linting commands, so that you can run hatch run dev:test to run the test command defined in [tool.hatch.envs.dev.scripts] as test = ... from the environment defined in [tool.hatch.envs.dev...], e.g. here.
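For concreteness, a minimal sketch of what that could look like in our pyproject.toml (the env name, dependencies, and script bodies here are illustrative assumptions, not taken from the issue):

```toml
# Hypothetical "dev" environment with canonical test/lint commands.
[tool.hatch.envs.dev]
dependencies = ["pytest", "ruff"]

# Scripts runnable as `hatch run dev:<name>`; {args} forwards extra CLI args.
[tool.hatch.envs.dev.scripts]
test = "pytest {args}"
lint = "ruff check ."
```

With something like this in place, hatch run dev:test would create the dev environment on first use and run the test script inside it.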

I personally don't use Hatch (the CLI tool), only Hatchling, its build backend that lets you write sane pyproject.toml files, so I haven't implemented this suggestion in any of my projects.

One minor issue I see is figuring out how to reconcile this with our current usage of pre-commit. (For the GPJax repo linked by @theorashid as an example, it looks like they just don't use pre-commit, and instead run all their checks in CI.) Lots of tools these days, e.g. pytest, let you set all the command-line options from config, so then you can simply run pytest with no arguments. I actually like the config approach because then you're not tied to using hatch as an executor. The downside is manual environment management.
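The "options in config" approach mentioned above can be sketched for pytest, which reads configuration from a [tool.pytest.ini_options] table in pyproject.toml (the specific option values here are illustrative, not our actual settings):

```toml
# Hypothetical pytest config so a bare `pytest` picks up all options,
# regardless of which tool (hatch, pixi, plain venv) runs it.
[tool.pytest.ini_options]
addopts = "-ra --strict-markers"
testpaths = ["tests"]
```

This is what keeps you from being tied to hatch as an executor: any environment that has pytest installed runs the same configuration.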

@theorashid (Contributor) commented May 8, 2025

Yes, I was thinking of using the hatch CLI tool. The repo template we use at work does this and I find it very useful, especially the matrix feature for setting up environments with different Python versions.
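For anyone unfamiliar with it, the matrix feature is a short pyproject.toml table (env name and version list here are illustrative):

```toml
# Hypothetical "test" environment expanded once per Python version;
# hatch creates test.py3.10, test.py3.11, etc. from this matrix.
[[tool.hatch.envs.test.matrix]]
python = ["3.10", "3.11", "3.12"]
```

Running a script against the test environment then executes it in each matrix variant, rather than having to define and manage each versioned environment by hand.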

I'm happy to just use hatchling for building. Relying on the hatch CLI might be unnecessarily different from other pymc projects and just another barrier to entry for potential contributors.

This reminds me: is pre-commit covering everything we want contributors to do before they submit a PR? Our CONTRIBUTING.md is blank. Running ruff and pytest should be the minimum, right?

@maresb (Collaborator) commented May 9, 2025

Thanks for sharing about the matrix feature. I didn't know about that, and it seems pretty useful. I would have done that with pixi, but that would require me to set up each environment manually rather than with a matrix, which is actually a pain point.

Having CONTRIBUTING.md as a placeholder is indeed a bit embarrassing.

Regarding pre-commit, I usually think in two stages: pre-commit vs CI. Namely, what should run on every commit (usually my answer is ruff, plus a few minor things like fixing line endings), and what should run on every PR (usually pytest, sometimes mypy). I personally don't like pytest in pre-commit because it's slow, and I think individual commits should be allowed to break tests (or allowed to create broken tests to subsequently fix).

Thanks a lot for thinking about these things, it'll be good to sort it out.

@jessegrabowski (Member, Author)

Agreed on not running the test suite on each commit. I could see that being a good idea in some cases, but I think it would just cause people to use --no-verify like crazy in our case, since our test suite is quite slow.


3 participants