Comparing changes
base repository: TorchSim/torch-sim
base: v0.2.1
head repository: TorchSim/torch-sim
compare: v0.2.2
- 17 commits
- 129 files changed
- 9 contributors
Commits on May 12, 2025
- fix installing conflicting optional dependencies via uv in pyproject.toml (#188)
  * fix installing conflicting deps via uv conflicts section in pyproject.toml
  * fix lints
  Commit: 22aa785
Commits on May 14, 2025
- Directly compare ASE vs TorchSim Frechet cell FIRE relaxation (#146)
  * adds comparative test for fire optimizer with ase
  * rm a few comments
  * updates CI for the test
  * update changelog for v0.2.0 (#147)
  * minor modification for PR template
  * formatting fixes
  * formatting and typos
  * remove contributors bc they aren't linked
  * update test with a harder system
  * fix torch.device not iterable error in test_torchsim_vs_ase_fire_mace.py
  * fix: should compare the row_vector cell; clean: fix changelog typo
  * clean: delete .coverage and newline for pytest command
  * Introduce ASE-style `FIRE` optimizer (departing from velocity Verlet in the original FIRE paper) and improve coverage in `test_optimizers.py` (#174)
  * feat(fire-optimizer-changes): update fire_step in optimizers.py based on feature/neb-workflow
  * reset optimizers.py to main version prior to adding updated changes
  * feat(fire-optimizer-changes): added ase_fire_step and renamed fire_step to vv_fire_step; allowed for selection of md_flavor
  * feat(fire-optimizer-changes): lint check on optimizers.py with ruff
  * feat(fire-optimizer-changes): added test cases and example script in examples/scripts/7_Others/7.6_Compare_ASE_to_VV_FIRE.py
  * feat(fire-optimizer-changes): updated FireState, UnitCellFireState, and FrechetCellFireState to have md_flavor to select vv or ase. ASE currently converges in about 1/3 the time. Test cases for all three FIRE schemes added to test_optimizers.py with both md_flavors
  * ruff auto format
  * minor refactor of 7.6_Compare_ASE_to_VV_FIRE.py
  * refactor optimizers.py: define MdFlavor type alias for SSoT on MD flavors
  * new optimizer tests: FIRE and UnitCellFIRE initialization with dictionary states, md_flavor validation, non-positive volume warnings; brings optimizers.py test coverage up to 96%
  * cleanup test_optimizers.py: parameterize tests for FIRE and UnitCellFIRE initialization and batch consistency checks; maintains the same 96% coverage
  * refactor optimizers.py: consolidate vv_fire_step logic into a single _vv_fire_step function modified by functools.partial for the different unit cell optimizations (unit/frechet/bare fire = no cell relax) for more concise and maintainable code
  * same as previous commit but for _ase_fire_step instead of _vv_fire_step
  * feat(fire-optimizer-changes): added references to the ASE implementation of FIRE and a link to the original FIRE paper
  * feat(fire-optimizer-changes): switched md_flavor type from str to MdFlavor and set default to ase_fire_step
  * pytest.mark.xfail frechet_cell_fire with ase_fire flavor, reason: shows asymmetry in batched mode, batch 0 stalls
  * rename maxstep to max_step for consistent snake_case; fix RuntimeError "a leaf Variable that requires grad is being used in an in-place operation" in the position / cell update state.positions += dr_atom
  * unskip frechet_cell_fire in test_optimizer_batch_consistency, can no longer repro error locally
  * code cleanup
  * bump set-up action to v6, more descriptive CI test names
  * pin to fairchem_core-1.10.0 in CI
  * explain differences between vv_fire and ase_fire and link references in fire|unit_cell_fire|frechet_cell_fire doc strings
  * merge test_torchsim_frechet_cell_fire_vs_ase_mace.py, the comparative ASE vs torch-sim test for the Frechet Cell FIRE optimizer, into test_optimizers.py; move `ase_mace_mpa` and `torchsim_mace_mpa` fixtures into `conftest.py` for wider reuse
  * redirect MACE_CHECKPOINT_URL to mace_agnesi_small for faster tests
  * on 2nd thought, keep test_torchsim_frechet_cell_fire_vs_ase_mace in a separate file (thanks @CompRhys)
  * define MaceUrls StrEnum to avoid breaking tests when "small" checkpoints get redirected in mace-torch
  Co-authored-by: Orion Cohen, Janosh Riebesell, Rhys Goodall, Myles Stapelberg
  Commit: 8211e08
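To illustrate the `md_flavor` switch introduced above, here is a minimal sketch. The call shapes (`ts.initialize_state`, `ts.fire` returning an `(init_fn, update_fn)` pair, the `LennardJonesModel` constructor) are assumptions based on the commit message and the torch-sim functional-optimizer style, not verified against v0.2.2.

```python
import torch
import torch_sim as ts
from ase.build import bulk
from torch_sim.models.lennard_jones import LennardJonesModel

device, dtype = torch.device("cpu"), torch.float64
# constructor arguments are assumed; default sigma/epsilon are used here
model = LennardJonesModel(device=device, dtype=dtype)
# build a small Ar supercell and convert it to a torch-sim SimState (signature assumed)
state = ts.initialize_state(bulk("Ar", "fcc", a=5.26).repeat((2, 2, 2)), device=device, dtype=dtype)

# fire() is assumed to return an (init_fn, update_fn) pair; md_flavor selects the
# velocity-Verlet-based step ("vv_fire") or the ASE-style step ("ase_fire").
init_fn, update_fn = ts.fire(model=model, md_flavor="ase_fire")
opt_state = init_fn(state)
for _ in range(100):  # fixed step count; a convergence check could be used instead
    opt_state = update_fn(opt_state)
```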
- Add `pbar: bool | dict = False` keyword to `optimize`, `integrate`, `static` runners (#181)
  * fix wrong dtype compare
  * make sure sevennet type_map exists
  * refactor
  * bugfix: static runner correctly restores the order
  * verbose => pbar
  * refactor pbar count
  * static runner returns SimState with properties attached
  * prefer next over single elem slice
  * pbar default to False & more clear docs
  * revert static runner
  * remove redundant line
  * lint: ignore complexity in optimize
  * fix: pin fairchem tests
  * tweak: pbar_tracker -> tqdm_pbar
  * change type to pbar: dict[str, Any]
  Co-authored-by: Rhys Goodall, Janosh Riebesell
  Commit: 92e8994
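A hedged usage sketch of the new `pbar` keyword, reusing `state` and `model` from the previous sketch. Only the `pbar` keyword itself is documented by this PR; the surrounding `ts.integrate`/`ts.optimize` argument names are assumptions.

```python
import torch_sim as ts

# pbar=True turns on a default progress bar; a dict is forwarded as tqdm-style
# options (the PR settles on pbar: bool | dict[str, Any]).
final_state = ts.integrate(
    system=state,
    model=model,
    integrator=ts.nvt_langevin,
    n_steps=1_000,
    timestep=0.002,      # units assumed
    temperature=300,     # units assumed
    pbar=True,
)

relaxed = ts.optimize(
    system=state,
    model=model,
    optimizer=ts.frechet_cell_fire,
    pbar={"desc": "relaxing", "leave": False},  # hypothetical tqdm options
)
```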
- stricter dead link checks: swap `markdown-link-check` for `lychee` (#194)
  Commit: 981e4ee
- pull `SimState` into top-level init so it can be accessed as `ts.SimState`
  * replace SimState imports with ts.SimState
  * move MaceUrls StrEnum to torch_sim.models.mace and import everywhere for SSoT for checkpoint URLs
  * add test_mace_urls_enum
  Commit: 6440b56
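A tiny illustration of the re-export above. The `SimState` field names and shapes shown here (positions `[n_atoms, 3]`, masses `[n_atoms]`, cell `[n_batches, 3, 3]`, `pbc`, `atomic_numbers`) are assumptions, not a spec.

```python
import torch
import torch_sim as ts

# SimState is now reachable from the package root instead of torch_sim.state
state = ts.SimState(
    positions=torch.zeros(2, 3, dtype=torch.float64),
    masses=torch.ones(2, dtype=torch.float64),
    cell=10 * torch.eye(3, dtype=torch.float64).unsqueeze(0),  # leading batch dim assumed
    pbc=True,
    atomic_numbers=torch.tensor([1, 1]),
)
print(state.positions.shape)  # torch.Size([2, 3])
```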
- Fix Fairchem import possibly failing silently until it reaches setup_imports() (#187)
  * issue warning with traceback.format_exc() on failed fairchem.core import, as suggested by @orionarcher; fixes Fairchem import failing silently until it reaches setup_imports()
  * do the other models too; addresses #187 (comment)
  * refactor typing imports
  Commit: 8ab0a18
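A generic illustration of the pattern this fix adopts (not the torch-sim code itself): surface a warning carrying the captured traceback instead of swallowing a failed optional-model import.

```python
import traceback
import warnings

try:
    import fairchem.core  # noqa: F401  # optional dependency; may be missing or broken
except ImportError:
    # Warn immediately with the real traceback rather than failing silently and
    # only erroring later when the model is first used.
    warnings.warn(
        "fairchem.core could not be imported; FairChem models are unavailable:\n"
        + traceback.format_exc(),
        stacklevel=2,
    )
```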
Commits on May 15, 2025
- Test stresses in the ASE consistency test (#190)
  * fea: test stress
  * fix: fix orb stress for single batched system
  * fix: equal-nan comparison needed to pass stress tests on benzene
  * fix: sevennet sign convention for stress was wrong
  * fix: sevennet needs to reorder its internal 6-component representation to match Voigt order
  * fix: assign eps volume for benzene on missing cell but skip the test due to numerical instability; fea: allow different tol for energy and forces etc.
  * fea: remove all non-default tol for graphpes
  * fix: increase orb energy tol for consistency tests
  * fea: fix metatensor stress calculations
  * fix: use smaller max_iterations in CI; chore: install cpu torch in CI
  * fix: revert cpu torch as it breaks the mattersim torchvision dependency
  * fea: faster running vv vs ase example
  Commit: 65360d2
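An illustrative helper (not the torch-sim or SevenNet implementation) for the kind of bookkeeping the sevennet fix above addresses: expressing a symmetric 3x3 stress tensor in Voigt order (xx, yy, zz, yz, xz, xy).

```python
import torch

def stress_to_voigt(stress: torch.Tensor) -> torch.Tensor:
    """Convert symmetric stress tensors [n_batches, 3, 3] to Voigt form [n_batches, 6]."""
    # Voigt order: (0,0), (1,1), (2,2), (1,2), (0,2), (0,1)
    rows = torch.tensor([0, 1, 2, 1, 0, 0])
    cols = torch.tensor([0, 1, 2, 2, 2, 1])
    return stress[:, rows, cols]

stress = torch.randn(2, 3, 3)
stress = 0.5 * (stress + stress.mT)  # symmetrize so the Voigt form is well defined
print(stress_to_voigt(stress).shape)  # torch.Size([2, 6])
```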
- more robust ordering checks for `ts.static` (#196)
  * `test_runners.py`: create more diverse `SimState` objects in the static test for more robust binning checks; better trajectory reporting and assertions to ensure unique potential energies and correct file creation
  * retype `lj_model: Any` as `(Unbatched)LennardJonesModel`
  Commit: dc6fcfd
Commits on May 16, 2025
- Fix unit and frechet cell FIRE optimizers not rescaling atom positions after each cell update (#199)
  * tweak doc strings: replace MdFlavor with string literals
  * fix FIRE relaxation not updating atom positions when the cell changes by adding a temp variable old_row_vector_cell to store the previous state of row_vector_cell and using it to scale atomic positions with torch.bmm(inv_old_cell_batch, current_new_row_vector_cell) after each cell update
  * test_optimizers_vs_ase.py: tighten energy_diff, avg_displacement, cell_diff now that we get better torch-sim ASE agreement
  * new tests for OsN2 and distorted FCC Al structures with Frechet and Unit Cell FIRE optimizers in test_optimizers_vs_ase.py; new fixtures for initial SimState of rhombohedral OsN2 and distorted FCC Al; new tests comparing torch-sim's Frechet Cell FIRE and Unit Cell FIRE optimizers with ASE's implementations
  * loosen tolerance of 1 test failing in CI
  * make ASE comparison tests in test_optimizers_vs_ase.py more stringent by comparing ASE and torch-sim EFS + cell at multiple checkpoints during each relaxation; _run_and_compare_optimizers now accepts a list of checkpoints instead of a single n_steps parameter
  * loosen tolerance on 2 tests failing in CI even though passing locally
  * move osn2_sim_state and distorted_fcc_al_conventional_sim_state fixtures to root conftest
  * reorder fixtures in conftest.py to collocate SimState and ASE Atoms fixtures
  Commit: b626d09
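A sketch of the core idea behind this fix (illustrative only, not the library code): when the cell changes, keep fractional coordinates fixed by remapping Cartesian positions through old_cell⁻¹ · new_cell in the row-vector cell convention.

```python
import torch

def rescale_positions(
    positions: torch.Tensor,  # [n_atoms, 3] Cartesian positions as row vectors
    batch: torch.Tensor,      # [n_atoms] system index of each atom
    old_cell: torch.Tensor,   # [n_batches, 3, 3] row-vector cells before the update
    new_cell: torch.Tensor,   # [n_batches, 3, 3] row-vector cells after the update
) -> torch.Tensor:
    # r = f @ cell, so keeping f fixed means r_new = r_old @ inv(old_cell) @ new_cell
    transform = torch.bmm(torch.linalg.inv(old_cell), new_cell)  # [n_batches, 3, 3]
    return torch.bmm(positions.unsqueeze(1), transform[batch]).squeeze(1)

old_cell = torch.eye(3).repeat(2, 1, 1)
new_cell = 1.05 * old_cell                     # 5 % isotropic expansion
positions = torch.rand(8, 3)
batch = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(rescale_positions(positions, batch, old_cell, new_cell).shape)  # torch.Size([8, 3])
```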
Commits on May 22, 2025
- Difference in ASE FrechetCellFilter vs Torch-Sim (both md_flavors) + enhanced convergence testing (#200)
  * updated ASE_to_VV to test FrechetCellFilter as well and compare directly to ASE
  * updated script to compute all 6 structures
  * fixed labels for ASE Native
  * clean: remove Atoms comment
  * split run_optimization into run_optimization_ts and run_optimization_ase #200 (comment)
  * swap matplotlib for plotly in 7.6_Compare_ASE_to_VV_FIRE.py
  * lint
  Co-authored-by: Rhys Goodall, Janosh Riebesell
  Commit: ef5c912
Commits on May 30, 2025
- Pin Metatensor as tests broke with Metatrain release (#204)
  * fix: pin metatensor because it is unstable
  * fix: pin the dependencies for the metatensor tutorial
  Commit: 2c3a862
Commits on Jun 1, 2025
- Commit: e3a1711
Commits on Jun 3, 2025
- Fix discrepancies between `FIRE` optimisations in `ASE` and `torch-sim` (#203)
  * fea: use batched vdot
  * clean: remove ai slop
  * clean: further attempts to clean but still not matching PR
  * fix: dr is vdt rather than fdt
  * typing: fix typing issue
  * wip: still not sure where the difference is now
  * update forces per comment
  * Fix ASE pos-only implementation
  * Initialize velocities to None in the pos-only case, see previous changes to the optimizers (ensures that the correct `dt` is used)
  * Change the order of the increase of `n_pos`. Again, this ensures the usage of the correct `dt` compared to ASE
  * Fix torch-sim ASE-FIRE (Frechet Cell)
  * Remove rescaling of positions when updating the cell, it's not relevant
  * Correctly rescale the positions with respect to the deformation gradient
  * Consider the `cell_forces` in the convergence when doing cell optimizations
  * linting
  * test: still differ significantly after step 1 for distorted structures
  * Fix test comparing ASE and torch-sim optimization
  * Include the `cell_forces` in the convergence check
  * Fix the number of iterations that are performed. `steps_between_swaps` is set to 1, so the number of iterations is equal to the number of swaps. In the previous version, fewer iterations would have been performed when reaching the maximum number of swaps. For example, when trying to run 32 steps with `steps_between_swaps=5`, the optimization would have stopped after 30 iterations, i.e., 6 swaps.
  * Fix `autobatching.py`. The if statement would have been triggered for `max_attempts=0`, which was the case when running one iteration with `steps_between_swaps=5`
  * Fix `optimizers` when using `UnitCellFilter`
  * Fix the `None` initialization
  * Fix the cell update when using `UnitCellFilter`
  * fix test_optimize_fire
  * allow FireState.velocities = None since it's being set to None in multiple places
  * safer `batched_vdot`: check dimensionality of input tensors `y` and `batch_indices`; fix stale docstring mentioning is_sum_sq kwarg
  * generate_force_convergence_fn: raise informative error on needed but missing cell_forces
  * pascal case VALID_FIRE_CELL_STATES -> AnyFireCellState and fix non-f-string error messages
  * fix FireState TypeError: non-default argument 'dt' follows default argument
  * allow None but don't set default for state.velocities
  * fix bad merge conflict resolution
  * tweaks
  Co-authored-by: Rhys Goodall, Janosh Riebesell
  Commit: 2e4a408
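An illustrative stand-in for the batched vdot used in this PR (not the exact `batched_vdot` in torch-sim): per-system dot products of two per-atom vector fields, e.g. the power term Σᵢ fᵢ·vᵢ accumulated separately for each system in the batch.

```python
import torch

def batched_vdot(x: torch.Tensor, y: torch.Tensor, batch: torch.Tensor) -> torch.Tensor:
    """x, y: [n_atoms, 3]; batch: [n_atoms] system indices -> per-system dot products [n_batches]."""
    if x.ndim != 2 or y.ndim != 2 or batch.ndim != 1:
        raise ValueError("expected x, y of shape [n_atoms, 3] and 1-D batch indices")
    per_atom = (x * y).sum(dim=1)                     # x_i . y_i for every atom
    n_batches = int(batch.max()) + 1
    # scatter-add the per-atom contributions into their owning system
    return torch.zeros(n_batches, dtype=x.dtype).index_add_(0, batch, per_atom)

forces = torch.randn(6, 3)
velocities = torch.randn(6, 3)
batch = torch.tensor([0, 0, 0, 1, 1, 1])
print(batched_vdot(forces, velocities, batch))  # one power value per system
```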
Commits on Jun 9, 2025
- clean: move pair fn logic into batched model code
  * clean: begin to remove unbatched scripts
  * clean: update the dynamics scripts, remove old code, refactor the integrators into a folder
  * fix: all unit tests run
  * fix: remove circular import
  * wip: address more issues with removing unbatched code
  * wip: another attempt to fix examples
  * fix: mace needs the shifts? we should add a unit test that needs them, as the MaceModel unit tests didn't fail when removing them
  * metatensor models have been renamed to metatomic models (#205)
  * Fix discrepancies between `FIRE` optimisations in `ASE` and `torch-sim` (#203, sub-items as in the Jun 3 entry above)
  * bump pre-commit hooks and fix new errors
  * fix MACE examples not using ts.io state conversion utilities
  * fix unused shifts_list not torch.cat()-ed in mace.py
  * fix examples/scripts/3_Dynamics/3.9_MACE_NVT_staggered_stress.py using nvt_langevin in place of nvt_nose_hoover_invariant; remove debug print in torch_sim/runners.py
  * just calc total energy manually in nvt_langevin_invariant in torch_sim/integrators/nvt.py; also fix broadcasting bug in Ornstein-Uhlenbeck (ou_step)
  * fix calculate_momenta() in npt_nose_hoover_init; also fix 3.8_MACE_NPT_Nose_Hoover.py broken imports from torch_sim.integrators.npt
  * fix unpack of n_particles, dim = state.positions.shape
  * fix missing external_pressure in npt_nose_hoover call in 3.8_MACE_NPT_Nose_Hoover.py
  * update cell_position in npt_nose_hoover before downstream usage
  * refactor (no bug fixes)
  * fix npt_nose_hoover transposing a cell that's already in column-vector convention
  * feat: Add batching support to NPT Nose-Hoover integrator
    - Update NPTNoseHooverState cell properties to support batch dimensions: reference_cell becomes [n_batches, 3, 3] instead of [3, 3]; cell_position, cell_momentum, cell_mass become [n_batches] instead of scalar
    - Fix tensor broadcasting in exp_iL1, exp_iL2 with proper atom-to-batch mapping
    - Update compute_cell_force for per-batch kinetic energy and stress calculations
    - Fix npt_nose_hoover_init to properly initialize batched cell variables
    - Update npt_nose_hoover_invariant for per-batch energy conservation
    - Replace pbc_wrap_general with pbc_wrap_batched for proper PBC handling
    - Fix example script 3.8_MACE_NPT_Nose_Hoover.py output formatting
    Enables multiple independent NPT systems in a single simulation while maintaining backward compatibility for single-batch systems.
  * fix: Remove cell dimension squeezing in NVT Nose-Hoover integrator
    - Remove problematic cell.squeeze(0) that breaks batching support
    - Fix calculate_momenta function call to use the correct signature with the batch parameter
    - Resolves RuntimeError when using MACE with the NVT Nose-Hoover thermostat
    Fixes example script 3.5_MACE_NVT_Nose_Hoover.py, which was failing because the neighbor list function received wrong tensor shapes when the cell batch dimension was incorrectly removed.
  * fix: Handle scalar kT and batched stress tensors in NPT Nose-Hoover
    - Convert scalar kT to a tensor before accessing .ndim in npt_nose_hoover_init and update_cell_mass
    - Fix stress tensor trace computation in compute_cell_force to handle 3D batched tensors
    - Use torch.diagonal().sum() for batched stress tensors instead of torch.trace()
    Fixes the Lennard-Jones NPT Nose-Hoover script that was failing with AttributeError: 'float' object has no attribute 'ndim' and RuntimeError: trace: expected a matrix, but got tensor with dim 3.
  * fix: Handle batched cell tensors in get_fractional_coordinates
    - Replace deprecated .T with .mT for matrix transpose on 3D tensors
    - Add support for batched cell tensors with shape [n_batches, 3, 3]
    - Extract the first batch cell matrix when cell.ndim == 3
    - Maintains backward compatibility with 2D cell matrices
    Fixes the batched silicon workflow script that was failing with a UserWarning about deprecated .T usage on >2D tensors and RuntimeError: linalg.solve: A must be batches of square matrices. The get_fractional_coordinates function now properly handles both single [3, 3] and batched [n_batches, 3, 3] cell tensors, enabling a2c_silicon_batched.py to run.
  * replace .transpose(-2, -1) with .mT everywhere
  * fix nvt + npt integrators: ensure consistent batch dimensions in kinetic energy calculations
    - Add missing `batch=state.batch` parameter to `calc_kinetic_energy` calls in the NPT integrator
    - Enhance NVT/NPT invariant functions with explicit broadcasting for chain variables
    - Replace inefficient manual loops with batch-aware kinetic energy calculation in the NPT invariant
    - Fix undefined variable reference in the NPT invariant function (n_batches -> state.n_batches)
    Resolves broadcasting issues where scalar kinetic energy was incorrectly combined with batched energy and temperature tensors, ensuring all energy terms have consistent batch dimensions before addition.
  * fix transforms.py: prevent silent data corruption in get_fractional_coordinates. Replace the problematic batched cell handling that only processed the first batch (cell[0]) with an explicit NotImplementedError, preventing multi-batch systems from silently using only the first batch's cell parameters for all coordinate transformations.
    - Raise NotImplementedError for 3D batched cell tensors instead of silently ignoring batches 1, 2, ... N
    - Preserve existing functionality for 2D cell matrices
    - Add a clear error message indicating the limitation and suggesting workarounds
    Breaking change: code that previously silently failed will now explicitly error, but this prevents incorrect results in multi-batch scenarios.
  * fix nvt_langevin: handle None gamma parameter correctly
  * fix npt: missing PBC check in Nose-Hoover position update
    - Check state.pbc before applying ts.transforms.pbc_wrap_batched
    - Return unwrapped positions when state.pbc is False
    - Ensure consistent behavior with the NPT Langevin implementation
  * Initialize cur_deform_grad to prevent UnboundLocalError
  * fix nvt: compute degrees of freedom per batch in Nose-Hoover init. Replace count_dof() with a proper batch-aware DOF calculation using torch.bincount(state.batch) to ensure consistency with the invariant function. Previously the batch structure was ignored, causing incorrect DOF for batched systems.
  * fix 5.1_a2c_silicon_batched.py: restore single-batch cell tensor support in transforms.py's get_fractional_coordinates
    - Handle batched cell tensors with shape [1, 3, 3] by auto-squeezing to [3, 3]
    - Improve error messages for multi-batch cases to be more informative
    - Add comprehensive tests for batched cell tensor scenarios
    - Fixes NotImplementedError in examples/scripts/5_Workflow/5.1_a2c_silicon_batched.py:159
    - Maintains full backward compatibility with existing 2D cell matrix usage
  Signed-off-by: Janosh Riebesell
  Co-authored-by: Janosh Riebesell
  Commit: 6c7bc81
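A generic sketch of the batch-aware bookkeeping these integrator fixes converge on (illustrative only, not the torch-sim functions): per-system degrees of freedom via `torch.bincount` and per-system kinetic energy accumulated with `index_add_`.

```python
import torch

def per_batch_dof_and_kinetic_energy(
    momenta: torch.Tensor,  # [n_atoms, 3]
    masses: torch.Tensor,   # [n_atoms]
    batch: torch.Tensor,    # [n_atoms] system index per atom
) -> tuple[torch.Tensor, torch.Tensor]:
    n_batches = int(batch.max()) + 1
    dof = 3 * torch.bincount(batch, minlength=n_batches)   # 3 DOF per atom, counted per system
    ke_per_atom = (momenta**2).sum(dim=1) / (2 * masses)   # p^2 / 2m for every atom
    kinetic = torch.zeros(n_batches, dtype=momenta.dtype).index_add_(0, batch, ke_per_atom)
    return dof, kinetic

momenta = torch.randn(5, 3)
masses = torch.ones(5)
batch = torch.tensor([0, 0, 0, 1, 1])
dof, ke = per_batch_dof_and_kinetic_energy(momenta, masses, batch)
print(dof.tolist(), ke)  # e.g. [9, 6] degrees of freedom and one kinetic energy per system
```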
Commits on Jun 10, 2025
- `generate_force_convergence_fn`: default `include_cell_forces` to True (matching ASE) (#209)
  * pyproject.toml: use uv_build for build backend
  * add test coverage for symmetric strain tensor calculation (u + u.mT)/2. Addresses missing test coverage for the symmetric strain tensor calculation in torch_sim/elastic.py:815 with tests for zero, pure shear, and hydrostatic strain cases. Verify consistency of strains produced by elementary deformations.
  * `generate_force_convergence_fn` defaults to including cell forces in the convergence check (matching ASE) + test coverage; parameterized tests for `generate_force_convergence_fn` covering different tolerance levels and cell forces
  * fix return annotation and docstring of convergence_fn in torch_sim/runners.py
  * fix: replace >= on boolean tensors with explicit logical operations. Avoid undefined broadcast behavior by using torch.logical_or and torch.logical_not in the tolerance ordering test
  * update TorchSim package treemap in readme following removal of unbatched code
  * avoid floating-point precision issues in test_get_elementary_deformations_strain_consistency
  Commit: db4782d
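A hedged sketch of how a force-convergence function that now also checks cell forces might be wired into a relaxation, reusing `state` and `model` from the earlier sketches. The import path and the `force_tol`/`convergence_fn` argument names are assumptions, not verified against v0.2.2.

```python
import torch_sim as ts
from torch_sim.runners import generate_force_convergence_fn  # import path assumed

# cell forces are now included in the check by default per this PR (matching ASE filters)
convergence_fn = generate_force_convergence_fn(force_tol=0.02)  # tolerance value/units assumed

relaxed = ts.optimize(
    system=state,                # reusing state and model from the sketches above
    model=model,
    optimizer=ts.frechet_cell_fire,
    convergence_fn=convergence_fn,
)
```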
- Remove debug print statements from tests and replace them with assertions (#210)
  * remove debug print statements from tests and replace them with assertions, adjust ruff config to flag prints in package code
  * remove unused script docs/_static/get_module_graph_dot_file.py
  * replace assert statements with ValueError in multiple files
  * fix self-documenting f-string unsupported in torchscript
  * replace logging.info with print in autobatching.py
  * don't ruff ignore INP001 implicit-namespace-package
  * codecov unignore torch_sim/unbatched
  * primitive_neighbor_list raise RuntimeError not AssertionError when no atoms are provided
  Commit: d1ce1b2
- Commit: 3d2bea0
The full file-level comparison is too large to render here. To see the complete diff locally, run:
git diff v0.2.1...v0.2.2