TYP: Type annotations, part 4 #313


Open · wants to merge 15 commits into base: main
Conversation

@crusaderky (Contributor) commented Apr 18, 2025

Follow-up to #288
Get mypy mostly green everywhere except torch.

@Copilot Copilot AI review requested due to automatic review settings April 18, 2025 13:48
@crusaderky crusaderky marked this pull request as draft April 18, 2025 13:48
@Copilot (Copilot AI) left a comment
Pull Request Overview

This pull request updates and standardizes type annotations throughout the array-api-compat codebase, ensuring that union types, keyword arguments, and collection type hints conform to current Python syntax and best practices.

  • Changes include replacing deprecated type imports, updating union syntax (using the “|” operator), and adding explicit types for **kwargs across multiple modules.
  • Several backend-specific modules (torch, numpy, dask, cupy, and common) now consistently annotate parameters and return types.
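A hypothetical before/after sketch of the conventions the review describes (the function and parameter names below are invented for illustration and do not come from the diff):

```python
from __future__ import annotations

from collections.abc import Sequence
from typing import Any

# Old style (pre-PR), roughly:
#   def take(x, indices: Optional[Sequence[int]] = None, **kwargs): ...

# New style: PEP 604 "|" unions, collections.abc imports,
# and an explicit type for **kwargs.
def take(x: object, indices: Sequence[int] | None = None, **kwargs: Any) -> object:
    """Toy function; only the annotations matter here."""
    return x
```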

Reviewed Changes

Copilot reviewed 22 out of 22 changed files in this pull request and generated no comments.

Show a summary per file:
  • array_api_compat/torch/fft.py: Updated import and kwargs annotation for FFT functions.
  • array_api_compat/torch/_aliases.py: Refined type annotations for promotion and result functions.
  • array_api_compat/numpy/*: Standardized type hints and error messages in linalg, _typing, _info.
  • array_api_compat/dask/array/*: Applied uniform type annotations and minor updates in error texts.
  • array_api_compat/cupy/*: Updated type annotations and internal alias handling.
  • array_api_compat/common/*: Adjusted typing in helper functions and internal modules.
Comments suppressed due to low confidence (1)

array_api_compat/dask/array/linalg.py:56

  • Typo in the error message: 'full_matrics' should be corrected to 'full_matrices'.
raise ValueError("full_matrics=True is not supported by dask.")

@crusaderky crusaderky changed the title Type annotations, part 4 TYP: Type annotations, part 4 Apr 18, 2025

# TODO: Account for other backends.
return isinstance(x, sparse.SparseArray)


def is_array_api_obj(x: object) -> TypeIs[_ArrayApiObj]: # pyright: ignore[reportUnknownParameterType]
def is_array_api_obj(x: object) -> TypeGuard[_ArrayApiObj]:
@crusaderky (author):

Given the current definition of _ArrayApiObj, TypeIs would cause downstream failures for all unknown array api compliant libraries.

Contributor:

Doesn't SupportsArrayNamespace cover all downstream array types?

@crusaderky (author):

SupportsArrayNamespace just states that the object has a __array_namespace__ method, nothing more.
It's missing all the other methods and properties of an Array.
I would much rather NOT write the full Array protocol here (I did it in array-api-extra and I regret it), as this is squarely in scope for array-api-types.

Contributor:

I would much rather NOT write the full Array protocol here

Oh no please don't. I doubt that it would even work.

But what I was trying to ask is whether there are any "array api objects" that don't have an __array_namespace__ method (because I honestly don't know).

@@ -70,72 +69,11 @@ def shape(self, /) -> _T_co: ...
DTypeKind: TypeAlias = _DTypeKind | tuple[_DTypeKind, ...]


# `__array_namespace_info__.dtypes(kind="bool")`
@crusaderky (author):

holy overengineering Batman!

Contributor:

I was planning on making these generic on their dtype-types. Then, when combined with the array-api-namespace protocol, it becomes possible to statically type the individual dtypes of any array-api library. I don't see any other way to make that happen.

@crusaderky (author):

I do not see much benefit in breaking down which dict keys each of the methods of dtypes() can return. From both a maintainer and an end-user perspective, I see little to no gain from any work beyond -> Mapping[str, DType].

Contributor:

Two comments above, I explain how you can determine the type of e.g. boolean and integer dtypes, and how this batmobile is the (only) way of getting there. With this, it becomes possible to annotate an array-api function that rejects boolean arrays, yet accepts integer arrays.

from numpy import _CopyMode # noqa: F401
except ImportError:
pass
from .linalg import matrix_transpose, vecdot # type: ignore[no-redef] # noqa: F401
@crusaderky (author):

reverts regression from #288

Contributor:

what was the issue?

@crusaderky (author):

#288 accidentally re-added lines (I assume after a bad merge from main) that were recently deleted in #302.

**kwargs,
dtype: DType | None = None,
device: Device | None = None,
copy: py_bool | None = _copy_default,
@crusaderky (author):

This is plain wrong and requires a dedicated follow-up

@crusaderky crusaderky marked this pull request as ready for review April 18, 2025 15:09
@lucascolley lucascolley requested a review from jorenham April 19, 2025 14:09
@jorenham (Contributor) left a comment:

Many (if not most) of the changes here revert work that I've done in #288. Why didn't you address those while it was still open? Then I could've explained my reasons for it, which would have saved us both a lot of time.

Seeing large portions of my work being deleted with comments like "holy overengineering Batman!" is pretty frustrating, to say the least. Because this way you're literally wasting the time and effort I've put into it, without even knowing why I made those decisions.

@@ -720,7 +722,7 @@ def iinfo(type_: DType | Array, /, xp: Namespace) -> Any:
"finfo",
"iinfo",
]
_all_ignore = ["inspect", "array_namespace", "NamedTuple"]
_all_ignore = ["is_cupy_namespace", "inspect", "array_namespace", "NamedTuple"]
Contributor:

Now that we have the __dir__ functions, are these _all_ignore's still needed?

@crusaderky (author):

Ah. I hadn't noticed __dir__. It's a nice idea but it completely neuters test_all.
I will revert the changes here and write a separate PR to address the issue.

def _isscalar(a: object) -> TypeIs[int | float | None]:
return isinstance(a, (int, float, type(None)))
def _isscalar(a: object) -> TypeIs[float | None]:
return isinstance(a, int | float | NoneType)
Contributor:

With

Suggested change
return isinstance(a, int | float | NoneType)
return a is None or isinstance(a, int | float)

we avoid the types import while simultaneously accentuating the violent dissonance between the Python runtime and its type-system, given that the sole purpose of a type-system is to accurately describe the runtime behavior...

@crusaderky (author) commented Apr 21, 2025:

Why do we avoid the types import? This is a runtime check.

Contributor:

This NoneType is the only reason why from types import NoneType is needed in this module. So replacing its use here with a is None makes that import no longer necessary

)

_API_VERSIONS_OLD: Final = frozenset({"2021.12", "2022.12", "2023.12"})
_API_VERSIONS: Final = _API_VERSIONS_OLD | frozenset({"2024.12"})


def _is_jax_zero_gradient_array(x: object) -> TypeGuard[_ZeroGradientArray]:
def _is_jax_zero_gradient_array(x: object) -> TypeIs[_ZeroGradientArray]:
Contributor:

The TypeGuard was intentional. Even if x is a _ZeroGradientArray, and therefore an npt.NDArray[np.void], the function might still return False, in which case a TypeIs would narrow x to not be npt.NDArray[np.void], whereas a TypeGuard wouldn't.

So it's better to revert this change (and the one below here)

@crusaderky (author):

reverting



@@ -139,7 +141,7 @@ def default_dtypes(
self,
*,
device: Device | None = None,
) -> dict[str, dtype[intp | float64 | complex128]]:
) -> DefaultDTypes:
Contributor:

This is a lot less precise, so why did you change it?

@crusaderky (author) commented Apr 21, 2025:

The ultra-precise annotation was giving neither final users nor maintainers any benefit.
Why did you define a TypeAlias if you're not using it?

Contributor:

The ultra-precise annotation was giving neither final users nor maintainers any benefit.

Users can always decide to broaden the returned type themselves, using their own annotations. But the other way around requires nasty things like cast.
But I don't feel very strongly about this one, so I'll stop myself from ranting about a bunch of other benefits....

Why did you define a TypeAlias if you're not using it?

If it's not needed, then feel free to remove it. But using it just because it exists feels a bit backwards to me.

@crusaderky (author):

Why didn't you address those while it was still open? Then I could've explained my reasons for it, which would have saved us both a lot of time.

I couldn't review that PR in time, apologies.

Seeing large portions of my work being deleted with comments like "holy overengineering Batman!" is pretty frustrating, to say the least.

I apologise, that was an emotional comment and I should not have made it.

@lucascolley lucascolley requested a review from jorenham May 10, 2025 21:50
@crusaderky (author):

@jorenham can we get some traction back into this?

@jorenham (Contributor):

@jorenham can we get some traction back into this?

I'll take a look tonight



device: Device,
/,
stream: int | Any | None = None,
) -> _CupyArray:
import cupy as cp # pyright: ignore[reportMissingTypeStubs]
) -> cp.ndarray:
Contributor:

Hmm, I'm not sure I follow. How does this relate to missing imports and the linter?


For a bit of context: _CupyArray was used as a placeholder for cp.ndarray, because (at least) pyright is not able to use cp.ndarray for static type inference. Those "unresolved reference" errors are pretty annoying to deal with, because they often snowball into more errors.

But don't get me wrong: If there's no need for that alias, then I'm all for this change.

@@ -8,7 +8,7 @@
if np.__version__[0] == "2":
from numpy.lib.array_utils import normalize_axis_tuple
else:
from numpy.core.numeric import normalize_axis_tuple
from numpy.core.numeric import normalize_axis_tuple # type: ignore[no-redef]
Contributor:

Mypy's allow_redefinition will be enabled by default in mypy 2. So perhaps we could enable it project-wide? I believe that this ignore also wouldn't be necessary anymore.
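A sketch of what enabling that project-wide could look like, assuming the project configures mypy through pyproject.toml (the flag already exists today as allow_redefinition):

```toml
# pyproject.toml
[tool.mypy]
# Permit rebinding a name with a different type in the same scope,
# as in the if/else import above; per the comment, this becomes the
# default in mypy 2.
allow_redefinition = true
```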

class SupportsArrayNamespace(Protocol[_T_co]):
def __array_namespace__(self, /, *, api_version: str | None) -> _T_co: ...
class SupportsArrayNamespace(Protocol):
def __array_namespace__(self, /, *, api_version: str | None) -> Namespace: ...
Contributor:

So this Namespace is a goldmine of static typing information. For instance, it implements the dtypes method (docs), which can be used to extract the dtype type of an array-api object. A very rough example could look as follows:

class ArrayNamespaceInfo[DTypeT](Protocol):
    def dtypes(self, *, device: Any = ..., kind: Any = ...) -> dict[Any, DTypeT]: ...

def get_dtype[DTypeT](
    a: SupportsArrayNamespace[ArrayNamespaceInfo[DTypeT]],
    /,
) -> DTypeT: ...

Using similar structural-typing trickery, it is also possible to extract the device type.
And if the library annotates its dtypes methods so that the return type depends on the kind literal passed to it (as I've tried to do here), then you would be able to distinguish between e.g. boolean and integer dtypes. That in turn allows you to write functions that reject booleans, but accept integers.



class HasShape(Protocol[_T_co]):
@property
def shape(self, /) -> _T_co: ...
def shape(self, /) -> tuple[_T_co, ...]: ...
Contributor:

It's a bit of a work in progress right now, but I'm working on implementing shape-typing in numpy at the moment. The Python type system is pretty limited, though, so some tricks are needed to make shape-typing actually useful.

My approach is to create @type_check_only subtypes of several (integer) tuple's, e.g. Rank0 <: tuple[()], Rank2 <: tuple[int, int], and Rank1N <: tuple[int, *tuple[int, ...]] (1-d or more).
I then attach some non-existent (type-check-only) methods onto those Rank subtypes that tell you which rank types are smaller, and which ones are larger, than the type itself. It might sound trivial, but this information can't be statically deduced from the tuple types themselves. When broadcasting arrays, this information is essential for determining the output shape type. And broadcasting is everywhere within numpy, so shape-typing would be pretty useless without it.

Here's the current implementation btw (which is probably going to change a lot): https://github.com/numpy/numtype/blob/d7dcd8d6114cdbc56e0d43b9b834e3a49d1d147a/src/_numtype/_rank.pyi#L82-L137

But perhaps the tests might be more interesting: https://github.com/numpy/numtype/blob/d7dcd8d6114cdbc56e0d43b9b834e3a49d1d147a/src/_numtype/%40test/generated/test_rank.pyi

What information are we losing? Where would it be useful?

To make a long story short: shape-typing in numpy
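For reference, the broadcasting rule that the rank machinery has to encode statically is simple at runtime. This mirrors the general NumPy broadcasting semantics, not any numtype API:

```python
def broadcast_shapes(a: tuple[int, ...], b: tuple[int, ...]) -> tuple[int, ...]:
    """Right-align the shapes; each aligned pair of dims must match or contain a 1."""
    result: list[int] = []
    for x, y in zip(reversed(a), reversed(b)):
        if x == y or y == 1:
            result.append(x)
        elif x == 1:
            result.append(y)
        else:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
    # The longer shape's leading (unpaired) dims pass through unchanged.
    longer = a if len(a) >= len(b) else b
    result.extend(reversed(longer[: abs(len(a) - len(b))]))
    return tuple(reversed(result))

assert broadcast_shapes((3, 1), (1, 4)) == (3, 4)
assert broadcast_shapes((), (2, 5)) == (2, 5)
```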


