<!--
Thank you for contributing to Ruff/ty! To help us out with reviewing,
please consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title? (Please prefix
with `[ty]` for ty pull
requests.)
- Does this pull request include references to any relevant issues?
- Does this PR follow our AI policy
(https://github.com/astral-sh/.github/blob/main/AI_POLICY.md)?
-->
## Summary
This implements structural promotion of tuple size in the inferred type
of a collection literal.
The promotion only applies to a very specific circumstance: when tuple
literals in an inferred collection element type produce a union of
homogeneous fixed-length tuples of differing lengths, and only literal
tuple sources have contributed to that type, then we widen that union to
a single variadic tuple (e.g., `tuple[str] | tuple[str, str]` is widened
to `tuple[str, ...]`).
The result is that the scenario described in
https://github.com/astral-sh/ty/issues/2620 succeeds:
```python
languages = {
    "python": (".py", ".pyi"),
    "javascript": (".js", ".jsx", ".ts", ".tsx"),
}
# This no longer errors: the type of `languages` is now
# `dict[str, tuple[str, ...]]` rather than
# `dict[str, tuple[str, str] | tuple[str, str, str, str]]`
languages["ruby"] = (".rb",)
```
Closes https://github.com/astral-sh/ty/issues/2620.
### Approach
- I created a new submodule that encapsulates the tuple size promotion
policy. It exposes a `TupleSizePromotionConstraints` struct that we use
during inference to record the scenarios in which we should **not**
attempt to promote a tuple. If no such disqualifying scenarios are
encountered, then tuple size promotion is attempted. The set of
disqualifying scenarios is documented in new mdtests.
- I think the policy for when to promote unions that involve empty
tuples deserves particular scrutiny. Since empty tuples do not have an
element type, they present a special case. The rule I've chosen is that
empty tuples do not contribute to evidence of different tuple lengths.
That means that a union containing an empty tuple must also contain
other tuples of differing lengths to trigger promotion (i.e., `[(),
(1,)]` remains `list[tuple[()] | tuple[int]]`, but `[(), (1,), (1, 2)]`
is promoted to `list[tuple[int, ...]]`). This is conservative and, I
hope, useful for modeling situations in which the size of a tuple is
specifically meant to be 0 or N.
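As a toy model of the empty-tuple rule (the helper name and the shape of the check are mine, not the actual implementation), the length-based part of the decision can be sketched as:

```python
def should_promote(tuple_lengths: list[int]) -> bool:
    """Toy model of the empty-tuple rule: empty tuples never count as
    evidence of differing lengths, so promotion requires at least two
    distinct non-zero lengths among the observed tuples."""
    nonzero = {n for n in tuple_lengths if n > 0}
    return len(nonzero) >= 2

# [(), (1,)] -> only one non-empty length, no promotion
assert not should_promote([0, 1])
# [(), (1,), (1, 2)] -> two non-empty lengths, promote
assert should_promote([0, 1, 2])
```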
## Test Plan
Please see new and updated mdtests.
## Summary
This fixes the known issues with handling recursive aliases, as
described
[here](https://github.com/astral-sh/ty/issues/3195#issuecomment-4184926298)
and elsewhere.
```py
from typing import reveal_type
type A = list[A]
def foo(x: A):
    reveal_type(x[0])  # main: list[Any] -> this PR: list[A]
```
This allows us to safely remove the `MAX_RECURSION_DEPTH` limit that was
associated with `CycleDetector`.
The point is that the previous implementation simply compared types
using hash values to guard against recursion, which was insufficient,
since distinct types can share a hash. By adding explicit equality
checks, the depth limit is no longer necessary.
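To illustrate why hashing alone is an insufficient cycle guard (a minimal sketch, not the actual `CycleDetector` code): two distinct values can share a hash, so a guard keyed only on hashes can falsely report a cycle, while one that also checks equality cannot.

```python
class Collides:
    """Two distinct values that deliberately share a hash."""
    def __init__(self, tag: str) -> None:
        self.tag = tag
    def __hash__(self) -> int:
        return 42
    def __eq__(self, other: object) -> bool:
        return isinstance(other, Collides) and self.tag == other.tag

a, b = Collides("a"), Collides("b")

# Hash-only guard: falsely treats b as "already seen".
seen_hashes = {hash(a)}
assert hash(b) in seen_hashes  # false positive

# Hash + equality guard (what a Python set does): no false positive.
seen = {a}
assert b not in seen
```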
Stacked on https://github.com/astral-sh/ruff/pull/24803
## Test Plan
mdtest updated
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Support narrowing for a few more already-supported sites, but in the
context of a walrus, as in:
```python
def f(t: tuple[int, int] | tuple[None, None]):
    if (first := t[0]) is not None:
        reveal_type(first)  # int
        reveal_type(t)  # tuple[int, int]
    else:
        reveal_type(first)  # None
        reveal_type(t)  # tuple[None, None]
```
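The runtime behavior this narrowing models can be shown in plain Python (no type checking involved): the walrus both binds `first` and supplies the test's subject, so both names carry branch-specific information.

```python
def f(t):
    # The walrus binds `first` while its value is being tested, so the
    # branch taken tells us about both `first` and `t[0]`.
    if (first := t[0]) is not None:
        return ("some", first)
    return ("none", first)

assert f((1, 2)) == ("some", 1)
assert f((None, None)) == ("none", None)
```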
## Summary
Stringified annotations were being guarded by the `includeDeclaration`
flag, so `MyClass` inside `"MyClass"` wasn't being treated as a
reference, which seems unintentional.
Closes https://github.com/astral-sh/ty/issues/3386.
## Summary
This PR adds the infrastructure to surface diagnostics during project
and program settings resolution. The motivating use case: when we see
an unsupported Python version in an editor, we want to surface a
diagnostic rather than just a `tracing` warning, as in:
```
warning[unsupported-python-version]: Ignoring unsupported inferred Python version `3.16`; ty will use Python 3.14 instead.
--> venv/pyvenv.cfg:2:16
|
2 | version_info = 3.16.0
| ^^^^^^
3 | home = base/bin
|
info: Expected one of `3.7`, `3.8`, `3.9`, `3.10`, `3.11`, `3.12`, `3.13`, `3.14`, `3.15`.
info: Set `python-version` explicitly to override the inferred version.
info: The version was inferred from your virtual environment metadata.
```
## Summary
This PR adds initial support for `functools.partial`, including:
- Constructor-time checking of bound arguments (e.g., `partial(f, "x")`
should report an immediate error if `"x"` is not a valid type for the
parameter)
- Reduced signatures for partials (e.g., `def f(a: int, b: str, *, c:
bool) -> bytes` with `partial(f, 1)` becomes `partial[(b: str, *, c:
bool) -> bytes]`).
- Support for overloads, assignability checks, and more.
There are a few things that are _not_ covered and were instead cordoned
off into separate commits, namely:
- Preserving unprovided generic type variables in the returned partial
signature (fixed in: https://github.com/astral-sh/ruff/pull/24583). As
of this commit, we get:
```python
from functools import partial
from typing import TypeVar
T = TypeVar("T")
U = TypeVar("U")
def combine(a: T, b: U) -> tuple[T, U]:
    return (a, b)
# partial[(b: Unknown) -> tuple[Literal[1], Unknown]]
p = partial(combine, 1)
```
- Keyword overrides in generics (e.g., `partial(combine, b=1)` can later
be called as `p("x", b="y")`, since keyword arguments can be overridden
at call time -- TIL!).
- Constructor modeling (`__new__`, etc.)
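The keyword-override behavior (the TIL above) is real runtime `functools.partial` semantics: call-time keywords replace bound ones.

```python
from functools import partial

def combine(a, b):
    return (a, b)

p = partial(combine, b=1)

# The bound keyword is used when not overridden...
assert p("x") == ("x", 1)
# ...but a call-time keyword replaces it.
assert p("x", b="y") == ("x", "y")
```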
But this gets us much of the way there. After this PR, I believe our
handling of `functools.partial` is generally ahead of Mypy and Pyright
with the significant exception of generic modeling, where ty is behind.
(I chose to include tests for the above in
`crates/ty_python_semantic/resources/mdtest/call/functools_partial.md`,
with TODOs, which get resolved in subsequent PRs.)
See: https://github.com/astral-sh/ty/issues/1536.
## Summary
We parse `something, not = (1, 2)` as (on the LHS) a name target
(`something`) and a unary target (`not` applied to an empty name). As a
result, we never visit the `not` or its operand, which means we never
infer or record a type for that malformed subtree in `UnpackResult`.
Later, an `expression_type(...)` lookup for any of its subexpressions
can miss and panic.
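For comparison, CPython's parser rejects this source outright, while ty's error-recovering parser keeps going and produces the malformed subtree described above:

```python
import ast

try:
    ast.parse("something, not = (1, 2)")
    recovered = True
except SyntaxError:
    recovered = False

# CPython raises rather than recovering.
assert not recovered
```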
Closes https://github.com/astral-sh/ty/issues/3283.
## Summary
This PR adds a recursion guard for signature comparisons (keyed by
(source definition, target definition, relation)), used to prevent a
stack overflow in structural protocol matching. Previously, recursive
protocols would just recurse with a new specialization; now, we assume
success when we see the same pair in a single check, and continue from
there.
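The assume-success-on-cycle pattern, sketched on a toy recursive structure (the names and the `{field: type}` encoding are mine, not the actual implementation):

```python
def satisfies(source, target, seen=None):
    """Toy structural check over {field: type} dicts that may contain
    cycles. On revisiting a (source, target) pair within one check we
    assume success instead of recursing forever."""
    seen = seen if seen is not None else set()
    key = (id(source), id(target))
    if key in seen:
        return True  # coinductive assumption: assume success on cycles
    seen.add(key)
    return all(
        name in source and (
            satisfies(source[name], target[name], seen)
            if isinstance(target[name], dict)
            else source[name] == target[name]
        )
        for name in target
    )

proto = {}
proto["next"] = proto  # recursive protocol-like structure
impl = {}
impl["next"] = impl

assert satisfies(impl, proto)       # cycle resolved, no stack overflow
assert not satisfies({}, proto)     # missing member still fails
```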
Closes https://github.com/astral-sh/ty/issues/3208.
## Summary
Prior to this change, for `TypeIs`, we only stored the
`T.top_materialization()` of the type. So for assignability, we checked
against the materialized narrowing type rather than the user-declared
type; the two were conflated.
I think this causes problems for cases like:
```python
static_assert(is_assignable_to(TypeIs[Sequence[int]], TypeIs[Sequence[Any]]))
static_assert(not is_assignable_to(TypeIs[Sequence[int]], TypeIs[Sequence[object]]))
```
On `main`, the first assertion fails.
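A toy model of the invariance rule these assertions encode (both helpers and the string-based "assignability" relation are mine, for illustration only): `TypeIs[A]` is assignable to `TypeIs[B]` only when `A` and `B` are mutually assignable.

```python
def assignable(a: str, b: str) -> bool:
    # Toy rule: a gradual type (one containing Any) is assignable in
    # both directions; otherwise require an exact match.
    return "Any" in a or "Any" in b or a == b

def typeis_assignable(a: str, b: str) -> bool:
    # TypeIs is invariant in its argument: both directions must hold.
    return assignable(a, b) and assignable(b, a)

assert typeis_assignable("Sequence[int]", "Sequence[Any]")
assert not typeis_assignable("Sequence[int]", "Sequence[object]")
```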
## Summary
This prevents semantic tokens from a quoted type annotation from leaking
across notebook cells. Previously, such a leak would cause wonky syntax
highlighting as described in
https://github.com/astral-sh/ty/issues/3307.
The root cause of the issue was accidental omission of a cell's source
range from the filter used to determine the bounds of the semantic token
request.
Closes https://github.com/astral-sh/ty/issues/3307.
## Test Plan
Please see added regression test.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
In `.py` and `.pyi` files, we now only flag cases in which the return
type is the enclosing class, like:
```python
class A:
    def __iadd__(self) -> A:
        return self
```
As opposed to:
```python
class A:
    def __iadd__(self) -> int:
        return self
```
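The narrowed rule can be approximated with a small AST walk (a toy sketch, not the actual Ruff implementation, and it only handles unquoted `Name` annotations):

```python
import ast

def flagged_classes(source: str) -> list[str]:
    """Toy approximation: flag a class only when its __iadd__
    annotates the return type as the enclosing class itself."""
    out = []
    for cls in ast.walk(ast.parse(source)):
        if not isinstance(cls, ast.ClassDef):
            continue
        for item in cls.body:
            if (
                isinstance(item, ast.FunctionDef)
                and item.name == "__iadd__"
                and isinstance(item.returns, ast.Name)
                and item.returns.id == cls.name
            ):
                out.append(cls.name)
    return out

# Returning the enclosing class: flagged.
assert flagged_classes(
    "class A:\n    def __iadd__(self) -> A:\n        return self\n"
) == ["A"]
# Returning some other type: no longer flagged.
assert flagged_classes(
    "class A:\n    def __iadd__(self) -> int:\n        return self\n"
) == []
```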
Closes https://github.com/astral-sh/ruff/issues/24462.
## Summary
This PR fixes several ParamSpec variance and gradual-specialization edge
cases that fell out of https://github.com/astral-sh/ruff/pull/24319.
We also now treat `typing_extensions.ParamSpec` defaults like
`typing.ParamSpec` defaults, which I think was an oversight.
## Summary
This is an attempt to improve the error message for something like
```py
a = 1
def f(a):
    global a
```
I realize that the previous wording ("name 'a' is parameter and global")
is used by CPython itself, but unless we are trying to be consistent
with CPython, I think we can improve upon this? The previous version
seemed a bit cryptic to me when I first saw it.
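For reference, the CPython wording quoted above can be reproduced directly (the exact message text may vary slightly across CPython versions):

```python
try:
    compile("a = 1\ndef f(a):\n    global a\n", "<test>", "exec")
    raised = False
except SyntaxError as exc:
    raised = True
    message = exc.msg  # e.g. "name 'a' is parameter and global"

assert raised
assert "parameter and global" in message
```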
## Test Plan
Updated snapshot tests
## Summary
These values are typically accessed together (e.g., in
`ArgumentMatcher::new`), and constructing them together halves the
number of vector allocations.
## Summary
Given a generic specialization, we were rebuilding the constraints after
every argument, rather than all-at-once.
E.g., for:
```python
def combine[T](a: T, b: T, c: T, d: T) -> T:
    return a
combine(("name", 1), ("id", 2), ("flag", True), ("size", 4))
```
Each argument constrains the same type variable `T`:
```python
T = tuple[Literal["name"], Literal[1]]
T = tuple[Literal["id"], Literal[2]]
T = tuple[Literal["flag"], Literal[True]]
T = tuple[Literal["size"], Literal[4]]
```
On main, we then compute (roughly):
```
T = A
T = union(A, B)
T = union(union(A, B), C)
T = union(union(A, B, C), D)
```
Now, we create a builder and construct at the end. This has a
significant impact on functions with many arguments, but also reduces
memory on real-world projects, which is great.
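The shape of the fix, sketched with plain Python (the builder type and its union encoding are illustrative, not the actual Rust code):

```python
class UnionBuilder:
    """Collect elements and build the union once, instead of
    rebuilding a union after every argument."""
    def __init__(self) -> None:
        self._elements: list[str] = []
        self.builds = 0

    def add(self, ty: str) -> None:
        self._elements.append(ty)

    def build(self) -> tuple[str, ...]:
        self.builds += 1
        # Deduplicate while preserving order, as a union would.
        return tuple(dict.fromkeys(self._elements))

b = UnionBuilder()
for constraint in ["A", "B", "C", "D", "A"]:
    b.add(constraint)
union = b.build()

assert union == ("A", "B", "C", "D")
assert b.builds == 1  # one construction at the end, not one per argument
```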
## Summary
This adds some salsa-caching to the new constraint set solver. There are
many call paths that build up a constraint set for an assignability
check, and then solve that constraint set to get a specialization, and
we often have to perform this multiple times on the same two types.
It is not just constraint set construction that is worth caching; we
also want to cache as much of the solution extraction as we can. So this
PR refactors the solving code slightly so that `PathBounds` is the
result of the now-salsa-cached method. This is a vec with an element for
each satisfiable path in the constraint set BDD, recording the combined
lower and upper bound for each typevar mentioned on that path. This
caches as much of the work as we can, while still allowing different
callers to provide different `choose` callbacks if they need to override
how to choose a specific type within that lower/upper bound, or if they
need to record additional information about the solution paths.
## Summary
Given, e.g., `def f() -> TD: return dict(**src)`, we now infer
`dict(**src)` as matching `TD` when `src` is a `TD`. So the following
are accepted, whereas on main they all produce diagnostics:
```python
from typing import TypedDict
class TD(TypedDict):
    x: int
    y: str

src: TD = {
    "x": 1,
    "y": "foo",
}
x: TD = dict(**src)
def f() -> TD: return dict(**src)
```
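At runtime, `dict(**src)` is just a shallow copy with the same keys, which is why inferring it as matching the source `TypedDict` is sound here:

```python
from typing import TypedDict

class TD(TypedDict):
    x: int
    y: str

src: TD = {"x": 1, "y": "foo"}
copy: TD = dict(**src)

assert copy == src
assert copy is not src  # a fresh dict, not the same object
```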
## Summary
This PR adds stricter validation for uses of `Unpack[...]`, namely to
ensure that we still reject `Unpack` if it's in a `**kwargs` annotation
but _not_ the top-level construct (e.g., `**kwargs: list[Unpack[TD]]`,
`**kwargs: Unpack[TD] | int`).
## Summary
We already expose `is_classmethod`, etc., on the function type, and
those methods already support aliasing, so the LSP now just uses those
directly.
Closes https://github.com/astral-sh/ty/issues/3358.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
We now detect Liskov violations when a parent and child have attributes
with differing `ClassVar` status.
In general, we interpret an attribute without a `ClassVar` annotation as
an instance attribute, with the exception of cases like the following,
where we "allow" the child to "inherit" the annotation to adhere to the
conformance suite:
```python
class ProtoB(Protocol):
    z: ClassVar[int]

class ProtoBImpl(ProtoB):
    z = 0
```
There are a few other tricky cases to consider.
### Methods
Like Mypy and Pyright, we don't flag the following:
```py
from typing import Any, Callable, ClassVar
class Base:
    f: ClassVar[Callable[..., Any]]

class Sub(Base):
    def f(self) -> int:
        return 1
```
### Descriptors
Like Mypy and Pyright, we don't flag the following:
```python
class Descriptor:
    def __get__(self, obj: object, owner: type[object]) -> int:
        return 1

class Base:
    attr = Descriptor()

class Sub(Base):
    attr: int
```
### Properties
Like Mypy and Pyright, we _do_ flag the following, since it changes the
class-vs.-instance contract:
```python
from typing import ClassVar
class Base:
    attr: ClassVar[int]

class Sub(Base):
    @property
    def attr(self) -> int:  # error: [invalid-attribute-override]
        return 1
```
### Final
We don't flag the following, because it already has a dedicated
diagnostic:
```python
from typing import Final
class Base:
    attr: Final[int] = 1

class Sub(Base):
    attr = 2  # error: [override-of-final-variable]
```
Closes https://github.com/astral-sh/ty/issues/3093.
## Summary
We now support `Unpack[TypedDict]` as an annotation on `**kwargs`, as in
the following example:
```python
from typing_extensions import TypedDict, Unpack
class MovieKwargs(TypedDict):
    title: str
    year: int

def show_movie(**kwargs: Unpack[MovieKwargs]) -> None:
    ...
show_movie(title="Alien", year=1979) # OK
show_movie(title="Alien") # missing required key
show_movie(name="Alien", year=1979) # unknown keyword
```
## Summary
This is a small refactor, and means that we no longer have to manually
compute the constraint set solutions just to change the default
specialization of a given mapping.
## Summary
If a function doesn't contain any annotations or default arguments, we
don't need to do deferred inference for the signature; and if the
function isn't decorated, we don't need to store a separate
`undecorated_type`. This avoids a deferred-definition entry, an empty
deferred inference query, and (oftena) an `DefinitionInferenceExtra`
allocation.