Can we have a generic Type[C]? #107

Closed
NYKevin opened this issue May 8, 2015 · 65 comments

@NYKevin

NYKevin commented May 8, 2015

The type object occupies a rather strange place in the type hierarchy:

>>> type(type) is type
True

(I'm pretty sure that's a flat lie, since you can't instantiate something from itself, but regardless...)

>>> isinstance(type, object)
True
>>> isinstance(object, type)
True
>>> isinstance(type, type)
True

In Java, the (very rough) equivalent is a class, specifically Class<T>. It's also generic; the type variable refers to the instance. Java has it easy because they don't support metaclasses. Classes are not first class in Java, so their type hierarchy doesn't have to deal with the strange loops shown above.

I realize metaclasses in their full generality are out of scope (at least for now), but a generic Type[T] like Java's would be nice to have. So far as I can tell from the Mypy documentation, it doesn't currently exist.

Here's some example code which might like to have this feature:

def make_foo(class_: Type[T]) -> T:
    # Instantiate and return a T
    return class_()
@gvanrossum
Member

Unless this is trivial to add to mypy I propose to punt this to a future version of type hints.

FWIW type(X) is a shorthand for X.__class__ and type.__class__ is indeed type. Not a lie.

@NYKevin
Author

NYKevin commented May 9, 2015

Yeah, but I'm pretty sure you have to reassign type.__class__ at some point to get there...

Anyway, I recognize that this is probably a lot more complex than it looks, and that the first 3.5 beta is rather alarmingly close, but I do feel this is essential for anyone using first-class classes in any significant way. Otherwise, you just have a type object, which is basically Callable[[???], object] (any idea what to put for the args?). You'd need a lot of casting to make that useful. I mean, I suppose you can treat it like Type[Any]/Callable[Any, Any], but then you're just being too permissive instead of too strict.

@gvanrossum
Member

Any is your friend. :-)

@agronholm
Contributor

Here's an example use case, straight from the code of my new framework:

@typechecked
def register_extension_type(ext_type: str, extension_class: type, replace: bool=False):
    """
    Adds a new extension type that can be used with a dictionary based configuration.

    :param ext_type: the extension type identifier
    :param extension_class: a class that implements IExtension
    :param replace: ``True`` to replace an existing type
    """

    assert_subclass('extension_class', extension_class, IExtension)
    if ext_type in extension_types and not replace:
        raise ValueError('Extension type "{}" already exists'.format(ext_type))

    extension_types[ext_type] = extension_class

I would like to declare the second argument as extension_class: Type[IExtension] (or Class[IExtension], doesn't matter to me). Likewise, the type hint for extension_types should be Dict[str, Type[IExtension]].
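A minimal runnable sketch of what the requested annotation could look like, assuming `Type` existed in `typing` (the `IExtension` stub here stands in for the framework's real interface class):

```python
from typing import Dict, Type

class IExtension:
    """Stand-in for the framework's extension interface."""

extension_types: Dict[str, Type[IExtension]] = {}

def register_extension_type(ext_type: str,
                            extension_class: Type[IExtension],
                            replace: bool = False) -> None:
    # Type[IExtension] means: a class object that subclasses IExtension,
    # which is exactly what assert_subclass checks dynamically above.
    if ext_type in extension_types and not replace:
        raise ValueError('Extension type "{}" already exists'.format(ext_type))
    extension_types[ext_type] = extension_class
```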

@ilevkivskyi
Member

I understand the time pressure, but without Type[T] the type hinting system looks a bit incomplete. I hope it will be added soon.

@gvanrossum
Member

Could you submit a patch to mypy? That would make a world of difference.

Until then, incompleteness of the typesystem doesn't bother me that much -- unlike statically-typed languages, Python programs don't have to be fully specified, you can always use Any.

@ilevkivskyi
Member

I totally agree, one of the good points of gradual typing is the possibility to introduce it gradually :-) IMO it is perfectly pythonic - one could take exactly as much typing as one wants, practicality beats purity here. And yes I will try to make a patch.

@NYKevin
Author

NYKevin commented May 19, 2015

@ilevkivskyi If you're serious about making a patch, make sure type inference of this line doesn't crash or break things too badly:

x = type

You could infer x to be a Type[Type[Type[...]]], but that's probably not a Good Idea. I'd just make it Type[object] or Type[Any]. Not sure whether this is an actual problem without looking at mypy's source, but it's something to watch out for.

@JukkaL
Contributor

JukkaL commented May 22, 2015

Another issue related to Type[X] is whether a value with such type should be callable. Since __init__ is special (it is often overridden with an incompatible signature) just taking the signature of __init__ is not safe, unless there is a way to annotate __init__ to require a compatible override in all subclasses.

For example, consider x with type Type[object]. If it were callable, x() should probably be accepted, as object() is fine. Now the runtime value of x could be slice, as it's a subclass of object -- like any class is. However, in this case the call would fail at runtime since slice() is not valid.
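The runtime failure described here is easy to reproduce: slice is a subclass of object, but its constructor rejects zero arguments.

```python
def instantiate(x):
    # Imagine x annotated as Type[object]; any class would be accepted.
    return x()

print(instantiate(object))       # fine: object() takes no arguments
try:
    instantiate(slice)           # slice is a subclass of object...
except TypeError as e:
    print("failed:", e)          # ...but slice() requires arguments
```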

@NYKevin
Author

NYKevin commented May 22, 2015

@JukkaL It gets worse:

class Foo:
    def __init__(self, args):
        ...  # do something

    @classmethod
    def bar(cls, args):
        # cls is Type[Foo]
        self = cls(args)  # type: Foo
        # do something else
        return self

I agree with Jukka that this code is wrong. We can easily fix it by splitting the "do something" off into a separate (underscore-prefixed) method and calling super().__new__() from bar(). But unlike the examples we've been discussing so far, this is something the average user would actually run into. If we talk about Type[Foo] in the error message, we'll induce exploding head syndrome in the end user. It would be nice if we could avoid that.
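A sketch of the suggested refactoring, under the stated assumptions (the helper name _init_from is hypothetical): bar() bypasses cls(...) entirely, so a subclass's incompatible __init__ is never invoked.

```python
class Foo:
    def __init__(self, args):
        self._init_from(args)

    def _init_from(self, args):
        # the old "do something" body, shared by both construction paths
        self.args = args

    @classmethod
    def bar(cls, args):
        # allocate via super().__new__() instead of calling cls(args)
        self = super().__new__(cls)
        self._init_from(args)
        # do something else
        return self
```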

@gvanrossum
Member

The pattern (helper class methods that create and return instances) is indeed very popular. I think it's usually done for classes that won't be subclassed, or where the subclasses don't override __init__ (at least not in an incompatible way). In the former case one could use a static method and name the class explicitly, but the second use case is popular in some circles.

I wonder if a good type checker should be complaining only about an __init__ override that changes the signature, and only when the base class constructs instances of itself through a class method's cls argument.

@agronholm
Contributor

Frankly I don't understand what the fuss is about. Type[Foo] would not guarantee anything beyond the type being a subclass of Foo. It would not guarantee compatibility of method signatures. Just like when annotating an argument as Foo, it would still accept an instance of any Foo's subclass, which may have arbitrary method overrides, yes?

@NYKevin
Author

NYKevin commented May 22, 2015

@agronholm That's why we have the Liskov substitution principle. Usually, constructors and initializers are exempt from that rule since they're (effectively) static, but when you start to allow covariant first-class classes, that decision becomes a bit more questionable. Should static/class methods be exempt from LSP at all?

@agronholm
Contributor

My point was that Python does not adhere to the LSP since you can arbitrarily override any methods, even if the overridden signatures are incompatible. In Java, this would simply cause the superclass method to be called in such cases. Are you proposing that the LSP should now be enforced somehow?
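The point is easy to demonstrate with a toy example (names invented for illustration): Python dispatches to the subclass method at runtime even when its signature is incompatible, where Java's overloading would have kept the superclass method reachable.

```python
class Base:
    def greet(self, name):
        return "hello " + name

class Sub(Base):
    def greet(self):             # incompatible override: drops the parameter
        return "hi"

obj = Sub()
print(obj.greet())               # dispatches to Sub.greet
# obj.greet("x") would raise TypeError -- Base.greet is no longer reachable
```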

@ilevkivskyi
Member

@NYKevin, One of the ideas could be to make first-class classes invariant and add a keyword, something like Type[Foo, nonstrict=True] that would make them covariant. The word nonstrict sounds scary, so that people who are going to use it, will be prepared to have some errors at runtime :-)

@agronholm
Contributor

I don't think that is valid Python syntax.

@ilevkivskyi
Member

Good point :-) One can of course define a constant nonstrict in typing.py and write Type[Foo, nonstrict], but I believe all these are details. One needs first to implement it (Type[Foo]) in any form then do some fixes.

@JukkaL I think Callable[..., X] is a good first approximation for a value with type Type[X].

@NYKevin
Author

NYKevin commented May 22, 2015

@agronholm The only reason Java calls the superclass implementation in that case is because it supports overloading. Python doesn't, so I would expect LSP adherence in practice. It may be difficult to enforce statically, however (cf. the Circle-Ellipse problem). But that doesn't mean you can get away with violating it for non-static methods. Things will break, whether the linter catches it or not.

@ilevkivskyi That seems a rather odd syntax to me. I'm not sure invariant Type[T] is necessarily all that useful anyway. What's the point of a type variable if you know statically what its value will be?

@ilevkivskyi
Member

@NYKevin, I agree, it is not very useful, but one can use it for something like def foo(cls: Type[T]) -> T: ... as you originally proposed.

@gvanrossum
Member

This seems to run into some of the same objections as handling the classmethod first argument does in #292, where we also have the problem of how to type-check calling the class, since we don't know what the signature of a subclass's constructor is.

Maybe we can cut through these objections using a similar approach: Never mind the non-Liskov properties of constructor signatures, at least in the first round. If someone defines a class C with a certain constructor signature, and they define an API that takes an argument c annotated with Type[C], then suitable argument values for c are the class C and any subclass thereof. In the API, one can call class methods of C on the argument c, and one can also call c(), with the same signature as C().

In the future we can come up with some way to handle subclasses of C with a different constructor signature. Perhaps we could give Type an optional second parameter that constrains the constructor signature, so that e.g. Type[C, [int, int]] would mean subclasses of C whose constructor takes two integers. Hm... maybe it should be Type[[int, int], C] so it matches Callable? In any case, Type[C] should probably then be limited to subclasses whose constructor signature matches that of C. (To unconstrain the constructor signature, Type[C, ...] or Type[..., C] could be used, similar to Callable[..., C].)

@NYKevin
Author

NYKevin commented Mar 20, 2016

Perhaps we could give Type an optional second parameter that constrains the constructor signature, so that e.g. Type[C, [int, int]] would mean subclasses of C whose constructor takes two integers.

If I needed something like that, I would just write Callable[[int, int], C], which is nicely duck-typed. I'm not sure it's a great idea to overcomplicate things here.

I'd much rather assume something like this: "Well-behaved constructors support cooperative multiple inheritance; they should consume the arguments they recognize and pass unrecognized arguments up the inheritance chain via super()." Not everybody does that, unfortunately, but it does seem like good design for larger or more complex class hierarchies. Unfortunately, I'm not sure that actually limits the non-Liskov tendencies enough to make this amenable to static analysis.

Another option would be to ban calling types entirely. If you want a callable, you should take a callable. Type[T] would then be limited to purely Liskov-compatible operations like calling classmethods (which may well be a better design for this kind of thing anyway). It might then also make sense to have a CallableType[[args], T] as you suggest, but I'm not actually aware of any use case where you specifically need both those things at once, and you can always just multiply inherit from Type and Callable on a case-by-case basis.

@gvanrossum
Member

I would just write Callable[[int, int], C]

That's great if all you need is a factory, and if that's all we needed we wouldn't need Type[C] at all. But what if you're also using it as a class? E.g. call a class method, use it with isinstance() or issubclass(), or possibly even use it as a base class for a dynamically created new class.
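A sketch of what "using it as a class" can mean in practice (User is borrowed from the thread's running example; the table_name classmethod is invented for illustration). None of these operations are expressible with Callable alone:

```python
from typing import Type

class User:
    @classmethod
    def table_name(cls) -> str:
        return cls.__name__.lower()

def audit(user_class: Type[User]) -> str:
    # class-level operations: issubclass check plus a classmethod call
    assert issubclass(user_class, User)
    return user_class.table_name()
```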

Well-behaved constructors support cooperative multiple inheritance

IMO that's a dangerous myth -- it is essentially claiming a very specific protocol for constructors as the one true way. That's okay for a library or framework, but I wouldn't want Python itself to enforce it, and I don't think it's appropriate for a type checker either. (Different frameworks may have different religions about constructors and a type checker should be able to support them all equally well.)

(Also, why shouldn't a recognized argument also be passed via super()? That's how non-constructor methods do it after all.)

Another option would be to ban calling types entirely

I originally started out thinking that's a reasonable alternative, but looking at actual code where developers have asked for Type[C] I found that they are often constructing instances (but not just that, so Callable isn't a substitute).

However: I am totally fine with punting on all this and just supporting Type[C] for now. (I am also considering making it so that at runtime you can call Type[] with any number of parameters -- this would make it easier to evolve the rules in mypy without having to distribute a new version of typing.py.)

@NYKevin
Author

NYKevin commented Mar 21, 2016

(Also, why shouldn't a recognized argument also be passed via super()? That's how non-constructor methods do it after all.)

Because object.__init__() doesn't take arguments. Every argument has to be swallowed at some point in the MRO before you reach object, or it errors out. There are exactly two sensible ways to do that: swallow them as you recognize them, or swallow them all at once right before object. The latter requires some root type that always appears second-to-last in the MRO, which is needless complexity. It also makes it significantly harder to detect too-many-arguments, whereas swallow-as-you-go gives you that for free.

(Of course, in practice, a lot of code out there takes the third approach of "swallow everything immediately and don't even call super().__init__()," which I call bad design but you might just as easily call "YAGNI." This is definitely a matter of opinion.)
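The swallow-as-you-recognize protocol described above, in code form (a minimal sketch with made-up class names):

```python
class A:
    def __init__(self, a, **kwargs):
        self.a = a                      # consume the argument we recognize
        super().__init__(**kwargs)      # pass the rest up the MRO

class B:
    def __init__(self, b, **kwargs):
        self.b = b
        super().__init__(**kwargs)      # eventually reaches object.__init__()

class C(A, B):
    pass

c = C(a=1, b=2)   # each class in the MRO swallows its own keyword
# C(a=1, b=2, x=3) would raise TypeError at object.__init__() --
# the too-many-arguments detection mentioned above, for free
```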

(Different frameworks may have different religions about constructors and a type checker should be able to support them all equally well.)

Fair enough, though I believe the system I just described is implicitly endorsed by the design of super(). But some people might reasonably disagree, and even if I am right, most developers won't want to go back and change their constructors just to make the type checker happy.

I originally started out thinking that's a reasonable alternative, but looking at actual code where developers have asked for Type[C] I found that they are often constructing instances (but not just that, so Callable isn't a substitute).

I would refactor that code into a classmethod, if possible.

But again, we don't want to demand that developers refactor their code in arbitrary ways just to use the type checker, so we'd need to support this use case even if we were sure it was always a bad idea (which we're not).

(I am also considering making it so that at runtime you can call Type[] with any number of parameters -- this would make it easier to evolve the rules in mypy without having to distribute a new version of typing.py.)

I like this, as a compromise. Maybe throw in a warning in strict mode (is strict mode still a thing?).

@gvanrossum
Member

Because object.__init__() doesn't take arguments. Every argument has to be swallowed at some point in the MRO before you reach object, or it errors out. [...] The latter requires some root type that always appears second-to-last in the MRO, which is needless complexity.

Why would such a root class be a bad idea? To me it sounds more sensible than assuming you can just multiply-inherit from anything that inherits from object. That's patently false anyways (try MI from list and dict :-).

MI is a great idea but it needs to be tamed. We used to have a meme that mix-in classes were the only way to tame it. That was probably true in the Python 2 days of classic classes. But nowadays a framework-specific root class sounds way better. You're never going to be successful using MI with two base classes that weren't designed to cooperate. Sharing a framework-specific root class sounds like a perfect way to signal that you're expecting to cooperate, and that you're using the framework's protocol for constructor super-calls. The MRO mostly d

Fair enough, though I believe the system I just described is implicitly endorsed by the design of super().

The design of super() is primarily meant to support "Liskov" for ordinary (non-constructor) methods. It is additionally constrained by the need for backwards compatibility, e.g. you can choose not to use it. If you don't call your super method you're implicitly constraining yourself to single inheritance. If you do call your super method you may support MI, if you do it right. For ordinary methods that's simple. For constructors, Python doesn't provide any additional support, but the needs and conventions are nevertheless different. Again, for SI you can do it any way you like (Liskov is not needed), and for MI you have to agree on a protocol. But I don't think that that protocol has to be the one implemented by object.

Type[] with any number of parameters

I like this, as a compromise. Maybe throw in a warning in strict mode (is strict mode still a thing?).

The idea is that at runtime it's completely silent, but a type checker can require a specific format. So if mypy doesn't support Type[X, Y] it should certainly flag that as an error. And we should all agree on what Type[X, Y] means before supporting it -- but my hope is that we won't be constrained by what the runtime library in 3.5.2 supports, since that library couldn't care less. (It's much easier to roll out a new version of mypy than to roll out a new version of typing.py, since the latter is baked into the stdlib of 3.5.x, and a PyPI package can't override a stdlib module.)

@sametmax

Are types and classes different? Would we need Type to accept stuff like str, dict, int and custom classes, while Class would accept only classes?

@gvanrossum
Member

gvanrossum commented May 12, 2016

Hmm... The distinction between "type" and "class" as we're trying to distinguish in PEP 484 is that a class is a runtime concept while a type is a concept in the type checker. So str, dict etc. are classes. They are also types, but only because basically every class is considered a type -- and then there are some things that are types but not classes, such as Any, Union[int, str], Tuple[int, int], and type variables. (However, List[int] is both, though for fairly obscure reasons.)

I don't recall whether we considered Class[C]; I kind of like Type[C] because we can make Type equal to Type[object], which is exactly what type means. OTOH you've got a point: an argument annotated with Type[C] is really a class object that subclasses C, and giving it a type that's not a class makes little sense -- I wouldn't want to allow Type[Any] or Type[Union[...]].

Still I'm reluctant to give up the pun or rhyme with type.

@bintoro
Contributor

bintoro commented May 12, 2016

Right, the ability to substitute Type for type in an annotation is pretty neat — didn't think of that.

@gvanrossum
Member

Everyone on python-ideas seems to prefer Type[C] over Class[C], so let's not fret over the naming.

Next we need to come up with some text for PEP 484, and a PR for typing.py (both the Python 3 version and the Python 2 backport). All preferably before PyCon, i.e. before May 28 or so (because right after PyCon I'll be depleted and the 3.5.2 RC will be on June 12, i.e. really soon afterwards).

gvanrossum pushed a commit that referenced this issue May 13, 2016
This addresses #107 (but it doesn't close it because the issue also
calls for an implementation in typing.py).
gvanrossum added a commit that referenced this issue May 18, 2016
This addresses #107 (but it doesn't close it because the issue also calls for an implementation in typing.py).
@gvanrossum
Member

I think my text about covariance "Type[T] should be considered covariant, since for a concrete class C , Type[C] matches C and any of its subclasses" is bogus.

Looking at the actual class definition in the PR, it seems the type variable used (CT) represents a metaclass. The "variance" allowed here is subclassing the metaclass. I think it's still correct to state that it's covariant, but the text needs to clarify that this refers to the metaclass. HELP!!

@bintoro
Contributor

bintoro commented May 19, 2016

With generic collections, the type parameter denotes the type of the contained elements. But for Type, the type parameter must denote the runtime value (the class) itself, not its type (the metaclass).

What to do here may depend on the definition of a "generic class". If Generic[X] is supposed to always mean that an acceptable runtime value encloses an instance of X, then Type can't be generic.

FWIW, I assumed all along that Type would be a special case like Tuple.

@ilevkivskyi
Member

ilevkivskyi commented May 19, 2016

I think the problem here is about classes vs types. It is not safe to talk about variance with respect to subclassing; one should talk about variance with respect to subtyping. Type[t1] clearly does not represent a proper class. But it is also not a proper metaclass, in the same way that any generic type is not a proper class. However, Type[t1] is a well-defined type.

Therefore I would propose wording like this:

"Type is covariant in its parameter, because Type[Derived] is a subtype of Type[Base]. Indeed, all members of the former type, i.e. classes that subclass Derived, are also members of the latter type, since they all also subclass Base."

@ilevkivskyi
Member

ilevkivskyi commented May 19, 2016

@bintoro Although Tuple is not a generic type, it still behaves perfectly covariantly in its parameters. Also, I would propose avoiding the term "generic class" to avoid confusion. Only MyGeneric[Any] (or equivalently MyGeneric without parameters) should be called a class; MyGeneric[int], etc. are all types.

@bintoro
Contributor

bintoro commented May 19, 2016

@ilevkivskyi I did not refer to Tuple for its behavior.

I would propose to avoid using term "generic class" to avoid confusions.

I wouldn't because generic classes are a thing. A class that inherits from Generic[T] is a generic class in my books. I would propose to avoid calling them types to avoid confusion.

@ilevkivskyi
Member

@gvanrossum In addition to my first explanation, I think CT does not represent a metaclass; it still represents a type -- you could even substitute a Union in place of it.

@gvanrossum
Member

Hm, I agree CT does not represent a metaclass. The example Type[User] makes this pretty clear. It's Type itself that's like a metaclass (it inherits from 'type'). This is also pretty clear from the new_user() example:

def new_user(user_class: XXX):
    ...
new_user(User)

Compare this to

def foo(arg: int):
    ...
foo(42)

Here, 42 is an instance of int, just like in the previous example User is an instance of type. So where the annotation for arg is a class, the annotation for user_class is a metaclass.

Now back to variance. For this we need a different "regular" example. Let's use

def bar(args: Sequence[int]):
    ...

If we had a subclass of int, MyInt, then Sequence[MyInt] would be a valid type for a call to bar(), and that's what covariance means.
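Concretely, with a made-up MyInt subclass (int itself can't be subtyped by users in a useful way here, so the example is purely illustrative):

```python
from typing import Sequence

def bar(args: Sequence[int]) -> int:
    return sum(args)

class MyInt(int):
    pass

# Sequence is covariant in its element type, so a Sequence[MyInt]
# is acceptable where a Sequence[int] is expected.
total = bar([MyInt(1), MyInt(2)])
```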

So in our Type example, we have Type[User] as the argument's annotation. For Type to be covariant would mean that if we have another function with an argument declared as Type[ProUser], we could call new_user() with that argument:

def new_pro_user(pro_user_class: Type[ProUser]):
    user = new_user(pro_user_class)
    ...

Indeed that sounds fine to me, so I agree intuitively (without having proven so rigorously) that Type is indeed covariant. (That's a big deal because previously I was just repeating "Type is covariant" because that's what the experts said; I hadn't actually visualized what that statement meant. :-)

Having gotten this far, I like Ivan's first sentence:

"Type is covariant in its parameter, because Type[Derived] is a subtype of Type[Base]."

I don't think the second sentence adds much (even though I agree it's true).

@rwbarton

I agree that Type should be covariant. Here's my logic, which is basically an expanded version of your "if we have another function with an argument declared as Type[ProUser], we could call new_user() with that argument: [...] Indeed that sounds fine to me".

To test whether Type[Derived] should be a subtype of Type[Base], we need to check whether a value of type Type[Derived] is legal in any context where a value of type Type[Base] is. In order to do that we should start by writing down the typing rules for using values of type Type[T]. My suggestion is:

  • If c has type Type[T], C is a class, and T is a subtype of C, then if the expression C(...) type checks, then c(...) also type checks and has type T. [Construction via class variable]
  • If c has type Type[T], C is a class, and T is a subtype of C, then if the expression C.m(...) type checks, then c.m(...) also type checks and has the same type as C.m(...). [Invocation of class method via class variable]

(Why do these rules not start simply "If c has type Type[C], then if..."? The argument to Type might not be a class, but instead a Union or a type variable. In the case of a type variable, we need to be able to invoke class methods of the upper bound of the type variable (if it is a class). So I think we are forced into rules of this form.)

Now suppose S is a subtype of T and d has type Type[S]. Take the first rule and substitute d for c leaving the rest of the statement unchanged. Then S is a subtype of C, because S <: T <: C, and so we can apply the typing rule for d and S to conclude that d(...) has type S. Again since S <: T, d(...) also has type T. So, the value d also satisfies the typing rule for T, meaning it can be substituted for c. Similar logic applies to the second rule. So according to these rules, Type should indeed be covariant.
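The substitution argument, instantiated on a tiny example (class names assumed for illustration): a value of type Type[Derived] is accepted wherever Type[Base] is, and the result of calling it is still a Base.

```python
from typing import Type

class Base:
    pass

class Derived(Base):
    pass

def make(c: Type[Base]) -> Base:
    return c()   # construction via class variable, with C = Base

# Covariance in action: passing Derived (a Type[Derived] value)
# where Type[Base] is expected; the result is a Derived, hence a Base.
obj = make(Derived)
```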

@rwbarton

One problem with the typing rules above is that they don't actually correspond to python's runtime semantics. For example, every class is a subclass of object, but object's constructor can take 0 arguments, which a typical class's constructor cannot. So we expect a = A to have type Type[A], and then a() should be legal according to the above rule (with C = object), but it could fail at runtime. This is the LSP issue mentioned earlier in this thread. And this issue is unavoidable; we want the new_user example from the PEP to type check, but it really could fail at runtime if BasicUser's constructor isn't compatible with User's.

The language in the PEP about compatibility of method signatures is the assumption that is needed for the typing rule to match the runtime reality. mypy currently checks compatibility for ordinary methods, but not for the constructor. The true covariance rule for runtime behavior is something like "Type[D] is a subtype of Type[C] when D is a subclass of C which overrides the constructor and methods of C in a compatible manner".

Operationally I think mypy should implement these rules by just picking C to be the most specific class that is known to be a supertype of the type T, since that is the class that c is most likely to be compatible with (more so than any of C's superclasses, like object).

@rwbarton

Also, I quite like @gvanrossum's type_map example. A related example would be code that does something like

class A: ...
class AImpl1(A): ...
class AImpl2(A): ...

if some_condition:
    aImpl = AImpl1  # type: Type[A]
else:
    aImpl = AImpl2

# use class methods on aImpl

Technically this does not require variance since the rule for typing classes could be "C has type Type[T] if T is a supertype of C", but it's nicer to just say "C has type Type[C]" and make use of covariance.

@gvanrossum
Member

OK, thanks for the (semi-?)formal proof, it helps to know that I didn't miss a case (reasoning about variance just doesn't come naturally to me, I'm probably a closet Eiffel programmer :-).

Of course the weakness of the whole scheme is that the constructor signature of a subclass doesn't have to match that of the base class -- but that's not specific to the argument for covariance. There's already text in the PEP that promises to get back to this issue in the future.

The other thought that this triggered for me is that a class method might itself be a factory that uses Type[T] in its return type, for some type variable T whose upper bound is C. Then c.m() needn't have the same type as C.m() -- wherever the return type of C.m() uses C, the return type of c.m() uses c. But honestly I don't want to weigh the PEP down with this formalism anyways, so we can hand-wave this away.

@rwbarton

It should probably reject isinstance(..., Type[...]) and issubclass(.., Type[...]).

I'm not sure which "it" you're referring to here: mypy or typing.py's runtime implementation. If these are rejected at runtime, then I would tend to prefer that they be rejected by mypy too.

Trying this out, mypy already gives an error

x/type.py:7: error: Generic type is prohibited as a runtime expression (use a type alias or '# type:' comment)

but with a type alias mypy accepts the isinstance call since Type[T] is a subtype of type. Seems like it would take a special case for mypy to reject this form, so maybe it should be allowed at runtime too?

Running under python gave a baffling error, looks like an unrelated issue?:

Traceback (most recent call last):
  File "x/type.py", line 7, in <module>
    isinstance(1, Type_int)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/typing.py", line 996, in __instancecheck__
    return self.__subclasscheck__(instance.__class__)
TypeError: __subclasscheck__() takes exactly one argument (0 given)

For context, that __instancecheck__ is a method of GenericMeta and the full program I am running is

from typing import TypeVar, Generic

T = TypeVar('T')
class Type(type, Generic[T], extra=type): pass

Type_int = Type[int]
isinstance(1, Type_int)

@rwbarton

The other thought that this triggered for me is that a class method might itself be a factory that uses Type[T] in its return type, for some type variable T whose upper bound is C. Then c.m() needn't have the same type as C.m() -- wherever the return type of C.m() uses C, the return type of c.m() uses c. But honestly I don't want to weigh the PEP down with this formalism anyways, so we can hand-wave this away.

Something like this crossed my mind too; I think it might fit better with the SelfType stuff discussed elsewhere and we should not worry about it right now.

@gvanrossum
Member

(Our remarks about constructor signatures and LSP crossed, but I think we're in agreement. Note that object is even more special than most other classes, because it also has a wacko rule about compatibility between __new__ and __init__ if you implement one of them but not the other. But I digress.)

@gvanrossum
Member

I'm not sure which "it" you're referring to here

Me neither, but most likely I was thinking about the type checker. Or perhaps the PEP. Since at runtime isinstance() involving special stuff is almost always forbidden, and issubclass() is forbidden by the PEP (though the runtime still allows it -- ripping it out is still a task (#136) but it's stalled by some unforeseen problems).

@ilevkivskyi
Member

@gvanrossum I agree that my second sentence does not add much, but I do not see a shorter way of explaining the covariance than @rwbarton did. I think probably it would be better to simply add a short example, illustrating that a value of type Type[Derived] is legal in a context where a value of type Type[Base] is. Something like this:

``Type`` is covariant in its parameter, because ``Type[Derived]`` is a subtype of ``Type[Base]``::

  def new_pro_user(pro_user_class: Type[ProUser]):
      user = new_user(pro_user_class)  # OK
      ...

@gvanrossum
Member

Done! I've also added a similar call to new_user() to the earlier Union example (implicitly acknowledging covariance there without mentioning it, to avoid scaring people unnecessarily).

@gvanrossum
Member

Closing -- both the PEP and typing.py have been updated. We're waiting for mypy but that's also very close (and not essential to this issue): python/mypy#1569
