Tweak and implement weak conformance replacement #5371


Merged
merged 3 commits on Nov 12, 2018

Conversation

odersky
Contributor

odersky commented Nov 2, 2018

No description provided.

odersky requested a review from sjrd November 2, 2018 15:01
@SethTisue
Member

SethTisue commented Nov 4, 2018

     Array(b, 33, 5.5)      : Array[Double] // b is an inline val
     Array(f(), 33, 5.5)    : Array[AnyVal] // f() is not a constant
     Array(5, 11L)          : Array[Long]
     Array(5, 11L, 5.5)     : Array[AnyVal] // Long and Double found
     Array(1.0f, 2)         : Array[Float]
     Array(1.0f, 1234567890): Array[AnyVal] // loss of precision
     Array(b, 33, 'a')      : Array[Char]
     Array(5.toByte, 11)    : Array[Byte]

I know there is rhyme and reason underlying this list of examples, but it's extraordinarily difficult for the casual eye to discern. I worry that anyone looking at this list will find it quite strange and off-putting.

In cases where it isn't obvious what the inferred type should be, can we make it a compile error unless a type parameter is explicitly supplied? Is that a design path that's been explored?
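
For comparison, here is a rough sketch of what stating the element type explicitly looks like with today's syntax (not a proposal, just the examples above spelled out):

```scala
// Stating the element type removes any guessing about harmonization.
val a1 = Array[AnyVal](5, 11L, 5.5)        // element type written out
val a2 = Array[Double](2.0, 3.0, 0, 4.0)   // the Int constant 0 is adapted to 0.0
val a3: Array[Long] = Array(5, 11L)        // or guided by an expected type
```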

@dwijnand
Member

dwijnand commented Nov 4, 2018

Is this revisiting #2451?

@smarter
Member

smarter commented Nov 5, 2018

The context is #5358

@odersky
Contributor Author

odersky commented Nov 5, 2018

In cases where it isn't obvious what the inferred type should be, can we make it a compile error unless a type parameter is explicitly supplied? Is that a design path that's been explored?

The issue we are addressing here is that we want to follow intuition where it is obvious. Mixing elements of different numeric types gives an AnyVal, as usual. Except that if you write

Array(2.0, 3.0, 0, 4.0)

you really want it to mean Array[Double]. Everything else is just silly. Note this also aligns with how int constants are treated elsewhere in Java and Scala. E.g.

val x: Byte = 33

is accepted, since 33 is implicitly converted to a Byte. Again, everything else would just be needlessly pedantic.
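
Concretely, the behaviour argued for (a sketch of the rule, not of the implementation: only constant Int literals adapt, and only when the value fits):

```scala
val x: Byte = 33                                  // ok: the constant 33 fits in Byte
val xs: Array[Double] = Array(2.0, 3.0, 0, 4.0)   // ok: the constant 0 becomes 0.0

def f(): Int = 42
val ys = Array(f(), 33, 5.5)                      // Array[AnyVal]: f() is not a constant
// val y: Byte = 300                              // rejected: 300 does not fit in Byte
```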

Array(5, 11L, 5.5)     : Array[AnyVal] // Long and Double found
Array(1.0f, 2)         : Array[Float]
Array(1.0f, 1234567890): Array[AnyVal] // loss of precision
Array(b, 33, 'a')      : Array[Char]
Member

Why was this changed back from AnyVal to Char? Didn't we agree that Chars should never be considered as numeric types for the purpose of harmonization?

Contributor Author

I thought we agreed that a Char literal should not be converted to another numeric type, same as a Float or a Double is not converted.

But this case is different: it's an Int constant that is converted to a Char type, just as it could be converted to a Short or Byte type. I agree that Char feels strange, but the fact is that it is a member of Java's numeric hierarchy, so I believe it's better to be consistent.

We do allow:

scala> val c: Char = 65
c: Char = A

So it's inconsistent to not also allow this in harmonization.
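
Side by side, the consistency argument reads roughly like this (a sketch, assuming the assignment rule carried over to harmonization):

```scala
val c: Char = 65              // accepted today: the Int constant 65 fits in Char
val xs = Array('a', 66)       // by the same rule, 66 adapts to Char -> Array[Char]
val ys = Array('a', 100000)   // 100000 does not fit in Char -> Array[AnyVal]
```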

Contributor Author

I see now that Char is not in the spec.

if all the other expressions have the same numeric type T (which can be one of Byte, Short, Int, Long, Float, Double)

Somehow I overlooked this before. So we'd either have to put Char back in the spec, or change the implementation to not convert to Char. At this stage it's all the same to me. Consistency with assignment is nice to have, but if people feel strongly against it I won't argue any longer.

val cls = targetClass(ts, NoSymbol, false)
if (cls.exists) {
  def lossOfPrecision(n: Int): Boolean =
    cls == defn.FloatClass && n.toFloat.toInt != n
Member

What about Bytes and Shorts? (and Chars, if they are considered numeric after all?)

Contributor Author

These are covered by `c.convertTo(cls.typeRef) != null`.
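
For concreteness, a small sketch of why Float needs the explicit round-trip check while Byte, Short and Char fall out of the conversion itself (the values mirror the examples above):

```scala
// Float: converting a large Int constant always succeeds but may round,
// so lossOfPrecision checks that the value round-trips exactly.
val big = 1234567890
big.toFloat.toInt == big      // false -> Array(1.0f, 1234567890): Array[AnyVal]
2.toFloat.toInt == 2          // true  -> Array(1.0f, 2): Array[Float]

// Byte/Short/Char: an out-of-range constant has no conversion at all
// (convertTo yields null), so no separate precision check is needed.
300.isValidByte               // false
65.isValidChar                // true
```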

@ShaneDelmore
Contributor

Array(2.0, 3.0, 0, 4.0)

you really want it to mean Array[Double]. Everything else is just silly.

If I wrote this, it would be a mistake. The compiler would infer what I meant, and every future reader would also need to infer what I meant, but what I really want is an error to remind me to either add .0 or just specify the type, so nobody has to play computer in their head and infer the type when it is so trivial to correct the oversight.

I understand this may not be what others want, just offering another perspective.

When running puzzle from Tasty, it still harmonizes the types in the old way.

Here's the diff:
```
~/workspace/dotty/tests/run> diff puzzle.decompiled puzzle.decompiled.out
4,5c4,5
<     scala.Predef.println(if (false) 5.0 else 53.0)
<     val x: scala.Double = if (false) 5.0 else 53.0
---
>     scala.Predef.println(if (false) 5.0 else '5')
>     val x: scala.AnyVal = if (false) 5.0 else '5'
11c11
< }

```