Some further optimizations #3223

Merged: 1 commit into scala:master on Oct 1, 2017

Conversation

@odersky (Contributor) commented Oct 1, 2017

No description provided.

Hash-consing used to use `equals`, except in some cases where it used `eq`.
This was inconsistent, and `equals` is also needlessly expensive.
@@ -606,6 +606,7 @@ object Contexts {
/** A table for hash consing unique types */
private[core] val uniques = new util.HashSet[Type](Config.initialUniquesCapacity) {
override def hash(x: Type): Int = x.hash
override def isEqual(x: Type, y: Type) = x.eql(y)
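
For readers skimming the diff: below is a minimal sketch, not dotty's actual `util.HashSet`, of a hash-consing table with overridable `hash` and `isEqual` hooks. The names `HashConsSet` and `findEntryOrUpdate` are invented for illustration; the point is that a subclass can plug in a precomputed hash and a cheaper structural test such as `eql` instead of the general-purpose `equals`.

```scala
import scala.collection.mutable

// Illustrative hash-consing set; HashConsSet and findEntryOrUpdate are made-up
// names, not dotty's util.HashSet API.
class HashConsSet[T <: AnyRef] {
  private val buckets = mutable.HashMap.empty[Int, List[T]]

  // Hash used for bucketing; a subclass can return a precomputed hash field.
  def hash(x: T): Int = x.hashCode

  // Equality used for uniqueness; a subclass can substitute a cheaper test,
  // analogous to the eql override in the diff above.
  def isEqual(x: T, y: T): Boolean = x.equals(y)

  // Return the previously interned element equal to x, or intern x itself.
  def findEntryOrUpdate(x: T): T = {
    val h = hash(x)
    val bucket = buckets.getOrElse(h, Nil)
    bucket.find(isEqual(_, x)).getOrElse {
      buckets.update(h, x :: bucket)
      x
    }
  }
}
```

In the PR itself, the `uniques` table overrides `hash` to return the type's precomputed `hash` and `isEqual` to call `eql`, so uniqueness checks avoid the full `equals`.
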
Member:

Shouldn't the override also be used in the other subclasses of HashSet? That is, in AppliedUniques and NamedTypeUniques.

@odersky (author):

No, `equals` is never called on those sets. Everything is based on `eq` for them.
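
To illustrate this point, here is a hedged sketch under simplified assumptions: the dedicated uniques sets intern a type by hashing and comparing its components with reference identity, so neither `equals` nor the new `eql` hook is ever consulted. The `Applied` wrapper and `enterIfNew` below are invented names, not dotty's AppliedUniques API.

```scala
import scala.collection.mutable

// Invented names for illustration; components are hashed and compared with
// reference identity (`eq`), so structural equality is never needed.
final class Applied(val tycon: AnyRef, val arg: AnyRef)

class IdentityUniques {
  private val buckets = mutable.HashMap.empty[Int, List[Applied]]

  private def hash(tycon: AnyRef, arg: AnyRef): Int =
    System.identityHashCode(tycon) * 31 + System.identityHashCode(arg)

  // Return the existing entry whose components are `eq` to the given ones,
  // or create and intern a fresh one.
  def enterIfNew(tycon: AnyRef, arg: AnyRef): Applied = {
    val h = hash(tycon, arg)
    val bucket = buckets.getOrElse(h, Nil)
    bucket.find(a => (a.tycon eq tycon) && (a.arg eq arg)).getOrElse {
      val fresh = new Applied(tycon, arg)
      buckets.update(h, fresh :: bucket)
      fresh
    }
  }
}
```
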

@@ -381,25 +377,31 @@ trait TypedTreeInfo extends TreeInfo[Type] { self: Trees.Instance[Type] =>

def isPureExpr(tree: Tree)(implicit ctx: Context) = exprPurity(tree) == Pure
def isIdempotentExpr(tree: Tree)(implicit ctx: Context) = exprPurity(tree) >= Idempotent
def isReadOnlyExpr(tree: Tree)(implicit ctx: Context) = exprPurity(tree) >= ReadOnly
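
The checks above compare `exprPurity(tree)` against an ordered scale of purity levels. The following is a minimal sketch of such an ordering; the level names mirror the defs above, but the numeric encoding and the `combine` helper are illustrative assumptions, not necessarily dotty's actual implementation.

```scala
// Hedged sketch: the level names mirror the defs above, the encoding is invented.
object PuritySketch {
  final case class PurityLevel(x: Int) extends AnyVal {
    def >=(that: PurityLevel): Boolean = x >= that.x
    def min(that: PurityLevel): PurityLevel = PurityLevel(x min that.x)
  }

  val Impure     = PurityLevel(0)  // may have arbitrary side effects
  val ReadOnly   = PurityLevel(1)  // may read mutable state, but does not write it
  val Idempotent = PurityLevel(2)  // evaluating it twice has the same effect as once
  val Pure       = PurityLevel(3)  // no side effects and no dependence on mutable state

  // A composite expression is only as pure as its least pure part.
  def combine(a: PurityLevel, b: PurityLevel): PurityLevel = a min b
}
```

With such an ordering, isPureExpr demands the top level, while isIdempotentExpr and isReadOnlyExpr accept anything at or above their respective thresholds.
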
Member:

This commit would benefit from some test cases. I suggest adding tests like https://github.com/lampepfl/dotty/blob/1e0863708e48339759f73f38464a410df850df15/compiler/test/dotty/tools/backend/jvm/DottyBytecodeTests.scala#L168-L189, which check that two methods end up with the same generated bytecode.

@smarter (Member) commented Oct 1, 2017

The second commit results in additional incorrect warnings:

[warn] -- Warning: /tmp/3/compiler/src/dotty/tools/dotc/core/tasty/NameBuffer.scala:47:4 
[warn] 47 |    op
[warn]    |    ^^
[warn]    | a read-only expression does nothing in statement position

I suggest breaking the second commit off into a separate PR since it's not an optimization; that way we can get the optimization in now.

@odersky (Contributor, Author) commented Oct 1, 2017

@smarter Yes, good idea.

@odersky (Contributor, Author) commented Oct 1, 2017

test performance please

@dottybot (Member) commented Oct 1, 2017

performance test scheduled: 1 job(s) in queue, 0 running.

@dottybot (Member) commented Oct 1, 2017

Performance test finished successfully:

Visit http://dotty-bench.epfl.ch/3223/ to see the changes.

Benchmarks are based on merging with master (b570a62).

@smarter (Member) commented Oct 1, 2017

As before, these data points have to be looked at carefully because they don't include today's merged PRs. @liufengyun, are the PR data points generated after every PR is merged, or daily?

@liufengyun (Contributor) commented:

@smarter The daily points are generated at midnight.

@smarter (Member) commented Oct 1, 2017

Would it be possible to generate them every time a PR is merged instead?

@liufengyun (Contributor) commented:

Yes, we can change it to run immediately after merge. I'll make it the next TODO in bench.

@smarter merged commit 1dda6e1 into scala:master on Oct 1, 2017
@allanrenucci deleted the try-optimize-3 branch on December 14, 2017 at 19:25