Description
On my workstation, the following program runs in ~30s.
If I reduce the list size by removing the * 8, it takes ~8s.
Now I don't know if this program is in any way realistic.
How would I tell whether my app has this behaviour?
(i.e. the combination of large lists and mutation)
I instrumented the builtin hash map and hash set in runtime/lib to print list allocation sizes of 1M and over.
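For reference, the check is nothing more than a size threshold at the allocation site. A minimal sketch of the idea, assuming a hypothetical allocation helper (the real hook is in the hash map/set growth code in runtime/lib, and the names below are made up):

// Sketch only: a made-up allocation helper that logs large backing lists.
const int kLargeList = 1024 * 1024;

List allocateBackingList(int size) {
  if (size >= kLargeList) {
    print('large backing list allocated: $size elements');
  }
  return new List(size);
}

main() {
  allocateBackingList(2 * 1024 * 1024);  // prints, since 2M >= 1M
}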
dart2js compiles of a certain large app do create at least two hash maps with lists containing 2M elements. These are likely to be long-lived.
(I thought of breaking the hash map into a multi-level hash map, but this is unlikely to help, since there will still be multiple smaller lists with random access that add up to 2M elements. A segmented approach would work better with lists that have exploitable access patterns, e.g. appending; see the sketch after the program below.)
// One large, long-lived list: 1M elements with the * 8, 128K without.
const N = 128 * 1024 * 8;

var a = new List(N);

main() {
  int ix = 0;
  var root = new Link(null);
  var sw = new Stopwatch()..start();
  for (int i = 0; i < 1000; i++) {
    for (int j = 0; j < 1000; j++) {
      // Store a freshly allocated object into the large list...
      a[ix] = new Object();
      // ...then allocate a chain of 1000 short-lived links.
      var p = root;
      for (int k = 0; k < 1000; k++) {
        p = new Link(p);
      }
      // Overwrite the slot with the chain length so the chain is used.
      a[ix] = p.length;
      // The stride is odd, hence coprime with the power-of-two length,
      // so the writes eventually scatter over the whole list.
      ix = (ix + 14999981).remainder(a.length);
    }
  }
  print(sw.elapsed);
}

class Link {
  Link next;
  Link(this.next);

  // Walk the chain to its end and count the links.
  int get length {
    var i = 0;
    for (Link p = this; p != null; p = p.next) i++;
    return i;
  }
}
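To illustrate the segmented idea mentioned above: a two-level list splits an index into a chunk index and an offset, so growth only ever touches the chunk table and the last chunk. This is a sketch under my own naming, not a proposal; it mainly shows why the benefit depends on an append-style access pattern, since random reads and writes still reach into arbitrary chunks.

// Sketch of a segmented (two-level) list. Indexing splits i into
// (chunk, offset); only append benefits, because only the chunk table
// and the last chunk ever grow. Random access still hits arbitrary
// chunks, so the 2M elements are merely spread out, not avoided.
class SegmentedList {
  static const int shift = 10;             // 1024 elements per chunk
  static const int mask = (1 << shift) - 1;
  final List<List> chunks = [];

  int get length => chunks.isEmpty
      ? 0
      : (chunks.length - 1) * (1 << shift) + chunks.last.length;

  void add(value) {
    if (chunks.isEmpty || chunks.last.length == (1 << shift)) {
      chunks.add([]);                      // grow by one small chunk
    }
    chunks.last.add(value);
  }

  operator [](int i) => chunks[i >> shift][i & mask];
  operator []=(int i, value) {
    chunks[i >> shift][i & mask] = value;
  }
}

main() {
  var s = new SegmentedList();
  for (int i = 0; i < 3000; i++) s.add(i);
  print('${s.length} elements, s[2048] = ${s[2048]}');
}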