@akdubya
According to README.md:
var suite = new uubench.Suite({
  min: 1000, // each benchmark should run for at least 1000ms
  ...
});
I looked up the source code:
function adaptive() {
  if (--pend === 0) {
    var elapsed = new Date() - start;
    if (elapsed < min) {
      self.run(iter * 2);
    } else {
      self.callback({iterations: iter, elapsed: elapsed}); // since "start" is reset whenever "Bench.run" is invoked, the stats are inaccurate
    }
  }
}
The key point is that the value of "start" is reset whenever "Bench.run" is invoked:
Bench.prototype.run = function(iter) {
  var self = this, fn = self.test,
      checkfn = self.options.type === "adaptive" ? adaptive : fixed,
      i = iter, pend = i,
      min = self.options.min, start;
  if (self.loop) {
    pend = 1;
    start = new Date(); // "start" is reset here
    fn(checkfn, i);
  } else {
    start = new Date(); // "start" is reset here
    while (i--) {
      fn(checkfn);
    }
  }
};
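To make the timing behavior concrete, here is a minimal deterministic sketch of the adaptive doubling logic (not uubench's actual code; it uses a hypothetical fake clock where each iteration of the benchmarked function costs exactly 1ms):

```javascript
// Sketch of the adaptive doubling loop with a fake 1ms-per-iteration clock.
var MIN = 1000;   // same meaning as options.min in the README
var fakeNow = 0;  // fake clock, advanced 1ms per iteration

function runAdaptive(iter, totals) {
  var start = fakeNow;                          // "start" is reset on every run
  for (var i = 0; i < iter; i++) fakeNow += 1;  // pretend work: 1ms per iteration
  var elapsed = fakeNow - start;
  totals.totalElapsed += elapsed;               // accumulate across runs for comparison
  if (elapsed < MIN) return runAdaptive(iter * 2, totals);
  return { iterations: iter, elapsed: elapsed };
}

var totals = { totalElapsed: 0 };
var result = runAdaptive(10, totals);
// Reported "elapsed" covers only the final doubled run; the benchmark
// actually spent totals.totalElapsed ms across all runs combined.
console.log(result.iterations, result.elapsed, totals.totalElapsed);
// → 1280 1280 2550
```

So with `start` reset per run, the reported `{iterations, elapsed}` pair describes only the last run, not the whole warm-up sequence.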
So the stats are inaccurate. I think the value of "start" should be set at the very beginning of "Suite.prototype.runOne" and not reset in "Bench.run":
Suite.prototype.runOne = function(idx) {
  var self = this;
  setTimeout(function() {
    self.tests[idx].start = new Date().getTime(); // set "start" here, so "min" becomes the accumulated runtime of the benchmark
    self.tests[idx].run(self.options.iterations);
  }, self.options.delay);
};
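Under that proposal, "elapsed" would accumulate across all doubled runs. A minimal deterministic sketch of this variant (again hypothetical, assuming a fake clock where each iteration costs exactly 1ms):

```javascript
// Sketch of the proposed accumulated-start behavior with a fake 1ms clock.
var MIN = 1000;      // same meaning as options.min in the README
var fakeNow = 0;     // fake clock, advanced 1ms per iteration
var start = fakeNow; // set once, before the first run, and never reset

function runAccumulative(iter) {
  for (var i = 0; i < iter; i++) fakeNow += 1; // pretend work: 1ms per iteration
  var elapsed = fakeNow - start;               // measured from the very first run
  if (elapsed < MIN) return runAccumulative(iter * 2);
  return { iterations: iter, elapsed: elapsed };
}

var result = runAccumulative(10);
// "elapsed" now spans every run since the first, while "iterations"
// still reports only the last run's count.
console.log(result.iterations, result.elapsed);
// → 640 1270
```

Note that in this variant the reported `iterations` still counts only the final run, while `elapsed` spans all runs, so the two numbers describe different spans of work.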
I'm very curious about this issue. Could you give me a hint?
Finally, many thanks for your great work and for sharing it :)