Report aggregate statistics for solution as well as some solution perf numbers #49285
Conversation
@typescript-bot pack this
Heya @sheetalkamat, I've started to run the tarball bundle task on this PR at 2726b29. You can monitor the build here.
Hey @sheetalkamat, I've packed this into an installable tgz. You can install it for testing by referencing it in your `package.json` and then running `npm install`. There is also a playground for this build and an npm module you can use via `npm install typescript@…`.
@typescript-bot pack this
Heya @sheetalkamat, I've started to run the tarball bundle task on this PR at a7be227. You can monitor the build here.
Hey @sheetalkamat, I've packed this into an installable tgz. You can install it for testing by referencing it in your `package.json` and then running `npm install`. There is also a playground for this build and an npm module you can use via `npm install typescript@…`.
…printing diagnostics
@typescript-bot pack this
Heya @sheetalkamat, I've started to run the tarball bundle task on this PR at d0fe796. You can monitor the build here.
Hey @sheetalkamat, I've packed this into an installable tgz. You can install it for testing by referencing it in your `package.json` and then running `npm install`. There is also a playground for this build and an npm module you can use via `npm install typescript@…`.
@amcasey do we want this, or should it just stay as a branch?
@sheetalkamat Can you please post sample output? I'm having trouble visualizing the end result.
@sheetalkamat I like the aggregate report for solution builds, but those new rows are a little hard to parse for non-experts. Maybe we'd be better off adding tracepoints?
I guess, personally, I'd rather avoid having both "extended" and "really extended" (i.e. solution) diagnostics. My vote would be to clean these up and then incorporate them into the regular `--extendedDiagnostics` output.
Nitpicking
@amcasey I have cleaned up the PR per your feedback. I moved the project build and projects-in-scope entries to the top. I also removed some of the timings I had added only for my own experiments while checking #48784, to see whether I was actually improving the things I was addressing; I don't think those are useful to users as such. Tracing is a good idea, but I haven't updated tracing to handle solutions (the way performance was updated). We can do that when we need it, or I can add it now, let me know. Output-file writing was likewise something I needed only because it also included reading output files, which my PR addressed, so I was checking whether it really helps. It isn't interesting on its own, since all it does is write files to disk (and currently read d.ts files to determine whether they changed, which with my PR isn't needed any more).
Build time is the end-to-end build time for all projects; it includes all bookkeeping as well as the actual program build time. This is how the result looks now:
Interestingly, total time comes before …
Thanks! That looks great!
No rush.
Maybe we can just drop the current …
Per my comment above, I'd rather delete (or rename?) the existing measure.
@amcasey one thing I was worried about was how it looks when running a build without …
This is how it looks now:
@sheetalkamat I love it, but you might want to confirm with @rbuckton that it won't break any of his tooling.
@rbuckton Can you please review the change and see if it is OK with the perf tooling? Thanks.
If the perf tools track Total, which I don't think they do, the time may jump, now that it's more accurate. I'm almost certain no one cared about the Node counts (chiefly because no one complained when I split it into separate rows).
@amcasey I didn't change the total for …
Is this going to print …? Also, it is possible to run benchmarks with …
I should clarify this. When the benchmark tool runs, it intercepts standard output to match anything that looks like a diagnostic line.
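For illustration, here is a hypothetical sketch of that kind of stdout interception. The actual ts-perf implementation and the exact pattern it matches are not shown in this thread, so the regex, names, and output format below are assumptions.

```ts
// Hypothetical sketch only: scan compiler stdout for "Name: value" diagnostic
// lines and record them as benchmark metrics. Not the actual ts-perf code.
const DIAGNOSTIC_LINE = /^([A-Za-z][\w .()-]*):\s+([\d.]+)\s*([A-Za-z%]*)\s*$/;

interface Metric { value: number; unit: string; }

function collectDiagnostics(stdout: string): Map<string, Metric> {
    const metrics = new Map<string, Metric>();
    for (const line of stdout.split(/\r?\n/)) {
        const match = DIAGNOSTIC_LINE.exec(line);
        if (match) {
            metrics.set(match[1], { value: Number(match[2]), unit: match[3] });
        }
    }
    return metrics;
}

// Example: collectDiagnostics("Files: 512\nTotal time: 1.23s")
//   => Map { "Files" => { value: 512, unit: "" }, "Total time" => { value: 1.23, unit: "s" } }
```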
I guess we could prefix each measure in the summary with "Aggregate"? If we add another flag - and I'd still prefer not to - I'd want that flag to disable the summary output, because I want users to retain the mental model that …
So do we prefer the option that prints the aggregate result, or the one where we prepend each entry with "Aggregate"?
I prefer the prefix.
I will also change back …
The result now looks like this:
Does this indicate which project each set of diagnostics is related to? It's not necessary for benchmarks, per se, but it seems like an important piece of information missing in the output above.
You get that if you pass …
@typescript-bot pack this
Heya @sheetalkamat, I've started to run the tarball bundle task on this PR at 150c981. You can monitor the build here.
Hey @sheetalkamat, I've packed this into an installable tgz. You can install it for testing by referencing it in your `package.json` and then running `npm install`. There is also a playground for this build and an npm module you can use via `npm install typescript@…`.
@rbuckton is this now ready to merge? Any other feedback? Thanks.
Most of `ts.performance` is intended to mirror the native User Timings API so that we can leverage the native API with other tooling (such as generating `.cpuprofile`s, running with `node --prof`, etc.), and I'd like to stay as close to that API as possible. I think there are alternatives to this approach that don't involve duplicating the `ts.performance` namespace just to handle an additional use case.
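For context, a minimal sketch of the native User Timings pattern being mirrored, written against Node's built-in `perf_hooks` rather than TypeScript's internal `ts.performance`:

```ts
// Minimal sketch of the native User Timings pattern referred to above, using
// Node's perf_hooks; mark/measure pairs feed a PerformanceObserver.
import { performance, PerformanceObserver } from "perf_hooks";

const observer = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
        console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`);
    }
});
observer.observe({ entryTypes: ["measure"] });

performance.mark("beforeProgram");
// ... build the program ...
performance.mark("afterProgram");
performance.measure("Program", "beforeProgram", "afterProgram");
```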
```ts
const count = counts.get(markName) ?? 0;
counts.set(markName, count + 1);
marks.set(markName, timestamp());
performanceImpl?.mark(markName);
```
This calls out to the native `PerformanceHooks` API for use with CPU profiles and a few other things in ts-perf. Is it intentional that both `performance` and `buildPerformance` call into this?
```ts
export const performance = createPerformanceTracker();
export const buildPerformance = createPerformanceTracker();
```
Is there a reason we need two different performance instances? Do these two overlap WRT mark and measure names?
The original `ts.performance` API was designed to mirror the native User Timings API and essentially act as an adapter between the DOM and NodeJS implementations, but the User Timings API doesn't support multiple instances.
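As a rough illustration of the adapter idea described here, a hedged sketch follows; the names and structure are hypothetical, not the compiler's actual code:

```ts
// Hedged sketch of the DOM/NodeJS adapter idea: pick whichever native User
// Timings implementation is available behind one shared, single-instance facade.
interface NativeUserTimings {
    mark(name: string): void;
    measure(name: string, startMark?: string, endMark?: string): void;
}

function tryGetNativePerformance(): NativeUserTimings | undefined {
    const globalPerf = (globalThis as any).performance;
    if (globalPerf && typeof globalPerf.mark === "function") {
        return globalPerf;                        // DOM, or the global in modern NodeJS
    }
    try {
        return require("perf_hooks").performance; // NodeJS fallback
    }
    catch {
        return undefined;                         // no native implementation available
    }
}
```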
```ts
buildPerformance.forEachCount((count, name) => statistics.push({ name, value: count, type: StatisticType.count }));
buildPerformance.forEachStatistics(s => statistics.push(s));
buildPerformance.forEachMeasure((duration, name) => statistics.push({ name: `${name} time`, value: duration, type: StatisticType.time }));
buildPerformance.disable();
buildPerformance.enable();
```
Rather than introducing a separate `buildPerformance` object, couldn't we just prefix builder-specific marks and measures and use that prefix as a filter when printing? If it's about resetting marks and measures between builds in watch mode, I'd rather we implement `clearMarks(name)` and `clearMeasures(name)` from the native User Timings API (sketched after the links below):
- `clearMarks(name?)` - https://developer.mozilla.org/en-US/docs/Web/API/Performance/clearMarks
- `clearMeasures(name?)` - https://developer.mozilla.org/en-US/docs/Web/API/Performance/clearMeasures
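A hedged sketch of that alternative, written against Node's `perf_hooks` (the in-compiler version would go through `ts.performance` instead); the prefix and function names are hypothetical:

```ts
// Hedged sketch of the suggested alternative: give builder-specific measures a
// common prefix, filter on it when printing, and clear them between builds
// (e.g. in watch mode) instead of keeping a second performance instance.
import { performance } from "perf_hooks";

const BUILD_PREFIX = "build::";

function measureBuildStep(name: string, startMark: string, endMark: string) {
    performance.measure(BUILD_PREFIX + name, startMark, endMark);
}

function reportAndResetBuildMeasures() {
    for (const entry of performance.getEntriesByType("measure")) {
        if (entry.name.startsWith(BUILD_PREFIX)) {
            console.log(`${entry.name.slice(BUILD_PREFIX.length)} time: ${entry.duration.toFixed(2)}ms`);
            performance.clearMeasures(entry.name); // reset just this entry for the next build
        }
    }
}
```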
```ts
export interface Statistic {
    name: string;
    value: number;
    type: StatisticType;
}

export enum StatisticType {
    time,
    count,
    memory,
```
I'm not clear on what this is for, given that we already have a concept of "count" (i.e., "marks") and "time" (i.e., "measure"). Memory usage isn't generally tracked by the native User Timings API but can be accomplished by providing additional metadata to a mark. We could probably extend our version of `mark` to support this behavior.
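For example, a sketch of attaching memory usage as mark metadata via the `detail` option (supported by Node's `perf_hooks` and User Timings Level 3); whether TypeScript's own `mark` should adopt this is the open question above, and the mark name here is just an illustration:

```ts
// Hedged sketch: record memory usage as metadata on a mark via the "detail"
// option rather than as a separate statistic type.
import { performance } from "perf_hooks";

performance.mark("afterProgram", {
    detail: { heapUsed: process.memoryUsage().heapUsed },
});

// The metadata can be read back from the recorded entry later:
const [afterProgram] = performance.getEntriesByName("afterProgram", "mark");
console.log((afterProgram as any).detail); // e.g. { heapUsed: 123456789 }
```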
```ts
 */
function measure(measureName: string, startMarkName: string, endMarkName: string) {
    if (enabled) {
        durationMarks.add(startMarkName).add(endMarkName);
```
Why are these excluded here? They wouldn't be excluded by the native User Timings API.
```ts
 * @param endMarkName The name of the ending mark. If not supplied, the current timestamp is
 * used.
 */
function measure(measureName: string, startMarkName: string, endMarkName: string) {
```
This signature no longer aligns with the native User Timings API. Neither a `startMarkName` nor an `endMarkName` is required.
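For reference, a small sketch of the native behaviour being described, where both mark arguments are optional (again using Node's `perf_hooks`):

```ts
// In the native User Timings API, measure() works without explicit marks.
import { performance } from "perf_hooks";

performance.mark("start");

performance.measure("sinceStart", "start"); // end defaults to "now"
performance.measure("sinceOrigin");         // spans from the time origin to "now"

for (const entry of performance.getEntriesByType("measure")) {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`);
}
```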
Closed in favor of #50267
Creating a draft for the changes I have made to measure build perf for #48784 and others.
cc: @amcasey
Sorry, I didn't realise that I had not updated the description when I marked this as ready for review.
This change, under `--extendedDiagnostics`, aggregates the diagnostics from all projects built and reports them at the end. Apart from that, it also outputs some measurements for work that happens in `tsc --build`, like finding whether projects are up to date, etc. This is how the output looks: