Improve performance of Debug.format functions #49487

jakebailey merged 3 commits into microsoft:main
Conversation
I guess the saving grace is that while debugging, these are called a lot if you have the nice debugger helpers enabled, so that should get faster.
@jakebailey out of curiosity - are these called by methods like
Some of them, yes, but other helpers like that are written without using the format functions. I haven't looked into the performance of everything in Debug yet, this PR was mainly to enable removal of duplicate SyntaxKind formatting code without a performance hit. |
```diff
 }
 if (isFlags) {
-    let result = "";
+    const result: string[] = [];
```
I'm not sure if it's still the case, but in earlier v8 versions, if you avoid allocating the array until you have the first element, performance is improved because v8 doesn't have to re-type the array:
```js
const ar1 = []; // typed internally as a JS_ARRAY_TYPE of PACKED_SMI_ELEMENTS
ar1.push("foo"); // deoptimizes, re-types, now it's internally typed as an array of strings

// vs

const ar2 = ["foo"]; // typed internally as an array of strings
```

As a result, we have a lot of code that does either of the following:
```ts
let array: Foo[] | undefined;
...
if (array) {
    array.push(foo);
}
else {
    array = [foo];
}

// or
...
array = append(array, foo); // same as above, but in an inlinable function
```

That said, since this code isn't performance critical, it probably wouldn't help much.
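For illustration, a minimal sketch of an `append`-style helper like the one mentioned above (the real compiler helper's exact signature may differ; this version is illustrative). The array is only allocated once the first element arrives, so v8 can type it as a string array from the start rather than re-typing an initially empty array:

```typescript
// Sketch of the lazy-allocation pattern: defer creating the array until
// the first element exists, so it is never typed as an empty SMI array.
function append<T>(array: T[] | undefined, value: T): T[] {
    if (array === undefined) {
        return [value]; // first element: allocate with the right element type
    }
    array.push(value);
    return array;
}

let parts: string[] | undefined;
for (const name of ["Identifier", "StringLiteral"]) {
    parts = append(parts, name);
}
console.log(parts ? parts.join(", ") : ""); // prints "Identifier, StringLiteral"
```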
That's an interesting idea; I knew that the string addition itself was definitely bad, but when fixing this kind of code (after pprof showed huge allocations) I had never thought about creating a single-element array up front to get v8 to do the right thing. The performance boost from just doing the join was the main significant item.
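To make the join-vs-concatenation point concrete, here is a hypothetical before/after (function names are illustrative, not the PR's actual code): repeated `+=`-style concatenation allocates a fresh string per flag, while collecting names and joining once allocates far less.

```typescript
// Before: each iteration builds a new intermediate string.
function formatFlagsConcat(names: string[]): string {
    let result = "";
    for (const name of names) {
        result = result ? result + "|" + name : name;
    }
    return result;
}

// After: collect pieces, then allocate the final string once via join.
function formatFlagsJoin(names: string[]): string {
    const result: string[] = [];
    for (const name of names) {
        result.push(name);
    }
    return result.join("|");
}

console.log(formatFlagsJoin(["Export", "Ambient"])); // prints "Export|Ambient"
```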
IIRC, this was a performance optimization employed by @vladima back in the day. It still seems to be effective from what I can tell.
```ts
// Assuming enum objects do not change at runtime, we can cache the enum members list
// to reuse later. This saves us from reconstructing this each and every time we call
// a formatting function (which can be expensive for large enums like SyntaxKind).
const existing = enumMemberCache.get(enumObject);
```
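A minimal sketch of the caching idea in the snippet above: compute the `[value, name]` pairs for an enum object once, keyed on the enum object itself. The names `getEnumMembers` and `enumMemberCache` mirror the snippet, but the body here is illustrative, not the compiler's exact implementation.

```typescript
// Enum objects look like plain records mapping names to numbers
// (plus reverse mappings for numeric enums, which we filter out).
type EnumLike = Record<string, string | number>;

const enumMemberCache = new Map<EnumLike, [number, string][]>();

function getEnumMembers(enumObject: EnumLike): [number, string][] {
    // Cache hit: the members list was already built for this enum object.
    const existing = enumMemberCache.get(enumObject);
    if (existing) {
        return existing;
    }
    // Cache miss: collect numeric members, skipping reverse-mapping keys.
    const result: [number, string][] = [];
    for (const name in enumObject) {
        const value = enumObject[name];
        if (typeof value === "number") {
            result.push([value, name]);
        }
    }
    result.sort((a, b) => a[0] - b[0]); // deterministic order by value
    enumMemberCache.set(enumObject, result);
    return result;
}
```

Subsequent calls for the same enum object return the cached array, which is the win for hot debug-formatting paths over large enums like `SyntaxKind`.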
As you mentioned, this is one of those things that only really matters for tests. In tsc there will never be a cached value since these functions should only ever be called right before a crash due to a debug assertion. It can possibly benefit the language service, but I'm not sure how often that would happen in practice.
The main benefit at runtime is the debug helpers that we define to use in VS Code's debugger in settings.json, where these helpers are called over and over to show the node types. Some things can be improved later, though.
Is there any document describing how you debug TypeScript, etc.? I have my own ways (mostly using `__debugTypeToString` and `__debugGetText` in the debugger), but I wonder if I'm missing some better techniques.
Making sure that `customDescriptionGenerator` is enabled in your launch.json is one thing related to this PR, but past that, I don't have any cool tricks besides a collection of helpers I use in the watch window (https://github.com/jakebailey/ts-debug-helpers), which is mainly about printing out nodes/the parser's scanner to more easily see what code is being operated on. Not type related at all.
These functions are not normally performance critical; however, the cleanup in #49485 removes the test harness' duplicate `SyntaxKind` formatter in favor of `Debug`'s (which is more accurate, but slower), and this recovers the perf loss that causes.

This technically speeds up tests by 25%, but that is only there because #49485 slows down the tests by 25%. Sigh.