Simplify model error message in test | test(torchlib) #882
Conversation
[ghstack-poisoned]
Remove the duplicate check_model message when displaying [ghstack-poisoned]
Codecov Report

```
@@               Coverage Diff                @@
##           gh/justinchuby/32/base     #882      +/-   ##
==========================================================
+ Coverage                   76.57%   76.59%   +0.02%
==========================================================
  Files                         112      112
  Lines                       13413    13408       -5
  Branches                     1349     1348       -1
==========================================================
- Hits                        10271    10270       -1
+ Misses                       2805     2801       -4
  Partials                      337      337
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* #883
* #882
* __->__ #881

The TorchScript ONNX graph generator creates numeric value names by default (`0`, `1`, ...). These are not legal ONNX tensor names, since ONNX requires names to be valid C variable names. This change updates the names by prepending a prefix, `_val_` or `_const_`, to make them valid ONNX names. It also improves readability by making the names less likely to be confused with shape values. I decided to use the `_` prefix to reduce the chance of name collisions with FX names.

After:

```
< ir_version: 8, opset_import: ["" : 18], producer_name: "pytorch", producer_version: "2.1.0" >
torch_jit (float[5,5,5,5] input_0, int64[2] input_1_3) => (float[5,5,5,2] _val_10) {
   _val_2 = Transpose <perm = [0, 1, 2, 3]> (input_0)
   _val_3 = Max (input_1_3)
   _val_4 = Shape <start = 0> (_val_3)
   _val_5 = Expand (input_1_3, _val_4)
   _const_6 = Constant <value = int64 {-1}> ()
   _val_7 = Unsqueeze (_val_5, _const_6)
   _val_8 = Concat <axis = -1> (_val_7)
   _val_9 = GatherND <batch_dims = 0> (_val_2, _val_8)
   _val_10 = Transpose <perm = [0, 1, 2, 3]> (_val_9)
}
```

Before:

```
< ir_version: 8, opset_import: ["" : 18], producer_name: "pytorch", producer_version: "2.1.0" >
torch_jit (float[5,5,5,5] input_0, int64[2] input_1_3) => (float[5,5,5,2] 10) {
   2 = Transpose <perm = [0, 1, 2, 3]> (input_0)
   3 = Max (input_1_3)
   4 = Shape <start = 0> (3)
   5 = Expand (input_1_3, 4)
   6 = Constant <value = int64 {-1}> ()
   7 = Unsqueeze (5, 6)
   8 = Concat <axis = -1> (7)
   9 = GatherND <batch_dims = 0> (2, 8)
   10 = Transpose <perm = [0, 1, 2, 3]> (9)
}
```
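For illustration only, here is a minimal sketch of the renaming scheme the description above lays out, applied to an in-memory ONNX graph. This is not the code from the PR stack; the function name and traversal are assumptions, and it only covers node inputs/outputs and graph outputs (initializers and subgraphs are omitted for brevity):

```python
import onnx


def rename_numeric_values(graph: onnx.GraphProto) -> None:
    """Hypothetical sketch: prefix purely numeric value names so they
    become valid ONNX identifiers (ONNX requires valid C variable names).

    Outputs of Constant nodes get "_const_"; everything else gets "_val_".
    """
    renamed: dict[str, str] = {}
    # Pass 1: rename node outputs and record the mapping.
    for node in graph.node:
        prefix = "_const_" if node.op_type == "Constant" else "_val_"
        for i, name in enumerate(node.output):
            if name.isdigit():
                renamed[name] = prefix + name
                node.output[i] = renamed[name]
    # Pass 2: propagate the new names to all consumers and graph outputs.
    for node in graph.node:
        for i, name in enumerate(node.input):
            if name in renamed:
                node.input[i] = renamed[name]
    for value_info in graph.output:
        if value_info.name in renamed:
            value_info.name = renamed[value_info.name]
```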
I couldn't find the `check_model` message that the PR description mentions; was that deleted? This seems to be some kind of duplicate of #881.

Looks like the diff included previous changes as well.
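For context on the `check_model` discussion, a hypothetical sketch of the kind of message deduplication the PR title describes, assuming a test helper along these lines (the helper name and message format are illustrative, not the repository's actual code):

```python
import onnx


def assert_onnx_model_is_valid(model: onnx.ModelProto) -> None:
    """Hypothetical helper: validate a model, reporting the checker error once."""
    try:
        onnx.checker.check_model(model)
    except onnx.checker.ValidationError as e:
        # `from None` suppresses the chained traceback, so the checker's
        # message (which already contains the details) appears only once
        # in the test failure output instead of being repeated.
        raise AssertionError(f"ONNX model is invalid: {e}") from None
```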