PyTorch/XLA shouldn't crash on XLA errors #9096

Open
zhanyong-wan opened this issue May 5, 2025 · 2 comments
Labels: enhancement (New feature or request)

@zhanyong-wan (Collaborator):

🐛 Bug

If XLA fails to compile a program and returns a Status error, PyTorch/XLA should handle it gracefully (e.g. by raising an appropriate Python exception) instead of crashing.

To Reproduce

See the following snippet:

    executable =
        client_->CompileAndLoad(mlir_module, compile_options).value();
    StableHloCompileCounter()->AddValue(1);
  } else {
    executable =
        client_->CompileAndLoad(instance.computation, compile_options)
            .value();

CompileAndLoad(...).value() will crash if CompileAndLoad(...) returns an error. We should check for an error before taking the value.

@zhanyong-wan zhanyong-wan self-assigned this May 5, 2025
@ysiraichi ysiraichi added the enhancement New feature or request label May 6, 2025
@ysiraichi (Collaborator):

Do you mean to wrap them in ConsumeValue()?

@zhanyong-wan (Collaborator, Author):

No, I mean to fail gracefully by raising a Python exception. ConsumeValue() would still crash, albeit with a nicer message.
