[torchlib] Implement aten::_scaled_dot_product_efficient_attention.default
#1160
Labels
contribution welcome
We welcome code contributions for this
module: torchlib
Related to the torch/aten function lib in development
Seen in exported models. This op is emitted by CUDA export only.
From https://github.com/microsoft/onnx-converters-private/issues/196
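For reference, the math this op computes is ordinary scaled dot product attention, softmax(QKᵀ/√d)V. A minimal NumPy sketch of that reference semantics (the function name and shapes here are illustrative only; the actual torchlib contribution would be written in ONNX Script against the op's full ATen signature, including attention bias and dropout arguments):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, scale=None):
    """Reference math: softmax(q @ k^T * scale) @ v over the last two dims.

    Hypothetical helper for illustration; not the torchlib API.
    """
    if scale is None:
        scale = 1.0 / np.sqrt(q.shape[-1])
    scores = q @ np.swapaxes(k, -1, -2) * scale
    # Numerically stable softmax over the key dimension.
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Example shapes: (batch, num_heads, seq_len, head_dim)
q = np.random.rand(2, 4, 8, 16)
k = np.random.rand(2, 4, 8, 16)
v = np.random.rand(2, 4, 8, 16)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 4, 8, 16)
```

The "efficient" variant differs from this reference only in how it is computed (memory-efficient kernels on CUDA), not in the result, so an ONNX decomposition along these lines should be numerically equivalent.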
cc @justinchuby