Don't let tensor creation Ops constant fold too soon #895

Open
ricardoV94 opened this issue Jul 7, 2024 · 0 comments

ricardoV94 commented Jul 7, 2024

Description

It's useful to know when an arange or eye is in a graph, but if these Ops have constant inputs they will be constant folded, and then we would need to analyze all the constants in a graph to find out whether they correspond to an arange/eye, which can be rather costly.
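For illustration only, here is a minimal sketch of what recognizing an arange-like constant could involve (the helper is hypothetical, not existing PyTensor code); even this simple check materializes a full comparison, and a rewrite would have to run it on every constant it visits:

import numpy as np

def looks_like_arange(x: np.ndarray) -> bool:
    # Hypothetical check: could this constant have come from arange(start, stop, step)?
    if x.ndim != 1 or x.size < 2:
        return False
    step = x[1] - x[0]
    return bool(np.array_equal(x, x[0] + step * np.arange(x.size)))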

Maybe we should prevent these from being constant folded until a later phase? We may also want to do a single pass at the beginning that converts constants back into equivalent arange, eye, or alloc nodes. As I mentioned, this can be costly, but at least this way the analysis is "cached" because its result becomes part of the graph. Otherwise a rewrite looking for aranges would have to redo it for every constant every time it is called (or, equally unsatisfactory, ignore constants altogether).
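As a rough sketch of the first option (preventing early folding), an Op can opt out of eager constant folding by overriding do_constant_folding, the same hook Alloc uses below; the subclass here is purely illustrative and not part of PyTensor:

from pytensor.tensor.basic import ARange

class ARangeNoEarlyFold(ARange):
    # Illustrative subclass: keep ARange nodes visible in the graph so later
    # rewrites can still recognize them, instead of collapsing them into
    # constants right away.
    def do_constant_folding(self, fgraph, node):
        # Returning False tells the constant-folding rewrite to skip this
        # node; a late cleanup phase could still fold it afterwards.
        return False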

This is actually one of the hardest aspects of the kind of destructive/eager optimization we do, as constant_fold destroys information but can also simplify graphs very quickly.

For reference, note that there is already special logic in Alloc that tries to avoid constant folding it in a few cases where folding would be worse. This is just a bit myopic and won't scale for the kind of rewrites we are interested in #573:

def do_constant_folding(self, fgraph, node):
    clients = fgraph.clients[node.outputs[0]]

    if not clients:
        return False

    for client, idx in clients:
        if client == "output":
            # If the output is a constant, it will have to be deepcopied
            # each time the function is called. So we do not fold.
            return False
        # Allow alloc to be lifted out of Elemwise before constant folding it
        elif isinstance(client.op, Elemwise):
            return None
        # Same for Blockwise, unless it has no batch_dims
        elif isinstance(client.op, Blockwise) and client.op.batch_ndim(client):
            return None
        elif (
            # The following ops work inplace of their input id 0.
            idx == 0
            and isinstance(
                client.op,
                pytensor.tensor.subtensor.IncSubtensor
                | pytensor.tensor.subtensor.AdvancedIncSubtensor1
                | pytensor.tensor.subtensor.AdvancedIncSubtensor
                | pytensor.tensor.blas.Gemv
                | pytensor.tensor.blas_c.CGemv
                | pytensor.tensor.blas.Ger
                | pytensor.tensor.blas_c.CGer
                | pytensor.tensor.blas_scipy.ScipyGer,
            )
        ):
            # Ops that will work inplace on the Alloc. So if they
            # get constant_folded, they would copy the constant
            # and this is less efficient.
            # Not doing the constant folding could also lower the
            # peak memory use, as the "constant" won't always exist.
            return False

    return True
