Don't let tensor creation Ops constant fold too soon #895

Open
@ricardoV94

Description


It's useful to know when an arange or eye is in a graph, but if these Ops have constant inputs they get constant folded, and we would then need to analyze every constant in the graph to find out whether it corresponds to an arange/eye, which can be rather costly.
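For illustration, a minimal sketch of the problem (using only the public pytensor API; pt.arange, pytensor.function, and pytensor.dprint are real entry points, though the printed Op names may differ by version):

import pytensor
import pytensor.tensor as pt

x = pt.arange(5)
print(x.owner.op)  # an ARange node: the structural information is still in the graph

# Compiling triggers the rewrites; constant folding replaces the ARange
# with an opaque TensorConstant, so later rewrites can no longer see it.
f = pytensor.function([], x)
pytensor.dprint(f)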

Should we prevent these from being constant folded until a later phase? We may also want to do a single pass at the beginning that converts constants into equivalent arange, eye, or alloc nodes. As mentioned, this can be costly, but at least this way it's "cached", because the result becomes part of the graph. A rewrite looking for aranges would otherwise have to redo the analysis for every constant every time it is called (or, equally unsatisfactory, ignore constants altogether).
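To make the cost concrete, here is a sketch of what the per-constant detection in such a pass might look like. The helper name and logic are hypothetical, not existing pytensor code; the point is the full O(n) scan of each constant's data, which is what we'd want to pay only once:

import numpy as np

def as_arange_params(data):
    # Hypothetical check: return (start, stop, step) if `data` is an
    # arange-equivalent 1-d vector, else None.
    if data.ndim != 1 or data.size < 2:
        return None
    step = data[1] - data[0]
    if step == 0:
        return None
    # O(n) comparison against the reconstructed ramp.
    if np.array_equal(data, data[0] + step * np.arange(data.size)):
        return (data[0], data[0] + step * data.size, step)
    return None

A single early pass would run this once per constant and replace matches with arange nodes; a rewrite that instead calls it on every constant each time it fires repeats this work over and over.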

This is actually one of the hardest aspects of the kind of destructive/eager optimization we do: constant_fold destroys information, but it can also simplify graphs very quickly.

For reference, note that there is already special logic in Alloc that tries to avoid constant folding it in a few cases where folding would be worse. This is just a bit myopic and won't scale to the kind of rewrites we are interested in (#573):

def do_constant_folding(self, fgraph, node):
    clients = fgraph.clients[node.outputs[0]]
    if not clients:
        return False
    for client, idx in clients:
        if client == "output":
            # If the output is a constant, it will have to be deepcopied
            # each time the function is called. So we do not fold.
            return False
        # Allow alloc to be lifted out of Elemwise before constant folding it
        elif isinstance(client.op, Elemwise):
            return None
        # Same for Blockwise, unless it has no batch_dims
        elif isinstance(client.op, Blockwise) and client.op.batch_ndim(client):
            return None
        elif (
            # The following ops work inplace on their input id 0.
            idx == 0
            and isinstance(
                client.op,
                pytensor.tensor.subtensor.IncSubtensor
                | pytensor.tensor.subtensor.AdvancedIncSubtensor1
                | pytensor.tensor.subtensor.AdvancedIncSubtensor
                | pytensor.tensor.blas.Gemv
                | pytensor.tensor.blas_c.CGemv
                | pytensor.tensor.blas.Ger
                | pytensor.tensor.blas_c.CGer
                | pytensor.tensor.blas_scipy.ScipyGer,
            )
        ):
            # Ops that will work inplace on the Alloc. So if they
            # get constant_folded, they would copy the constant
            # and this is less efficient.
            # Not doing the constant folding could also lower the
            # peak memory use, as the "constant" won't always exist.
            return False
    return True
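One way the deferral proposal above could be expressed (purely a sketch, not existing code): give the creation Ops a blanket do_constant_folding that refuses to fold, and leave the actual folding to a dedicated late rewrite phase, once the structure-aware rewrites have had their chance:

class DeferredFoldMixin:
    # Hypothetical mixin for creation Ops (ARange, Eye, ...): refuse
    # constant folding during the main rewrite phases so rewrites can
    # still recognize the Op; a late pass folds whatever remains.
    def do_constant_folding(self, fgraph, node):
        return False

Compared to the Alloc logic above, which enumerates clients to decide case by case, a blanket refusal plus a late folding phase keeps the decision out of each individual Op.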
