
Incomplete pruning for OpenAI CLIP model #507

@sudhir-mcw

Description


Issue Summary

While pruning the CLIP model, only Linear layers are actually pruned; all other layers appear to remain unaltered. The issue persists even after setting ignored_layers to an empty list.

Setup

torch-pruning == 1.6.0
clip @ git+https://github.com/openai/CLIP.git@dcba3cb2e2827b402d2701e7e1c7d9fed8a20ef1

Observations

Comparison of the model architecture before and after pruning:

  • without ignored layers: link

We also noticed that OpenAI's CLIP model uses custom wrapper layers (e.g., a LayerNorm subclass and QuickGELU) in place of plain torch.nn layers for some operations:
link
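
For a quick check of which module types are custom, a small inspection loop can be run over the loaded model (plain PyTorch, nothing torch-pruning-specific; it only splits module classes by whether they are defined in torch.nn):

import clip
import torch.nn as nn

model, _ = clip.load("ViT-L/14@336px", device="cpu")

# Split module classes by whether they come from torch.nn or from CLIP's own code.
builtin, custom = set(), set()
for m in model.modules():
    name = type(m).__name__
    if type(m).__module__.startswith("torch.nn"):
        builtin.add(name)
    else:
        custom.add(name)
print("torch.nn layers:", sorted(builtin))
print("custom layers:  ", sorted(custom))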

Code setup

import torch
import torch.nn as nn
import clip
import torch_pruning as tp

device = "cpu"
model, preprocess = clip.load("ViT-L/14@336px", device=device)
model.eval()

# Dummy inputs matching the 336px ViT-L/14 variant.
dummy_image = torch.randn(1, 3, 336, 336).to(device)
dummy_text = clip.tokenize(["a photo of a cat"]).to(device)
example_inputs = (dummy_image, dummy_text)
print("model", model)

# Record the head count of every attention module so the pruner
# can prune heads consistently.
num_heads = {}
for m in model.modules():
    if isinstance(m, nn.MultiheadAttention):
        num_heads[m] = m.num_heads
print("len of num_heads", len(num_heads))

# init pruner
pruner = tp.BasePruner(
    model=model,
    example_inputs=example_inputs,
    importance=tp.importance.GroupMagnitudeImportance(),
    pruning_ratio=0.5,
    iterative_steps=1,
    num_heads=num_heads,
    head_pruning_ratio=0.5,
    ignored_layers=[],
    round_to=None,
)

base_macs, base_nparams = tp.utils.count_ops_and_params(model, example_inputs)
print("base MACs", base_macs, "base params", base_nparams)

# Interactive mode: inspect each group before pruning it.
for group in pruner.step(interactive=True):
    print("group", group)
    group.prune()

pruned_macs, pruned_nparams = tp.utils.count_ops_and_params(model, example_inputs)
print("pruned MACs", pruned_macs, "pruned params", pruned_nparams)
print("model", model)


Question

Only Linear layers seem to be altered, while other layers such as Conv2d are not. We need to use OpenAI's CLIP model for our use case rather than the transformers version. The same issue was mentioned in (#101) and (#229).

  1. Do CLIP's custom wrapper layers affect the parsing/tracing of the model?
  2. What modifications can be made so that the CLIP model is pruned as expected? (See the sketch after this list for one experiment we considered.)
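
One experiment we considered for (2) — purely a sketch, and we are not sure the wrapper subclassing is actually what breaks the dependency tracing — is to swap CLIP's fp16-aware LayerNorm subclass (clip.model.LayerNorm) for a plain nn.LayerNorm before building the pruner, on the assumption that the wrapper only changes dtype handling in forward:

import torch.nn as nn
import clip.model

def swap_clip_layernorms(module: nn.Module) -> None:
    # Recursively replace clip.model.LayerNorm (an fp16-handling subclass)
    # with a plain nn.LayerNorm carrying the same weights. Safe on CPU/fp32,
    # where the dtype cast in the wrapper's forward is a no-op.
    for name, child in module.named_children():
        if isinstance(child, clip.model.LayerNorm):
            ln = nn.LayerNorm(child.normalized_shape, eps=child.eps,
                              elementwise_affine=child.elementwise_affine)
            ln.load_state_dict(child.state_dict())
            setattr(module, name, ln)
        else:
            swap_clip_layernorms(child)

swap_clip_layernorms(model)  # then build the pruner as above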

@VainF could you help us understand the reason for this behaviour, and do you have any suggestions for making pruning work as expected across all layers?
