This is an unofficial implementation of the paper Mixture-of-Depths: Dynamically allocating compute in transformer-based language models.
Forked from the original Mixture-of-depths repo.
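As a rough illustration of the Mixture-of-Depths idea (a sketch in plain Python, not code from this repo): a per-block router scores each token, only the top-k highest-scoring tokens pass through the block's expensive computation, and the remaining tokens skip it via the residual path. The `mod_block` function and its arguments are hypothetical names for this sketch.

```python
def mod_block(tokens, scores, capacity, transform):
    """Apply `transform` only to the `capacity` highest-scoring tokens.

    Hypothetical sketch of Mixture-of-Depths routing; `transform` stands in
    for a full transformer block.
    """
    k = min(capacity, len(tokens))
    # Indices of the top-k tokens by router score.
    chosen = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    out = list(tokens)  # residual path: unchosen tokens pass through unchanged
    for i in chosen:
        # Scale the block's update by the router score, as in the paper,
        # so the routing decision stays on the gradient path.
        out[i] = tokens[i] + scores[i] * transform(tokens[i])
    return out

# Toy usage: 4 "tokens", capacity 2, a dummy block that doubles its input.
result = mod_block([1.0, 2.0, 3.0, 4.0], [1.0, 0.25, 0.5, 0.125], 2, lambda x: 2 * x)
# → [3.0, 2.0, 6.0, 4.0]  (tokens 0 and 2 were routed through the block)
```

Here only half the tokens incur the block's compute, which is where the dynamic compute saving comes from.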
Date: 2025-10-14
- Newly adapted to support newer versions of transformers (>=4.50.0).
- A minimal demo is provided in demo.py.
- The latest requirements are listed in requirements.txt.
| Model | Supported? |
|---|---|
| Llama 3.2 | ✅ |
```python
from transformers import AutoModelForCausalLM

from MoD import apply_mod_to_hf

# Initialize your model from an available Hugging Face model
model = AutoModelForCausalLM.from_pretrained("some-repo/some-model")

# Convert the model to include the Mixture-of-Depths layers
model = apply_mod_to_hf(model)

# train the model
# ...

# save the model
model.save_pretrained('some_local_directory')
```

Before calling the Hugging Face `generate()` method, please explicitly call `eval()` on the model.
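To illustrate why calling `eval()` matters before generation, here is a toy sketch (plain Python, not the real transformers API): layers like dropout behave differently in training and evaluation mode, and `eval()` switches them to their deterministic inference behavior. The `ToyDropoutLayer` class is hypothetical and uses simplified dropout without the usual 1/(1-p) rescaling.

```python
import random

class ToyDropoutLayer:
    """Hypothetical layer mimicking how dropout depends on train/eval mode."""

    def __init__(self, p=0.5):
        self.p = p          # drop probability
        self.training = True

    def eval(self):
        # Switch to inference mode, like torch.nn.Module.eval()
        self.training = False
        return self

    def forward(self, xs):
        if self.training:
            # Training mode: randomly zero activations.
            return [0.0 if random.random() < self.p else x for x in xs]
        return list(xs)  # eval mode: identity, fully deterministic

layer = ToyDropoutLayer(p=1.0)  # p=1.0 makes the training effect obvious
print(layer.forward([1.0, 2.0]))         # training mode: [0.0, 0.0]
print(layer.eval().forward([1.0, 2.0]))  # eval mode: [1.0, 2.0]
```

Without `eval()`, active dropout (and similar mode-dependent layers) would make `generate()` outputs noisy and non-deterministic.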
We welcome contributions from the community, whether it's adding new features, improving documentation, or reporting bugs. Please refer to our contribution guidelines before making a pull request.
This repo is open-sourced under the Apache-2.0 license.