Repository for QViT-KD, submitted to MICCAI 2025.
STEPS:
(1) Run "conda create -n QViT-KD python=3.11.7" and activate the environment with "conda activate QViT-KD".
(2) Clone the repo.
(3) Run "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118"
(4) Run "pip install -r requirements.txt"
(5) Download the OASIS dataset from "https://www.kaggle.com/datasets/ninadaithal/imagesoasis" and place it in OASIS/data.
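The setup steps above can be collected into a single shell session. This is a sketch: the repository URL and directory are placeholders, and the environment name follows step (1).

```shell
# (1) Create and activate the conda environment (Python 3.11.7)
conda create -n QViT-KD python=3.11.7
conda activate QViT-KD

# (2) Clone the repo (<repo-url>/<repo-dir> are placeholders)
git clone <repo-url>
cd <repo-dir>

# (3)-(4) Install PyTorch built against CUDA 11.8, then the remaining dependencies
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# (5) Download the OASIS dataset from
# https://www.kaggle.com/datasets/ninadaithal/imagesoasis
# and unpack it into OASIS/data
```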
To fine-tune TinyViT-5M:
- cd src
- run "python finetune_tinyvit.py"
To train QViT and ViT from scratch:
- cd src
- For MedMNIST, use "scratch.py"
- For OASIS, use "OASIS.py" (this script runs training both from scratch and with KD, and also fine-tunes TinyViT)
- NOTE: PennyLane tends to use all available CPUs. To limit this, we found it useful to pin the process to specific CPUs with taskset, e.g. "taskset -c 0-3 python scratch.py"
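On Linux, the same CPU pinning can also be done from inside a script via the standard library's `os.sched_setaffinity`. This is a sketch of the idea, not something the provided scripts do themselves:

```python
import os

# CPUs this process is currently allowed to run on
allowed = sorted(os.sched_getaffinity(0))

# Pin the process (and any workers it forks) to at most the first four
# allowed CPUs -- the in-process equivalent of
# "taskset -c 0-3 python scratch.py"
pinned = set(allowed[:4])
os.sched_setaffinity(0, pinned)

print(os.sched_getaffinity(0))  # the restricted CPU set
```

Placing this at the top of a script, before PennyLane spawns its worker threads, has the same effect as launching under taskset.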
To train QViT-KD and ViT-KD (knowledge distillation):
- cd src
- run "python distill.py"
- NOTE: PennyLane tends to use all available CPUs. To limit this, we found it useful to pin the process to specific CPUs with taskset, e.g. "taskset -c 0-3 python distill.py"