[TPAMI 2025] Harnessing Lightweight Transformer with Contextual Synergic Enhancement for Efficient 3D Medical Image Segmentation
Authors: Xinyu Liu, Zhen Chen, Wuyang Li, Chenxin Li, Yixuan Yuan
This repository contains the implementation of our Light-UNETR for efficient 3D medical image segmentation with contextual synergic enhancement (CSE).
- Python 3.10+
- CUDA-compatible GPU
- PyTorch 2.4.1
Clone the repository:

```bash
git clone https://github.com/CUHK-AIM-Group/code_cse.git
cd code_cse
```

Create and activate the conda environment:

```bash
conda create -n lightunetr python=3.12
conda activate lightunetr
```

Install dependencies:

```bash
pip install -r requirements.txt
```

We evaluate on three public datasets:

- LA (Left Atrium): Left atrium segmentation from cardiac MRI
- Pancreas-CT: Pancreas segmentation from abdominal CT scans
- BraTS 2019: Brain tumor segmentation from multimodal MRI
- LA: Download from LA data
- Pancreas: Download from Pancreas and preprocess following here
- BraTS 2019: Download from BraTS 2019
After preprocessing, organize your data in the following structure:

```
datasets/
├── brats/
│   ├── data/
│   │   ├── BraTS19_2013_0_1.h5
│   │   └── ...
│   ├── test.list
│   ├── train.list
│   ├── train_lab25.list
│   └── train_unlab25.list
├── la/
│   ├── 2018LA_Seg_Training Set/
│   │   ├── 0RZDK210BSMWAA6467LU/
│   │   │   └── mri_norm2.h5
│   │   └── ...
│   ├── test.list
│   ├── train.list
│   ├── train_lab16.list
│   ├── train_lab4.list
│   ├── train_lab8.list
│   ├── train_unlab16.list
│   ├── train_unlab4.list
│   └── train_unlab8.list
└── pancreas/
    ├── data/
    │   ├── data0001.h5
    │   └── ...
    ├── test.list
    ├── train.list
    ├── train_lab12.list
    ├── train_lab6.list
    ├── train_unlab12.list
    └── train_unlab6.list
```
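Once the folders are in place, it can save time to verify that every case named in a split list actually has its `.h5` file on disk. The helper below is a small sketch, not part of the repository; the mapping from a case ID to its file path (`to_h5`) is an assumption and may differ from the repo's data loaders.

```python
from pathlib import Path

def check_split(root, list_name, to_h5):
    """Return the case IDs from a split list whose .h5 file is missing.

    Assumes each line of the list file holds one case ID; `to_h5`
    maps that ID to a path relative to `root` (an assumption here).
    """
    root = Path(root)
    missing = []
    for case in (root / list_name).read_text().splitlines():
        case = case.strip()
        if case and not (root / to_h5(case)).exists():
            missing.append(case)
    return missing

# Example mappings matching the tree above (run from the repo root):
#   Pancreas/BraTS: check_split("datasets/pancreas", "train.list", lambda c: f"data/{c}.h5")
#   LA:             check_split("datasets/la", "train.list",
#                               lambda c: f"2018LA_Seg_Training Set/{c}/mri_norm2.h5")
```

An empty return value means the split is complete; otherwise the returned IDs point to missing volumes.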
Semi-supervised training with different numbers of labeled samples:

```bash
# LA dataset with 4 labeled samples
python ./code_cse/train_cse_withval.py --dataset LA --exp train_cse --model lightunetr --labelnum 4 --gpu 0

# LA dataset with 8 labeled samples
python ./code_cse/train_cse_withval.py --dataset LA --exp train_cse --model lightunetr --labelnum 8 --gpu 0

# Pancreas dataset with 6 labeled samples
python ./code_cse/train_cse_withval.py --dataset pancreas --exp train_cse --model lightunetr --labelnum 6 --gpu 0

# Pancreas dataset with 12 labeled samples
python ./code_cse/train_cse_withval.py --dataset pancreas --exp train_cse --model lightunetr --labelnum 12 --gpu 0

# BraTS dataset with 25 labeled samples
python ./code_cse/train_cse_withval.py --dataset brats --exp train_cse --model lightunetr --labelnum 25 --gpu 0
```

Fully supervised training (upper bound):
```bash
# LightUNETR models
python ./code_cse/train_supervised.py --dataset LA --exp train_supervised --model lightunetr --gpu 0
python ./code_cse/train_supervised.py --dataset brats --exp train_supervised --model lightunetr --gpu 0
python ./code_cse/train_supervised.py --dataset pancreas --exp train_supervised --model lightunetr --gpu 0

# LightUNETR-Large models
python ./code_cse/train_supervised.py --dataset LA --exp train_supervised --model lightunetr_large --gpu 1
python ./code_cse/train_supervised.py --dataset brats --exp train_supervised --model lightunetr_large --gpu 2
python ./code_cse/train_supervised.py --dataset pancreas --exp train_supervised --model lightunetr_large --gpu 3
```

Training options:

- `--dataset`: choose from `pancreas`, `LA`, or `brats`
- `--exp`: experiment name for logging and checkpoints
- `--model`: model architecture (`lightunetr` or `lightunetr_large`)
- `--labelnum`: number of labeled samples for semi-supervised learning
- `--gpu`: GPU device ID
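To sweep several label budgets without retyping the commands above, a small launcher can assemble them programmatically. This helper is a sketch and not part of the repository; it only builds the argument list shown in the examples.

```python
import subprocess

def train_cmd(dataset, labelnum, model="lightunetr", exp="train_cse", gpu=0):
    """Build the semi-supervised training command used above."""
    return [
        "python", "./code_cse/train_cse_withval.py",
        "--dataset", dataset,
        "--exp", exp,
        "--model", model,
        "--labelnum", str(labelnum),
        "--gpu", str(gpu),
    ]

# Sweep the LA label budgets (4 and 8 labeled samples):
for n in (4, 8):
    cmd = train_cmd("LA", n)
    # subprocess.run(cmd, check=True)  # uncomment to actually launch
    print(" ".join(cmd))
```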
For fully supervised training on other datasets, please refer to ./fullysup.
This project is licensed under the MIT License - see the LICENSE file for details.
Pre-trained models are available on Hugging Face:
| Dataset | Labeled num | Model | Download Link |
|---|---|---|---|
| BraTS 2019 | 25 labels | LightUNETR | lightunetr_best_model_brats_25lab.pth |
| LA | 4 labels | LightUNETR | lightunetr_best_model_la_4lab.pth |
| LA | 8 labels | LightUNETR | lightunetr_best_model_la_8lab.pth |
| Pancreas | 6 labels | LightUNETR | lightunetr_best_model_pancreas_6lab.pth |
| Pancreas | 12 labels | LightUNETR | lightunetr_best_model_pancreas_12lab.pth |
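Segmentation quality on these benchmarks is typically reported with the Dice coefficient. As a quick sanity check on a downloaded model's predictions, here is a minimal NumPy sketch of binary Dice; it is independent of (and may differ from) the repo's own evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Dice coefficient between two binary volumes (any shape)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Identical masks give a Dice of 1.0; disjoint masks give ~0.0
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[:2] = 1
print(dice_coefficient(mask, mask))  # 1.0
```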
Download the desired model and use it with the test script:
```bash
# Example: test the BraTS model
python test_cse.py --dataset brats --model lightunetr --checkpoint lightunetr_best_model_brats_25lab.pth --gpu 0
```

If you find this work useful, please cite our paper:
```bibtex
@article{liu2025harnessing,
  title={Harnessing Lightweight Transformer with Contextual Synergic Enhancement for Efficient 3D Medical Image Segmentation},
  author={Liu, Xinyu and Chen, Zhen and Li, Wuyang and Li, Chenxin and Yuan, Yixuan},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2025}
}
```

We sincerely appreciate SSL4MIS, Slim UNETR, BCP, MedNeXt, FUSSNet, MIC, and volumentations for their awesome codebases. If you have any questions, contact xinyuliu@link.cuhk.edu.hk or open an issue.