This repository implements ASHT-KD, a feature-level knowledge distillation framework for Visual Place Recognition (VPR), as described in the paper "Feature-Level Knowledge Distillation for Place Recognition based on Soft-Hard Labels Teaching Paradigm".
ASHT-KD is a multi-teacher knowledge distillation framework designed for robust all-day visual place recognition tasks. It incorporates soft and hard label teaching strategies to transfer knowledge from multiple teacher models to a lightweight student model, enabling efficient place recognition under various environmental conditions.
- Multi-teacher knowledge distillation: Combines knowledge from multiple teacher models to improve the generalization of the student model.
- Lightweight student model: A compact model designed for mobile robots to enable real-time performance with minimal computational cost.
- Feature-level distillation: Leverages knowledge distillation at the feature level to ensure robust learning across varying illumination and environmental conditions.
- Multi-teacher and lightweight student model training.
- Adaptive soft-hard label teaching (see the illustrative loss sketch after this list).
- Easy-to-use modular structure for training and evaluation.
- Flexible hyperparameter configurations for batch size, learning rate, and more.
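For intuition, the sketch below shows one way a feature-level soft-hard label objective could be combined in PyTorch. It is a minimal illustration, not the exact loss from the paper or this repository: the averaging over teachers, the temperature `tau`, the weight `alpha`, and the triplet margin are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def soft_hard_distillation_loss(student_feat, teacher_feats, anchor, positive, negative,
                                tau=4.0, alpha=0.5, margin=0.3):
    """Illustrative feature-level soft-hard distillation loss (not the paper's exact formulation).

    student_feat : (B, D) student descriptors for the anchor images
    teacher_feats: list of (B, D) descriptors from multiple frozen teachers
    anchor/positive/negative: (B, D) student descriptors for triplet supervision (hard labels)
    """
    # Soft-label term: align the student descriptors with the averaged
    # teacher descriptors at the feature level via a temperature-scaled KL term.
    teacher_mean = torch.stack(teacher_feats, dim=0).mean(dim=0)
    soft_loss = F.kl_div(
        F.log_softmax(student_feat / tau, dim=1),
        F.softmax(teacher_mean / tau, dim=1),
        reduction="batchmean",
    ) * (tau ** 2)

    # Hard-label term: standard triplet margin loss on the student descriptors,
    # pulling matching places together and pushing non-matching places apart.
    hard_loss = F.triplet_margin_loss(anchor, positive, negative, margin=margin)

    # Weighted combination of soft (teacher) and hard (ground-truth) supervision.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss


if __name__ == "__main__":
    B, D = 8, 256
    student = torch.randn(B, D)
    teachers = [torch.randn(B, D) for _ in range(2)]
    a, p, n = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
    print(soft_hard_distillation_loss(student, teachers, a, p, n))
```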
- Clone the repository:

  ```bash
  git clone https://github.com/CV4RA/ASHT-KD.git
  cd ASHT-KD
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Ensure you have the necessary datasets (e.g., KITTI, Tokyo 24/7, VPRICE, Nordland) available for training and evaluation (a hypothetical loader sketch follows this list).
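The directory layout expected by this repository's data loaders is defined in its code; the snippet below is only a hypothetical illustration of how query images for a place-recognition benchmark might be loaded, assuming a `queries/` or `database/` folder of JPEG images.

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class SimpleVPRDataset(Dataset):
    """Hypothetical loader for a split laid out as <root>/queries/*.jpg and
    <root>/database/*.jpg (not necessarily this repo's actual format)."""

    def __init__(self, root, split="queries", image_size=224):
        self.paths = sorted(Path(root, split).glob("*.jpg"))
        self.transform = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Return the image tensor and its index so retrieved neighbours
        # can be mapped back to file names during evaluation.
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img), idx
```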
Train the model:

```bash
python main.py --mode train --data_dir /path/to/dataset --epochs 20 --batch_size 32
```

Evaluate a trained model:

```bash
python main.py --mode evaluate --data_dir /path/to/test/data --checkpoint /path/to/model/checkpoint
```
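VPR evaluation typically retrieves the nearest database descriptors for each query and reports Recall@N. The sketch below shows that metric in isolation, using placeholder descriptor arrays and hypothetical ground-truth match sets; the repository's own evaluation code may differ in details.

```python
import numpy as np

def recall_at_n(query_desc, db_desc, ground_truth, n_values=(1, 5, 10)):
    """Recall@N for place recognition: a query counts as correct at N if any of
    its top-N retrieved database images is a true match (illustrative only)."""
    # Cosine similarity between L2-normalised query and database descriptors.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    sims = q @ d.T
    ranked = np.argsort(-sims, axis=1)  # database indices, best match first

    recalls = {}
    for n in n_values:
        hits = sum(
            1 for qi, gt in enumerate(ground_truth)
            if set(ranked[qi, :n]) & set(gt)
        )
        recalls[n] = hits / len(ground_truth)
    return recalls


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, d = rng.normal(size=(5, 128)), rng.normal(size=(100, 128))
    gt = [{i} for i in range(5)]  # hypothetical true-match indices per query
    print(recall_at_n(q, d, gt))
```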
