This repository implements a dark-enhanced network for robust visual place recognition in low-light conditions. The project is based on the research paper that introduces ResEM (Residual Enhancement Module) and DSPFormer (Dual-Level Sampling Pyramid Transformer) to enhance image quality and extract discriminative features in challenging environments.
Datasets will be released later.
Visual Place Recognition (VPR) is crucial for mobile robots and autonomous systems, particularly in low-light environments where standard methods struggle. This repository implements the following:
- ResEM (Residual Enhancement Module): A lightweight GAN-based module that enhances image quality under low-light conditions.
- DSPFormer (Dual-Level Sampling Pyramid Transformer): A transformer-based network that extracts robust features for place recognition.
The network is trained using a combination of Triplet Loss and Adversarial Loss to improve performance under challenging conditions.
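As a reference for how such a combined objective could be wired up, the sketch below uses PyTorch's built-in triplet and binary cross-entropy losses. The discriminator output, the weighting factor lambda_adv, and the margin value are illustrative assumptions, not the repository's actual settings.

```python
# Minimal sketch of a combined triplet + adversarial objective (assumed values).
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=0.1, p=2)   # metric-learning term
adv_criterion = nn.BCEWithLogitsLoss()                  # GAN term for the enhancer
lambda_adv = 0.1                                        # assumed loss weighting

def total_loss(anchor, positive, negative, disc_out_enhanced):
    """Combine the retrieval (triplet) and enhancement (adversarial) terms.

    anchor / positive / negative: global descriptors for a training triplet.
    disc_out_enhanced: discriminator logits for enhanced images, which the
    enhancement module is pushed to make look 'real'.
    """
    l_trip = triplet_loss(anchor, positive, negative)
    real_labels = torch.ones_like(disc_out_enhanced)
    l_adv = adv_criterion(disc_out_enhanced, real_labels)
    return l_trip + lambda_adv * l_adv
```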
To set up the project, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/CV4RA/Dark-enhanced-VPR-Net.git
  cd Dark-enhanced-VPR-Net/
  ```

- Set up a virtual environment:

  ```bash
  python -m venv vpr_env
  source vpr_env/bin/activate
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Project structure:

- models/: Contains the implementation of ResEM, DSPFormer, and the loss functions.
- data/: Data loading scripts for training and evaluation.
- train/: Training scripts.
- eval/: Evaluation scripts.
- requirements.txt: Python dependencies.
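For orientation, the snippet below sketches how the two modules might be chained at inference time. The import paths and class names (models.resem.ResEM, models.dspformer.DSPFormer) are assumptions inferred from the structure above, not verified entry points of this repository.

```python
# Hypothetical inference pipeline: enhance a low-light image, then describe it.
import torch
from models.resem import ResEM          # assumed module path
from models.dspformer import DSPFormer  # assumed module path

enhancer = ResEM().eval()
descriptor_net = DSPFormer().eval()

with torch.no_grad():
    low_light_image = torch.rand(1, 3, 224, 224)   # dummy input batch
    enhanced = enhancer(low_light_image)           # ResEM: image enhancement
    descriptor = descriptor_net(enhanced)          # DSPFormer: global descriptor
```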
To train the model, run the following command:

```bash
python train_dark_vpr.py
```

To evaluate the model, run the following command:

```bash
python eval_dark_vpr.py
```
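VPR evaluation is commonly reported as Recall@N over retrieved database candidates. The sketch below shows a generic version of that metric as a reference; it is not taken from eval_dark_vpr.py, and the descriptor arrays and ground-truth mapping are assumed inputs.

```python
# Generic Recall@N for place recognition (assumed inputs, not the repo's eval code).
import numpy as np

def recall_at_n(db_desc, q_desc, gt, n_values=(1, 5, 10)):
    """db_desc, q_desc: L2-normalized descriptor matrices (rows = images).
    gt: list mapping each query index to its ground-truth database indices."""
    sims = q_desc @ db_desc.T                      # cosine similarity (normalized inputs)
    ranked = np.argsort(-sims, axis=1)             # best database match first
    recalls = {}
    for n in n_values:
        hits = sum(
            len(set(ranked[i, :n]) & set(gt[i])) > 0
            for i in range(len(q_desc))
        )
        recalls[n] = hits / len(q_desc)
    return recalls
```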