Commit 4f35dfe (1 parent 4b71cde)

modified readme

File tree: 1 file changed (+15, -13 lines)


README.md

Lines changed: 15 additions & 13 deletions
@@ -1,24 +1,26 @@
-# Baseline Model for Underwater Military Munitions (UWMM) Detection
+# Thematic2.5D: A Toolkit for Evaluating 2D and 3D Feature Effects in Supervised Classification
 
+[![Python](https://img.shields.io/badge/python-3.12-blue.svg)](https://docs.python.org/3/whatsnew/3.12.html)
 [![Unit Tests](https://github.com/CIRS-Girona/uwmm-baseline/actions/workflows/python-app.yml/badge.svg)](https://github.com/CIRS-Girona/uwmm-baseline/actions/workflows/python-app.yml)
 
-This project implements a baseline model for the detection of Under-Water Military Munitions (UWMM), replicating the methodology presented in the paper "Improved supervised classification of underwater military munitions using height features derived from optical imagery" by Gleason et al. (2015). This Python implementation is used to process and analyze different data modalities, including optical imagery (2D), geometric data (3D), and a combined 2.5D representation, to evaluate their effectiveness in identifying Unexploded Ordnance (UXO) in underwater environments.
+This project implements a modular toolkit for supervised classification and thematic mapping, inspired by the methodology presented in the paper "Improved supervised classification of underwater military munitions using height features derived from optical imagery" by Gleason et al. (2015). The package processes and analyzes multi-modal data, including optical imagery (2D), geometric data (3D), and a combined 2.5D representation, to evaluate the effectiveness of different feature modalities in classifying objects in complex environments.
 
 ## Purpose
 
 The primary objectives of this project are to:
 
 * Replicate the findings of Gleason et al. (2015) using Python-based tools.
-* Compare the performance of SVM models trained on 2D-derived features (color, texture), 3D-derived features (curvature, rugosity), and combined optical and depth features (2.5D) for UWMM detection.
-* Establish a modular framework for building datasets, training classification models, and conducting inference for UWMM detection tasks.
+* Provide a flexible framework for supervised classification and thematic mapping using multi-modal data.
+* Compare the performance of SVM models trained on 2D-derived features (color, texture), 3D-derived features (curvature, rugosity), and combined optical and depth features (2.5D) to assess their relative contributions to classification accuracy.
+* Establish a modular framework for building datasets, training classification models, and conducting inference tasks.
 
 ## Key Features
 
 * **Dataset Generation:** Processes original image, depth, and mask data to create training patches.
-* **Multi-Modal Data Handling:** Supports the use of optical imagery and depth information for model training and evaluation.
-* **SVM Classification:** Implements SVM models for classifying potential UWMM based on extracted features.
-* **Trainable Models:** Provides functionality to train classification models on the generated dataset.
-* **Inference Pipeline:** Enables the application of trained models to new underwater imagery for UXO detection.
+* **Multi-Modal Data Handling:** Supports optical imagery and depth information for model training and evaluation.
+* **SVM Classification:** Implements SVM models for classifying objects based on extracted features.
+* **Trainable Models:** Provides functionality to train classification models on generated datasets.
+* **Inference Pipeline:** Enables the application of trained models to new imagery for object detection and thematic mapping.
 
 ## Getting Started
 
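The 2.5D comparison described above can be sketched with scikit-learn: concatenate a 2D-style feature vector (color, texture) and a 3D-style feature vector (curvature, rugosity) per sample, then train one SVM on the combined representation. This is a hedged illustration on synthetic data, not the toolkit's actual API; the names `X2d`, `X3d`, the feature counts, and the labels are made up for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X2d = rng.normal(size=(n, 6))  # stand-ins for color/texture features
X3d = rng.normal(size=(n, 4))  # stand-ins for curvature/rugosity features
y = (X2d[:, 0] + X3d[:, 0] > 0).astype(int)  # synthetic binary labels

# "2.5D": concatenate the optical and geometric feature vectors per sample
X25d = np.hstack([X2d, X3d])

# Scaling before the RBF-kernel SVM keeps both modalities on comparable ranges
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X25d, y)
print(clf.score(X25d, y))  # training accuracy on the synthetic data
```

Training separate models on `X2d`, `X3d`, and `X25d` and comparing their scores mirrors the modality comparison the toolkit is built for.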

@@ -30,7 +32,7 @@ This step provides instruction on how to install the project and test the models
 **Setting Up the Project:**
 
-This project has been solely tested on Python version 3.12. It is recommended that a virtual environment is used when running the pipeline. The following is one approach to setup the project using Python's `venv` environment:
+It is recommended that a virtual environment be used when running the pipeline. The following is one approach to setting up the project using Python's `venv` module:
 
 ```bash
 python -m venv venv && source venv/bin/activate
@@ -39,7 +41,7 @@ pip install -r requirements.txt
 **Testing the Pipeline:**
 
-A testing script along with data samples organized in the required format are provided in the `tests/` directory. Before running the `test.py` script, please ensure that all the paths found in the config file are pointing correctly to the provided sample dataset and that the UXO codes are unchanged. The default config file is already setup to be run using the `test.py` script from the get-go.
+A testing script and data samples organized in the required format are provided in the `tests/` directory. Before running the `test.py` script, please ensure that all paths in the config file point to the provided sample dataset and that the object codes are unchanged. The default config file is already set up to run with the `test.py` script out of the box.
 
 Please make sure that the current working directory is the root directory of the repository before running the `test.py` script. The script can be run using the following command:
 

@@ -57,7 +59,7 @@ The project expects original data to be organized as images within a directory s
 * `images`: Contains original 2D imagery.
 * `depths`: Contains corresponding depth maps, formatted as 1-channel, 16-bit PNGs.
-* `masks`: Contains corresponding masks indicating the location of potential UXOs, formatted as 1-channel, 8-bit PNGs.
+* `masks`: Contains corresponding masks indicating the location of target objects, formatted as 1-channel, 8-bit PNGs.
 
 ***Example:***
 
@@ -110,7 +112,7 @@ The training process utilizes the dataset created in the previous step, located
 ### 3. Inference
 
-This step involves using a trained model to detect potential UXOs in a new, unseen image.
+This step involves using a trained model to detect objects in a new, unseen image.
 
 **Input:**
 

@@ -139,7 +141,7 @@ The evaluation results will be saved in a file named `meanIoU.txt` located withi
 ## Results
 
-The experimental results, consistent with the findings of Gleason et al. (2015), highlight the varying performance of models trained on different data modalities:
+The experimental results, consistent with the findings of Gleason et al. (2015), highlight the varying performance of models trained on different data modalities for classifying unexploded ordnance (UXO):
 
 * **2D Model:** Models trained solely on optical imagery exhibited a tendency to misclassify non-UXO objects, such as scales placed in the scene for measurement or rusted chains lying on the sea floor, as UXOs.
 * **3D Model:** Models trained exclusively on depth information demonstrated a high false positive rate, often identifying structures with similar shapes to UXOs as potential targets. These models also struggled with accurately classifying actual UXOs in some instances.
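The evaluation step above reports scores to `meanIoU.txt`. As a hedged sketch of what a mean IoU over class codes can look like (illustrative only, not the repository's implementation), averaging per-class intersection-over-union between a predicted mask and a ground-truth mask:

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean IoU over every class code present in either mask."""
    classes = np.union1d(np.unique(pred), np.unique(gt))
    ious = []
    for c in classes:
        p, g = pred == c, gt == c
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()  # > 0, since c occurs in pred or gt
        ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]], dtype=np.uint8)
gt   = np.array([[0, 1], [1, 1]], dtype=np.uint8)
# class 0: IoU 1/2, class 1: IoU 2/3 -> mean 7/12
print(mean_iou(pred, gt))
```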
