This project implements a modular toolkit for supervised classification and thematic mapping, inspired by the methodology presented in the paper "Improved supervised classification of underwater military munitions using height features derived from optical imagery" by Gleason et al. (2015). The package processes and analyzes multi-modal data, including optical imagery (2D), geometric data (3D), and a combined 2.5D representation, to evaluate the effectiveness of different feature modalities in classifying objects in complex environments.
## Purpose
The primary objectives of this project are to:
* Replicate the findings of Gleason et al. (2015) using Python-based tools.
* Provide a flexible framework for supervised classification and thematic mapping using multi-modal data.
* Compare the performance of SVM models trained on 2D-derived features (color, texture), 3D-derived features (curvature, rugosity), and combined optical and depth features (2.5D) to assess their relative contributions to classification accuracy.
* Establish a modular framework for building datasets, training classification models, and conducting inference tasks.
## Key Features
* **Dataset Generation:** Processes original image, depth, and mask data to create training patches.
* **Multi-Modal Data Handling:** Supports optical imagery and depth information for model training and evaluation.
* **SVM Classification:** Implements SVM models for classifying objects based on extracted features.
* **Trainable Models:** Provides functionality to train classification models on generated datasets.
* **Inference Pipeline:** Enables the application of trained models to new imagery for object detection and thematic mapping.
## Getting Started

This step provides instructions on how to install the project and test the models.
**Setting Up the Project:**
It is recommended to use a virtual environment when running the pipeline. The following is one approach to set up the project using Python's `venv` module:
```bash
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
```
**Testing the Pipeline:**
A testing script, along with data samples organized in the required format, is provided in the `tests/` directory. Before running the `test.py` script, please ensure that all paths in the config file point to the provided sample dataset and that the object codes are unchanged. The default config file is already set up to work with the `test.py` script out of the box.
Please make sure that the current working directory is the root directory of the repository before running the `test.py` script. The script can be run using the following command:

The project expects original data to be organized as images within a directory structure:
* `images`: Contains original 2D imagery.
* `depths`: Contains corresponding depth maps, formatted as 1-channel, 16-bit PNGs.
* `masks`: Contains corresponding masks indicating the location of target objects, formatted as 1-channel, 8-bit PNGs.
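Because each image needs a matching depth map and mask, a quick sanity check of the layout can catch missing files before dataset generation. The sketch below is illustrative only: the `validate_dataset` helper is not part of the package, and matching files across subdirectories by identical filename is an assumption.

```python
from pathlib import Path


def validate_dataset(root: Path) -> list[str]:
    """Report images lacking a same-named depth map or mask.

    Hypothetical helper: assumes files in images/, depths/, and
    masks/ are matched by filename, as in the layout described above.
    """
    problems = []
    for img in sorted((root / "images").iterdir()):
        for sub in ("depths", "masks"):
            if not (root / sub / img.name).exists():
                problems.append(f"missing {sub}/{img.name}")
    return problems
```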
***Example:***
### 3. Inference
This step involves using a trained model to detect objects in a new, unseen image.
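As a rough illustration of the idea (not the project's actual API), patch-level inference with an SVM reduces to extracting a feature vector per patch and calling the trained classifier on it. The feature values, class labels, and scikit-learn usage below are assumptions made for the sketch:

```python
# Minimal sketch of SVM-based inference on per-patch feature vectors.
# The toy features and scikit-learn model stand in for the project's
# actual feature extraction and trained models.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy training set: 4-D feature vectors for "background" (0) vs "object" (1).
X_train = np.vstack([rng.normal(0.0, 1.0, (20, 4)),
                     rng.normal(3.0, 1.0, (20, 4))])
y_train = np.array([0] * 20 + [1] * 20)

model = SVC(kernel="rbf").fit(X_train, y_train)

# "Inference": classify feature vectors extracted from patches of a new image.
new_patches = np.array([[0.1, -0.2, 0.0, 0.3],
                        [3.1, 2.9, 3.2, 2.8]])
pred = model.predict(new_patches)
```

In the real pipeline the feature vectors would come from the 2D, 3D, or 2.5D feature extraction described earlier, and the model would be loaded from the training step rather than fit in place.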
**Input:**
## Results
The experimental results, consistent with the findings of Gleason et al. (2015), highlight the varying performance of models trained on different data modalities for classifying unexploded ordnance (UXO):
* **2D Model:** Models trained solely on optical imagery exhibited a tendency to misclassify non-UXO objects, such as scales placed in the scene for measurement or rusted chains lying on the sea floor, as UXOs.
* **3D Model:** Models trained exclusively on depth information demonstrated a high false positive rate, often identifying structures with similar shapes to UXOs as potential targets. These models also struggled with accurately classifying actual UXOs in some instances.