A robust and scalable multi-task learning (MTL) framework that integrates outlier task detection into a structured gradient boosting process. Built with Python and scikit-learn, R-MTGB is designed to generalize well across heterogeneous task sets and is resilient to task-level noise.
R-MTGB (Robust Multi-Task Gradient Boosting) is a novel ensemble-based learning framework developed to handle task heterogeneity and task-level noise in multi-task learning settings. The model introduces a three-stage boosting architecture (a conceptual code sketch follows the list below):
- Shared Representation Learning: Learns features common across all tasks.
- Outlier Task Detection & Weighting: Optimizes regularized, task-specific parameters to dynamically down-weight noisy or outlier tasks.
- Task-Specific Fine-Tuning: Refines models individually to capture task-specific nuances.
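The three stages can be pictured with plain scikit-learn building blocks. The sketch below is only an illustration of the idea, not the package's implementation: the function names, the inverse-residual weighting heuristic, and the residual-fitting scheme are assumptions made for clarity.

```python
# Conceptual sketch of the three-stage idea (NOT the R-MTGB API; names,
# weighting rule, and staging below are illustrative assumptions).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_three_stage(X, y, task_ids, n_estimators=50):
    """Illustrative three-stage fit: shared model -> task weights -> per-task refinement."""
    # Stage 1: shared representation -- one booster trained on all tasks pooled together.
    shared = GradientBoostingRegressor(n_estimators=n_estimators).fit(X, y)
    residual = y - shared.predict(X)

    # Stage 2: down-weight outlier tasks -- a simple inverse-residual heuristic stands in
    # here for the regularized task-specific parameters that R-MTGB optimizes.
    weights = {}
    for t in np.unique(task_ids):
        mse_t = np.mean(residual[task_ids == t] ** 2)
        weights[t] = 1.0 / (1.0 + mse_t)

    # Stage 3: task-specific fine-tuning -- a separate booster per task, fit on the
    # shared model's residuals and scaled by its task weight at prediction time.
    specific = {}
    for t in np.unique(task_ids):
        mask = task_ids == t
        specific[t] = GradientBoostingRegressor(n_estimators=n_estimators).fit(
            X[mask], residual[mask]
        )
    return shared, weights, specific

def predict_three_stage(X, task_ids, shared, weights, specific):
    pred = shared.predict(X)
    for t, model in specific.items():
        mask = task_ids == t
        pred[mask] += weights[t] * model.predict(X[mask])
    return pred
```

In R-MTGB itself the task weights are obtained by optimizing regularized, task-specific parameters inside the boosting procedure, rather than by a post-hoc heuristic like the one above.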
- Multi-task learning with task-specific and shared components.
- Automatic outlier task detection.
- Interpretable gradient boosting architecture.
- Compatible with various loss functions (regression/classification).
- Performance analysis with per-task metrics.
- Synthetic data generator for benchmarking.
- Scikit-learn-compatible design (see the usage sketch below).
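As an example of how per-task evaluation on a synthetic benchmark might be set up, the snippet below builds a small multi-task regression problem with one deliberately noisy task and reports per-task MSE for a plain scikit-learn baseline. The dataset recipe and the baseline estimator are illustrative assumptions, not the package's bundled generator or estimator; see the Wiki for the actual API.

```python
# Illustrative benchmarking setup, not the package's own generator or API:
# a synthetic multi-task dataset with one deliberately noisy ("outlier") task,
# evaluated with per-task metrics using a plain scikit-learn baseline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tasks, n_per_task, n_features = 4, 200, 8

# Shared linear signal plus a small task-specific shift; task 3 gets heavy label noise.
w_shared = rng.normal(size=n_features)
X = rng.normal(size=(n_tasks * n_per_task, n_features))
task_ids = np.repeat(np.arange(n_tasks), n_per_task)
y = X @ w_shared + 0.5 * task_ids + rng.normal(scale=0.1, size=len(task_ids))
y[task_ids == 3] += rng.normal(scale=5.0, size=n_per_task)  # the outlier task

# Per-task baseline: one booster per task, evaluated with per-task MSE.
for t in range(n_tasks):
    Xt, yt = X[task_ids == t], y[task_ids == t]
    X_tr, X_te, y_tr, y_te = train_test_split(Xt, yt, test_size=0.3, random_state=0)
    model = GradientBoostingRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"task {t}: test MSE = {mean_squared_error(y_te, model.predict(X_te)):.3f}")
```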
Clone the repository and install the dependencies listed in `requirements.txt`:
```bash
git clone https://github.com/GAA-UAM/R-MTGB.git
cd R-MTGB
pip install -r requirements.txt
```

The package is licensed under the GNU Lesser General Public License v2.1.
If you use R-MTGB in your research or work, please consider citing this project with the following BibTeX entry:
```bibtex
@article{EMAMI2025130696,
  title   = {Robust-Multi-Task Gradient Boosting},
  journal = {Expert Systems with Applications},
  pages   = {130696},
  year    = {2025},
  issn    = {0957-4174},
  doi     = {10.1016/j.eswa.2025.130696},
  url     = {https://www.sciencedirect.com/science/article/pii/S0957417425043118},
  author  = {Seyedsaman Emami and Gonzalo {Mart\'{\i}nez-Mu\~noz} and Daniel Hern\'{a}ndez-Lobato}
}
```
To get started with this project, please refer to the Wiki.
Contributions are welcome! Please open an issue or submit a pull request.
Version: 0.0.1
Last updated: 05 June 2025
First released: 26 Jan 2024