Welcome to the MLflow Started Code repository! This repository provides a hands-on example of how to use MLflow for tracking experiments, comparing models, and managing machine learning workflows.
This project demonstrates how to:
- Create and manage MLflow experiments.
- Train and evaluate multiple machine learning models (Decision Tree and Random Forest).
- Log metrics, parameters, feature importances, and predictions.
- Save and load models using MLflow.
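To give a flavor of what such a workflow looks like in code, here is a minimal, self-contained tracking sketch. It is illustrative only, not a copy of `run_experiments.py`: the dataset, experiment name, and hyperparameters are assumptions, and only one of the two models is shown.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data stands in for whatever run_experiments.py actually uses.
X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=42
)

mlflow.set_experiment("started-code-demo")  # hypothetical experiment name

with mlflow.start_run(run_name="random_forest"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)  # hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.log_text(str(model.feature_importances_), "feature_importances.txt")
    mlflow.sklearn.log_model(model, "model")  # save the model as an artifact
```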
- `run_experiments.py`: Script to train models, log metrics, and save artifacts.
- `requirements.txt`: Dependencies for the project.
- `results/`: Directory where logs, model artifacts, and predictions will be saved.
Clone the repository:

```bash
git clone https://github.com/palbha/mlflow_started_code.git
cd mlflow_started_code
```
Make sure you have Python 3.6+ installed. Then, install the required packages:
```bash
pip install -r requirements.txt
```
## Running the Experiment Script

Execute the `run_experiments.py` script to start the MLflow experiment:

```bash
python run_experiments.py
```
Once the script has run successfully, launch the MLflow UI from the repository root (so it picks up the local `mlruns/` directory) and take a look at the results of your experiments:

```bash
mlflow server --host 127.0.0.1 --port 8080
```
Open your browser and go to http://127.0.0.1:8080/ to see the experiments.
Click on any experiment and inspect its runs and artifacts to analyze the output further.
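Beyond the UI, a logged model can be loaded back for predictions. A minimal sketch, assuming the model was logged under the artifact path `model` (copy the run ID from the UI):

```python
import mlflow.sklearn

# <run_id> is a placeholder -- paste a real run ID from the MLflow UI.
# "model" is an assumed artifact path; match what run_experiments.py used.
model = mlflow.sklearn.load_model("runs:/<run_id>/model")

# The loaded object behaves like a normal scikit-learn estimator; the input
# below is only an example and must match the model's training features.
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```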
You can also download the details of each run to create custom graphs and share results with stakeholders.
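If you prefer to pull run details programmatically rather than through the UI's download button, `mlflow.search_runs` returns the runs of an experiment as a pandas DataFrame. A sketch, with the experiment name again being an assumption:

```python
import mlflow

# Placeholder experiment name -- use the one created by run_experiments.py.
runs = mlflow.search_runs(experiment_names=["started-code-demo"])

# One row per run; metric and parameter columns are prefixed accordingly.
cols = ["run_id", "status"] + [
    c for c in runs.columns if c.startswith(("metrics.", "params."))
]
runs[cols].to_csv("run_details.csv", index=False)  # e.g., for custom graphs
```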