
Open Energy Benchmark

This repository contains code for benchmarking optimization solvers on problems from the energy planning domain, and an interactive website for analyzing the results. The live website can be viewed at:

https://openenergybenchmark.org/

Benchmark Problems

All our benchmark problems are open and available as LP/MPS files that can be downloaded in one click from our website's Benchmark Set page. We generated some of the problems ourselves using open-source energy model frameworks, and for these we provide configuration files and instructions for reproducing the problems.

Contributing Benchmark Problems

We welcome contributions of new benchmark problems from the community! See this page for details about our current benchmark set and the gaps we would love your help to fill.

If you are familiar with git and GitHub, we request that you contribute as follows:

  1. Generate an MPS file (preferred; alternatively, LP files are also acceptable) for each optimization problem using your energy model framework. (The exact steps depend on the framework, but reach out if you need help -- we are happy to support you through this.)

  2. Write up the details and classification of each problem you contribute in a YAML file, using this template as a guide (see the sketch after this list).

  3. Upload the MPS file to any file sharing service of your choice.

  4. Open a pull request (PR) which adds the metadata file to a new directory benchmarks/<model-framework-or-source>/. We will review the contribution and work with you to add suitable problems to our platform.
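
For illustration, here is a rough sketch of what such a metadata file (step 2) might contain. The field names below are assumptions for illustration only -- the template linked in step 2 is authoritative:

# Illustrative sketch only; field names are assumptions, follow the template.
my-framework-dispatch-24h:                 # hypothetical benchmark name
  Short description: Single-node dispatch problem over 24 hourly snapshots
  Model name: my-framework                 # hypothetical modelling framework
  Problem class: LP                        # e.g. LP or MILP
  Sizes:
    - Name: 24h
      URL: https://example.com/my-framework-dispatch-24h.mps  # your file sharing link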

Don't worry if you're not familiar with git or GitHub! Please write to us, or open an issue, and we can support you through the above steps. We thank you in advance for your contributions.

Generating Benchmark Problems

  1. The PyPSA benchmarks in benchmarks/pypsa/ can be generated using the Dockerfile in that directory. Please see the instructions for more details.

  2. The JuMP-HiGHS benchmarks in benchmarks/jump_highs_platform/ contain only the metadata for the benchmarks present in https://github.com/jump-dev/open-energy-modeling-benchmarks/tree/main/instances. The instance files themselves are fetched automatically from GitHub by the benchmark runner.

  3. The metadata of all benchmarks under benchmarks/ is collected into a unified results/metadata.yaml file by the following script:

    python benchmarks/merge_metadata.py

The unified results/metadata.yaml contains all details of each benchmark problem, including the download link, and is used by the benchmark runner (below).
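
The unified file can also be consumed programmatically. Below is a minimal sketch, assuming a top-level benchmarks mapping whose entries carry a Sizes list with download URLs (these field names are assumptions for illustration; consult the generated results/metadata.yaml for the real schema):

# Sketch: print each benchmark and its download URL from the unified
# metadata file. Field names here are assumptions for illustration.
import yaml

with open("results/metadata.yaml") as f:
    metadata = yaml.safe_load(f)

for name, entry in metadata.get("benchmarks", {}).items():
    for size in entry.get("Sizes", []):
        print(name, size.get("Name"), size.get("URL"))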

Running Benchmarks

The benchmark runner script creates conda environments containing the solvers and other necessary prerequisites, so you do not need to set up a virtual environment yourself.

./runner/benchmark_all.sh ./results/metadata.yaml

The script will save the measured runtime and memory consumption into a CSV file in results/, which the website then reads and displays. The script has options, e.g. to run only particular years, which you can see with the -h flag:

Usage: ./runner/benchmark_all.sh [-a] [-y "<space separated years>"] [-r <seconds>] [-u <run_id>] <benchmarks yaml file>
Runs the solvers from the specified years (default all) on the benchmarks in the given file
Options:
    -a    Append to the results CSV file instead of overwriting. Default: overwrite
    -y    A space separated string of years to run. Default: 2020 2021 2022 2023 2024 2025
    -r    Reference benchmark interval in seconds. Default: 0 (disabled)
    -u    Unique run ID to identify this benchmark run. Default: auto-generated
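
For example, to append results for only the 2023 and 2024 solver versions to the existing CSV file:

./runner/benchmark_all.sh -a -y "2023 2024" ./results/metadata.yaml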

The benchmark_all.sh script activates the appropriate conda environment and then calls python runner/run_benchmarks.py. This script can also be called directly, if required, but you must be in a conda environment that contains the solvers you want to benchmark. For example:

python ./runner/run_benchmarks.py ./results/metadata.yaml 2024

Call python runner/run_benchmarks.py -h to see more options.

Solver Versions

We support the following solver versions, using the last version released in each calendar year:

| Solver | 2020         | 2021         | 2022   | 2023    | 2024    | 2025   |
|--------|--------------|--------------|--------|---------|---------|--------|
| HiGHS  | Not on PyPI  |              | 1.5.0  | 1.6.0   | 1.9.0   | 1.10.0 |
| SCIP   | Error        | Error        | 8.0.3  | 8.1.0   | Error   | 9.2.2  |
| CBC    | Bug          | Bug          |        | 2.10.11 | 2.10.12 |        |
| GLPK   | 5.0.0        |              |        |         |         |        |
| Gurobi | Incompatible | Incompatible | 10.0.0 | 11.0.0  | 12.0.0  |        |

When determining the most recent version released in a particular year, we consult each solver's published release history (for example, its PyPI release page).
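
For illustration, the environment for a given year pins that year's solver versions from the table above; a 2024 environment would include something like the following (these exact pins are an assumption based on the table -- the runner's environment definitions are authoritative):

pip install highspy==1.9.0 gurobipy==12.0.0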

Running the Website

Next.js Production Website

The website code is under website-nextjs/. To run the website locally, you need recent versions of Node.js and npm installed. Then, run the following commands:

cd website-nextjs/
npm install
npm run build && npm run dev

Open http://localhost:3000 with your browser to see the website.

Running the Streamlit Website

Deprecated: this is an old proof-of-concept website that we no longer use.

Before you begin, make sure your development environment includes Python.

Preferred versions:

  • python: 3.12.4
  • pip: 24.1.2

We use Python virtual environments to manage the dependencies for the website. This is how to create a virtual environment:

python -m venv venv

This is how to activate one:

  • Windows
    .\venv\Scripts\activate
  • Linux/MacOS
    source venv/bin/activate

And this is how to install the required dependencies once a venv is activated:

  • Website:
    pip install -r website/requirements.txt

Remember to activate the virtual environment containing the website's requirements, and then run:

streamlit run website/app.py

The website will be running at: http://localhost:8501

Development

We use the ruff code linter and formatter, and GitHub Actions runs various pre-commit checks to ensure code and files are clean.

You can install a git pre-commit hook that ensures your changes are formatted and free of lint issues before each commit:

pip install pre-commit
pre-commit install
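
You can also run all the checks manually over the whole repository:

pre-commit run --all-files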

If you want to skip these pre-commit steps for a particular commit, you can run:

git commit --no-verify
