[CI][Benchmark] Add comparison script to benchmark scripts + benchmark dashboard via sycl-docs.yml #17522

Merged: 126 commits, Apr 14, 2025
Commits (126)
e6ca992
Move UR devops scripts to devops folder
ianayl Feb 27, 2025
3d42db2
Restrict number of cores used
ianayl Feb 28, 2025
fc70520
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 4, 2025
4f08dd6
Restore ur-benchmark*.yml
ianayl Mar 4, 2025
497dcce
[benchmarks] improve HTML and Markdown output
pbalcer Mar 5, 2025
3cbed5e
Test UR benchmarking suite
ianayl Mar 5, 2025
1936207
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 5, 2025
f79bbbf
Bump tolerance to 7%
ianayl Mar 5, 2025
ffc8139
Revert "Bump tolerance to 7%"
ianayl Mar 5, 2025
0a34e0d
[benchmarks] fix failing benchmarks, improve html output
pbalcer Mar 6, 2025
3f42420
[benchmarks] fix python formatting with black
pbalcer Mar 6, 2025
1c7b189
update driver version
pbalcer Mar 6, 2025
ad13e93
simplify preset implementation and fix normal preset
pbalcer Mar 6, 2025
68ed0c4
Add PVC and BMG as runners
ianayl Mar 6, 2025
18fff93
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 6, 2025
3a65b98
Install dependencies before running UR script
ianayl Mar 6, 2025
220121a
Use venv for python packages
ianayl Mar 6, 2025
37d361c
Install venv before using venv
ianayl Mar 6, 2025
07f1e10
[benchmarks] allow specifying custom results directories
pbalcer Mar 7, 2025
64cf79c
[benchmarks] sort runs by date for html output
pbalcer Mar 7, 2025
6c28d33
simplify presets, remove suites if all set
pbalcer Mar 10, 2025
e15b94f
[benchmarks] use python venv for scripts
pbalcer Mar 10, 2025
78fd037
Run apt with sudo
ianayl Mar 10, 2025
0ed1599
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 10, 2025
82b6e55
Ignore "missing" apt packages in workflow
ianayl Mar 10, 2025
162cba0
Change pip to install to user
ianayl Mar 10, 2025
848f741
Ignore system controlled python env
ianayl Mar 10, 2025
918604e
[CI] use realpaths when referring to SYCL
ianayl Mar 10, 2025
72d8730
[CI] use minimal preset when running benchmarks
ianayl Mar 10, 2025
066f5a6
[CI] Allow 2 bench scripts locations (#17394)
lukaszstolarczuk Mar 12, 2025
18e5291
add ulls compute benchmarks
pbalcer Mar 12, 2025
237750e
[CI][Benchmark] Decouple results from existing file structure, fetch …
ianayl Mar 11, 2025
ba1297f
[benchmark] Disabling UR test suites
ianayl Mar 12, 2025
cd6097f
update compute benchmarks and fix requirements
pbalcer Mar 13, 2025
c4e92c6
fix url updates
pbalcer Mar 13, 2025
ed8eecc
use timestamps in result file names
pbalcer Mar 13, 2025
130212d
add hostname to benchmark run
pbalcer Mar 13, 2025
a884df8
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 13, 2025
5323386
add SubmitGraph benchmark
pbalcer Mar 13, 2025
5bd1d56
Restore sycl-linux-run-tests benchmarking action
ianayl Mar 13, 2025
e9b1375
Restore old SYCL benchmarking CI
ianayl Mar 13, 2025
a3edf7a
Add benchmarking results to sycl-docs.yml
ianayl Mar 13, 2025
6620e4a
[CI] Bump compute bench (#17431)
lukaszstolarczuk Mar 13, 2025
f4a2e39
Initial implementation of unified benchmark workflow
ianayl Mar 13, 2025
5d3b0d9
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 13, 2025
38394bb
[CI] Use commit hash instead, fix issues with run
ianayl Mar 13, 2025
f232b93
add benchmark metadata
pbalcer Mar 14, 2025
30cd308
apply formatting
pbalcer Mar 14, 2025
5e0539a
fix multiple descriptions/notes
pbalcer Mar 14, 2025
137407a
fix benchmark descriptions
pbalcer Mar 14, 2025
e0f5ca6
fix remote html output
pbalcer Mar 14, 2025
1041db6
fix metadata collection with dry run
pbalcer Mar 14, 2025
fae04f4
cleanup compute bench, fix readme, use newer sycl-bench
pbalcer Mar 14, 2025
cfa4a9c
[CI] configure upload results
ianayl Mar 14, 2025
ca963e6
[CI] Change config to update during workflow run instead
ianayl Mar 14, 2025
45a02e1
[CI] Change save name depending on build
ianayl Mar 14, 2025
98f9d38
bump to 2024-2025
ianayl Mar 14, 2025
ef88ea0
[CI] Enforce commit hash to be string regardless
ianayl Mar 14, 2025
c65540d
Initial implementation of comparison code
ianayl Mar 18, 2025
b7acba2
cleanup options in js scripts and fix ordering on bar charts
pbalcer Mar 18, 2025
e330a50
use day on x axis for timeseries
pbalcer Mar 18, 2025
0bd7488
document + add main function to compare.py
ianayl Mar 18, 2025
0738185
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 19, 2025
25fd917
add compare to benchmark action
ianayl Mar 19, 2025
5ff2249
[test] hijack aggregate for testing
ianayl Mar 19, 2025
1469a2a
add missing \, standardize cmd arg opts
ianayl Mar 19, 2025
d22b45e
add missing \
ianayl Mar 19, 2025
ab25299
add curly braces to escape _
ianayl Mar 19, 2025
cde744c
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 19, 2025
cae7049
[benchmarks] Undo merging in prior tests
ianayl Mar 19, 2025
6bff3d6
add an option to limit build parallelism
pbalcer Mar 20, 2025
3662b43
tiny tweaks for benchmark tags
pbalcer Mar 20, 2025
d2610c3
add support for benchmark tags
pbalcer Mar 20, 2025
ffc60bf
support for tags in html
pbalcer Mar 20, 2025
75dd229
better and more tags
pbalcer Mar 20, 2025
cec8f05
formatting
pbalcer Mar 20, 2025
a0d8370
fix fetching tags from remote json
pbalcer Mar 20, 2025
c7f8d10
fix results /w descriptions and add url/commit of benchmarks
pbalcer Mar 20, 2025
1dad513
fix git repo/hash for benchmarks
pbalcer Mar 20, 2025
9f1df9a
[test] bump threshold to 0.01 to trigger failrues
ianayl Mar 20, 2025
8437b89
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 21, 2025
be7271c
Rename ambiguous 'benchmarks.yml' to a better name
ianayl Mar 24, 2025
b58cd91
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 25, 2025
c55313b
Remove sycl-benchmark-aggregate instrumentation
ianayl Mar 26, 2025
d0d1d3d
Enable build from PR and L0v2
ianayl Mar 28, 2025
4c51558
Introduce presets
ianayl Mar 28, 2025
63d2235
Fix typo
ianayl Mar 28, 2025
23330fc
Fix typo part 2.
ianayl Mar 28, 2025
0d79d89
Fix typo pt 3.
ianayl Mar 28, 2025
722e31e
Merge branch 'sycl' of https://github.com/intel/llvm into ianayl/benc…
ianayl Mar 31, 2025
a8048b2
Reset ur-build-hw.sh
ianayl Mar 31, 2025
29d125c
Add comments explaining executable section in presets.py
ianayl Mar 31, 2025
5a3afcb
Revert stuff that shouldnt be merged
ianayl Mar 31, 2025
b6d42d4
Finally no more reset_intel_gpu
ianayl Mar 31, 2025
8b3b79c
Remove streaming median
ianayl Mar 31, 2025
3a070d5
Add missing newlines
ianayl Mar 31, 2025
186b36e
Allegedly, runner name is already baked into github_env
ianayl Mar 31, 2025
de280a5
Modify save directory structure, amend hostname behavior for github r…
ianayl Apr 1, 2025
4f5ce71
typo fix
ianayl Apr 1, 2025
9bd519f
Ensure timezones are UTC
ianayl Apr 1, 2025
3726a7d
Clarify options
ianayl Apr 1, 2025
60d80a9
enforce UTC time in benchmark action
ianayl Apr 2, 2025
c69e874
Properly load repo/commit information in CI
ianayl Apr 3, 2025
6224eaa
[test] debug message
ianayl Apr 3, 2025
f0a9a97
I forgot a )
ianayl Apr 3, 2025
b68c119
misplaced )
ianayl Apr 3, 2025
cc17af9
revert test
ianayl Apr 3, 2025
64832a6
[test] debug statements
ianayl Apr 3, 2025
ab07001
Whitespace was causing issues?
ianayl Apr 3, 2025
2b94436
rename variables and remove sycl_ prefix
ianayl Apr 3, 2025
63c3092
Delete text message, fix whitespace
ianayl Apr 3, 2025
38bfe31
Merge branch 'sycl' of https://github.com/intel/llvm into ianayl/benc…
ianayl Apr 3, 2025
ca96184
Set up multiple push attempts in CI
ianayl Apr 4, 2025
a3d7ff6
Apply clang format
ianayl Apr 4, 2025
ba7df66
Archive benchmark runs
ianayl Apr 7, 2025
989441d
Remove legacy benchmarking code
ianayl Apr 7, 2025
850ccac
Fix typo
ianayl Apr 7, 2025
47d8861
Update nightly to use new workflow
ianayl Apr 7, 2025
56461ec
Fix bug with caching
ianayl Apr 7, 2025
bfbc7b6
Fix typo
ianayl Apr 7, 2025
3913619
Use no-assertions builds for benchmarking
ianayl Apr 7, 2025
2fab911
Use shared build in nightly
ianayl Apr 8, 2025
652fd39
Build no-assertion versions from scratch instead
ianayl Apr 8, 2025
5a41787
Add a message indicating compare script has indeed been ran
ianayl Apr 8, 2025
05994f2
Add comments to benchmark options in sycl-linux-run-tests
ianayl Apr 9, 2025
c148216
Revert changes to use linux_shared_build for benchmark runs
ianayl Apr 9, 2025
6 changes: 6 additions & 0 deletions .github/workflows/sycl-docs.yml
@@ -49,7 +49,13 @@ jobs:
mkdir clang
mv $GITHUB_WORKSPACE/build/tools/sycl/doc/html/* .
mv $GITHUB_WORKSPACE/build/tools/clang/docs/html/* clang/
cp -r $GITHUB_WORKSPACE/repo/devops/scripts/benchmarks/html benchmarks
touch .nojekyll
# Update benchmarking dashboard configuration
cat << EOF > benchmarks/config.js
remoteDataUrl = 'https://raw.githubusercontent.com/intel/llvm-ci-perf-results/refs/heads/unify-ci/data.json';
defaultCompareNames = ["Baseline_PVC_L0"];
EOF
# Upload the generated docs as an artifact and deploy to GitHub Pages.
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
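The step above writes a small config.js next to the published docs so the static dashboard knows where to pull result data from and which series to compare by default. A minimal sketch of previewing the dashboard locally with the same configuration follows; the checkout path, preview directory, and the index.html entry point are assumptions, not part of this PR:

# Illustrative only: stage the dashboard pages and the generated config.js locally.
cp -r llvm/devops/scripts/benchmarks/html benchmarks-preview
cd benchmarks-preview
cat << EOF > config.js
remoteDataUrl = 'https://raw.githubusercontent.com/intel/llvm-ci-perf-results/refs/heads/unify-ci/data.json';
defaultCompareNames = ["Baseline_PVC_L0"];
EOF
# Serve the pages and open http://localhost:8000 in a browser.
python3 -m http.server 8000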
38 changes: 34 additions & 4 deletions .github/workflows/sycl-linux-run-tests.yml
@@ -25,7 +25,7 @@ on:
required: False
tests_selector:
description: |
Three possible options: "e2e", "cts", and "compute-benchmarks".
Three possible options: "e2e", "cts", and "benchmarks".
type: string
default: "e2e"

@@ -111,6 +111,33 @@ on:
default: ''
required: False

benchmark_upload_results:
description: |
Set to true to upload results to git repository storing benchmarking
results.
type: string
default: 'false'
required: False
benchmark_save_name:
description: |
Save name to use for benchmark results: Save names are stored in
metadata of result file, and are used to identify benchmark results in
the same series (e.g. same configuration, same device, etc.).

Note: Currently, benchmark result filenames are in the format of
<benchmark_save_name>_<Device>_<Backend>_YYYYMMDD_HHMMSS.json
type: string
default: ''
required: False
benchmark_preset:
description: |
Name of benchmark preset to run.

See /devops/scripts/benchmarks/presets.py for all presets available.
type: string
default: 'Minimal'
required: False

workflow_dispatch:
inputs:
runner:
@@ -150,7 +177,7 @@ on:
options:
- e2e
- cts
- compute-benchmarks
- benchmarks

env:
description: |
@@ -303,11 +330,14 @@ jobs:
target_devices: ${{ inputs.target_devices }}
retention-days: ${{ inputs.retention-days }}

- name: Run compute-benchmarks on SYCL
if: inputs.tests_selector == 'compute-benchmarks'
- name: Run benchmarks
if: inputs.tests_selector == 'benchmarks'
uses: ./devops/actions/run-tests/benchmark
with:
target_devices: ${{ inputs.target_devices }}
upload_results: ${{ inputs.benchmark_upload_results }}
save_name: ${{ inputs.benchmark_save_name }}
preset: ${{ inputs.benchmark_preset }}
env:
RUNNER_TAG: ${{ inputs.runner }}
GITHUB_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}
33 changes: 14 additions & 19 deletions .github/workflows/sycl-nightly.yml
@@ -274,35 +274,30 @@ jobs:
sycl_toolchain_archive: ${{ needs.build-win.outputs.artifact_archive_name }}
sycl_cts_artifact: sycl_cts_bin_win

aggregate_benchmark_results:
if: github.repository == 'intel/llvm' && !cancelled()
name: Aggregate benchmark results and produce historical averages
uses: ./.github/workflows/sycl-benchmark-aggregate.yml
secrets:
LLVM_SYCL_BENCHMARK_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}
with:
lookback_days: 100

run-sycl-benchmarks:
needs: [ubuntu2204_build, aggregate_benchmark_results]
needs: [ubuntu2204_build]
if: ${{ always() && !cancelled() && needs.ubuntu2204_build.outputs.build_conclusion == 'success' }}
strategy:
fail-fast: false
matrix:
include:
- name: Run compute-benchmarks on L0 PVC
- ref: ${{ github.sha }}
save_name: Baseline
runner: '["PVC_PERF"]'
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: level_zero:gpu
backend: 'level_zero:gpu'
preset: Minimal
uses: ./.github/workflows/sycl-linux-run-tests.yml
secrets: inherit
with:
name: ${{ matrix.name }}
name: Run compute-benchmarks (${{ matrix.runner }}, ${{ matrix.backend }})
runner: ${{ matrix.runner }}
image_options: ${{ matrix.image_options }}
target_devices: ${{ matrix.target_devices }}
tests_selector: compute-benchmarks
repo_ref: ${{ github.sha }}
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: ${{ matrix.backend }}
tests_selector: benchmarks
benchmark_upload_results: true
benchmark_save_name: ${{ matrix.save_name }}
benchmark_preset: ${{ matrix.preset }}
repo_ref: ${{ matrix.ref }}
sycl_toolchain_artifact: sycl_linux_default
sycl_toolchain_archive: ${{ needs.ubuntu2204_build.outputs.artifact_archive_name }}
sycl_toolchain_decompress_command: ${{ needs.ubuntu2204_build.outputs.artifact_decompress_command }}
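With the nightly matrix above, each run saved under save_name "Baseline" on the PVC runner produces a result file following the <benchmark_save_name>_<Device>_<Backend>_YYYYMMDD_HHMMSS.json pattern documented in sycl-linux-run-tests.yml, which is what the dashboard's defaultCompareNames = ["Baseline_PVC_L0"] picks up by default. A hypothetical listing, assuming a results/ directory in the perf-results repository and made-up timestamps:

# Hypothetical contents of the results repository after a few nightly runs.
ls results/
Baseline_PVC_L0_20250412_030000.json
Baseline_PVC_L0_20250413_030000.json
Baseline_PVC_L0_20250414_030000.json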
140 changes: 133 additions & 7 deletions .github/workflows/sycl-ur-perf-benchmarking.yml
@@ -1,12 +1,138 @@
name: Benchmarks
name: Run Benchmarks

# This workflow is a WIP: this workflow file acts as a placeholder.
on:
workflow_call:
inputs:
preset:
type: string
description: |
Benchmark presets to run: See /devops/scripts/benchmarks/presets.py
required: false
default: 'Minimal' # Only compute-benchmarks
pr_no:
type: string
description: |
PR no. to build SYCL from if specified: SYCL will be built from HEAD
of incoming branch used by the specified PR no.

on: [ workflow_dispatch ]
If both pr_no and commit_hash are empty, the latest SYCL nightly build
will be used.
required: false
default: ''
commit_hash:
type: string
description: |
Commit hash (within intel/llvm) to build SYCL from if specified.

If both pr_no and commit_hash are empty, the latest commit in
deployment branch will be used.
required: false
default: ''
upload_results:
type: string # true/false: workflow_dispatch does not support booleans
required: true
runner:
type: string
required: true
backend:
type: string
required: true

workflow_dispatch:
inputs:
preset:
type: choice
description: |
Benchmark presets to run, See /devops/scripts/benchmarks/presets.py. Hint: Minimal is compute-benchmarks only.
options:
- Full
- SYCL
- Minimal
- Normal
- Test
default: 'Minimal' # Only compute-benchmarks
pr_no:
type: string
description: |
PR no. to build SYCL from:

SYCL will be built from HEAD of incoming branch.
required: false
default: ''
commit_hash:
type: string
description: |
Commit hash (within intel/llvm) to build SYCL from:

Leave both pr_no and commit_hash empty to use latest commit.
required: false
default: ''
upload_results:
description: 'Save and upload results'
type: choice
options:
- false
- true
default: true
runner:
type: choice
options:
- '["PVC_PERF"]'
backend:
description: Backend to use
type: choice
options:
- 'level_zero:gpu'
- 'level_zero_v2:gpu'
# As of #17407, sycl-linux-build now builds v2 by default

permissions: read-all

jobs:
do-nothing:
runs-on: ubuntu-latest
steps:
- run: echo 'This workflow is a WIP.'
build_sycl:
name: Build SYCL
uses: ./.github/workflows/sycl-linux-build.yml
with:
build_ref: |
${{
inputs.commit_hash != '' && inputs.commit_hash ||
inputs.pr_no != '' && format('refs/pull/{0}/head', inputs.pr_no) ||
github.ref
}}
build_cache_root: "/__w/"
build_artifact_suffix: "prod_noassert"
build_cache_suffix: "prod_noassert"
build_configure_extra_args: "--no-assertions"
build_image: "ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest"
cc: clang
cxx: clang++
changes: '[]'

run_benchmarks_build:
name: Run Benchmarks on Build
needs: [ build_sycl ]
strategy:
matrix:
include:
- ref: ${{ inputs.commit_hash != '' && inputs.commit_hash || format('refs/pull/{0}/head', inputs.pr_no) }}
save_name: ${{ inputs.commit_hash != '' && format('Commit{0}', inputs.commit_hash) || format('PR{0}', inputs.pr_no) }}
# Set default values if not specified:
runner: ${{ inputs.runner || '["PVC_PERF"]' }}
backend: ${{ inputs.backend || 'level_zero:gpu' }}
uses: ./.github/workflows/sycl-linux-run-tests.yml
secrets: inherit
with:
name: Run compute-benchmarks (${{ matrix.save_name }}, ${{ matrix.runner }}, ${{ matrix.backend }})
runner: ${{ matrix.runner }}
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: ${{ matrix.backend }}
tests_selector: benchmarks
benchmark_upload_results: ${{ inputs.upload_results }}
benchmark_save_name: ${{ matrix.save_name }}
benchmark_preset: ${{ inputs.preset }}
repo_ref: ${{ matrix.ref }}
devops_ref: ${{ github.ref }}
sycl_toolchain_artifact: sycl_linux_prod_noassert
sycl_toolchain_archive: ${{ needs.build_sycl.outputs.artifact_archive_name }}
sycl_toolchain_decompress_command: ${{ needs.build_sycl.outputs.artifact_decompress_command }}
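The new workflow_dispatch inputs can be exercised from the GitHub UI or with the gh CLI. A sketch of a manual dispatch building from a pull request is shown below; the input names come from the workflow above, while the PR number and target branch are placeholders:

# Illustrative dispatch: build SYCL from a hypothetical PR and run the Minimal preset
# on a PVC runner with the Level Zero backend, without uploading results.
gh workflow run sycl-ur-perf-benchmarking.yml \
  --ref sycl \
  -f preset=Minimal \
  -f pr_no=12345 \
  -f upload_results=false \
  -f runner='["PVC_PERF"]' \
  -f backend='level_zero:gpu'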
95 changes: 0 additions & 95 deletions devops/actions/benchmarking/aggregate/action.yml

This file was deleted.
