The CoreMark benchmark as an OpenVM guest program.
The repo root is the guest crate, and `host/` contains a separate host-side harness.
The guest program can be executed and/or proven using either `cargo openvm` or the host-side harness.
- `git`
- `rustup` with the repo toolchain from `rust-toolchain.toml`
- `cargo openvm` (install via the official OpenVM docs)
- A RISC-V GCC toolchain in `PATH` for guest builds
- NVIDIA tooling for CUDA/profiling flows: `nvidia-smi`, `compute-sanitizer`, and `nsys`
If you want to use a specific RISC-V GCC for guest builds, set `OPENVM_GUEST_GCC`.
Otherwise, `build.rs` tries common toolchain names in `PATH`:
`riscv32-unknown-elf-gcc`, `riscv64-unknown-elf-gcc`,
`riscv32-linux-gnu-gcc`, `riscv64-linux-gnu-gcc`, `riscv-none-elf-gcc`, and
`riscv64-unknown-linux-gnu-gcc`.
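The detection idea can be sketched in Rust roughly as follows. This is a host-runnable sketch of the probing logic, not the repo's actual `build.rs`; only the candidate names are taken from the list above.

```rust
use std::env;
use std::process::{Command, Stdio};

// Candidate compiler names, in the order listed above. The probing logic
// here is a sketch of the idea, not the actual build.rs implementation.
const CANDIDATES: &[&str] = &[
    "riscv32-unknown-elf-gcc",
    "riscv64-unknown-elf-gcc",
    "riscv32-linux-gnu-gcc",
    "riscv64-linux-gnu-gcc",
    "riscv-none-elf-gcc",
    "riscv64-unknown-linux-gnu-gcc",
];

// Returns the compiler to use: the OPENVM_GUEST_GCC override if set,
// otherwise the first candidate that responds to `--version` from PATH.
fn find_guest_gcc() -> Option<String> {
    if let Ok(gcc) = env::var("OPENVM_GUEST_GCC") {
        return Some(gcc);
    }
    CANDIDATES.iter().find_map(|name| {
        let ok = Command::new(name)
            .arg("--version")
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .status()
            .map(|s| s.success())
            .unwrap_or(false);
        ok.then(|| name.to_string())
    })
}

fn main() {
    match find_guest_gcc() {
        Some(gcc) => println!("guest compiler: {gcc}"),
        None => println!("no RISC-V GCC found; set OPENVM_GUEST_GCC"),
    }
}
```

Setting `OPENVM_GUEST_GCC` short-circuits the search, which is why the export shown later always wins over `PATH` probing.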
On Ubuntu or Debian, the same package used by CI is:

```sh
sudo apt-get update
sudo apt-get install -y gcc-riscv64-unknown-elf
```

Then verify that one of the supported compiler names is available:

```sh
command -v riscv64-unknown-elf-gcc
riscv64-unknown-elf-gcc --version
```

If your system installs a different supported binary name, that is also fine as
long as it is on `PATH`. If you want to force a specific compiler, export it
explicitly before building:
```sh
export OPENVM_GUEST_GCC=riscv64-unknown-elf-gcc
```

```sh
git clone --recurse-submodules <repo-url>
cd openvm-coremark
```

If you already cloned without submodules:

```sh
git submodule update --init --recursive
```

To build and run the CoreMark guest program using OpenVM:
```sh
cargo openvm run
```

To generate an app proof of the CoreMark execution:

```sh
cargo openvm keygen --app-only
cargo openvm prove app
```

To generate an aggregated STARK proof of the CoreMark execution:

```sh
cargo openvm setup
cargo openvm keygen
cargo openvm prove stark
```

To use a fixed iteration count instead of the default build-time setting (i.e. 10000):

```sh
CFLAGS="-DITERATIONS=1000" cargo openvm run
CFLAGS="-DITERATIONS=1000" cargo openvm prove app
CFLAGS="-DITERATIONS=1000" cargo openvm prove stark
```

For more information on `cargo openvm` usage, see the official OpenVM docs.
The host harness currently expects a guest ELF at `host/elf/openvm-coremark`.
Build the guest ELF with `cargo openvm build`, and then copy the resulting ELF there:

```sh
mkdir -p host/elf
cp target/riscv32im-risc0-zkvm-elf/<profile>/openvm-coremark host/elf/openvm-coremark
```

Then run the host wrapper:

```sh
./host/scripts/run_coremark.sh
```

For the `cargo openvm` flow, measure elapsed execution/proving time outside the guest
with a shell timing utility such as:

```sh
time cargo openvm run
```

The in-guest CoreMark timing hooks are stubbed, so the benchmark's printed timing-derived fields are not meaningful in this repo's current setup.
For the host-harness flow, `./host/scripts/run_coremark.sh` writes `metrics.json`
in the normal (non-`--nsys`) path. You can use `openvm-prof`
on that `metrics.json` output for profiling and benchmark analysis.
```
openvm-coremark/
├── Cargo.toml            # Guest crate manifest
├── build.rs              # Builds CoreMark C sources into the guest crate
├── openvm.toml           # Guest VM configuration
├── src/
│   └── main.rs           # OpenVM guest entrypoint
├── portme/
│   ├── core_portme.c     # CoreMark porting layer implementation for OpenVM
│   └── core_portme.h     # CoreMark porting layer definitions
├── coremark/             # CoreMark sources (git submodule)
└── host/
    ├── Cargo.toml        # Host crate manifest
    ├── src/
    │   └── main.rs       # Host benchmark/prover entrypoint
    └── scripts/
        └── run_coremark.sh # Wrapper to build and run the host harness
```
CoreMark expects a `core_portme.h`/`core_portme.c` implementation that supplies:

- platform types/config (`ee_*` typedefs, `SEED_METHOD`, `MEM_METHOD`, etc.)
- timing hooks (`start_time`, `stop_time`, `get_time`, `time_in_secs`)
- printing (`ee_printf`)
This repo’s portme has two notable features:

- Printing is implemented via OpenVM: `ee_printf` is implemented in C, but it routes each emitted byte through a small Rust-exported symbol (`coremark_putchar`), which calls `openvm::io::print`. The formatter is intentionally minimal and only supports the subset CoreMark uses (e.g. `%s`, `%d`/`%i`, `%u`, `%x`/`%X`, `%c`, `%%`, simple width/zero-padding like `%04x`, and `%lu`).
- Timing is NOT implemented in-guest: OpenVM guest programs don’t currently expose a meaningful wall-clock/cycle counter to the guest. We measure elapsed time using host wall-clock around `cargo openvm run`, and keep the CoreMark timing hooks as minimal stubs so the benchmark can run and print.
> [!WARNING]
> Because the timing hooks are stubbed, CoreMark will typically print:
>
> ```
> ERROR! Must execute for at least 10 secs for a valid result!
> ```
>
> This is expected in this repo’s current setup; use host wall-clock timing around
> `cargo openvm run` instead of the in-guest timing-derived fields (`Total ticks`,
> `Total time (secs)`, and `Iterations/Sec`), which will not be accurate.
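The byte-level printing bridge described above can be sketched as follows. This is a host-runnable approximation: in the guest, the function body forwards to `openvm::io::print`, while here std's `print!` stands in so the sketch runs anywhere.

```rust
// Sketch of the Rust half of the printing bridge. The exported symbol name
// matches what the C ee_printf implementation calls for each emitted byte.
// In the real guest the body forwards to openvm::io::print; std's print!
// stands in here.
#[no_mangle]
pub extern "C" fn coremark_putchar(c: u8) {
    print!("{}", c as char);
}

fn main() {
    // The minimal C formatter would drive this one byte at a time.
    for &b in b"CoreMark 1.0\n" {
        coremark_putchar(b);
    }
}
```

Keeping the Rust side to a single-byte sink keeps the FFI surface trivial; all formatting stays on the C side.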
- Uses `openvm::entry!(main)` to define the guest entrypoint.
- Exposes `coremark_putchar(u8)` for the C `ee_printf` implementation.
- Calls the C function `coremark_main(argc, argv)` (CoreMark’s C `main` renamed at build time) and returns `Ok(())` iff the return code is `0`.
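The entrypoint's control flow can be sketched like this. It is a host-runnable sketch only: the real guest uses `openvm::entry!(main)` rather than a plain `main`, and `coremark_main` is the compiled C function, for which a stub stands in here.

```rust
// Stub standing in for CoreMark's C entry (its `main`, renamed at build
// time). In the guest it is an extern "C" function provided by the
// compiled C sources.
fn coremark_main(_argc: i32, _argv: *const *const u8) -> i32 {
    0 // 0 signals a successful benchmark run
}

// Mirrors the guest entrypoint logic: succeed iff coremark_main returns 0.
fn run() -> Result<(), i32> {
    match coremark_main(0, std::ptr::null()) {
        0 => Ok(()),
        rc => Err(rc),
    }
}

fn main() {
    assert!(run().is_ok());
}
```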
The standalone host-side benchmark/proving binary lives under `host/`, separate from the guest crate at the repo root. The recommended entrypoint is:

```sh
./host/scripts/run_coremark.sh
```

The wrapper script builds the host binary from `host/`, runs it against the
guest ELF staged at `host/elf/openvm-coremark`, and enables some host-specific
features automatically based on the machine it is running on.
By default, it runs in `prove-stark` mode with the `release` Cargo profile.
On x86_64, it also enables the host `aot` feature. If `nvidia-smi` is
available, the script automatically enables CUDA and records GPU memory usage to
`gpu_memory_usage.csv`. If no NVIDIA tooling is available, the host harness
still runs without those profiling features.
- `--mode <MODE>`: choose one of `execute`, `execute-metered`, `prove-app`, or `prove-stark`
- `--profile <PROFILE>`: build the host binary with `dev`, `release`, or a custom Cargo profile such as `profiling`
- `--cuda`: force CUDA acceleration instead of relying on auto-detection via `nvidia-smi`
- `--nsys`: run under NVIDIA Nsight Systems profiling; this implies CUDA and uses `sudo nsys profile`
- `--memcheck`: run under `compute-sanitizer --tool memcheck`
- `--synccheck`: run under `compute-sanitizer --tool synccheck`
- `--racecheck`: run under `compute-sanitizer --tool racecheck`
If you only want the standard host benchmark/prover flow, `./host/scripts/run_coremark.sh`
is enough. The CUDA, `compute-sanitizer`, and `nsys` paths are optional and only
needed for GPU acceleration or profiling/debugging work.
The code in this repository is licensed under MIT; see `LICENSE`.
The bundled `coremark/` directory is third-party code from EEMBC's CoreMark project
and remains subject to its upstream license terms; see `coremark/LICENSE.md`.