Feature gating: Production metrics are enabled by the `metrics` feature (or by the convenience `standard` feature). They are not part of the minimal default unless `standard` is selected.
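Assuming the crate is pulled in under the name used in the examples below, enabling the feature in `Cargo.toml` might look like this (the version is a placeholder; pin it per your project):

```toml
[dependencies]
# Enable production metrics explicitly...
benchmark = { version = "*", features = ["metrics"] }

# ...or use the convenience bundle that includes them:
# benchmark = { version = "*", features = ["standard"] }
```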
Production performance metrics for latency- and throughput-critical paths. Use `Watch`, `Timer`, and `stopwatch!` to record nanosecond timings with negligible overhead and export percentile snapshots for dashboards and alerts.
Observability requires low-overhead, high-fidelity measurements. The metrics feature uses an internal, zero-dependency histogram with lock-free recording to capture latencies precisely while minimizing contention. Snapshots expose p50/p90/p95/p99/p999, min/max, mean, and count.
Focused on production use: code instrumentation, real-time snapshots, health checks, and easy export to your existing monitoring stack. No external histogram dependency.
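The percentile fields in a snapshot follow the standard nearest-rank definition; a small self-contained illustration of how p50/p99 are read off sorted samples (this shows the general formula, not necessarily this crate's internal histogram implementation):

```rust
// Nearest-rank percentile over sorted nanosecond samples.
fn percentile(sorted: &[u64], p: f64) -> u64 {
    assert!(!sorted.is_empty());
    // ceil(p/100 * n) gives the 1-based rank of the percentile sample.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1).min(sorted.len() - 1)]
}

fn main() {
    let mut samples: Vec<u64> = (1..=100).collect(); // 1..=100 ns
    samples.sort_unstable();
    println!("p50={} p99={}", percentile(&samples, 50.0), percentile(&samples, 99.0));
    // p50=50 p99=99
}
```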
Add timing to hot paths and endpoints using `stopwatch!` or `Timer`. Name metrics with a stable, low-cardinality scheme (e.g., `http.GET:/users/:id`).
```rust
use benchmark::{stopwatch, Watch};

fn main() {
    let watch = Watch::new();
    stopwatch!(watch, "http.GET:/users/:id", {
        std::thread::sleep(std::time::Duration::from_millis(2));
    });
    let s = &watch.snapshot()["http.GET:/users/:id"];
    println!("count={} p95={}ns", s.count, s.p95);
}
```

While this crate does not implement tracing, you can correlate timings with trace/span IDs by embedding them in metric names (mind the cardinality cost) or by exporting snapshots alongside trace context captured in your application.
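The low-cardinality naming advice above can be automated by normalizing raw request paths into route templates before using them as metric names. A sketch, where the rule (purely numeric segments become `:id`) is illustrative and should be adapted to your router:

```rust
// Normalize a raw request path to a low-cardinality metric name.
fn route_template(method: &str, path: &str) -> String {
    let normalized: Vec<&str> = path
        .split('/')
        .map(|seg| {
            // Collapse numeric path segments into a placeholder.
            if !seg.is_empty() && seg.chars().all(|c| c.is_ascii_digit()) {
                ":id"
            } else {
                seg
            }
        })
        .collect();
    format!("http.{}:{}", method, normalized.join("/"))
}

fn main() {
    println!("{}", route_template("GET", "/users/12345"));
    // http.GET:/users/:id
}
```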
Periodically call `Watch::snapshot()` on a background interval to emit metrics to logs, Prometheus textfiles, OpenTelemetry exporters, or a custom sink.
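A minimal background loop for this, using only the standard library (the interval, iteration count, and `emit` callback are placeholder choices; in production the callback would walk the snapshot and push to your sink):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Run `emit` on a fixed interval until `stop` is flagged.
fn spawn_exporter(
    stop: Arc<AtomicBool>,
    interval: Duration,
    mut emit: impl FnMut() + Send + 'static,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        while !stop.load(Ordering::Relaxed) {
            emit();
            thread::sleep(interval);
        }
    })
}

fn main() {
    let stop = Arc::new(AtomicBool::new(false));
    let handle = spawn_exporter(stop.clone(), Duration::from_millis(10), || {
        // e.g. for (name, s) in watch.snapshot() { push_to_sink(name, s); }
        println!("tick");
    });
    thread::sleep(Duration::from_millis(35));
    stop.store(true, Ordering::Relaxed);
    handle.join().unwrap();
}
```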
```rust
use benchmark::Watch;

fn export(w: &Watch) {
    for (name, s) in w.snapshot() {
        println!(
            "name={} count={} min={} p50={} p90={} p99={} max={} mean={:.1}",
            name, s.count, s.min, s.p50, s.p90, s.p99, s.max, s.mean
        );
    }
}
```

Track periodic probes (DB ping, cache get, queue poll) to detect degradation early. Evaluate p99 and max against SLOs.
```rust
use benchmark::Watch;

fn db_ping() { /* ... */ }

fn main() {
    let w = Watch::new();
    for _ in 0..60 {
        let start = std::time::Instant::now();
        db_ping();
        w.record("health.db.ping", start.elapsed().as_nanos() as u64);
    }
    let s = &w.snapshot()["health.db.ping"];
    println!("p99={}ns max={}ns", s.p99, s.max);
    // Alert if s.p99 or s.max exceeds your SLO budget.
}
```

Integrate by transforming snapshots into your APM's metric format. The snapshot values are already percentiles, so export them as summary metrics or as discrete gauges per percentile.
```
// Pseudocode for Prometheus text format
// io_latency_p50{name="io"} 1200
// io_latency_p90{name="io"} 2500
// io_latency_p99{name="io"} 4100
```

Tips for reliable exporting:
- Stable names: keep metric cardinality low.
- Shard if hot: use per-core suffixes and merge offline.
- Reset between windows: call `Watch::clear()` after export to bound latency windows.
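The "shard if hot, merge offline" tip can be sketched concretely. The struct below is a simplified stand-in for a per-shard snapshot (not this crate's actual type): count, min, max, and mean merge exactly, while percentiles do not and must be recomputed from the merged raw histograms.

```rust
// Simplified per-shard summary; a stand-in for a real snapshot type.
#[derive(Clone, Copy)]
struct Shard { count: u64, min: u64, max: u64, mean: f64 }

// Merge per-core shards into one summary. Mean is count-weighted.
fn merge(shards: &[Shard]) -> Shard {
    let count: u64 = shards.iter().map(|s| s.count).sum();
    let total: f64 = shards.iter().map(|s| s.mean * s.count as f64).sum();
    Shard {
        count,
        min: shards.iter().map(|s| s.min).min().unwrap_or(0),
        max: shards.iter().map(|s| s.max).max().unwrap_or(0),
        mean: if count == 0 { 0.0 } else { total / count as f64 },
    }
}

fn main() {
    let merged = merge(&[
        Shard { count: 100, min: 900, max: 4_000, mean: 1_500.0 },
        Shard { count: 300, min: 800, max: 9_000, mean: 2_500.0 },
    ]);
    println!(
        "count={} min={} max={} mean={:.1}",
        merged.count, merged.min, merged.max, merged.mean
    );
    // count=400 min=800 max=9000 mean=2250.0
}
```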
```rust
use benchmark::Watch;

fn prometheus_export(w: &Watch) -> String {
    let mut out = String::new();
    for (name, s) in w.snapshot() {
        // Convert percentiles to summary-like gauges.
        out.push_str(&format!("benchmark_latency_p50{{name=\"{}\"}} {}\n", name, s.p50));
        out.push_str(&format!("benchmark_latency_p90{{name=\"{}\"}} {}\n", name, s.p90));
        out.push_str(&format!("benchmark_latency_p99{{name=\"{}\"}} {}\n", name, s.p99));
        out.push_str(&format!("benchmark_latency_max{{name=\"{}\"}} {}\n", name, s.max));
        out.push_str(&format!("benchmark_latency_mean{{name=\"{}\"}} {:.1}\n", name, s.mean));
        out.push_str(&format!("benchmark_latency_count{{name=\"{}\"}} {}\n", name, s.count));
    }
    out
}
```

```rust
// Requires adding dependencies:
// opentelemetry, opentelemetry-sdk, opentelemetry-metrics (versions per your stack)
use benchmark::Watch;

fn export_otlp(w: &Watch) {
    // Pseudocode: acquire a Meter from your OTel SDK setup
    // let meter = global::meter("benchmark");
    // let p50 = meter.u64_gauge("benchmark.latency.p50").init();
    // ... create instruments as needed
    for (name, s) in w.snapshot() {
        // p50.record(s.p50, &[KeyValue::new("name", name.clone())]);
        // p90.record(s.p90, &labels);
        // p99.record(s.p99, &labels);
        // count.add(s.count as u64, &labels);
    }
}
```

COPYRIGHT © 2025 JAMES GOBER.