Conversation

@mikemcdougall
Collaborator

Summary

Establishes performance baseline infrastructure for query latency and throughput measurement as specified in Issue #121.

Implementation

  • Honua.Benchmarks project with BenchmarkDotNet integration
  • QueryBenchmarks class with 5 performance test scenarios:
    • SimpleWhereQuery: p50 < 50ms, p99 < 300ms
    • SpatialBboxQuery: p50 < 50ms, p99 < 300ms
    • CombinedWhereAndSpatialQuery: p50 < 100ms, p99 < 500ms
    • PaginatedQuery: p50 < 20ms (small pages)
    • LargeResultSet: p50 < 150ms, p99 < 800ms

Infrastructure

  • CI Integration: .github/workflows/performance.yml runs on PRs
  • Regression Detection: 10% threshold with scripts/check-perf-regression.py
  • Local Testing: scripts/run-performance-tests.sh for development
  • Documentation: Complete guide in docs/performance-testing.md
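The 10% regression gate can be illustrated with a short sketch. The real logic lives in scripts/check-perf-regression.py; the data shape used here (benchmark name mapped to mean latency in milliseconds) and the function name are assumptions for illustration, not the script's actual contents.

```python
# Hedged sketch of a 10% regression gate, in the spirit of
# scripts/check-perf-regression.py. The dict shape (benchmark name ->
# mean latency in ms) is an assumption, not the real baseline format.

REGRESSION_THRESHOLD = 0.10  # fail if current run is >10% slower than baseline

def find_regressions(baseline: dict, current: dict) -> list[str]:
    """Return names of benchmarks whose mean latency regressed beyond the threshold."""
    regressions = []
    for name, base_ms in baseline.items():
        cur_ms = current.get(name)
        if cur_ms is None:
            continue  # benchmark added or removed: not counted as a regression
        if cur_ms > base_ms * (1 + REGRESSION_THRESHOLD):
            regressions.append(name)
    return regressions

baseline = {"SimpleWhereQuery": 40.0, "SpatialBboxQuery": 45.0}
current = {"SimpleWhereQuery": 41.5, "SpatialBboxQuery": 52.0}  # ~15.6% slower
print(find_regressions(baseline, current))  # ['SpatialBboxQuery']
```

A CI job would exit nonzero when the returned list is non-empty, failing the PR check.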

Performance Targets

  • Latency: p50 < 50ms, p99 < 300ms (100 features)
  • Memory: < 50MB delta after 10k operations
  • Future: NBomber integration for throughput > 1000 rps
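The p50/p99 targets above are percentile latencies. BenchmarkDotNet reports percentiles itself; the standalone sketch below only illustrates what the numbers mean, using the nearest-rank method on raw timing samples.

```python
# Illustrative only: nearest-rank percentile over raw latency samples.
# BenchmarkDotNet computes these statistics itself in the real benchmarks.
import math

def percentile(samples_ms: list[float], p: float) -> float:
    """Smallest sample such that at least p% of all samples are <= it.
    Assumes a non-empty sample list."""
    ordered = sorted(samples_ms)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# 100 synthetic latency samples: 1 ms, 2 ms, ..., 100 ms
samples = [float(i) for i in range(1, 101)]
print(percentile(samples, 50), percentile(samples, 99))  # 50.0 99.0
```

Against the targets above, a run passes when `percentile(samples, 50) < 50` and `percentile(samples, 99) < 300`.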

Test Plan

  • Local benchmark execution with dotnet run --project benchmarks/Honua.Benchmarks
  • CI workflow validation (performance.yml)
  • Code formatting compliance (dotnet format)
  • Build verification (Release configuration)
  • Regression detection script functionality

Next Steps

After merge:

  1. Integrate with actual Honua.Server endpoints (currently uses test stubs)
  2. Add NBomber load testing for throughput measurement
  3. Create realistic test datasets for accurate benchmarking
  4. Establish performance gates in CI pipeline

Resolves #121

🤖 Generated with Claude Code

…121)

- Add Honua.Benchmarks project with BenchmarkDotNet integration
- Implement QueryBenchmarks class with 5 performance test scenarios:
  - SimpleWhereQuery: p50 < 50ms, p99 < 300ms
  - SpatialBboxQuery: p50 < 50ms, p99 < 300ms
  - CombinedWhereAndSpatialQuery: p50 < 100ms, p99 < 500ms
  - PaginatedQuery: p50 < 20ms (small pages)
  - LargeResultSet: p50 < 150ms, p99 < 800ms

- Add performance testing script: scripts/run-performance-tests.sh
- Add CI workflow: .github/workflows/performance.yml
- Add regression detection: scripts/check-perf-regression.py
- Add baseline file: performance-baseline.json
- Add comprehensive documentation: docs/performance-testing.md

Infrastructure provides foundation for:
- Latency benchmarks with p50/p95/p99 measurements
- Memory allocation tracking and leak detection
- CI integration with regression detection (10% threshold)
- Local development performance testing
- Baseline management and trend tracking

Performance targets established:
- Query latency: p50 < 50ms, p99 < 300ms (100 features)
- Throughput: > 1000 rps (future NBomber integration)
- Memory: < 50MB delta after 10k operations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4 <[email protected]>
@github-actions

🤖 LLM Architecture Review

⚠️ Assessment: NEEDS_ATTENTION

🏗️ Architecture Review Summary

✅ Good Patterns Found:

  • Comprehensive documentation for benchmark tests in benchmarks/Honua.Benchmarks/QueryBenchmarks.cs:13-16, which clearly outlines performance targets and benchmarks.
  • Use of BenchmarkDotNet attributes for performance diagnostics in benchmarks/Honua.Benchmarks/QueryBenchmarks.cs:25, which is suitable for detailed performance analysis.
  • Proper disposal pattern implemented in benchmarks/Honua.Benchmarks/QueryBenchmarks.cs:33-40, ensuring resource cleanup after benchmarking.

⚠️ Architecture Concerns:

  • BLOCKING_ISSUES: The PR is linked to issue test: Add performance baselines (latency/throughput) #121, but that issue's acceptance criteria focus on documenting and measuring performance, and it is not clear the benchmark setup added here fully satisfies them. Severity: Blocking.
  • BLOCKING_ISSUES: Use of services.AddControllers() in benchmarks/Honua.Benchmarks/QueryBenchmarks.cs:179 suggests reliance on MVC controllers, which conflicts with the architectural direction towards minimal APIs. Severity: Blocking.
  • NEEDS_ATTENTION: The project references Honua.Server in benchmarks/Honua.Benchmarks/Honua.Benchmarks.csproj:15, which could potentially lead to unwanted coupling between the benchmark tests and the main application. This might not be a direct violation but warrants attention to ensure isolation. Severity: Needs Attention.

💡 Recommendations:

  • To address the issue of MVC controllers, consider refactoring the benchmark setup to use minimal API patterns. Replace the controller setup with endpoint routing configurations:
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/rest/services/test/FeatureServer/0/query", async context =>
            {
                await context.Response.WriteAsync(@"{...}");
            });
        });
    }
  • Ensure that the benchmarks and their configurations are isolated from the main application logic to prevent unintended side effects and dependencies. Consider using a dedicated configuration or even a separate service layer for benchmarks.

📚 Educational Notes:

  • Using MVC controllers in a project that aims for minimal APIs and high performance can introduce unnecessary overhead and complexity. Minimal APIs are preferred for their lightweight nature and faster response times, which is critical in performance-sensitive applications like geospatial feature servers.
  • Ensuring that benchmarks are isolated and do not share dependencies with the main application codebase helps in obtaining accurate performance metrics and reduces the risk of side effects affecting the application's stability and behavior.

Overall Assessment: BLOCKING_ISSUES

The primary concerns are the potential mismatch between the PR changes and the linked issue's acceptance criteria, and the use of MVC controllers, which contradicts the project's architectural direction toward minimal APIs. These issues need to be addressed before the PR can be approved.


Automated architectural analysis powered by OpenAI GPT-4
This review focuses on architectural patterns and design decisions
Human review still recommended for complex changes

@mikemcdougall mikemcdougall merged commit e5c576b into trunk Dec 22, 2025
12 checks passed
