Production Load Balancer — Go

A from-scratch HTTP reverse-proxy load balancer with health checks, retry logic, three routing algorithms, rate limiting, and a metrics endpoint. No external load-balancer frameworks — only Go's standard library.


Folder Structure

loadbalancer/
├── cmd/
│   └── loadbalancer/
│       └── main.go          ← Entry point: wiring, graceful shutdown
├── internal/
│   ├── balancer/
│   │   ├── balancer.go      ← Round-robin / Least-conn / IP-hash
│   │   └── balancer_test.go
│   ├── health/
│   │   └── checker.go       ← Background health-probe loop
│   ├── metrics/
│   │   └── metrics.go       ← Atomic counters + /lb-metrics handler
│   ├── middleware/
│   │   ├── middleware.go    ← Logger, Recover, RateLimit wrappers
│   │   └── ratelimit.go     ← Per-IP token-bucket rate limiter
│   ├── pool/
│   │   ├── pool.go          ← Backend struct + Pool manager
│   │   └── pool_test.go
│   └── proxy/
│       └── handler.go       ← Reverse-proxy + retry logic
├── config/
│   ├── config.go            ← Types + JSON loader + sane defaults
│   └── config.json          ← Example configuration file
├── backends/
│   └── server.go            ← Tiny echo server for local testing
├── Makefile
└── go.mod

Architecture

                    ┌──────────────────────────────────────────────┐
                    │              Load Balancer (:8080)           │
                    │                                              │
  Client ──────────►│  middleware stack                            │
  HTTP request      │  ┌─────────┐  ┌────────┐  ┌─────────────┐    │
                    │  │ Recover │→ │ Logger │→ │ Rate Limit  │    │
                    │  └─────────┘  └────────┘  └──────┬──────┘    │
                    │                                   │          │
                    │              ┌────────────────────▼──────┐   │
                    │              │      HTTP Mux             │   │
                    │              │  /lb-metrics → metrics    │   │
                    │              │  /*           → proxy     │   │
                    │              └────────────┬──────────────┘   │
                    │                           │                  │
                    │              ┌────────────▼──────────────┐   │
                    │              │      Proxy Handler        │   │
                    │              │  1. Get healthy backends  │   │
                    │              │  2. Pick via Balancer     │   │
                    │              │  3. Forward + stream resp │   │
                    │              │  4. Retry on failure      │   │
                    │              └────────────┬──────────────┘   │
                    │                           │                  │
                    │              ┌────────────▼──────────────┐   │
                    │              │      Backend Pool         │   │
                    │              │  [backend-1] [backend-2]  │   │
                    │              │  [backend-3]              │   │
                    │              └────────────▲──────────────┘   │
                    │                           │                  │
                    │              ┌────────────┴──────────────┐   │
                    │              │    Health Checker         │   │
                    │              │  (background goroutine)   │   │
                    │              │  GET /health every 10s    │   │
                    │              └───────────────────────────┘   │
                    └──────────────────────────────────────────────┘
                                         │  │  │
                              ┌──────────┘  │  └──────────┐
                              ▼             ▼             ▼
                         backend-1    backend-2    backend-3
                          :3001        :3002        :3003
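
The middleware stack in the diagram composes as ordinary `func(http.Handler) http.Handler` wrappers around the mux. A minimal sketch of the wiring; the constructor names (`metricsHandler`, `proxyHandler`, and the three wrappers) are illustrative, not the repo's exact API:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Route /lb-metrics to the metrics handler, everything else upstream.
	mux := http.NewServeMux()
	mux.Handle("/lb-metrics", metricsHandler()) // hypothetical constructor
	mux.Handle("/", proxyHandler())             // hypothetical constructor

	// The outermost wrapper runs first: Recover, then Logger, then
	// RateLimit, then the mux, matching the diagram above.
	handler := Recover(Logger(RateLimit(mux)))
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```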

Component Roles

| Component | File | Responsibility |
|-----------|------|----------------|
| Pool | internal/pool/pool.go | Owns all Backend structs; exposes thread-safe All() / Healthy() views; tracks active connections and counters atomically. |
| Balancer | internal/balancer/balancer.go | Stateless interface Next(backends, request) *Backend; implementations: RoundRobin, LeastConn, IPHash. |
| Health Checker | internal/health/checker.go | Background goroutine; probes every backend's /health concurrently; uses consecutive-failure / success thresholds to flip state. |
| Proxy Handler | internal/proxy/handler.go | HTTP handler; builds the upstream request, streams the response, retries on transport errors, records metrics. |
| Metrics | internal/metrics/metrics.go | Lock-free atomic counters; /lb-metrics JSON endpoint. |
| Middleware | internal/middleware/ | Logger (structured), Recover (panic → 500), RateLimit (per-IP token bucket; sketched below). |
| Config | config/config.go | JSON loader with type-safe defaults for every field. |
| main | cmd/loadbalancer/main.go | Wires everything; starts the health checker; handles SIGINT/SIGTERM gracefully. |
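
The per-IP token bucket behind RateLimit can be pictured as one small refill-on-demand bucket per client. A self-contained sketch under assumed names; the real ratelimit.go may differ:

```go
package middleware

import (
	"net"
	"net/http"
	"sync"
	"time"
)

// bucket tracks the remaining tokens for one client IP.
type bucket struct {
	tokens   float64
	lastSeen time.Time
}

// RateLimiter is a per-IP token bucket: each request spends one token;
// tokens refill at `rate` per second, capped at `burst`. Idle entries
// are never evicted here; a real middleware would prune them.
type RateLimiter struct {
	mu      sync.Mutex
	buckets map[string]*bucket
	rate    float64
	burst   float64
}

func NewRateLimiter(rate, burst float64) *RateLimiter {
	return &RateLimiter{buckets: make(map[string]*bucket), rate: rate, burst: burst}
}

func (rl *RateLimiter) allow(ip string) bool {
	rl.mu.Lock()
	defer rl.mu.Unlock()
	now := time.Now()
	b, ok := rl.buckets[ip]
	if !ok {
		b = &bucket{tokens: rl.burst, lastSeen: now}
		rl.buckets[ip] = b
	}
	// Refill in proportion to the time elapsed since the last request.
	b.tokens += now.Sub(b.lastSeen).Seconds() * rl.rate
	if b.tokens > rl.burst {
		b.tokens = rl.burst
	}
	b.lastSeen = now
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

// Middleware rejects requests with 429 once the caller's bucket is empty.
func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !rl.allow(ip) {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```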

Request Flow (Step-by-Step)

1.  Client sends   GET http://localhost:8080/api/users

2.  Recover MW     wraps with panic → 500 safety net

3.  Logger MW      records start time, wraps ResponseWriter

4.  RateLimit MW   checks per-IP token bucket; 429 if exhausted

5.  HTTP Mux       routes to proxy.Handler (any path except /lb-metrics)

6.  proxy.Handler
    a. pool.Healthy() → [backend-1, backend-2, backend-3]
    b. balancer.Next() → backend-2   (e.g. round-robin turn)
    c. backend-2.IncrConnections()
    d. http.NewRequestWithContext(10s timeout)
    e. Copy headers; add X-Forwarded-For / X-Real-IP
    f. transport.RoundTrip(req) → resp

7.  On success     stream response body back to client
                   backend-2.DecrConnections()
                   metrics.IncSuccess()

8.  On failure     backend-2.RecordFailure()
                   pick next candidate (exclude backend-2)
                   sleep RetryDelay (50ms)
                   retry up to MaxRetries times

9.  All failed     503 Service Unavailable

10. Logger MW      writes access log line with status + latency
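
Steps 6 through 9 amount to one retry loop in the proxy handler. A simplified sketch, assuming illustrative field and method names, and ignoring body-rewind and already-tried-backend bookkeeping:

```go
// Sketch of steps 6–9 (uses context, io, net/http, time; the pool,
// balancer, and Handler types are assumed, not the repo's exact API).
func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	for attempt := 0; attempt <= h.maxRetries; attempt++ {
		backend := h.balancer.Next(h.pool.Healthy(), r) // steps 6a–b
		if backend == nil {
			break // nothing healthy left to try
		}
		backend.IncrConnections() // step 6c

		ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
		req := r.Clone(ctx) // step 6d: fresh request, same headers
		req.RequestURI = "" // a server request can't be re-sent as-is
		req.URL.Scheme = backend.URL.Scheme
		req.URL.Host = backend.URL.Host
		req.Header.Set("X-Forwarded-For", r.RemoteAddr) // step 6e

		resp, err := h.transport.RoundTrip(req) // step 6f
		if err != nil { // step 8: transport error, retry elsewhere
			cancel()
			backend.DecrConnections()
			backend.RecordFailure()
			time.Sleep(h.retryDelay)
			continue
		}
		// Step 7: stream the response back while the context is alive.
		for k, vv := range resp.Header {
			w.Header()[k] = vv
		}
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
		resp.Body.Close()
		cancel()
		backend.DecrConnections()
		return
	}
	http.Error(w, "Service Unavailable: all backends are down",
		http.StatusServiceUnavailable) // step 9
}
```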

Algorithms

Round Robin (round_robin)

Atomically increments a counter; idx % len(backends) selects the backend. Completely lock-free. Equal distribution over time.
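
A sketch of the idea against the Next(backends, request) interface from the table above. This and the two sketches below assume a balancer package importing net/http, sync/atomic, crypto/md5, encoding/binary, and net, plus the pool package:

```go
// RoundRobin: one shared counter, advanced atomically per request.
type RoundRobin struct {
	counter atomic.Uint64
}

func (rr *RoundRobin) Next(backends []*pool.Backend, _ *http.Request) *pool.Backend {
	if len(backends) == 0 {
		return nil // caller turns this into a 503
	}
	// Add returns the new value; subtract 1 so the sequence starts at 0.
	idx := rr.counter.Add(1) - 1
	return backends[idx%uint64(len(backends))]
}
```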

Least Connections (least_conn)

Iterates all healthy backends, picks the one with the smallest activeConns counter (updated atomically on each request start/end). Best when backend processing times vary.
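
A corresponding sketch, assuming an ActiveConns() accessor on pool.Backend that reads the atomic counter:

```go
// LeastConn: linear scan for the fewest in-flight requests.
type LeastConn struct{}

func (LeastConn) Next(backends []*pool.Backend, _ *http.Request) *pool.Backend {
	var best *pool.Backend
	for _, b := range backends {
		if best == nil || b.ActiveConns() < best.ActiveConns() {
			best = b
		}
	}
	return best
}
```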

IP Hash (ip_hash)

MD5-hashes X-Forwarded-For (or RemoteAddr); maps to a backend by hash % len(backends). Provides sticky sessions without server-side session storage.
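
A sketch of the selection step:

```go
// IPHash: hash the client IP, then reduce modulo the pool size.
type IPHash struct{}

func (IPHash) Next(backends []*pool.Backend, r *http.Request) *pool.Backend {
	if len(backends) == 0 {
		return nil
	}
	ip := r.Header.Get("X-Forwarded-For")
	if ip == "" {
		if host, _, err := net.SplitHostPort(r.RemoteAddr); err == nil {
			ip = host
		} else {
			ip = r.RemoteAddr
		}
	}
	sum := md5.Sum([]byte(ip))
	// Fold the first 8 bytes of the digest into an index.
	idx := binary.BigEndian.Uint64(sum[:8]) % uint64(len(backends))
	return backends[idx]
}
```

Note that plain hash % N remaps most clients whenever the healthy set changes size; consistent hashing would keep more sessions sticky at the cost of extra complexity.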


Health Check Logic

Every 10 seconds (configurable):
  For each backend (concurrent goroutines):
    GET <backend>/health (3s timeout)
    
    success → successCount++, failCount = 0
              if successCount >= healthyThreshold (2):
                mark HEALTHY, log info
    
    failure → failCount++, successCount = 0
              if failCount >= unhealthyThreshold (2):
                mark UNHEALTHY, log warning

Threshold hysteresis prevents flapping: a backend needs 2 consecutive failures to go down and 2 consecutive successes to come back up.
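
A sketch of the loop this pseudocode describes, with streak counters kept on the checker and guarded by a mutex; names are illustrative, and pool.Backend, All(), and SetHealthy() are assumed APIs:

```go
package health

import (
	"context"
	"net/http"
	"sync"
	"time"
)

// Checker keeps a success/failure streak per backend and flips the
// backend's state once a streak crosses its threshold. The maps are
// initialized by a constructor omitted here.
type Checker struct {
	pool               *pool.Pool
	interval           time.Duration // 10s by default
	timeout            time.Duration // 3s probe timeout
	healthyThreshold   int           // 2
	unhealthyThreshold int           // 2

	mu         sync.Mutex
	succ, fail map[*pool.Backend]int
}

func (c *Checker) Run(ctx context.Context) {
	ticker := time.NewTicker(c.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			var wg sync.WaitGroup
			for _, b := range c.pool.All() {
				wg.Add(1)
				go func(b *pool.Backend) {
					defer wg.Done()
					c.probe(b)
				}(b)
			}
			wg.Wait() // all probes run concurrently, once per interval
		}
	}
}

func (c *Checker) probe(b *pool.Backend) {
	client := &http.Client{Timeout: c.timeout}
	resp, err := client.Get(b.URL.String() + "/health")
	up := err == nil && resp.StatusCode == http.StatusOK
	if resp != nil {
		resp.Body.Close()
	}
	// Only the streak bookkeeping is serialized; the HTTP call is not.
	c.mu.Lock()
	defer c.mu.Unlock()
	if up {
		c.fail[b], c.succ[b] = 0, c.succ[b]+1
		if c.succ[b] >= c.healthyThreshold {
			b.SetHealthy(true)
		}
	} else {
		c.succ[b], c.fail[b] = 0, c.fail[b]+1
		if c.fail[b] >= c.unhealthyThreshold {
			b.SetHealthy(false)
		}
	}
}
```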


How to Run Locally

Prerequisites

  • Go 1.21+ (go version)

1 — Build

cd loadbalancer
go build -o bin/loadbalancer ./cmd/loadbalancer

2 — Start 3 Backend Servers

Open three terminal tabs:

# Tab 1
go run backends/server.go -port 3001 -name backend-1

# Tab 2
go run backends/server.go -port 3002 -name backend-2

# Tab 3
go run backends/server.go -port 3003 -name backend-3

3 — Start the Load Balancer

LB_CONFIG=config/config.json ./bin/loadbalancer

Or with debug logging:

LB_DEBUG=1 LB_CONFIG=config/config.json ./bin/loadbalancer

Or use the Makefile:

make backends   # starts 3 backends in background
make run        # builds + starts LB

Test Cases

Basic Round-Robin Distribution

for i in $(seq 9); do
  curl -s http://localhost:8080/ | python3 -m json.tool | grep '"server"'
done

Expected output — each backend appears 3 times in sequence:

"server": "backend-1"
"server": "backend-2"
"server": "backend-3"
"server": "backend-1"
"server": "backend-2"
"server": "backend-3"
...

Health Check & Auto-Recovery

Kill backend-2:

kill $(lsof -t -i:3002)

After 2 health-check intervals (~20 s) the LB logs:

level=WARN component=health_checker msg="backend marked UNHEALTHY" backend=http://localhost:3002

Requests now only go to backend-1 and backend-3.

Restart backend-2:

go run backends/server.go -port 3002 -name backend-2 &

After 2 successful checks (~20 s):

level=INFO component=health_checker msg="backend marked HEALTHY" backend=http://localhost:3002

Retry Logic

Start a failing backend:

go run backends/server.go -port 3004 -name failing -fail &

Add it to config temporarily and watch the LB retry then serve from another backend — transparent to the client.

Metrics

curl -s http://localhost:8080/lb-metrics | python3 -m json.tool
{
  "uptime_seconds": 42.1,
  "total_requests": 100,
  "success_requests": 98,
  "failed_requests": 2,
  "retry_attempts": 4,
  "avg_latency_ms": 3.14,
  "healthy_backends": 3,
  "total_backends": 3,
  "backends": [
    {
      "url": "http://localhost:3001",
      "state": "healthy",
      "active_connections": 0,
      "total_requests": 34,
      "total_failures": 0,
      "last_checked": "2026-04-03T10:00:00Z"
    },
    ...
  ]
}
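
Behind this endpoint the counters can stay lock-free with sync/atomic. A reduced sketch; the real metrics.go also tracks latency, retries, and per-backend stats:

```go
package metrics

import (
	"encoding/json"
	"net/http"
	"sync/atomic"
	"time"
)

// Metrics holds lock-free counters; every field is safe for
// concurrent use without a mutex.
type Metrics struct {
	start   time.Time
	total   atomic.Uint64
	success atomic.Uint64
	failed  atomic.Uint64
}

func (m *Metrics) IncSuccess() { m.total.Add(1); m.success.Add(1) }
func (m *Metrics) IncFailure() { m.total.Add(1); m.failed.Add(1) }

// Handler serves the counters as JSON, as shown above.
func (m *Metrics) Handler() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]any{
			"uptime_seconds":   time.Since(m.start).Seconds(),
			"total_requests":   m.total.Load(),
			"success_requests": m.success.Load(),
			"failed_requests":  m.failed.Load(),
		})
	})
}
```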

Least Connections

LB_ALGORITHM=least_conn LB_CONFIG=config/config.json ./bin/loadbalancer

Simulate a slow backend:

go run backends/server.go -port 3002 -name slow -latency 2s &

Under concurrent load, the LB routes new requests to the backends with the fewest in-flight connections, so the slow backend receives proportionally less traffic.

IP Hash Sticky Sessions

LB_ALGORITHM=ip_hash LB_CONFIG=config/config.json ./bin/loadbalancer

All requests from the same IP always land on the same backend:

for i in $(seq 5); do
  curl -s http://localhost:8080/ | grep '"server"'
done
# → same backend every time

All Backends Down

Kill all 3 backends:

make stop-backends
curl -v http://localhost:8080/
# HTTP/1.1 503 Service Unavailable
# Service Unavailable: all backends are down

Unit Tests

go test ./...
# ok  github.com/loadbalancer/internal/balancer
# ok  github.com/loadbalancer/internal/pool
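
The round-robin distribution shown earlier is the kind of property balancer_test.go would assert. A hypothetical test in that shape, not the repo's actual code (assumes the balancer and pool packages are imported):

```go
func TestRoundRobinDistribution(t *testing.T) {
	// Three minimal fake backends; real tests would set URLs.
	backends := make([]*pool.Backend, 3)
	for i := range backends {
		backends[i] = &pool.Backend{}
	}
	rr := &balancer.RoundRobin{}
	counts := make(map[*pool.Backend]int)
	for i := 0; i < 9; i++ {
		counts[rr.Next(backends, nil)]++
	}
	// 9 requests across 3 backends must land 3 each.
	for b, n := range counts {
		if n != 3 {
			t.Errorf("backend %p handled %d requests, want 3", b, n)
		}
	}
}
```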

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| LB_CONFIG | (built-in defaults) | Path to config.json |
| LB_ALGORITHM | round_robin | Override the balancing algorithm |
| LB_DEBUG | (off) | Enable debug-level logging |
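
Putting the documented defaults together, a plausible config.json might look like this. The field names and the max_retries value are assumptions; the shipped config/config.json is authoritative:

```json
{
  "listen_addr": ":8080",
  "algorithm": "round_robin",
  "max_retries": 2,
  "retry_delay_ms": 50,
  "health_check": {
    "interval_seconds": 10,
    "timeout_seconds": 3,
    "healthy_threshold": 2,
    "unhealthy_threshold": 2
  },
  "backends": [
    { "url": "http://localhost:3001" },
    { "url": "http://localhost:3002" },
    { "url": "http://localhost:3003" }
  ]
}
```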

Production Considerations

  • TLS termination — add srv.ListenAndServeTLS(cert, key) in main.go.
  • Weighted backends — BackendConfig.Weight is parsed; implement weighted round-robin by repeating backends in the pool slice (sketched below).
  • Circuit breaker — extend Backend.RecordFailure() to trip after N failures within a time window.
  • Persistent metrics — swap metrics.Global for a Prometheus registry.
  • Configuration reload — watch config.json with fsnotify and update the pool without restarting.
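
For the weighted-backends item above, one minimal realization of "repeat backends in the pool slice"; config.BackendConfig and pool.NewBackend are assumed names:

```go
// expandWeighted turns each configured backend into Weight copies of
// the same *pool.Backend, so the plain round-robin counter naturally
// sends Weight-proportional traffic to each.
func expandWeighted(configs []config.BackendConfig) []*pool.Backend {
	var out []*pool.Backend
	for _, c := range configs {
		b := pool.NewBackend(c.URL)
		w := c.Weight
		if w < 1 {
			w = 1 // treat missing/zero weight as 1
		}
		for i := 0; i < w; i++ {
			out = append(out, b) // same pointer repeated w times
		}
	}
	return out
}
```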
