A Temporal Cloud proxy that handles Temporal namespace authentication, payload encryption/decryption, and client/worker authentication for multiple workloads with different configurations.
- Multi-workload Support - Handle multiple Temporal workload configurations through a single proxy instance
- Payload Encryption/Decryption - AWS KMS and GCP KMS support with intelligent caching for performance
- Temporal Cloud Namespace Authentication - Support for mTLS and API keys
- Worker Authentication - Support for JWT (with JWKS) and SPIFFE/SPIRE
- Observability - Built-in Prometheus metrics, Grafana dashboards, and structured logging
Prerequisites:

- Go 1.24 or later
- AWS account with KMS permissions (for AWS KMS encryption)
- GCP account with KMS permissions (for GCP KMS encryption)
- Temporal Cloud account
To build and run the proxy:

- Clone the repository:

```bash
git clone <repository-url>
cd temporal-cloud-proxy
```

- Build the binary:

```bash
make build
```

- Copy and configure the sample config:

```bash
cp config.sample.yaml config.yaml
# Edit config.yaml with your settings
```

- Run the proxy:

```bash
./tclp --config config.yaml
```

The proxy is configured via a YAML file. Here's the basic structure:
```yaml
server:
  port: 7233          # Proxy server port
  host: "0.0.0.0"     # Bind address

metrics:
  port: 9090          # Prometheus metrics port

encryption:
  caching:
    max_cache: 100    # Maximum cached encryption keys
    max_age: "10m"    # Key cache TTL
    max_usage: 100    # Maximum key usage count

workloads:
  - workload_id: "my-workload"
    temporal_cloud:
      namespace: "my-namespace.my-account"
      host_port: "my-namespace.my-account.tmprl.cloud:7233"
      authentication:
        # Choose one authentication method
        tls:
          cert_file: "/path/to/tls.crt"
          key_file: "/path/to/tls.key"
        # OR
        api_key:
          value: "your-api-key"
          # OR env: "API_KEY_ENV_VAR"
    encryption:
      type: "aws-kms"
      config:
        key-id: "arn:aws:kms:region:account:key/key-id"
    authentication:
      type: "jwt"
      config:
        jwks-url: "https://your-auth-provider/.well-known/keys"
        audiences: ["temporal_cloud_proxy"]
```

A single proxy instance can serve multiple workloads, each with its own namespace, encryption, and authentication settings:

```yaml
workloads:
- workload_id: "production"
temporal_cloud:
namespace: "prod.company"
host_port: "prod.company.tmprl.cloud:7233"
authentication:
api_key:
env: "PROD_API_KEY"
encryption:
type: "aws-kms"
config:
key-id: "arn:aws:kms:us-east-1:123456789:key/prod-key"
authentication:
type: "spiffe"
config:
spiffe_ids: ["spiffe://company.com/prod.company"]
endpoint: "unix:///tmp/spire-agent/public/api.sock"
- workload_id: "staging"
temporal_cloud:
namespace: "staging.company"
host_port: "staging.company.tmprl.cloud:7233"
authentication:
tls:
cert_file: "/certs/staging.crt"
key_file: "/certs/staging.key"
encryption:
type: "gcp-kms"
config:
key-name: "projects/my-project/locations/us-central1/keyRings/staging/cryptoKeys/temporal"temporal_cloud:
  authentication:
    tls:
      cert_file: "/path/to/client.crt"
      key_file: "/path/to/client.key"
```

Namespace authentication with an API key:

```yaml
temporal_cloud:
  authentication:
    api_key:
      value: "your-api-key"
      # OR use environment variable
      env: "TEMPORAL_API_KEY"
```

Worker authentication with JWT, verified against a JWKS endpoint:

```yaml
authentication:
type: "jwt"
config:
jwks-url: "https://auth.company.com/.well-known/keys"
audiences: ["temporal_cloud_proxy"]authentication:
type: "spiffe"
config:
spiffe_ids: ["spiffe://company.com/prod.company"]
endpoint: "unix:///tmp/spire-agent/public/api.sock"
audiences: ["temporal_cloud_proxy"]- Create a KMS key in AWS:
aws kms create-key --description "Temporal Cloud Proxy Encryption Key"
```

- Configure the proxy:

```yaml
encryption:
  type: "aws-kms"
  config:
    key-id: "arn:aws:kms:region:account-id:key/key-id"
```

- Ensure AWS credentials are configured:

```bash
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_REGION=your_region
```

To use GCP KMS for payload encryption:

- Create a KMS key in GCP:

```bash
gcloud kms keyrings create temporal-proxy --location=global
gcloud kms keys create encryption-key --location=global --keyring=temporal-proxy --purpose=encryption
```

- Configure the proxy:

```yaml
encryption:
  type: "gcp-kms"
  config:
    key-name: "projects/PROJECT_ID/locations/global/keyRings/temporal-proxy/cryptoKeys/encryption-key"
```

The proxy includes intelligent caching to optimize encryption performance:

```yaml
encryption:
  caching:
    max_cache: 100   # Maximum number of cached keys
    max_age: "10m"   # Maximum age of cached keys
    max_usage: 100   # Maximum usage count per key
```

The proxy exposes metrics on the configured metrics port (default: 9090):
- `proxy_request_total` - Total number of proxy requests
- `proxy_request_errors` - Number of failed requests
- `proxy_request_success` - Number of successful requests
- `proxy_latency` - Request latency histogram
- Encryption/decryption metrics per workload
- Authentication success/failure rates
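As a quick smoke test, you can pull the metrics endpoint and filter for the proxy series. This is a minimal sketch, not part of this repository: it assumes the metrics server listens on localhost:9090 and serves the Prometheus text format at the conventional `/metrics` path (the path is an assumption; adjust to your deployment).

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Fetch the Prometheus text exposition from the proxy's metrics port.
	resp, err := http.Get("http://localhost:9090/metrics")
	if err != nil {
		log.Fatalf("metrics endpoint unreachable: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading metrics: %v", err)
	}

	// Print only the proxy_* series to spot-check request counts and latency.
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "proxy_") {
			fmt.Println(line)
		}
	}
}
```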
A pre-configured Grafana dashboard is available at dashboards/grafana-dashboard.json. Import this dashboard to visualize:
- Request throughput and error rates
- Authentication success rates
- Encryption performance metrics
- Per-workload statistics
Configure log levels using the `--log-level` flag:

```bash
./tclp --config config.yaml --log-level debug
```

Available levels: `debug`, `info`, `warn`, `error`.
```bash
# Build the binary
make build
# Clean build artifacts
make clean
# Build and test
make all
```

```bash
# Run all tests
make test
# Run tests with verbose output
make test-verbose
# Run tests with coverage
make test-coverage
# Run race condition tests
make test-race
# Run benchmarks
make benchmark
```

Clients and workers connecting through the proxy supply two gRPC metadata headers:

- `workload-id` - Identifies which workload configuration to use
- `authorization` - Authentication token (when worker authentication is enabled)
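For example, a Go worker or client can attach these headers through the Temporal Go SDK's `HeadersProvider` client option. This is a minimal sketch, not part of this repository: the proxy address, namespace, workload ID, the `WORKER_JWT` environment variable, and the `Bearer` token format are assumptions to adapt to your own configuration.

```go
package main

import (
	"context"
	"log"
	"os"

	"go.temporal.io/sdk/client"
)

// proxyHeaders attaches the gRPC metadata the proxy expects on every request.
type proxyHeaders struct {
	workloadID string
	token      string
}

func (p proxyHeaders) GetHeaders(ctx context.Context) (map[string]string, error) {
	return map[string]string{
		"workload-id":   p.workloadID,        // selects the workload configuration
		"authorization": "Bearer " + p.token, // only needed when worker authentication is enabled
	}, nil
}

func main() {
	// Dial the proxy instead of Temporal Cloud directly; the proxy handles
	// namespace authentication and payload encryption on the way through.
	c, err := client.Dial(client.Options{
		HostPort:        "localhost:7233",          // the proxy's server port
		Namespace:       "my-namespace.my-account", // namespace of the selected workload
		HeadersProvider: proxyHeaders{workloadID: "my-workload", token: os.Getenv("WORKER_JWT")},
	})
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()
}
```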
Enable debug logging for detailed troubleshooting:
```bash
./tclp --config config.yaml --log-level debug
```