# gke_log_metrics

Minimal, production-ready library for emitting structured JSON metrics/logs (designed for GKE, compatible with Grafana log-based metrics) and optional Prometheus-format metric export.
Key features:
- JSON-based structured logging with fixed and extensible fields
- JSON metrics that accept `name` and `value` (no internal auto-increment)
- Optional Prometheus-format metrics export
- Configuration precedence: env vars > `.configs` file > defaults
- Minimal dependencies (stdlib only)
- Thread-safe metric aggregation
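As a rough illustration of the thread-safe aggregation and Prometheus-format export features, here is a stdlib-only sketch using a hypothetical `CounterStore` class; it is not the library's actual implementation:

```python
import threading
from collections import defaultdict

class CounterStore:
    """Hypothetical sketch: lock-protected counter aggregation with text export."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counters = defaultdict(float)  # (name, sorted label items) -> value

    def inc(self, name, value=1.0, labels=None):
        key = (name, tuple(sorted((labels or {}).items())))
        with self._lock:  # serialize updates from concurrent threads
            self._counters[key] += value

    def to_prometheus(self):
        """Render counters in Prometheus text exposition style."""
        lines = []
        with self._lock:
            for (name, labels), value in sorted(self._counters.items()):
                label_str = ",".join(f'{k}="{v}"' for k, v in labels)
                lines.append(f"{name}{{{label_str}}} {value}" if label_str
                             else f"{name} {value}")
        return "\n".join(lines)

store = CounterStore()
threads = [
    threading.Thread(
        target=lambda: [store.inc("jobs_total", labels={"job": "j1"}) for _ in range(1000)]
    )
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.to_prometheus())  # jobs_total{job="j1"} 4000.0
```

Without the lock, concurrent `+=` updates could interleave and lose increments; with it, four threads of 1000 increments always total 4000.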
## Installation

```shell
pip install gke_log_metrics
```

Or for development:

```shell
cd /path/to/gke_log_metrics
pip install -e .
```

## Quick Start

```python
from gke_log_metrics import Config, get_logger

# Load config (from .configs file and env var overrides)
cfg = Config()

# Get logger instance
logger = get_logger(cfg)

# Normal application log
logger.log("Application started", info={"phase": "startup"})

# Emit a log-based metric (JSON to stdout if METRICS_ENABLED)
logger.json_metric("backup_completed", 1, info={"job": "j1", "status": "success"})

# Unified metric API (updates Prometheus + emits JSON if enabled)
logger.metric("backup_checks_total", 1, labels={"job": "j1"})

# Export Prometheus metrics
print(logger.metrics_to_prometheus())
```

## Configuration

Configuration is loaded in this order (later overrides earlier):

- Library defaults
- `.configs` file (environment-variable format, or custom path via the `CONFIG_FILE` env var)
- OS environment variables
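The precedence above can be sketched with a minimal stdlib-only loader (hypothetical `parse_env_file` and `load_config` helpers, not the library's actual code):

```python
import os

DEFAULTS = {"APP_NAME": "default_app", "LOG_LEVEL": "INFO"}

def parse_env_file(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

def load_config(file_text="", environ=None):
    """Later sources override earlier ones: defaults < file < environment."""
    cfg = dict(DEFAULTS)
    cfg.update(parse_env_file(file_text))
    env = os.environ if environ is None else environ
    cfg.update({k: v for k, v in env.items() if k in DEFAULTS})
    return cfg

# File sets both keys; the environment then overrides LOG_LEVEL.
cfg = load_config("APP_NAME=my_app\nLOG_LEVEL=DEBUG",
                  environ={"LOG_LEVEL": "WARNING"})
print(cfg["APP_NAME"], cfg["LOG_LEVEL"])  # my_app WARNING
```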
| Variable | Type | Default | Purpose |
|---|---|---|---|
| `APP_NAME` | string | `"default_app"` | Application name in log entries |
| `APP_TYPE` | string | `"default_type"` | Application type (e.g., `gke_job`, `gke_api`, `shinyproxy_app`) |
| `OWNER` | string | `"default_owner"` | Owner or team name in log entries |
| `METRICS_ENABLED` | bool | `true` | Enable JSON metrics to stdout (ignores `LOG_LEVEL`) |
| `PROMETHEUS_ENABLED` | bool | `false` | Enable Prometheus-format metrics collection |
| `LOG_LEVEL` | string | `"INFO"` | Python logging level (only affects `logger.log()`, not metrics) |
| `CONFIG_FILE` | string | `".configs"` | Path to environment-variable format config file |
Example `.configs` file:

```shell
APP_NAME=my_backup_app
APP_TYPE=gke_job
OWNER=data_team
METRICS_ENABLED=true
PROMETHEUS_ENABLED=true
LOG_LEVEL=DEBUG
```
## API Reference

### `Config`

Loads configuration from file and environment. Validates on instantiation.

```python
cfg = Config(config_file='.configs')
# or
cfg = Config()  # uses default '.configs' or the CONFIG_FILE env var
```

### `get_logger(cfg)`

Factory function; creates and returns a validated `Logger` instance.
```python
import sys

from gke_log_metrics import get_logger, ValidationError

try:
    logger = get_logger(cfg)
except ValidationError as e:
    print(f"Config error: {e}")
    sys.exit(1)
```

### `logger.log()`

Normal application logging (respects `LOG_LEVEL`).
```python
logger.log(
    message: str,
    info: Optional[Dict[str, Any]] = None,
    level: str = "info",  # "debug", "info", "warning", "error"
    app_name: Optional[str] = None,
    app_type: Optional[str] = None,
    extra: Optional[Dict[str, Any]] = None
)
```

Example:

```python
logger.log("Backup started", info={"job_id": "12345"}, level="info")
```

### `logger.json_metric()`

Emit a JSON metric to stdout (always prints if `METRICS_ENABLED=true`, regardless of `LOG_LEVEL`).
```python
logger.json_metric(
    name: str,
    value: float = 1.0,
    info: Optional[Dict[str, Any]] = None,
    app_name: Optional[str] = None,
    app_type: Optional[str] = None,
    extra: Optional[Dict[str, Any]] = None,
    message: Optional[str] = None,
)
```

Emitted JSON schema:
```json
{
  "info": { "custom": "object" },
  "app_name": "my_app",
  "owner": "data_team",
  "app_type": "gke_job",
  "metric_name": "backup_completed",
  "metric_value": 1.0,
  "event_type": "metric",
  "timestamp": "2026-02-24T19:12:52.123456+00:00",
  "custom_field": "value"
}
```

Example:

```python
logger.json_metric(
    "backup_verified",
    1.0,
    info={"job": "daily", "status": "success", "size_bytes": 1048576},
    extra={"duration_seconds": 45.3}
)
```

### `logger.prometheus_metric()`

Update internal Prometheus-style metrics (counters, gauges, histograms).
```python
logger.prometheus_metric(
    name: str,
    value: float = 1.0,
    labels: Optional[Dict[str, str]] = None
)
```

Example:

```python
logger.prometheus_metric("backup_checks_total", 1, labels={"job": "daily", "status": "success"})
```

### `logger.metric()`

Unified API: updates Prometheus metrics AND emits a JSON metric (both if enabled).
```python
logger.metric(
    name: str,
    value: float = 1.0,
    labels: Optional[Dict[str, str]] = None,
    message: Optional[str] = None,
    info: Optional[Dict[str, Any]] = None,
    extra: Optional[Dict[str, Any]] = None,
    app_name: Optional[str] = None,
    app_type: Optional[str] = None
)
```

Example:

```python
logger.metric(
    "backup_checks_total",
    1,
    labels={"job": "daily", "status": "success"},
    message="Backup check succeeded",
    info={"duration": 45.3}
)
```

### `logger.metrics_to_prometheus()`

Export accumulated Prometheus metrics in text format.
```python
prom_text = logger.metrics_to_prometheus()
print(prom_text)
```

## Examples

See `examples/basic_usage.py` for a complete working example:

```shell
cd /path/to/gke_log_metrics
PYTHONPATH=. python3 examples/basic_usage.py
```

## Testing

Run the test suite:

```shell
pytest tests/ -v
```

## Publishing via GitHub Release

When you create a GitHub Release, the CI/CD workflow automatically builds and publishes to PyPI:
1. Update the version in `pyproject.toml`:

   ```toml
   version = "0.2.0"  # Increment as needed
   ```

2. Commit and push the version bump:

   ```shell
   git add pyproject.toml
   git commit -m "Bump version to 0.2.0"
   git push origin master
   ```

3. Create a GitHub Release:
   - Go to: https://github.com/CityofEdmonton/gke_log_metrics/releases/new
   - Tag: `v0.2.0` (must match the version in `pyproject.toml`)
   - Title: `Release v0.2.0`
   - Description: document your changes and improvements
   - Click "Publish release"

4. GitHub Actions then automatically:
   - Builds the package
   - Publishes to PyPI

   Monitor progress at: https://github.com/CityofEdmonton/gke_log_metrics/actions

Result: users can install with `pip install gke_log_metrics`.
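Because the release tag must match the `pyproject.toml` version, a small pre-release check can catch mismatches before publishing. This is a sketch with a hypothetical `version_matches_tag` helper; it is not part of the library or the CI workflow:

```python
import re

def version_matches_tag(pyproject_text, tag):
    """Return True when a 'vX.Y.Z' tag matches the version line in pyproject.toml.

    Assumes a line of the form: version = "X.Y.Z"
    """
    m = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.MULTILINE)
    return m is not None and tag == f"v{m.group(1)}"

print(version_matches_tag('version = "0.2.0"', "v0.2.0"))  # True
print(version_matches_tag('version = "0.2.0"', "v0.3.0"))  # False
```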
## Manual publishing

For users who need to publish without creating a GitHub Release:

1. Create a `~/.pypirc` configuration file:

   ```shell
   cat > ~/.pypirc << 'EOF'
   [distutils]
   index-servers = pypi

   [pypi]
   username = __token__
   password = pypi-<your-token-here>
   EOF
   ```

   Replace `<your-token-here>` with your PyPI API token from https://pypi.org/account/tokens/

2. Secure the file:

   ```shell
   chmod 600 ~/.pypirc
   ```

3. Install build tools:

   ```shell
   pip install build twine
   ```

4. Build the package:

   ```shell
   python -m build
   ```

5. Publish to PyPI:

   ```shell
   twine upload dist/*
   ```
Why `.pypirc`?

- Cleaner than passing tokens on the command line
- More secure (token not visible in shell history)
- Works consistently across different shells
- Standard Python packaging practice
Note: Requires PyPI account and API token from https://pypi.org/account/tokens/
## GKE log-based metrics

Use `json_metric()` or `metric()` to emit structured logs that GKE/Stackdriver can parse as log-based metrics, then visualize in Grafana.
Example workflow:
- Your app calls `logger.json_metric("backup_status", info={"job": "daily", "status": "success"})`
- JSON is printed to stdout
- GKE captures stdout as logs
- Stackdriver creates a log-based metric from the JSON
- Grafana queries and visualizes the metric
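The record emitted in the first step can be reconstructed with the stdlib, following the schema shown earlier (hypothetical `build_metric_record` helper; field values are illustrative):

```python
import json
from datetime import datetime, timezone

def build_metric_record(name, value=1.0, info=None, app_name="my_app",
                        app_type="gke_job", owner="data_team"):
    """Sketch of the JSON metric record per the schema above (illustrative values)."""
    return {
        "info": info or {},
        "app_name": app_name,
        "owner": owner,
        "app_type": app_type,
        "metric_name": name,
        "metric_value": float(value),
        "event_type": "metric",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# One stdout line, as GKE would capture it
line = json.dumps(build_metric_record("backup_status",
                                      info={"job": "daily", "status": "success"}))
record = json.loads(line)
# A Stackdriver log-based metric filter would match on fields such as
# jsonPayload.metric_name="backup_status"
print(record["metric_name"], record["event_type"])  # backup_status metric
```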
## Prometheus metrics endpoint

Set `PROMETHEUS_ENABLED=true` and expose metrics via your app's HTTP endpoint (the caller is responsible for serving it).

Example (Flask):

```python
from flask import Flask
from gke_log_metrics import Config, get_logger

app = Flask(__name__)
cfg = Config()
logger = get_logger(cfg)

@app.route('/metrics')
def metrics():
    return logger.metrics_to_prometheus(), 200, {'Content-Type': 'text/plain'}
```

## Metric behavior

When `METRICS_ENABLED=true`:
- `json_metric()` always prints JSON to stdout, ignoring `LOG_LEVEL`
- `metric()` emits JSON to stdout (if `METRICS_ENABLED`) and updates Prometheus (if enabled)
- `json_metric()` accepts `name` and `value`; there is no internal auto-incrementing `counter` field
When `METRICS_ENABLED=false`:

- `json_metric()` does nothing (silent)
- `metric()` only updates Prometheus (if enabled); no JSON output
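The gating described above can be sketched as follows (hypothetical `emit` helper that returns the lines it would print; the real library writes to stdout):

```python
import json

def emit(name, value=1.0, metrics_enabled=True):
    """Sketch of METRICS_ENABLED gating: return the JSON lines that would be printed."""
    lines = []
    if metrics_enabled:
        lines.append(json.dumps({"metric_name": name,
                                 "metric_value": value,
                                 "event_type": "metric"}))
    return lines  # empty (silent) when metrics are disabled

print(len(emit("backup_completed", metrics_enabled=True)))   # 1
print(len(emit("backup_completed", metrics_enabled=False)))  # 0
```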
Defaults for per-call identity fields:

- `app_name` defaults to `config.APP_NAME` when not provided
- `app_type` defaults to `config.APP_TYPE` when not provided
- Both can be overridden per call
Notes:
- When you call `json_metric(...)` or `metric(...)` and do not provide `app_name`/`app_type`, the library will use `Config.APP_NAME` and `Config.APP_TYPE` respectively.
- `metric(...)` will pass these effective values to the JSON metric output, so the emitted log always contains `app_name` and `app_type`.
## Development

Clone and install in editable mode:

```shell
git clone https://github.com/CityofEdmonton/gke_log_metrics.git
cd gke_log_metrics
pip install -e .
pytest tests/
```

## License

This project is licensed under the MIT License. See the LICENSE file for details.
For issues, questions, or contributions, contact the maintainers.