Tekton Assistant helps explain failed Pipelines/TaskRuns and guide remediation.
- Pipeline Failure Analysis ("Explain my failed pipeline")
  - Retrieves and analyzes logs, status, and events for a failed PipelineRun/TaskRun
  - Identifies the failed step and extracts relevant error messages (see the Go sketch after the diagram below)
  - Produces a concise summary (e.g., "Step 'build' failed due to missing dependency X")
  - Suggests actionable fixes (e.g., for permission issues or image pull errors)
  - Examples: "Check if secret 'docker-creds' exists in namespace Y", "Verify registry authentication"
```mermaid
graph TD
    A[Input: TaskRun/PipelineRun ID or logs] --> B(Extractor Agent);
    B -- "Fetches Manifest & Logs from Kubernetes API" --> C{Kubernetes Cluster};
    C -- "Returns Run Data (status, events, logs)" --> B;
    B -- "Parses & Extracts Error Context" --> D{RAG Agent};
    D -- "Queries with Structured Error" --> E[(Vector DB: tekton_errors_db)];
    E -- "Returns Relevant Solutions" --> D;
    D -- "Generates Response with LLM" --> F[Output: Error, Failed Step, Suggested Fixes];
```
The overall system can be thought of as a two-phase pipeline:

1. Knowledge Ingestion Pipeline (offline, pre-processing)
2. Debugging Runtime Pipeline (online, real-time)
```mermaid
graph LR
    subgraph A [Knowledge Ingestion Pipeline]
        direction LR
        A1[Source Data<br/>Docs, GitHub, etc.] --> A2[Ingestion<br/>Parser/Formatter] --> A3[Vector DB<br/>tekton_knowledge_base]
    end
    subgraph B [Debugging Runtime Pipeline]
        direction TB
        B1[TaskRun ID] --> B2[Context Extractor] --> B3[Query Builder] --> B4[RAG Agent]
    end
    A3 -. "Pre-populates" .-> B4
```
This component prepares and indexes the knowledge base used by the runtime RAG agent.
- Go 1.22+
- A running LlamaStack endpoint (Ollama or Gemini distribution)
- Knowledge base JSON at `data/curated/kb.json` (same schema as the Python ingester)
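For orientation, a single entry might look like the following. The field names `error_type`, `severity`, and `combined_text` are the ones mirrored into metadata by the ingester (see the SDK notes below); everything else here is hypothetical, not the authoritative schema, which is defined by the Python ingester.

```json
{
  "error_type": "image_pull_backoff",
  "severity": "high",
  "combined_text": "Step 'build' failed with ImagePullBackOff. Verify registry authentication and that the pull secret exists in the namespace."
}
```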
For example, to start LlamaStack with the Gemini distribution:

```bash
docker run -it --rm \
  -v ./hack/gemini.yaml:/app/gemini.yaml:z \
  -v ${SQLITE_STORE_DIR:-~/.llama/distributions/gemini}:/data \
  -e GEMINI_API_KEY=$GEMINI_API_KEY \
  -e SQLITE_STORE_DIR=/data \
  -p 8321:8321 \
  docker.io/llamastack/distribution-starter \
  --config /app/gemini.yaml
```
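Once the container is up, a quick way to confirm the endpoint is reachable (assuming the server's standard health route):

```bash
# Should return a small JSON status payload if the server is healthy.
curl -s http://localhost:8321/v1/health
```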
```bash
go build ./cmd/ingester
./ingester --help

# Ingest curated knowledge base (defaults kb path to data/curated/kb.json)
./ingester ingest \
  --endpoint http://localhost:8321 \
  --vector-db tekton_errors_db \
  --embedding-model text-embedding-004 \
  --embedding-dim 384 \
  --chunk-size 512 \
  --batch-size 50

# Search examples (pretty output by default)
./ingester search --query "create-trusted-artifact" --max-results 5
./ingester search --query "kustomization.yaml" --max-results 5

# Stats / Validate / Reset
./ingester stats
./ingester validate
./ingester reset --force

# Raw JSON output
./ingester search --query "create-trusted-artifact" --max-results 5 --json-output
```
Flags mirror the Python version where possible. The KB path resolver also checks `data/kb.json` and `../data/kb.json` if needed.
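A minimal sketch of that fallback order; the function name and exact candidate handling are illustrative, not the actual resolver:

```go
package main

import (
	"fmt"
	"os"
)

// resolveKBPath returns the first knowledge-base file that exists,
// trying the default location before the documented fallbacks.
func resolveKBPath() (string, error) {
	candidates := []string{
		"data/curated/kb.json", // default
		"data/kb.json",         // fallback
		"../data/kb.json",      // fallback
	}
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("no kb.json found in %v", candidates)
}

func main() {
	path, err := resolveKBPath()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using knowledge base:", path)
}
```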
- Uses typed SDK services:
  - Vector DB lifecycle: `client.VectorDBs.Register/List/Unregister` (routes under `v1/vector-dbs`)
  - RAG indexing/query: `client.ToolRuntime.RagTool.Insert/Query` (routes under `v1/tool-runtime/rag-tool`)
- Documents are sent as `[]DocumentParam` with `content` set to the Python-style `combined_text` and metadata mirrored (`error_type`, `severity`, etc.). A sketch of this flow follows the list.
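A condensed sketch of the register/insert/query flow. The service and method names are the ones listed above, but the parameter struct fields are assumptions based on the SDK's generated style; check the llama-stack-client-go repository for the current signatures.

```go
package main

import (
	"context"
	"fmt"

	llamastackclient "github.com/llamastack/llama-stack-client-go"
	"github.com/llamastack/llama-stack-client-go/option"
)

func main() {
	ctx := context.Background()
	client := llamastackclient.NewClient(option.WithBaseURL("http://localhost:8321"))

	// Vector DB lifecycle (routes under v1/vector-dbs).
	// Field names below are assumed, not verified against the SDK.
	if _, err := client.VectorDBs.Register(ctx, llamastackclient.VectorDBRegisterParams{
		VectorDBID:     "tekton_errors_db",
		EmbeddingModel: "text-embedding-004",
	}); err != nil {
		panic(err)
	}

	// RAG indexing (routes under v1/tool-runtime/rag-tool): content carries
	// the Python-style combined_text, with metadata mirrored alongside.
	doc := llamastackclient.DocumentParam{
		DocumentID: "kb-0001", // hypothetical ID
		Content:    "Step 'build' failed due to missing dependency X ...",
		Metadata: map[string]any{
			"error_type": "missing_dependency",
			"severity":   "high",
		},
	}
	if err := client.ToolRuntime.RagTool.Insert(ctx, llamastackclient.ToolRuntimeRagToolInsertParams{
		VectorDBID: "tekton_errors_db",
		Documents:  []llamastackclient.DocumentParam{doc},
	}); err != nil {
		panic(err)
	}

	// RAG query: retrieve relevant solutions for a structured error.
	res, err := client.ToolRuntime.RagTool.Query(ctx, llamastackclient.ToolRuntimeRagToolQueryParams{
		VectorDBIDs: []string{"tekton_errors_db"},
		Content:     "ImagePullBackOff in step 'build'",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("query result: %+v\n", res)
}
```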
Powered by the official Go SDK. See: llama-stack-client-go