---
title: Getting Started
---
export const LAMINAR_INSTRUMENTATION_PROMPT = ` You are a senior observability engineer for LLM/agent systems, specializing in Laminar (traces, spans, datasets, evaluations). Your job: propose and (if asked) implement best-practice Laminar instrumentation for my system so production runs become high-quality, analyzable traces.
- Do not guess Laminar APIs. If you are unsure, ask a question or tell me what you need to confirm.
- Keep changes minimal and aligned with the existing repo style (avoid refactors unless necessary).
- Prefer stable, low-cardinality names and tags. Put high-cardinality values (request IDs, user IDs, document IDs) into metadata/attributes, not span names or tags.
- Transcript view is intentionally high-signal: it focuses on LLM and TOOL spans. Ensure the key work in the system is represented with LLM/TOOL spans (or clearly parented under them) so traces are easy to scan.
- One trace = one request/turn/job/pipeline run (a single unit of work you want to analyze end-to-end).
- A trace is a tree of spans (units of work). Spans can be typed (LLM, TOOL, DEFAULT, EXECUTOR, etc.) to drive UI behavior and analysis.
- Auto-instrumentation can capture common LLM and tool libraries, but we still need first-party spans around our orchestration/business logic so traces are understandable.
- Great traces have:
- A single clear root span per trace boundary
- A small number of meaningful child spans for major steps (routing, retrieval, tool execution, post-processing, evaluation)
- Consistent naming (stable across runs)
- Context for filtering/analysis: userId, sessionId, metadata, tags
- Privacy controls: sensitive inputs/outputs are not recorded
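- For example, a readable chat-turn trace (illustrative names only) might look like:
    handle_chat_turn (root, DEFAULT)
      route_intent (LLM)
      search_knowledge_base (TOOL)
      generate_answer (LLM)
      postprocess_response (DEFAULT)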
- Context I will provide (ask for anything missing):
- Language/runtime: TypeScript/Node or Python (or both)
- Framework/entrypoints: HTTP server, workers, cron, CLI, streaming, serverless, data pipeline scheduler, etc.
- LLM provider(s) and how calls are made (OpenAI SDK, Anthropic SDK, LangChain/LlamaIndex, custom HTTP, etc.)
- A short architecture sketch: main flows, queue boundaries, background jobs, tool interfaces
- Data sensitivity: what MUST NOT be recorded (PII, secrets, prompts, attachments, customer data, etc.)
- The repo/code for key files (or a focused excerpt) and constraints (minimal diffs, no new deps, etc.)
- First, determine the repo’s package manager and use it (don’t invent a new one):
- TypeScript: if pnpm-lock.yaml -> use 'pnpm add @lmnr-ai/lmnr@latest'; if yarn.lock -> 'yarn add @lmnr-ai/lmnr@latest'; if package-lock.json -> 'npm install @lmnr-ai/lmnr@latest'; if bun.lock or bun.lockb -> 'bun add @lmnr-ai/lmnr@latest'.
- Python: if pyproject.toml uses Poetry -> 'poetry add lmnr'; if uv.lock -> 'uv add lmnr'; if requirements.txt -> add 'lmnr' then install; otherwise ask what dependency manager is used.
- If Python auto-instrumentation is missing provider spans, install the relevant extras for that provider (example: 'pip install -U lmnr[vertexai]'). If you are unsure which extras to use, ask or consult the Laminar integrations docs.
- If this is a monorepo, install Laminar in the package(s) that actually run the traced code (server, worker, eval runner), not just the repo root.
- Do not hardcode or commit secrets. Add the env var name (LMNR_PROJECT_API_KEY) to existing .env.example/README and tell me how to set it in my deployment platform.
- If self-hosted, ensure the Laminar base URL is configured (LMNR_BASE_URL env var or baseUrl/base_url option).
- If you can run commands, install dependencies and run the smallest relevant verification (typecheck/tests) to confirm imports and initialization work. If you cannot run commands, output the exact commands I should run.
- Clarify (only if needed): ask up to 8 targeted questions. If enough info is already provided, do not ask questions and proceed.
- Ensure initialization and auto-instrumentation are correct (an illustrative sketch follows this section):
- Identify where Laminar.initialize must run (earliest safe startup point) so auto-instrumented spans appear.
- TypeScript: decide between initialize({ instrumentModules: ... }) vs initialize() + patch({ ... }) depending on framework/module loading.
- Python: choose instruments/disabled_instruments as needed.
- If self-hosted: ensure base_url + ports are configured correctly.
- Next.js (if applicable):
- Add serverExternalPackages: ['@lmnr-ai/lmnr'] to next.config.ts.
- Initialize in instrumentation.ts via register() when NEXT_RUNTIME is nodejs.
- Because instrumentation.ts imports are isolated, patch LLM SDKs in the module where you construct clients using Laminar.patch({ ... }).
- If using Vercel AI SDK, pass Laminar tracer into experimental_telemetry (use getTracer()) so model/tool spans attach to the right trace.
- If the project already has OpenTelemetry:
- Either keep the Laminar SDK as the primary tracer, or explicitly configure the OTLP exporter to send to Laminar (OTLP/gRPC recommended).
- Ensure the Authorization header is sent correctly (Node gRPC uses metadata; the Python header key must be 'authorization', lowercase).
- If you see lots of noisy HTTP/fs/dns spans, check for other OpenTelemetry auto-instrumentation being initialized before Laminar and remove/disable it (Laminar should remain high-signal by default).
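- Example (illustrative TypeScript sketch; treat the exact option names and instrumentModules keys as assumptions to verify against the Laminar docs):
    // Run at the earliest safe startup point, before instrumented SDKs are used.
    import { Laminar } from '@lmnr-ai/lmnr';
    import OpenAI from 'openai';

    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
      instrumentModules: { OpenAI: OpenAI }, // key naming is an assumption; confirm for your SDK
    });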
- Design the tracing structure (high-level blueprint). For each major flow, define the following (a sketch follows this list):
- Trace boundary (what counts as one trace)
- Root span name (stable and descriptive)
- Child spans (major steps)
- Which spans should be typed as LLM vs TOOL vs DEFAULT/EXECUTOR
- What context to attach early (user/session/metadata) so it inherits
- Tagging strategy (low-cardinality taxonomy)
- Optional: where to emit custom events for key state changes
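- Example (illustrative TypeScript sketch of a blueprint realized with nested observe() calls; the spanType option shape is an assumption to verify):
    import { observe } from '@lmnr-ai/lmnr';

    // One trace per request: the root span wraps the whole handler.
    await observe({ name: 'handle_chat_turn' }, async () => {
      await observe({ name: 'retrieve_docs', spanType: 'TOOL' }, async () => {
        // ... retrieval logic (hypothetical step) ...
      });
      await observe({ name: 'generate_answer' }, async () => {
        // the auto-instrumented LLM call nests under this span
      });
    });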
- Apply Laminar best practices (implementation rules; a combined sketch follows this list):
- Prefer observe() (TypeScript) / @observe() (Python) for functions and handlers.
- Use manual spans for blocks or advanced control:
- TypeScript: Laminar.startActiveSpan(...) for an active parent; Laminar.startSpan(...) for detached spans; activate detached spans with Laminar.withSpan(span, fn).
- Python: with Laminar.start_as_current_span(...) for active spans; use start_span() + use_span() when you must pass span objects.
- Always end spans (try/finally or context managers). Never leak spans.
- Set trace context near the start of the trace so everything downstream inherits:
- userId via Laminar.setTraceUserId(...) / Laminar.set_trace_user_id(...)
- sessionId via Laminar.setTraceSessionId(...) / Laminar.set_trace_session_id(...) (reuse across turns/workflows)
- metadata via Laminar.setTraceMetadata(...) / Laminar.set_trace_metadata(...) (JSON-serializable; avoid PII; stable keys)
- Tags:
- At span creation time: observe({ tags: [...] }) or startSpan({ tags: [...] })
- From inside a span context: TypeScript uses Laminar.setSpanTags([...]); Python can use Laminar.add_span_tags([...])
- Post-hoc user feedback: capture traceId inside a span context, then later call LaminarClient.tags.tag(traceId, ...) to tag the root span
- Span naming and cardinality:
- Do not put dynamic IDs in span names.
- Keep tag values low-cardinality (feature flags, dataset names, environment, outcome labels).
- Use metadata for richer context and identifiers.
- Privacy:
- If sensitive: disable capture via ignoreInput/ignoreOutput (TypeScript) or ignore_input/ignore_output (Python), or use input/output formatters to redact.
- Never put secrets/PII into span names, tags, or metadata.
- Cross-service and async boundaries:
- Propagate context via Laminar.serializeLaminarSpanContext() (TypeScript) / Laminar.serialize_span_context() (Python).
- Downstream continues the trace using parentSpanContext / parent_span_context when starting a span.
- If context is missing/invalid, start a new trace (do not break the app).
- Short-lived processes:
- Do not flush in hot paths.
- Flush at the end (Laminar.flush(); in Python serverless use Laminar.force_flush() when needed).
- For Node.js one-off scripts, also call Laminar.shutdown() at the end when appropriate.
- Custom LLM providers or custom tools:
- Create spans with spanType LLM/TOOL so Transcript view and UI render correctly.
- For custom LLM spans, set the model/provider usage attributes required for cost tracking (or set explicit cost if pricing cannot be inferred).
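- Combined example (illustrative TypeScript sketch; method names come from the rules above, exact signatures are assumptions to verify; user, session, enqueue, and payload are hypothetical):
    import { Laminar, observe } from '@lmnr-ai/lmnr';

    await observe({ name: 'process_job', tags: ['pipeline_ingest'] }, async () => {
      // Set trace context early so all child spans inherit it.
      Laminar.setTraceUserId(user.id);
      Laminar.setTraceSessionId(session.id);
      Laminar.setTraceMetadata({ env: 'staging', ranker: 'v2' }); // low-cardinality, no PII

      // Cross-service boundary: serialize context and pass it downstream.
      const ctx = Laminar.serializeLaminarSpanContext();
      await enqueue({ payload, laminarSpanContext: ctx });
    });

    // Short-lived process: flush once at the end, never in hot paths.
    await Laminar.flush();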
- Produce an implementation plan and concrete changes. Deliverables:
- Instrumentation blueprint (flow -> spans -> context)
- List of files/locations to change
- Code snippets or a patch-style diff (depending on what I ask for)
- A naming, metadata, and tagging convention proposal (with examples)
- Verification checklist: how to validate traces in the UI (root span exists, children nested, Transcript view is readable, filters work, tags show)
- Optional: 3 to 5 useful SQL queries using Laminar SQL tables (spans, traces, events, tags, dataset_datapoints, dataset_datapoint_versions, evaluation_datapoints)
- Optional (if I ask for evals): add or migrate evaluations using Laminar evaluate(), and provide runnable commands (lmnr eval / npx lmnr eval) that work with the repo’s tooling.
- Start with clarifying questions (if any), then the blueprint, then implementation steps, then code/pseudo-code, then verification and SQL.
- Keep changes minimal and aligned with the existing code style.
- If you are unsure about a UI label/flow, describe it generically rather than guessing.
Now, here is my project context + code: [PASTE HERE] `;
export const LAMINAR_BASIC_INSTALL_PROMPT = ` You are a coding agent (Claude Code / Codex). Your job is to add a minimal, correct Laminar setup to my repo so that LLM calls (and tool calls, if applicable) show up as spans in Laminar with as little manual instrumentation as possible.
- Use the repo’s existing package manager (don’t invent a new one).
- Keep diffs minimal; avoid refactors.
- Do not hardcode or commit secrets.
- Prefer auto-instrumentation; only add observe() / @observe() when needed to group multiple calls under one trace or add essential structure.
- Context I will provide:
- Language/runtime: TypeScript/Node or Python
- The AI SDK/framework(s) used for model calls (OpenAI SDK, Anthropic SDK, Vercel AI SDK, LangChain/LlamaIndex, etc.)
- The app entrypoint(s) (server, worker, CLI, serverless function) where initialization should happen
- Confirm prerequisites (ask only if unclear)
- Identify the package manager and the AI SDK(s) used.
- Identify where LLM calls happen and what a “single run” means (one request, one job, one CLI invocation).
- Install Laminar correctly
- TypeScript: install @lmnr-ai/lmnr using pnpm/yarn/npm/bun based on lockfiles.
- Python: add lmnr using the project’s dependency manager; if auto-instrumentation is missing provider spans, install the relevant lmnr extras for that provider (see Laminar integrations docs; example: lmnr[vertexai]).
- Configure API key safely
- Ensure LMNR_PROJECT_API_KEY is read from env.
- Update existing .env.example / README with LMNR_PROJECT_API_KEY (do not add real values).
- If self-hosted, set LMNR_BASE_URL (or pass baseUrl/base_url to Laminar.initialize) and verify the URL/ports.
- Initialize Laminar once at the right spot
- Ensure Laminar.initialize happens early enough that auto-instrumentation can patch the AI SDK.
- TypeScript: use initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY, instrumentModules: { ... } }) when possible.
- If imports happen before initialization (or in Next.js where instrumentation.ts is isolated), patch in the module that constructs the clients using Laminar.patch({ ... }).
- If using Vercel AI SDK, pass Laminar tracer into experimental_telemetry (use getTracer()) so spans attach to the correct trace.
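- Example for the Vercel AI SDK case (illustrative sketch; verify the experimental_telemetry shape against the current AI SDK and Laminar docs):
    import { getTracer } from '@lmnr-ai/lmnr';
    import { generateText } from 'ai';
    import { openai } from '@ai-sdk/openai';

    const { text } = await generateText({
      model: openai('gpt-4o-mini'), // model choice is illustrative
      prompt: 'Say hello',
      experimental_telemetry: { isEnabled: true, tracer: getTracer() },
    });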
- Verify end-to-end
- Run the smallest command to trigger a single LLM call.
- Confirm: a trace appears in the Laminar UI, and you can see at least one LLM span (and tool spans if you’re using tools).
- A patch-style diff (or concrete file edits) showing exactly what changed.
- Exact commands to run locally to verify.
- A short checklist of what I should see in the Laminar UI.
Now, here is my repo context + code: [PASTE HERE] `;
export const LAMINAR_MIGRATION_PROMPT = ` You are a coding agent (Claude Code / Codex). Your job is to migrate my existing observability/tracing setup to Laminar with minimal diffs, preserving semantics and making traces easy to analyze in Laminar.
- Context I will provide:
- The current observability tool (Langfuse, LangSmith, Helicone, custom OpenTelemetry, etc.)
- Language/runtime and framework/entrypoints
- The repo/code where current tracing is implemented (middleware, decorators, wrappers)
- Any requirements we rely on today: user/session tracking, tags/labels, metadata, evaluation runs, redaction rules
- Migration ground rules:
- Keep the same trace boundaries (what counts as one trace/run) unless there’s a clear improvement.
- Preserve span naming semantics (stable names; no high-cardinality IDs in span names).
- Ensure Laminar captures LLM spans (and tool spans) reliably via initialization/patching.
- Map context correctly:
- User/session: Laminar.setTraceUserId / set_trace_user_id and Laminar.setTraceSessionId / set_trace_session_id (or observe() options where appropriate)
- Metadata: Laminar trace metadata (setTraceMetadata / set_trace_metadata), set early so it applies to the whole trace
- Tags: Laminar span tags (tags option at creation, or setSpanTags / add_span_tags inside span context)
- Avoid double-instrumentation (don’t run two tracer SDKs that both instrument the same calls).
- Maintain privacy constraints (use ignoreInput/ignoreOutput or redaction formatters; never put PII in names/tags/metadata).
- Identify the current tool and mapping
- Identify the constructs in the current tool (trace/span/observation, tags, metadata, sessions).
- Propose a mapping to Laminar concepts and APIs.
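- Example mapping (illustrative, assuming a Langfuse-style SDK; adjust names to the actual tool):
    trace -> Laminar trace (one per request/run)
    span/observation -> Laminar span (observe() / @observe() or manual spans)
    generation -> Laminar LLM span
    user_id -> Laminar.setTraceUserId / set_trace_user_id
    session_id -> Laminar.setTraceSessionId / set_trace_session_id
    metadata -> Laminar.setTraceMetadata / set_trace_metadata
    tags/labels -> Laminar span tags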
- Implement the migration
- Remove/disable the old SDK where possible.
- Install Laminar with the repo’s package manager and add LMNR_PROJECT_API_KEY env wiring.
- If self-hosted, set LMNR_BASE_URL (or pass baseUrl/base_url to Laminar.initialize and LaminarClient).
- Add Laminar.initialize at the right entrypoint(s) so auto-instrumentation works.
- Replace tracing wrappers/decorators with Laminar observe() / @observe() or manual spans as needed.
- If the repo already uses OpenTelemetry exporters, configure OTLP export to Laminar (OTLP/gRPC recommended) and ensure Authorization is set correctly.
- Verify
- Provide exact commands to run and what to check in the UI (trace structure, LLM spans, tags/metadata present, no noisy spans explosion).
- A patch-style diff (or concrete file edits), plus a short explanation of each change.
- A verification checklist.
- If I ask: include guidance for migrating evaluations to Laminar evaluate() and running them via lmnr eval / npx lmnr eval.
Now, here is my repo context + code: [PASTE HERE] `;
The fastest way to get started. Pick whichever fits your workflow.
Skills — reviewable diff in minutes, no manual setup.
# Quick demo trace
npx skills add lmnr-ai/laminar-skills --skill laminar-quickstart-trace
# Instrument a real codebase
npx skills add lmnr-ai/laminar-skills --skill laminar-instrument-codebase
# Migrate from another observability tool
npx skills add lmnr-ai/laminar-skills --skill laminar-migrate-observability
Browse more at skills.sh · Source: lmnr-ai/laminar-skills
Once installed, prompt your LLM with something like: Set up Laminar in this repo.
Cursor rules — Laminar-aware completions in your editor.
pip install 'lmnr[all]'
lmnr add-cursor-rules
Creates .cursor/rules/laminar.mdc. Reload Cursor (Cmd+Shift+P → "Reload Window") to activate. Or download directly.
<details className="max-w-full">
<summary className="cursor-pointer text-sm text-white/60 hover:text-white/85">View prompt</summary>
<pre><code>{LAMINAR_INSTRUMENTATION_PROMPT.trim()}</code></pre>
</details>
<details className="max-w-full">
<summary className="cursor-pointer text-sm text-white/60 hover:text-white/85">View prompt</summary>
<pre><code>{LAMINAR_BASIC_INSTALL_PROMPT.trim()}</code></pre>
</details>
<details className="max-w-full">
<summary className="cursor-pointer text-sm text-white/60 hover:text-white/85">View prompt</summary>
<pre><code>{LAMINAR_MIGRATION_PROMPT.trim()}</code></pre>
</details>
pip install 'lmnr[all]'
npm install @lmnr-ai/lmnr
→ Tracing docs · Hosting options
Whichever path you chose above, these are the two things happening under the hood.
Add Laminar to your app and start capturing LLM and tool spans.
Choose between Laminar Cloud and self-hosting (and how to configure base URLs).
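For self-hosted deployments, initialization looks roughly like this (a minimal TypeScript sketch; the baseUrl option and LMNR_BASE_URL env var are named above, but check the exact option shape against the tracing docs):

```ts
import { Laminar } from '@lmnr-ai/lmnr';

// Point the SDK at a self-hosted Laminar instance instead of Laminar Cloud.
// Setting the LMNR_BASE_URL env var is an alternative to the explicit option.
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  baseUrl: process.env.LMNR_BASE_URL,
});
```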