
[Task]: Refactor complex query logic for /v1/workspaces/:workspace_name/messages #1223


Closed
alex-mcgovern opened this issue Mar 5, 2025 · 0 comments
Description

After the initial work to add pagination to the query logic for messages (prompts+outputs+alerts) for the Dashboard, @yrobla raised some concerns around the complexity of the query that we landed on.

For context, the requirements are:

  • we chose to display a list of "conversations/messages", each of which begins with a prompt (user or system prompt) and includes all of the outputs associated with that prompt (LLM messages) and all associated alerts
  • we render this as a list in the dashboard, with the ability to see the "detail" by clicking on a message
  • this list is filterable by alert trigger_type ("codegate-secrets" | "codegate-pii" | "codegate-context-retriever") and by alert trigger_category ("info" | "critical")
  • as a workaround, this list can also be filtered using a list of prompt IDs — this is a bit of a hack to make it easy to find the data needed for the "detail view"
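A minimal sketch of how the filter values above might be modeled as str-backed enums — the class and member names here are illustrative assumptions, not the actual CodeGate definitions:

```python
from enum import Enum

# Hypothetical enums mirroring the filter values described above.
class AlertTriggerType(str, Enum):
    SECRETS = "codegate-secrets"
    PII = "codegate-pii"
    CONTEXT_RETRIEVER = "codegate-context-retriever"

class AlertTriggerCategory(str, Enum):
    INFO = "info"
    CRITICAL = "critical"
```

Str-backed enums like these can be used directly as FastAPI query parameters, which would validate the filter values at the API boundary instead of inside the query logic.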

In actuality, we probably don't need all of the outputs and the full alert detail when displaying this information in a list. It might make more sense to have a ConversationSummary (for lists, returning only counts) and a Conversation (for the detail view). This would help simplify the query logic.

I imagine this would look like:

  • GET /v1/workspaces/:workspace_name/messages -> List(ConversationSummary)

    • the initial prompt from the user (or FIM, etc)
    • a count of alerts related to this prompt
    • token usage
    • other metadata (timestamp, type, etc)
    • a row in this list looks like this:
      [image: screenshot of a list row]
  • GET /v1/workspaces/:workspace_name/messages/:prompt_id -> Conversation

    • the initial prompt from the user
    • all outputs related to this prompt (the LLM response)
    • all alerts related to this prompt
    • token usage
    • other metadata (timestamp, type, etc)
    • the detail view looks like this:
      [image: screenshot of the detail view]
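The summary/detail split above could be sketched roughly as follows, using plain dataclasses; all field names and types are assumptions for illustration, not the actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical shape for GET .../messages -> List(ConversationSummary):
# no outputs or full alerts, just the initial prompt plus counts/metadata.
@dataclass
class ConversationSummary:
    prompt_id: str
    initial_prompt: str      # user, system, or FIM prompt text
    alert_count: int         # count only, no full alert rows
    token_usage: int
    timestamp: datetime
    message_type: str        # e.g. "chat" or "fim"

# Hypothetical shape for GET .../messages/:prompt_id -> Conversation:
# the full detail, including every output and alert.
@dataclass
class Conversation:
    prompt_id: str
    initial_prompt: str
    outputs: list[str] = field(default_factory=list)  # full LLM responses
    alerts: list[dict] = field(default_factory=list)  # full alert detail
    token_usage: int = 0
    timestamp: Optional[datetime] = None
    message_type: str = ""
```

Because the list endpoint would only need aggregate counts, its query could reduce to a GROUP BY over alerts rather than joining and hydrating every output and alert row.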

Related:

Additional Context

No response
