Add a Product Review service with GenAI-powered summaries#2663

Merged
julianocosta89 merged 52 commits into open-telemetry:main from dmitchsplunk:add-product-review-service
Nov 7, 2025

Conversation

@dmitchsplunk
Contributor

@dmitchsplunk dmitchsplunk commented Oct 17, 2025

Changes

This PR proposes adding Customer Product Reviews to the Astronomy Shop application, using Generative AI to summarize the reviews for each product. This addition will allow the community to demonstrate OpenTelemetry capabilities for instrumenting Generative AI interactions within the Astronomy Shop application.

Summary of changes:

  • Adds a new Python-based Product Review service with two functions: getProductReviews(productId) and getProductReviewSummary(productId).
  • Introduces a Python-based LLM service that mocks OpenAI’s Chat Completions API to generate AI summaries of product reviews.
  • Stores customer reviews in a MySQL database.
  • Updates the front-end product page with a new Reviews section, including an AI-generated summary and individual reviews.
  • Instruments GenAI interactions using opentelemetry-instrumentation-openai-v2 to capture relevant spans and attributes.
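To make the shape of the new service concrete, here is a minimal, purely illustrative sketch of the two operations, with the MySQL storage and the LLM call stubbed out as in-memory data (the product ID, review data, and summary format are assumptions, not the service's actual output):

```python
# Hypothetical sketch of the Product Review service's two operations.
# In the real service, reviews come from MySQL and summaries from the LLM service.
REVIEWS = {
    "OLJCESPC7Z": [
        {"rating": 5, "text": "Great telescope for beginners!"},
        {"rating": 3, "text": "Optics are fine, but the tripod is flimsy."},
    ],
}

def get_product_reviews(product_id):
    """Return all stored reviews for a product (empty list if none)."""
    return REVIEWS.get(product_id, [])

def get_product_review_summary(product_id):
    """Return a short summary; the real service asks the LLM for this."""
    reviews = get_product_reviews(product_id)
    if not reviews:
        return "No reviews yet."
    avg = sum(r["rating"] for r in reviews) / len(reviews)
    return f"{len(reviews)} reviews, average rating {avg:.1f}"
```

The point of the sketch is only the API surface: one call for the raw reviews, one for the AI-generated summary.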

Here's a screenshot of the new Customer Reviews section of the product page:

Astronomy Shop - Product Reviews

And here's an example trace showing the Product Review Summary flow:

Product Review Summary Trace

The LLM service supports two new feature flags:

  • llmInaccurateResponse: when this feature flag is enabled, the LLM service returns an inaccurate product summary for product ID L9ECAV7KIM.
  • llmRateLimitError: when this feature flag is enabled, the LLM service intermittently returns a RateLimitError with HTTP status code 429.
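A rough sketch of how the mock LLM service could act on these two flags (the flag names and the affected product ID come from the PR description; the function shape, response dicts, and 0.5 threshold for "intermittently" are illustrative assumptions):

```python
import random

def mock_llm_summary(product_id, flags, rand=random.random):
    """Hypothetical mock-LLM behavior under the two feature flags.

    `rand` is injectable so the intermittent failure can be tested
    deterministically.
    """
    if flags.get("llmRateLimitError") and rand() < 0.5:
        # Intermittently mirror OpenAI's RateLimitError (HTTP 429)
        return {"status": 429, "error": "RateLimitError"}
    if flags.get("llmInaccurateResponse") and product_id == "L9ECAV7KIM":
        return {"status": 200, "summary": "An intentionally inaccurate summary."}
    return {"status": 200, "summary": f"Accurate summary for {product_id}."}
```

Injecting the random source keeps the "intermittent" branch deterministic in tests, which is a common pattern for flag-driven fault injection.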

If the direction looks good, I’ll follow up with documentation and Helm chart changes. In the meantime, I’d welcome early feedback.

Merge Requirements

For new feature contributions, please make sure you have completed the following
essential items:

  • CHANGELOG.md updated to document new feature additions
  • Appropriate documentation updates in the docs -> Docs PR
  • Appropriate Helm chart updates in the helm-charts -> Helm chart PR

Maintainers will not merge until the above have been completed. If you're unsure
which docs need to be changed, ping
@open-telemetry/demo-approvers.

@linux-foundation-easycla

linux-foundation-easycla bot commented Oct 17, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@github-actions github-actions bot added docs-update-required Requires documentation update helm-update-required Requires an update to the Helm chart when released labels Oct 17, 2025
@dmitchsplunk dmitchsplunk changed the title Add a new GenAI-powered service for customer product reviews Add a Product Review service with GenAI-powered summaries Oct 17, 2025
@dmitchsplunk dmitchsplunk marked this pull request as ready for review October 17, 2025 22:49
@dmitchsplunk dmitchsplunk requested a review from a team as a code owner October 17, 2025 22:49
@julianocosta89
Member

@dmitchsplunk thank you!
I'm flying today, but I'll take a look whenever I have a couple of minutes.

Really nice addition to the demo!
I'm excited to test it out! 🥳

@julianocosta89
Member

@dmitchsplunk I've updated your PR with a fix to work with OpenAI.
I've used Claude to fix it, and I've tested with and without OpenAI.

The Problem

The original code only handled the first tool call (tool_calls[0]), but when OpenAI's API returns multiple tool calls (e.g., both fetch_product_reviews and fetch_product_info), you must provide a response for each tool_call_id. The API was rejecting your request because it was missing responses for the additional tool calls.

The Solution

The updated code now:

  • Processes all tool calls in a loop instead of just the first one
  • Appends the assistant's message once before processing any tool calls
  • Appends a tool response for each tool call with the correct tool_call_id
  • Consolidates the final user prompt to avoid duplication based on which tool was called
  • Makes a single final LLM call with all tool results included

Key Changes

  • Changed from tool_call = tool_calls[0] to for tool_call in tool_calls:
  • Moved messages.append(response_message) outside the loop so it's only added once
  • Each tool call now gets its response appended to the messages array
  • Simplified the flow to eliminate redundant code paths for different tool types
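The fix described above can be sketched with plain dicts in the Chat Completions message shape (the helper name and tool handlers here are illustrative, not the PR's actual code):

```python
# Hypothetical sketch of the multi-tool-call fix: append the assistant
# message once, then one tool response per tool_call_id.
def apply_tool_calls(messages, response_message, tool_handlers):
    """Return the message list ready for the single final LLM call."""
    messages = list(messages)
    messages.append(response_message)                 # appended once, before the loop
    for tool_call in response_message["tool_calls"]:  # all calls, not just [0]
        fn = tool_call["function"]
        result = tool_handlers[fn["name"]](fn["arguments"])
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call["id"],          # one response per tool_call_id
            "content": result,
        })
    return messages
```

With both `fetch_product_reviews` and `fetch_product_info` in `tool_calls`, the loop emits two `role: tool` messages, which is exactly what the API requires before the follow-up completion request.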

@julianocosta89
Member

I've also moved the OpenAI token configuration to .env.override.
With that, users no longer need to keep commenting and uncommenting code.

If they want to use OpenAI, they can uncomment the values in the override file, which will override the values from the .env file.
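For illustration, an override file following this pattern might look like the fragment below; the actual variable names used by the demo may differ:

```shell
# .env.override (hypothetical example; real variable names may differ)
# Uncomment to use the real OpenAI API instead of the mock LLM service:
# OPENAI_API_KEY=<your-key>
# LLM_BASE_URL=https://api.openai.com/v1
```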

@dmitchsplunk
Contributor Author

@julianocosta89 thanks for the fix for the multiple tool calls; the questions I was testing with must have resulted in single tool calls only.

I made a small change to the mock LLM service to ensure it still returns product reviews successfully.

@dmitchsplunk
Contributor Author

> I've also moved the configuration of OpenAI token to .env.override. With that, users do not need to keep comment/uncomment code.
>
> If they want to use OpenAI, they can uncomment the override file and that will take care of overriding the values from .env file.

Is there anything else I need to do? I tried updating .env.override and restarting the app with docker compose, but it didn't pick up the changes.

@dmitchsplunk
Contributor Author

> I've also moved the configuration of OpenAI token to .env.override. With that, users do not need to keep comment/uncomment code.
> If they want to use OpenAI, they can uncomment the override file and that will take care of overriding the values from .env file.
>
> Is there anything else I need to do? I tried updating .env.override and restarting the app with docker compose, but it didn't pick up the changes.

Please disregard; I got it working with the following command:

docker compose --env-file .env --env-file .env.override up --force-recreate --remove-orphans --detach --build

dependabot bot and others added 2 commits October 30, 2025 16:43
Bumps the actions-production-dependencies group with 1 update in the / directory: [github/codeql-action](https://github.com/github/codeql-action).


Updates `github/codeql-action` from 4.31.0 to 4.31.1
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@4e94bd1...5fe9434)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 4.31.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions-production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
@dmitchsplunk
Contributor Author

Hi @julianocosta89 , I've created draft PRs for the Helm chart and documentation updates:

open-telemetry/opentelemetry-helm-charts#1920
open-telemetry/opentelemetry.io#8294

Please let me know if there are any other changes you'd like me to make as part of this PR.

@julianocosta89
Member

Hello @dmitchsplunk, I was at a conference this past week, but I'll be able to take a look next week at the latest.

I'm still concerned about the feature flags we are introducing with this PR, as they won't work with OpenAI.

We should either remove the feature flag, or think about a different way to illustrate that with OpenAI.

@dmitchsplunk
Contributor Author

Hi @julianocosta89 - I've updated the service so that the feature flags work even when the real OpenAI API is used.

For the llmRateLimitError feature flag, I changed the logic so that, when the feature flag is active and the random number is less than 0.5, the mock LLM is always called, even if the app is configured to use the real OpenAI API. The mock LLM then returns the rate limit error (HTTP status code 429). The UI will simply show that the system is unable to process the request at this time.

429 Rate Limit Error in UI

For the llmInaccurateResponse feature flag, I modified the prompt sent to OpenAI when this feature flag is enabled to ask it to return an inaccurate response. It works well with my testing, please let me know what you think.
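The routing described in the two paragraphs above could be sketched like this (function name, backend labels, and prompt wording are illustrative assumptions; only the flag names and the 0.5 threshold come from the comment itself):

```python
import random

def route_llm_request(prompt, flags, use_real_openai, rand=random.random):
    """Hypothetical routing so both flags also work with the real OpenAI API.

    Returns which backend to call and the (possibly modified) prompt.
    """
    backend = "openai" if use_real_openai else "mock"
    if flags.get("llmRateLimitError") and rand() < 0.5:
        backend = "mock"  # the mock then answers with HTTP 429 RateLimitError
    if flags.get("llmInaccurateResponse"):
        # Instead of faking a response locally, ask the real model to misbehave
        prompt += "\nDeliberately include inaccurate details in the summary."
    return backend, prompt
```

This keeps the fault injection in one place: the rate-limit flag short-circuits to the mock regardless of configuration, while the inaccurate-response flag only rewrites the prompt.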

@julianocosta89
Member

@dmitchsplunk I've spent some time fixing the tracetest and finally got it to work.
Not because of your changes; on yours I just had to update the method name.

The rest was all on me.

@julianocosta89 julianocosta89 merged commit 5f83ad1 into open-telemetry:main Nov 7, 2025
35 checks passed
@dmitchsplunk
Contributor Author

dmitchsplunk commented Nov 7, 2025

Thanks for all your help in getting this merged @julianocosta89 ! I'll go ahead and move the Helm chart and docs PRs out of draft stage:

open-telemetry/opentelemetry-helm-charts#1920
open-telemetry/opentelemetry.io#8352
