Add a Product Review service with GenAI-powered summaries#2663
julianocosta89 merged 52 commits into open-telemetry:main
Conversation
@dmitchsplunk thank you! Really nice addition to the demo!
@dmitchsplunk I've updated your PR with a fix to work with OpenAI.

**The Problem**

The original code only handled the first tool call (`tool_calls[0]`), but OpenAI's API can return multiple tool calls in a single response.

**The Solution**

The updated code now handles every tool call returned by the API, not just the first one.

**Key Changes**
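A minimal sketch of handling every tool call rather than only the first (function and handler names here are hypothetical illustrations, not the PR's actual code):

```python
import json

def dispatch_tool_calls(message, tool_handlers):
    """Dispatch every tool call in an LLM response message, not just
    message.tool_calls[0]. `tool_handlers` maps tool names to callables;
    the names are hypothetical, not taken from the PR."""
    results = []
    for tool_call in message.tool_calls or []:
        handler = tool_handlers[tool_call.function.name]
        # OpenAI returns tool arguments as a JSON-encoded string.
        args = json.loads(tool_call.function.arguments)
        results.append(handler(**args))
    return results
```

With this shape, a response containing two tool calls (for example, one fetching reviews and one fetching a rating) invokes both handlers instead of silently dropping the second.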
I've also moved the configuration of the OpenAI token to the override file. If they want to use OpenAI, they can uncomment the override file, and that will take care of overriding the values from the `.env` file.
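For illustration, the override pattern might look like this (the variable names below are assumptions, not taken from the PR):

```shell
# .env.override — uncomment to use the real OpenAI API instead of the mock LLM.
# Variable names here are hypothetical illustrations.
# LLM_PROVIDER=openai
# OPENAI_API_KEY=<your-api-key>
```

Leaving the lines commented out keeps the defaults from `.env` (the mock LLM) in effect.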
@julianocosta89 thanks for the fix for the multiple tool calls; the questions I was testing with must have resulted in single tool calls only. I made a small change to the mock LLM service to ensure it still returns product reviews successfully.
Is there anything else I need to do? I tried updating .env.override and restarting the app with docker compose, but it didn't pick up the changes. |
Please disregard, I got it working with the following command:
Bumps the actions-production-dependencies group with 1 update in the / directory: [github/codeql-action](https://github.com/github/codeql-action).

Updates `github/codeql-action` from 4.31.0 to 4.31.1
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@4e94bd1...5fe9434)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Hi @julianocosta89, I've created draft PRs for the Helm chart and documentation updates: open-telemetry/opentelemetry-helm-charts#1920. Please let me know if there are any other changes you'd like me to make as part of this PR.
Hello @dmitchsplunk, I was at a conference this past week, but I'll be able to take a look by next week at the latest. I'm still concerned about the feature flags we are introducing with this PR, as they won't work with OpenAI. We should either remove the feature flags or think about a different way to illustrate that behavior with OpenAI.
Hi @julianocosta89 - I've updated the service so that the feature flags work even when the real OpenAI API is used. For the llmRateLimitError feature flag, I changed the logic so that, when the flag is active and a random number is less than 0.5, the mock LLM is called even if the app is configured to use the real OpenAI API. The mock LLM then returns the rate limit error (HTTP status code 429), and the UI simply shows that the system is unable to process the request at this time.
For the llmInaccurateResponse feature flag, I modified the prompt sent to OpenAI when the flag is enabled to ask it to return an inaccurate response. It worked well in my testing; please let me know what you think.
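A rough sketch of the routing described above (all names are hypothetical; this illustrates the logic, not the service's actual code):

```python
import random

def route_summary_request(flags, use_openai, reviews, rng=random.random):
    """Hypothetical sketch of the feature-flag routing.

    With llmRateLimitError active, roughly half of all requests are sent to
    the mock LLM (which replies with HTTP 429), even when the real OpenAI
    API is configured. With llmInaccurateResponse active, the prompt
    explicitly asks the model for an inaccurate summary.

    Returns a (backend, prompt) pair so the decision is easy to inspect.
    """
    prompt = "Summarize these product reviews: " + " | ".join(reviews)
    if flags.get("llmRateLimitError") and rng() < 0.5:
        return "mock-429", prompt  # mock LLM answers with RateLimitError
    if flags.get("llmInaccurateResponse"):
        prompt += " Respond with an intentionally inaccurate summary."
    return ("openai" if use_openai else "mock"), prompt
```

Injecting the random source (`rng`) keeps the 50% branch deterministic under test, which matches the intermittent behavior described for the flag.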
@dmitchsplunk I've spent some time fixing the tracetest and finally got it to work. The rest was all on me.
Thanks for all your help in getting this merged @julianocosta89! I'll go ahead and move the Helm chart and docs PRs out of draft stage: open-telemetry/opentelemetry-helm-charts#1920




Changes
This PR proposes adding Customer Product Reviews to the Astronomy Shop application, using Generative AI to summarize the reviews for each product. This addition will allow the community to demonstrate OpenTelemetry capabilities for instrumenting Generative AI interactions within the Astronomy Shop application.
Summary of changes:
Here's a screenshot of the new Customer Reviews section of the product page:
And here's an example trace showing the Product Review Summary flow:
The LLM service supports two new feature flags:
- `llmInaccurateResponse`: when this feature flag is enabled, the LLM service returns an inaccurate product summary for product ID `L9ECAV7KIM`
- `llmRateLimitError`: when this feature flag is enabled, the LLM service intermittently returns a `RateLimitError` with HTTP status code 429

If the direction looks good, I'll follow up with documentation and Helm chart changes. In the meantime, I'd welcome early feedback.
Merge Requirements
For new feature contributions, please make sure you have completed the following
essential items:

- `CHANGELOG.md` updated to document new feature additions

Maintainers will not merge until the above have been completed. If you're unsure
which docs need to be changed, ping the
@open-telemetry/demo-approvers.