GH-3786: Remove ProducerRecord duplicated traceparent header #3789
Conversation
…header

Fixes: spring-projects#3786
Issue link: spring-projects#3786

When tracing is enabled, the `KafkaRecordSenderContext` was adding a new `traceparent` header without removing existing ones, resulting in multiple `traceparent` headers in the same record. This commit fixes the issue by updating `KafkaRecordSenderContext` to remove existing `traceparent` headers before adding new ones.

**Auto-cherry-pick to `3.3.x` & `3.2.x`**

Signed-off-by: Soby Chacko <[email protected]>
Some review.
Thanks
```java
super((carrier, key, value) -> {
	Headers headers = record.headers();
	// For traceparent context headers, ensure there's only one
	if ("traceparent".equals(key)) {
```
This is not correct.
Different vendors provide different header names.
The one I know of so far is here: https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis/blob/main/spring-cloud-stream-binder-kinesis/src/main/java/org/springframework/cloud/stream/binder/kinesis/KinesisMessageChannelBinder.java#L575-L577.
But that does not mean something else custom could not be provided from the context supplier.
I think it is totally safe to replace whatever key the supplier asks for.
The point is to not mislead with an extra entry left over from a previous observation.
We have to remove here whatever key comes from the supplier.
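The suggestion above can be sketched as a key-agnostic setter: instead of special-casing `"traceparent"`, remove any existing header for whichever key the supplier provides, then add the new value. The `SimpleHeaders` class below is a hypothetical stand-in for Kafka's `org.apache.kafka.common.header.Headers`, used only to keep the sketch self-contained; it is not the actual API the PR touches.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for Kafka's Headers, for illustration only.
class SimpleHeaders {

	private final Map<String, List<byte[]>> headers = new LinkedHashMap<>();

	void add(String key, byte[] value) {
		this.headers.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
	}

	void remove(String key) {
		this.headers.remove(key);
	}

	int count(String key) {
		return this.headers.getOrDefault(key, List.of()).size();
	}

}

public class SenderContextSketch {

	// Key-agnostic setter: works for "traceparent", a vendor-specific
	// propagation key, or anything custom the context supplier provides.
	static void setHeader(SimpleHeaders headers, String key, String value) {
		headers.remove(key); // drop a stale entry from a previous observation
		headers.add(key, value.getBytes(StandardCharsets.UTF_8));
	}

	public static void main(String[] args) {
		SimpleHeaders headers = new SimpleHeaders();
		headers.add("traceparent", "00-old".getBytes(StandardCharsets.UTF_8));
		setHeader(headers, "traceparent", "00-new");
		System.out.println(headers.count("traceparent")); // prints 1
	}

}
```

Because the removal is keyed on whatever the supplier passes in, the same code handles any propagation format without hardcoding header names.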
```java
Iterable<Header> traceparentHeaders = record.headers().headers("traceparent");
List<String> headerValues = StreamSupport.stream(traceparentHeaders.spliterator(), false)
		.map(header -> new String(header.value(), StandardCharsets.UTF_8))
		.collect(Collectors.toList());
```
There is no reason for this. Just `.toList()` is enough.
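The reviewer's point: since Java 16, `Stream.toList()` does the job of `.collect(Collectors.toList())` without the extra `Collectors` import, returning an unmodifiable list. A minimal sketch:

```java
import java.util.List;
import java.util.stream.Stream;

public class ToListSketch {

	public static void main(String[] args) {
		// Stream.toList() (Java 16+) replaces collect(Collectors.toList())
		// and returns an unmodifiable List.
		List<String> values = Stream.of("00-aaa", "00-bbb").toList();
		System.out.println(values); // prints [00-aaa, 00-bbb]
	}

}
```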
```java
}

// https://github.com/spring-cloud/spring-cloud-stream/issues/3095#issuecomment-2707075861
// https://github.com/spring-projects/spring-kafka/issues/3786
```
I don't think there is a reason for these comments.
We can always find that out from the commit history.
```java
super((carrier, key, value) -> {
	Headers headers = record.headers();
	Iterable<Header> existingHeaders = headers.headers(key);
	if (existingHeaders.iterator().hasNext()) {
```
Why do we need this check?
Why is a plain `remove()` not enough?
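The check is redundant because removing an absent key is simply a no-op, so there is nothing to guard against. The snippet below illustrates the semantics with a plain `Map` (an analogy for the remove-is-safe behavior, not the Kafka `Headers` API itself):

```java
import java.util.HashMap;
import java.util.Map;

public class RemoveNoOp {

	public static void main(String[] args) {
		Map<String, byte[]> headers = new HashMap<>();
		// Removing a key that is not present does nothing and throws nothing,
		// so there is no need to test for existence first.
		headers.remove("traceparent");
		System.out.println(headers.isEmpty()); // prints true
	}

}
```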
Auto-cherry-pick has failed.