Propagate opentelemetry spans across remote connections. #74
rread wants to merge 1 commit into n0-computer:main from
Conversation
Thanks! The feature sounds useful.
- With this PR the wire format of a protocol changes based on feature flags for `irpc`. We can't do this, unfortunately: you would never want your wire messages to change because some dependency enabled a feature on `irpc`, which will, due to feature unification, affect all crates in the workspace dependency tree. So we need a different approach here. I'll have to do some thinking about how best to do this.
- The PR makes the code generation dependent on feature flags on the proc macro crate. I think this is a no-go: due to feature unification you can never know for sure if and where a feature got enabled. So this should be based on configuration via attributes on the `rpc_requests` macro instead (see the sketch after this list).
- The opentelemetry integration should be behind a separate feature, `use-tracing-opentelemetry` or similar.
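For illustration, a hypothetical sketch of what such a per-service opt-in could look like; the protocol types and the exact placement of a `span_propagation` flag inside the `rpc_requests` attribute are assumptions for this example, not the macro's confirmed syntax:

```rust
// Hypothetical sketch only: the argument names and their placement in the
// rpc_requests attribute are assumed for illustration, not taken from the
// final macro API. The point is that span propagation is chosen where the
// protocol is defined, instead of via a cargo feature that feature
// unification could switch on for the whole dependency tree.
use irpc::{channel::oneshot, rpc_requests};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct Get {
    key: String,
}

// `span_propagation` would enable the extra wire field only for this service.
#[rpc_requests(message = StoreMessage, span_propagation)]
#[derive(Debug, Serialize, Deserialize)]
enum StoreProtocol {
    #[rpc(tx = oneshot::Sender<Option<String>>)]
    Get(Get),
}
```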
Thanks for your review. I updated this to make OpenTelemetry span propagation opt-in per service instead of being globally enabled via feature flags.

Changes:
Adds a new `rpc_request` feature `span_propagation` and a new crate feature `use-tracing-opentelemetry`. When enabled, the current span context is extracted and serialized with the message, deserialized in `read_request_raw()`, and saved in thread-local storage. The generated `RemoteService` creates a new span based on the message variant name and sets the new span's parent to the extracted context.
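As a rough illustration of the mechanism (a sketch using the stock `opentelemetry` and `tracing-opentelemetry` APIs, not the code generated by the macro), the sending side injects the current span context into a serializable carrier map, and the receiving side extracts it and parents a new span on it:

```rust
use std::collections::HashMap;

use opentelemetry::global;
use tracing_opentelemetry::OpenTelemetrySpanExt;

/// Capture the current span's OpenTelemetry context into serializable
/// key/value pairs that can travel alongside the request message.
fn inject_span_context() -> HashMap<String, String> {
    let cx = tracing::Span::current().context();
    let mut carrier = HashMap::new();
    global::get_text_map_propagator(|propagator| propagator.inject_context(&cx, &mut carrier));
    carrier
}

/// On the receiving side, rebuild the remote context and create a span named
/// after the message variant, with the remote caller's span as its parent.
fn request_span(variant_name: &str, carrier: &HashMap<String, String>) -> tracing::Span {
    let parent_cx = global::get_text_map_propagator(|propagator| propagator.extract(carrier));
    let span = tracing::info_span!("request", otel.name = variant_name);
    span.set_parent(parent_cx);
    span
}
```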
Fixed the CI errors.
Any chance this can make the next release? I'd like to use this to get distributed traces of my app.
I will let @Frando push the button. But thanks a lot, this is very neat. |
Thanks for your continued work on this. The impl now looks much better!
- I'm wondering if the thread-local storage based approach is correct? irpc is usually run in a multi-threaded tokio runtime, so things can move between threads. Wouldn't that mean that the span context can end up being stored on a different thread than where it's being read from? I'll think a bit about whether we can pass the span context without a thread local; that would be preferable IMO (see the sketch after this list).
- `irpc-iroh` should also get the `use-tracing-opentelemetry` feature; if enabled, it should propagate the feature to `irpc`.
- I think we can rename the feature flag to just `tracing-opentelemetry` if we set `resolver = 2` in the `Cargo.toml`. Edit: I think we need to switch to edition 2024 for that.
- Please add docs for the `span_propagation` flag to the docs of the `rpc_requests` macro (in irpc/src/lib.rs).
- We should add an end-to-end test to verify this all works as intended, and so that we don't break the feature accidentally in the future.
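Regarding the first point, one possible direction (an illustrative sketch, not code from this PR) is to scope the extracted context to the request's task rather than to the OS thread, for example with a tokio task-local, so it survives the task being moved between worker threads:

```rust
use std::future::Future;

use opentelemetry::Context;

tokio::task_local! {
    // Stored per task, not per OS thread, so it follows the request across
    // await points even when the runtime moves the task to another worker.
    static REMOTE_PARENT_CX: Context;
}

/// Run `fut` with the extracted remote parent context available to any code
/// it awaits, via REMOTE_PARENT_CX.with(|cx| ...).
async fn with_remote_parent<F, T>(parent_cx: Context, fut: F) -> T
where
    F: Future<Output = T>,
{
    REMOTE_PARENT_CX.scope(parent_cx, fut).await
}
```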
I tried to get this to work, but did not succeed. Maybe I don't know enough about the interaction between tracing and opentelemetry. I pushed a branch that contains a test example and adds more debug logging. If you check out this branch, I would have expected to get the … Or did I get something wrong about what this feature enables?
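One general point worth checking in such a setup (stated as an assumption about typical tracing/opentelemetry wiring, not as a diagnosis of the test branch): a global text map propagator has to be installed, otherwise inject/extract are effectively no-ops and received requests start fresh, unparented traces:

```rust
use opentelemetry::global;
use opentelemetry_sdk::propagation::TraceContextPropagator;

fn init_propagation() {
    // The default global propagator propagates nothing; installing the W3C
    // trace-context propagator once at startup is required on both the
    // sending and the receiving side for span contexts to round-trip.
    global::set_text_map_propagator(TraceContextPropagator::new());
}
```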
Thanks for the feedback, I'll work on the suggestions and take a look at your test. IIRC resolver 2 only requires edition 2021.
The current span context is extracted and serialized with the message, deserialized in `read_request_raw()`, and saved in thread-local storage. The generated `RemoteService` now creates a new span based on the message variant name and sets the new span's parent to the extracted context.
Fixes #71