Replies: 1 comment
Thanks for raising this idea! Indeed, this is currently not possible with our SDKs. Tail sampling is usually achieved in OpenTelemetry via a collector that stages all OTel spans and applies its own sampling rules to the span batch prior to exporting: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor
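As an illustrative sketch (not an official Langfuse recommendation), a collector configuration using that tail_sampling processor could look roughly like this; the policy names, the 10% rate, and the exporter endpoint are placeholders:

```yaml
receivers:
  otlp:
    protocols:
      http: {}

processors:
  tail_sampling:
    # Hold the spans of a trace for up to 10s before making the sampling decision.
    decision_wait: 10s
    policies:
      # Always keep traces that contain at least one error span.
      - name: keep-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      # Keep roughly 10% of all other traces.
      - name: sample-the-rest
        type: probabilistic
        probabilistic: {sampling_percentage: 10}

exporters:
  otlphttp:
    # Placeholder endpoint: point this at your OTLP-compatible backend.
    endpoint: https://example.com/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlphttp]
```

With separate policies, a trace is kept if any policy matches, so error traces are always exported while the remainder is sampled probabilistically.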
Describe the feature or potential improvement
From what I can tell, there is no way to implement tail sampling for Langfuse, or at least not with the Langfuse Python SDK.
Motivation
The OpenTelemetry docs page on tail sampling does a better job of explaining the motivation than I can, but I'll give a short version below:
It would be very useful to be able to decide dynamically, i.e. while the traced code is executing, whether a trace should be sent to Langfuse or dropped. One obvious example would be guaranteeing that all traces for failed requests are kept, while still using a sampling rate below 1 to keep costs down.
The way sampling currently works, you can only set a static sampling rate that is applied to all traces. When an error occurs, the trace containing it has only been sent to Langfuse with probability equal to your sampling rate. You effectively have to weigh the cost of a high sampling rate against the probability of catching crucial errors.
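For illustration, this is roughly what the current static sampling looks like with the Python SDK, assuming the documented sample_rate constructor parameter (the 0.1 value is arbitrary):

```python
from langfuse import Langfuse

# Static head sampling: roughly 10% of traces are kept, decided up front,
# regardless of whether a trace later turns out to contain an error.
langfuse = Langfuse(sample_rate=0.1)
```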
Additional information
No response