Support in Prompt Experiment to automatically resolve input media references #7823
Replies: 4 comments 1 reply
- Thanks for raising this! Improved multimodal support across our product is on our near-term roadmap, including the idea you raised here.
- We also very much need this capability.
- I'm also really missing this capability for evaluating prompts that extract information from or classify images.
- Honestly, I find this very surprising and frustrating!
Describe the feature or potential improvement
Experiments with media inputs (images in our case) can currently only be run programmatically, because Langfuse does not automatically resolve media reference tokens (such as @@@langfuseMedia:type=image/png|id=...@@@) into base64 data or raw image content in the input passed to the LLM.
Our use cases are all image-input based. We built a Jupyter notebook UI similar to the New Experiment one; however, it would be great to have this experimentation in Langfuse as well.
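For context, this is roughly what we do in our notebook today: a minimal sketch (not the official Langfuse API) that scans a prompt input for media reference tokens and replaces them with base64 data URIs before the input is sent to the LLM. `fetch_media_base64` and the placeholder bytes are hypothetical stand-ins for whatever actually retrieves the media content.

```python
import base64
import re

# Matches tokens like @@@langfuseMedia:type=image/png|id=abc123@@@
# (any extra |key=value fields after the id are ignored).
MEDIA_TOKEN = re.compile(
    r"@@@langfuseMedia:type=(?P<type>[^|@]+)\|id=(?P<id>[^|@]+)(?:\|[^@]*)?@@@"
)


def fetch_media_base64(media_id: str) -> str:
    """Hypothetical helper: return the media content for `media_id` base64-encoded.

    In a real setup this would download the bytes from Langfuse (or your own
    storage); here it returns placeholder bytes so the sketch runs end to end.
    """
    placeholder_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for the actual image bytes
    return base64.b64encode(placeholder_bytes).decode("ascii")


def resolve_media_references(text: str) -> str:
    """Replace every media reference token in `text` with a base64 data URI."""

    def _replace(match: re.Match) -> str:
        content_type = match.group("type")
        media_id = match.group("id")
        return f"data:{content_type};base64,{fetch_media_base64(media_id)}"

    return MEDIA_TOKEN.sub(_replace, text)


# Example: a dataset item input as stored in Langfuse
prompt_input = "Classify this screenshot: @@@langfuseMedia:type=image/png|id=abc123@@@"
print(resolve_media_references(prompt_input))
```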
Additional information
No response