# feat: add OpenAI-compatible agent chat completions interface #108
## Summary
This PR adds OpenAI SDK compatibility to the VLMRun Node SDK, allowing users to access agent chat completions using the familiar OpenAI API. The implementation mirrors PR #128 from the Python SDK.
**Key changes:**

- Added a `client.agent.completions` getter that returns OpenAI's `chat.completions` object configured for VLMRun's agent endpoint
- Added `openai` as an optional peer dependency (`npm install openai`)

**Usage:**
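A minimal sketch of the intended call pattern (the `VlmRun` import name and constructor options are assumptions, not verified against the published SDK; the model name and `baseURL` come from the checklist below):

```typescript
import { VlmRun } from "vlmrun"; // assumed package/class name

const client = new VlmRun({
  apiKey: process.env.VLMRUN_API_KEY!,
  baseURL: "https://agent.vlm.run/v1",
});

// client.agent.completions is OpenAI's chat.completions object,
// pointed at VLMRun's agent endpoint.
const response = await client.agent.completions.create({
  model: "vlmrun-orion-1",
  messages: [{ role: "user", content: "What can this agent do?" }],
});

console.log(response.choices[0].message.content);
```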
## Review & Testing Checklist for Human
- **Verify backend `/openai` endpoint exists and works** - The implementation assumes `{base_url}/openai` supports OpenAI's chat.completions API. This has NOT been tested against a real backend, only with mocks. Test with `baseURL: "https://agent.vlm.run/v1"` and verify chat completions actually work.
- **Confirm model name is correct** - Examples use `model: "vlmrun-orion-1"`. Verify this is the correct model identifier for the agent endpoint.
- **Test streaming functionality** - Verify `client.agent.completions.create(..., stream: true)` works correctly with the backend.
- **Verify dynamic require works in ESM** - The implementation uses `require("openai").default` for dynamic import. Test that this works in both CommonJS and ESM environments (see the sketch after this list).
**Recommended test plan:**

1. Install the optional peer dependency: `npm install openai`
2. Construct the client with `baseURL: "https://agent.vlm.run/v1"` and a valid API key
3. Call `client.agent.completions.create()` with a simple message and verify the response
4. Repeat with `stream: true` and verify chunks are received correctly (steps 3-4 are sketched below)
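Steps 3 and 4 in code form (illustrative; assumes a `client` constructed as in the Usage sketch above):

```typescript
// Step 3: plain completion
const reply = await client.agent.completions.create({
  model: "vlmrun-orion-1",
  messages: [{ role: "user", content: "Say hello." }],
});
console.log(reply.choices[0].message.content);

// Step 4: streaming — chunks should follow OpenAI's delta format
const stream = await client.agent.completions.create({
  model: "vlmrun-orion-1",
  messages: [{ role: "user", content: "Say hello, slowly." }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```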
## Notes

- The `completions` getter caches the underlying OpenAI client (the TypeScript analogue of the Python SDK's `@cached_property`)
- The getter is typed as `any` to avoid requiring openai types at compile time
- The `npm audit` failures for `glob` and `js-yaml` vulnerabilities exist on the `main` branch and are not introduced by this PR

Session: https://app.devin.ai/sessions/c5705e1a09884d61b03c8f236c6d555d
Requested by: [email protected]