@devin-ai-integration devin-ai-integration bot commented Nov 25, 2025

feat: add OpenAI-compatible agent chat completions interface

Summary

This PR adds OpenAI SDK compatibility to the VLMRun Node SDK, allowing users to access agent chat completions using the familiar OpenAI API. The implementation mirrors PR #128 from the Python SDK.

Key changes:

  • Added client.agent.completions getter that returns OpenAI's chat.completions object configured for VLMRun's agent endpoint
  • Added openai as an optional peer dependency (npm install openai)
  • Added 4 unit tests for the completions functionality
  • Updated README with usage examples including streaming

Usage:

import { VlmRun } from "vlmrun";

const client = new VlmRun({
  apiKey: "your-key",
  baseURL: "https://agent.vlm.run/v1"
});

const response = await client.agent.completions.create({
  model: "vlmrun-orion-1",
  messages: [{ role: "user", content: "Hello!" }]
});
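Streaming (also covered in the README) returns an async iterable of chunks when stream: true is passed. A minimal sketch of the consumption pattern, using a stubbed chunk source in place of a live call (ChatChunk, fakeStream, and collect are illustrative names, not SDK exports):

```typescript
// Shape of a streamed chat-completion chunk, reduced to the fields used below.
type ChatChunk = { choices: { delta: { content?: string } }[] };

// Stand-in for the async iterable that create({ ..., stream: true })
// would return; a live backend call is not assumed here.
async function* fakeStream(): AsyncIterable<ChatChunk> {
  for (const piece of ["Hel", "lo", "!"]) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Accumulate the streamed deltas into the full assistant message.
async function collect(stream: AsyncIterable<ChatChunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```

With the real SDK, fakeStream() would be replaced by the awaited result of client.agent.completions.create({ ..., stream: true }).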

Review & Testing Checklist for Human

⚠️ CRITICAL - Backend Integration Not Verified:

  • Verify backend /openai endpoint exists and works - The implementation assumes {base_url}/openai supports OpenAI's chat.completions API. This has NOT been tested against a real backend, only with mocks. Test with baseURL: "https://agent.vlm.run/v1" and verify chat completions actually work.

  • Confirm model name is correct - Examples use model: "vlmrun-orion-1". Verify this is the correct model identifier for the agent endpoint.

  • Test streaming functionality - Verify client.agent.completions.create({ ..., stream: true }) works correctly with the backend.

  • Verify dynamic require works in ESM - The implementation loads the package lazily via require("openai").default. Confirm this resolves correctly in both CommonJS and ESM environments; ESM consumers may not have require in scope, and whether the constructor sits under .default depends on how the package was built.
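The CJS/ESM concern above reduces to module interop: depending on how openai was built and how the consumer loads it, require("openai") may return the constructor itself or a namespace object with the constructor under .default. A defensive unwrapping pattern, sketched here with a stand-in class rather than the real package:

```typescript
// Stand-in for what require("openai") can hand back. A CJS build may
// export the constructor directly; an ESM-transpiled build typically
// wraps it as { default: constructor }.
class FakeOpenAI {
  constructor(public opts: { apiKey: string }) {}
}

const cjsShape: any = FakeOpenAI;              // module.exports = OpenAI
const esmShape: any = { default: FakeOpenAI }; // exports.default = OpenAI

// Interop helper: unwrap .default when present, else use the module as-is.
function resolveCtor(mod: any): new (opts: { apiKey: string }) => unknown {
  return mod && mod.default ? mod.default : mod;
}
```

The unwrapping handles both shapes; for ESM contexts where require is not defined at all, createRequire from node:module is the usual fallback.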

Recommended test plan:

  1. Install the SDK in a test project with openai installed
  2. Initialize client with baseURL: "https://agent.vlm.run/v1" and a valid API key
  3. Call client.agent.completions.create() with a simple message and verify response
  4. Test streaming with stream: true and verify chunks are received correctly

Notes

  • All 183 unit tests pass (including 4 new tests for completions)
  • Build passes successfully
  • The completions getter uses lazy initialization with caching (similar to Python's @cached_property)
  • Return type is any to avoid requiring openai types at compile time
  • CI security check failure is a pre-existing issue on main - The npm audit failures for glob and js-yaml vulnerabilities exist on the main branch and are not introduced by this PR
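The lazy initialization with caching noted above can be sketched as a plain TypeScript getter. Everything below is illustrative (buildCompletions stands in for wiring up OpenAI's chat.completions against the agent endpoint), not the SDK's actual internals:

```typescript
// Sketch of a lazily-initialized, cached getter, mirroring Python's
// @cached_property: the expensive construction runs on first access
// and every later access returns the same cached object.
let constructions = 0;

function buildCompletions() {
  constructions++; // the expensive part: should run once per Agent
  return { create: (req: object) => JSON.stringify(req) };
}

class Agent {
  private _completions?: ReturnType<typeof buildCompletions>;

  get completions() {
    if (!this._completions) {
      this._completions = buildCompletions();
    }
    return this._completions;
  }
}
```

This keeps openai out of the constructor path, so users who never touch client.agent.completions never need the optional dependency installed.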

Session: https://app.devin.ai/sessions/c5705e1a09884d61b03c8f236c6d555d
Requested by: [email protected]

- Add completions getter to Agent class that returns OpenAI chat.completions
- Add openai as optional peer dependency
- Add unit tests for completions functionality
- Update README with usage examples for chat completions and streaming

Co-Authored-By: [email protected] <[email protected]>

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.



Note on CI Security Check Failure

The security job is failing due to npm audit --audit-level moderate detecting 2 vulnerabilities:

  1. glob 10.2.0 - 10.4.5 (high severity) - Command injection via -c/--cmd
  2. js-yaml <3.14.2 (moderate severity) - Prototype pollution in merge

This is a pre-existing issue on the main branch - I verified by checking out main and running npm audit --audit-level moderate, which shows the exact same 2 vulnerabilities. These are transitive dependencies from the existing tooling (Jest, ts-jest, etc.), not from the openai package I added.

I've also removed the yarn.lock file that was accidentally added, since the repo uses npm (package-lock.json).

The build and test jobs are being skipped because they depend on the security job passing. The actual feature implementation is complete and all 183 unit tests pass locally.

Suggested resolution options:

  1. Fix the vulnerabilities in a separate PR by running npm audit fix and testing the dependency updates
  2. Temporarily adjust the CI security check to allow the PR to proceed, then address the vulnerabilities separately
  3. Merge this PR and address the security issues in a follow-up

Let me know how you'd like to proceed!

- Introduced jest.e2e.config.js for end-to-end testing setup.
- Updated package.json to include a new script for running end-to-end tests.
- Added chat-completions.test.ts to implement end-to-end tests for the chat completions functionality.
- Updated CONTRIBUTING.md with instructions for running end-to-end tests.
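A separate Jest config for end-to-end runs usually takes a shape like the following. This is a hypothetical sketch (the test path, preset, and timeout are assumptions), not the jest.e2e.config.js committed in this PR:

```javascript
// jest.e2e.config.js (illustrative): isolate e2e tests from the unit
// suite so `npm test` stays fast and offline.
module.exports = {
  preset: "ts-jest",
  testEnvironment: "node",
  // Only pick up files under a dedicated e2e directory.
  testMatch: ["**/tests/e2e/**/*.test.ts"],
  // Live API calls can be slow; allow a generous per-test timeout.
  testTimeout: 60000,
};
```

The matching package.json script would then point Jest at this config, e.g. "test:e2e": "jest --config jest.e2e.config.js".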
@shahrear33 shahrear33 merged commit 5b6442c into main Nov 25, 2025
3 checks passed
