Merged · Changes from 6 commits
3 changes: 3 additions & 0 deletions package.json
@@ -73,5 +73,8 @@
"onlyBuiltDependencies": [
"@nestjs/core"
]
},
"dependencies": {
"tslib": "^2.8.1"
}
}
171 changes: 72 additions & 99 deletions src/HttpLlm.ts
@@ -16,25 +16,20 @@ import { ILlmSchema } from "./structures/ILlmSchema";
import { LlmDataMerger } from "./utils/LlmDataMerger";

/**
- * LLM function calling application composer from OpenAPI document.
+ * LLM function calling application composer from OpenAPI documents.
*
- * `HttpLlm` is a module for composing LLM (Large Language Model) function
- * calling application from the {@link OpenApi.IDocument OpenAPI document}, and
- * also for LLM function call execution and parameter merging.
+ * `HttpLlm` is a module for converting OpenAPI documents into LLM (Large Language Model)
+ * function calling applications. It handles schema conversion, function execution, and
+ * parameter merging for AI-powered API interactions.
*
- * At first, you can construct the LLM function calling application by the
- * {@link HttpLlm.application HttpLlm.application()} function. And then the LLM
- * has selected a {@link IHttpLlmFunction function} to call and composes its
- * arguments, you can execute the function by
- * {@link HttpLlm.execute HttpLlm.execute()} or
- * {@link HttpLlm.propagate HttpLlm.propagate()}.
+ * **Core workflow:**
+ * 1. Convert OpenAPI document to LLM application using {@link HttpLlm.application}
+ * 2. LLM selects and composes arguments for a {@link IHttpLlmFunction function}
+ * 3. Execute the function using {@link HttpLlm.execute} or {@link HttpLlm.propagate}
*
- * By the way, if you have configured the
- * {@link IHttpLlmApplication.IOptions.separate} option to separate the
- * parameters into human and LLM sides, you can merge these human and LLM sides'
- * parameters into one through
- * {@link HttpLlm.mergeParameters HttpLlm.mergeParameters()} before the actual
- * LLM function call execution.
+ * **Parameter separation:** If you configure {@link IHttpLlmApplication.IOptions.separate}
+ * to separate parameters between human and LLM sides, use {@link HttpLlm.mergeParameters}
+ * to combine them before execution.
*
* @author Jeongho Nam - https://github.com/samchon
*/
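A minimal sketch of the core workflow documented in the new comment, assuming the swagger JSON is reachable and that `"chatgpt"` is a valid `ILlmSchema.Model` value; treat it as illustrative of this diff's docs, not the definitive API.

```typescript
import { HttpLlm, IHttpLlmApplication, OpenApi } from "@samchon/openapi";

const main = async (): Promise<void> => {
  // 1. Load and normalize the OpenAPI document (Swagger v2 / OpenAPI v3 / v3.1).
  const document: OpenApi.IDocument = OpenApi.convert(
    await fetch("https://example.com/swagger.json").then((r) => r.json()),
  );

  // 2. Compose the LLM function calling application.
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // 3. The LLM would pick one of these functions and compose its arguments.
  for (const func of application.functions)
    console.log(func.method, func.path, func.name);
};
main().catch(console.error);
```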
@@ -65,27 +60,20 @@ export namespace HttpLlm {
/**
* Convert OpenAPI document to LLM function calling application.
*
- * Converts {@link OpenApi.IDocument OpenAPI document} or
- * {@link IHttpMigrateApplication migrated application} to the
- * {@link IHttpLlmApplication LLM function calling application}. Every
- * {@link OpenApi.IOperation API operations} in the OpenAPI document are
- * converted to the {@link IHttpLlmFunction LLM function} type, and they would
- * be used for the LLM function calling.
- *
- * If you have configured the {@link IHttpLlmApplication.IOptions.separate}
- * option, every parameters in the {@link IHttpLlmFunction} would be separated
- * into both human and LLM sides. In that case, you can merge these human and
- * LLM sides' parameters into one through {@link HttpLlm.mergeParameters}
- * before the actual LLM function call execution.
- *
- * Additionally, if you have configured the
- * {@link IHttpLlmApplication.IOptions.keyword} as `true`, the number of
- * {@link IHttpLlmFunction.parameters} are always 1 and the first parameter
- * type is always {@link ILlmSchemaV3.IObject}. I recommend this option because
- * LLM can understand the keyword arguments more easily.
- *
- * @param props Properties for composition
- * @returns LLM function calling application
+ * Transforms OpenAPI documents into LLM-compatible function calling applications.
+ * Each {@link OpenApi.IOperation API operation} becomes an {@link IHttpLlmFunction LLM function}
+ * that AI models can understand and invoke.
+ *
+ * **Parameter handling:**
+ * - **Separated mode:** When {@link IHttpLlmApplication.IOptions.separate} is enabled,
+ *   parameters are split between human and LLM sides. Use {@link HttpLlm.mergeParameters}
+ *   to combine them before execution.
+ * - **Keyword mode:** When {@link IHttpLlmApplication.IOptions.keyword} is `true`,
+ *   all parameters become a single {@link ILlmSchemaV3.IObject}. Recommended for
+ *   better LLM understanding.
+ *
+ * @param props Configuration properties for the conversion
+ * @returns LLM function calling application ready for AI interaction
*/
export const application = <Model extends ILlmSchema.Model>(
props: IApplicationProps<Model>,
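A sketch of the two options described above in props form. The `separate` predicate signature here is an assumption for illustration; verify it against `IHttpLlmApplication.IOptions` before relying on it.

```typescript
import { HttpLlm, ILlmSchemaV3, OpenApi } from "@samchon/openapi";

declare const document: OpenApi.IDocument; // an already-normalized document

const application = HttpLlm.application({
  model: "3.0",
  document,
  options: {
    // Keyword mode: one object parameter per function (ILlmSchemaV3.IObject).
    keyword: true,
    // Separated mode: schemas matching this (assumed) predicate go to the
    // human side; everything else stays on the LLM side.
    separate: (schema: ILlmSchemaV3): boolean =>
      "description" in schema && !!schema.description?.includes("secret"),
  },
});
```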
@@ -134,29 +122,21 @@
/**
* Execute the LLM function call.
*
- * `HttmLlm.execute()` is a function executing the target
- * {@link OpenApi.IOperation API endpoint} with with the connection information
- * and arguments composed by Large Language Model like OpenAI (+human
- * sometimes).
- *
- * By the way, if you've configured the
- * {@link IHttpLlmApplication.IOptions.separate}, so that the parameters are
- * separated to human and LLM sides, you have to merge these humand and LLM
- * sides' parameters into one through {@link HttpLlm.mergeParameters}
- * function.
- *
- * About the {@link IHttpLlmApplication.IOptions.keyword} option, don't worry
- * anything. This `HttmLlm.execute()` function will automatically recognize
- * the keyword arguments and convert them to the proper sequence.
- *
- * For reference, if the target API endpoinnt responds none 200/201 status,
- * this would be considered as an error and the {@link HttpError} would be
- * thrown. Otherwise you don't want such rule, you can use the
- * {@link HttpLlm.propagate} function instead.
- *
- * @param props Properties for the LLM function call
- * @returns Return value (response body) from the API endpoint
- * @throws HttpError when the API endpoint responds none 200/201 status
+ * Executes an {@link OpenApi.IOperation API endpoint} using connection information
+ * and arguments composed by an LLM (with optional human input).
+ *
+ * **Parameter handling:**
+ * - **Separated parameters:** If {@link IHttpLlmApplication.IOptions.separate} is enabled,
+ *   merge human and LLM parameters using {@link HttpLlm.mergeParameters} first.
+ * - **Keyword arguments:** Automatically handles {@link IHttpLlmApplication.IOptions.keyword}
+ *   format conversion.
+ *
+ * **Error handling:** Throws {@link HttpError} for non-200/201 status responses.
+ * For custom error handling, use {@link HttpLlm.propagate} instead.
+ *
+ * @param props Properties containing application, function, connection, and input
+ * @returns API response body on successful execution
+ * @throws HttpError when API returns non-200/201 status
*/
export const execute = <Model extends ILlmSchema.Model>(
props: IFetchProps<Model>,
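A sketch of the execute flow with the error rule just documented; `application`, `func`, and `args` are declared placeholders standing in for the composed application, the LLM-selected function, and its composed arguments.

```typescript
import {
  HttpError,
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
} from "@samchon/openapi";

declare const application: IHttpLlmApplication<"chatgpt">;
declare const func: IHttpLlmFunction<"chatgpt">;
declare const args: object;

const call = async (): Promise<void> => {
  try {
    const body: unknown = await HttpLlm.execute({
      application,
      function: func,
      connection: { host: "http://localhost:3000" },
      input: args,
    });
    console.log("response body:", body); // reached only on 200/201
  } catch (error) {
    if (error instanceof HttpError)
      console.error("API error:", error.status); // non-200/201 response
    else throw error; // network failure or similar
  }
};
```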
@@ -165,28 +145,22 @@
/**
* Propagate the LLM function call.
*
- * `HttmLlm.propagate()` is a function propagating the target
- * {@link OpenApi.IOperation API endpoint} with with the connection information
- * and arguments composed by Large Language Model like OpenAI (+human
- * sometimes).
- *
- * By the way, if you've configured the
- * {@link IHttpLlmApplication.IOptions.separate}, so that the parameters are
- * separated to human and LLM sides, you have to merge these humand and LLM
- * sides' parameters into one through {@link HttpLlm.mergeParameters}
- * function.
+ * Executes an {@link OpenApi.IOperation API endpoint} and returns the raw response
+ * regardless of HTTP status code. Unlike {@link HttpLlm.execute}, this method
+ * does not throw errors for non-200/201 responses.
 *
- * About the {@link IHttpLlmApplication.IOptions.keyword} option, don't worry
- * anything. This `HttmLlm.propagate()` function will automatically recognize
- * the keyword arguments and convert them to the proper sequence.
+ * **Parameter handling:**
+ * - **Separated parameters:** If {@link IHttpLlmApplication.IOptions.separate} is enabled,
+ *   merge human and LLM parameters using {@link HttpLlm.mergeParameters} first.
+ * - **Keyword arguments:** Automatically handles {@link IHttpLlmApplication.IOptions.keyword}
+ *   format conversion.
 *
- * For reference, the propagation means always returning the response from the
- * API endpoint, even if the status is not 200/201. This is useful when you
- * want to handle the response by yourself.
+ * **Use case:** Ideal when you need custom error handling or want to process
+ * all HTTP status codes manually.
 *
- * @param props Properties for the LLM function call
- * @returns Response from the API endpoint
- * @throws Error only when the connection is failed
+ * @param props Properties containing application, function, connection, and input
+ * @returns Complete HTTP response including status and headers
+ * @throws Error only when network connection fails
*/
export const propagate = <Model extends ILlmSchema.Model>(
props: IFetchProps<Model>,
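The same call through propagate, reusing the placeholders from the execute sketch; every HTTP status comes back as a value, and the status/headers/body shape follows the docs above.

```typescript
// Propagation: no throw on 4xx/5xx — the caller inspects the status itself.
const response = await HttpLlm.propagate({
  application,
  function: func,
  connection: { host: "http://localhost:3000" },
  input: args,
});
if (response.status === 200 || response.status === 201)
  console.log("success:", response.body);
else console.warn("handled manually:", response.status, response.headers);
```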
@@ -208,38 +182,37 @@
}

/**
- * Merge the parameters.
+ * Merge separated parameters.
*
- * If you've configured the {@link IHttpLlmApplication.IOptions.separate}
- * option, so that the parameters are separated to human and LLM sides, you
- * can merge these humand and LLM sides' parameters into one through this
- * `HttpLlm.mergeParameters()` function before the actual LLM function call
- * wexecution.
+ * Combines human and LLM composed parameters into a single object when
+ * {@link IHttpLlmApplication.IOptions.separate} mode is enabled.
*
- * On contrary, if you've not configured the
- * {@link IHttpLlmApplication.IOptions.separate} option, this function would
- * throw an error.
+ * **Usage scenario:** When parameters are separated between human and LLM sides,
+ * use this function before calling {@link HttpLlm.execute} or {@link HttpLlm.propagate}.
*
- * @param props Properties for the parameters' merging
- * @returns Merged parameter values
+ * **Error condition:** Throws an error if {@link IHttpLlmApplication.IOptions.separate}
+ * was not configured during application creation.
+ *
+ * @param props Configuration with function metadata and separated parameters
+ * @returns Merged parameter object ready for function execution
+ * @throws Error when separation mode was not enabled
*/
export const mergeParameters = <Model extends ILlmSchema.Model>(
props: IMergeProps<Model>,
): object => LlmDataMerger.parameters(props);
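A hedged sketch of the merge step; the `llm`/`human` prop names are assumptions inferred from the separated-parameters concept, so check `IMergeProps<Model>` for the actual shape.

```typescript
// Assumed prop names (`llm`, `human`) — verify against IMergeProps<Model>.
const input: object = HttpLlm.mergeParameters({
  function: func, // a function composed with the `separate` option enabled
  llm: { page: 1, limit: 10 }, // arguments composed by the LLM
  human: { secretKey: "user-provided" }, // values supplied by the human side
});
// `input` then feeds HttpLlm.execute() or HttpLlm.propagate().
```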

/**
- * Merge two values.
 *
- * If both values are objects, then combines them in the properties level.
+ * Merge two values intelligently.
 *
- * Otherwise, returns the latter value if it's not null, otherwise the former
- * value.
+ * Combines two values using intelligent merging logic:
+ * - **Objects:** Merges properties recursively
+ * - **Other types:** Returns the latter value if not null, otherwise the former
 *
- * - `return (y ?? x)`
+ * **Logic:** `return (y ?? x)`
 *
- * @param x Value X to merge
- * @param y Value Y to merge
- * @returns Merged value
+ * @param x First value to merge
+ * @param y Second value to merge (takes precedence when not null)
+ * @returns Intelligently merged result
*/
export const mergeValue = (x: unknown, y: unknown): unknown =>
LlmDataMerger.value(x, y);
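The documented `return (y ?? x)` rule in concrete terms (object-merge depth per the doc's "properties level" wording):

```typescript
import { HttpLlm } from "@samchon/openapi";

HttpLlm.mergeValue(1, null);            // → 1: y is null, keep x
HttpLlm.mergeValue(1, 2);               // → 2: y wins when not null
HttpLlm.mergeValue({ a: 1 }, { b: 2 }); // → { a: 1, b: 2 }: property-level merge
```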
40 changes: 18 additions & 22 deletions src/HttpMigration.ts
@@ -10,31 +10,27 @@ import { IHttpMigrateRoute } from "./structures/IHttpMigrateRoute";
import { IHttpResponse } from "./structures/IHttpResponse";

/**
- * HTTP migration application composer from OpenAPI document.
+ * HTTP migration application composer from OpenAPI documents.
*
- * `HttpMigration` is a module for composing HTTP migration application from the
- * {@link OpenApi.IDocument OpenAPI document}. It is designed for helping the
- * OpenAPI generator libraries, which converts
- * {@link OpenApi.IOperation OpenAPI operations} to an RPC (Remote Procedure
- * Call) function.
+ * `HttpMigration` is a module for converting OpenAPI documents into HTTP migration
+ * applications. It is designed to help OpenAPI generator libraries convert
+ * {@link OpenApi.IOperation OpenAPI operations} to RPC (Remote Procedure
+ * Call) functions.
*
- * The key feature of the `HttpModule` is the {@link HttpMigration.application}
- * function. It converts the {@link OpenApi.IOperation OpenAPI operations} to the
- * {@link IHttpMigrateRoute HTTP migration route}, and it normalizes the OpenAPI
- * operations to the RPC function calling suitable route structure.
+ * **Key features:**
+ * - **Application conversion**: {@link HttpMigration.application} converts
+ *   {@link OpenApi.IOperation OpenAPI operations} to
+ *   {@link IHttpMigrateRoute HTTP migration routes}, normalizing OpenAPI
+ *   operations into RPC function calling suitable route structures.
+ * - **HTTP execution**: {@link HttpMigration.execute} and
+ *   {@link HttpMigration.propagate} execute HTTP requests to the HTTP server.
+ *   - `execute`: Returns response body for 200/201 status codes, throws {@link HttpError} otherwise
+ *   - `propagate`: Returns complete response information including status code, headers, and body
*
- * The other functions, {@link HttpMigration.execute} and
- * {@link HttpMigration.propagate}, are for executing the HTTP request to the
- * HTTP server. The {@link HttpMigration.execute} function returns the response
- * body from the API endpoint when the status code is `200` or `201`. Otherwise,
- * it throws an {@link HttpError} when the status code is not `200` or `201`. The
- * {@link HttpMigration.propagate} function returns the response information from
- * the API endpoint, including the status code, headers, and response body.
*
- * The {@link HttpLlm} module is a good example utilizing this `HttpMigration`
- * module for composing RPC function calling application. The {@link HttpLlm}
- * module composes LLM (Large Language Model) function calling application from
- * the OpenAPI document bypassing through the {@link IHttpLlmApplication} type.
+ * **Usage example**: The {@link HttpLlm} module utilizes this `HttpMigration`
+ * module for composing RPC function calling applications. It composes LLM
+ * (Large Language Model) function calling applications from OpenAPI documents
+ * by passing through the {@link IHttpLlmApplication} type.
*
* @author Jeongho Nam - https://github.com/samchon
*/
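A minimal sketch of composing a migration application from a document; treating `app.routes` as the normalized route list and `method`/`path` as its fields are assumptions about `IHttpMigrateApplication` and `IHttpMigrateRoute`.

```typescript
import {
  HttpMigration,
  IHttpMigrateApplication,
  OpenApi,
} from "@samchon/openapi";

declare const document: OpenApi.IDocument; // an already-normalized document

const app: IHttpMigrateApplication = HttpMigration.application(document);

// Each route is an RPC-friendly normalization of one OpenAPI operation.
for (const route of app.routes) console.log(route.method, route.path);
```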
26 changes: 13 additions & 13 deletions src/McpLlm.ts
@@ -11,19 +11,19 @@ import { OpenApiTypeChecker } from "./utils/OpenApiTypeChecker";
import { OpenApiValidator } from "./utils/OpenApiValidator";

/**
- * Application of LLM function calling from MCP document.
+ * LLM function calling application from MCP documents.
*
- * `McpLlm` is a module for composing LLM (Large Language Model) function
- * calling application from MCP (Model Context Protocol) document.
+ * `McpLlm` is a module for converting MCP (Model Context Protocol) documents
+ * into LLM (Large Language Model) function calling applications.
*
- * The reasons why `@samchon/openapi` recommends to use the function calling
+ * The reasons why `@samchon/openapi` recommends using the function calling
* feature instead of directly using the
* [`mcp_servers`](https://openai.github.io/openai-agents-python/mcp/#using-mcp-servers)
* property of LLM API are:
*
- * - Model Specification: {@link ILlmSchema}
- * - Validation Feedback: {@link IMcpLlmFunction.validate}
- * - Selector agent for reducing context: [Agentica > Orchestration
+ * - **Model Specification**: {@link ILlmSchema}
+ * - **Validation Feedback**: {@link IMcpLlmFunction.validate}
+ * - **Selector agent for reducing context**: [Agentica > Orchestration
* Strategy](https://wrtnlabs.io/agentica/docs/concepts/function-calling/#orchestration-strategy)
*
* @author Jeongho Nam - https://github.com/samchon
@@ -43,9 +43,9 @@ export namespace McpLlm {
*
* A list of tools defined in the MCP (Model Context Protocol) document.
*
- * It would better to validate the tools by
+ * It is better to validate the tools using the
* [`typia.assert<T>()`](https://typia.io/docs/validate/assert) function for
- * the type safety.
+ * type safety.
*/
tools: Array<IMcpTool>;
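The recommended validation step as a sketch; `loadToolsFromMcpServer()` is a hypothetical helper standing in for however the MCP tool list is actually obtained.

```typescript
import { IMcpTool } from "@samchon/openapi";
import typia from "typia";

// Hypothetical loader — replace with your MCP client's tool listing call.
declare function loadToolsFromMcpServer(): Promise<unknown>;

const tools: IMcpTool[] = typia.assert<IMcpTool[]>(
  await loadToolsFromMcpServer(),
);
```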

@@ -59,14 +59,14 @@
* Converts MCP (Model Context Protocol) to LLM (Large Language Model)
* function calling application.
*
- * The reasons why `@samchon/openapi` recommends to use the function calling
+ * The reasons why `@samchon/openapi` recommends using the function calling
* feature instead of directly using the
* [`mcp_servers`](https://openai.github.io/openai-agents-python/mcp/#using-mcp-servers)
* property of LLM API are:
*
- * - Model Specification: {@link ILlmSchema}
- * - Validation Feedback: {@link IMcpLlmFunction.validate}
- * - Selector agent for reducing context: [Agentica > Orchestration
+ * - **Model Specification**: {@link ILlmSchema}
+ * - **Validation Feedback**: {@link IMcpLlmFunction.validate}
+ * - **Selector agent for reducing context**: [Agentica > Orchestration
* Strategy](https://wrtnlabs.io/agentica/docs/concepts/function-calling/#orchestration-strategy)
*
* @param props Properties for composition
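Finally, a hedged sketch of composing the MCP application itself; the `"chatgpt"` model value and the `functions` field on the result are assumptions consistent with the surrounding types, not confirmed by this diff.

```typescript
import { IMcpTool, McpLlm } from "@samchon/openapi";

declare const tools: IMcpTool[]; // validated as in the sketch above

// Compose the LLM function calling application from the validated tools.
const application = McpLlm.application({
  model: "chatgpt", // assumed ILlmSchema.Model value
  tools,
});

// Each composed function is expected to expose validate() for the
// validation-feedback loop referenced in the bullets above.
for (const func of application.functions) console.log(func.name);
```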