
Commit 99353e0

Authored by manoelcalixto, aibrahim-oai, fcoury-oai, etraut-openai, and dylan-hurd-oai
Sync upstream/main into main (openai#12)
* Mirror user text into realtime (openai#17520)

  - Let typed user messages submit while realtime is active and mirror accepted text into the realtime text stream.
  - Add integration coverage and snapshot for outbound realtime text.

* feat(tui): add reverse history search to composer (openai#17550)

  ## Problem

  The TUI had shell-style Up/Down history recall, but `Ctrl+R` did not provide the reverse incremental search workflow users expect from shells. Users needed a way to search older prompts without immediately replacing the current draft, and the interaction needed to handle async persistent history, repeated navigation keys, duplicate prompt text, footer hints, and preview highlighting without making the main composer file even harder to review.

  https://github.com/user-attachments/assets/5165affd-4c9a-46e9-adbd-89088f5f7b6b

  <img width="1227" height="722" alt="image" src="https://github.com/user-attachments/assets/8bc83289-eeca-47c7-b0c3-8975101901af" />

  ## Mental model

  `Ctrl+R` opens a temporary search session owned by the composer. The footer line becomes the search input, the composer body previews the current match only after the query has text, and `Enter` accepts that preview as an editable draft while `Esc` restores the draft that existed before search started. The history layer provides a combined offset space over persistent and local history, but search navigation exposes unique prompt text rather than every physical history row.

  ## Non-goals

  This change does not rewrite stored history, change normal Up/Down browsing semantics, add fuzzy matching, or add persistent metadata for attachments in cross-session history. Search deduplication is deliberately scoped to the active Ctrl+R search session and uses exact prompt text, so case, whitespace, punctuation, and attachment-only differences are not normalized.
  ## Tradeoffs

  The implementation keeps search state in the existing composer and history state machines instead of adding a new cross-module controller. That keeps ownership local and testable, but it means the composer still coordinates visible search status, draft restoration, footer rendering, cursor placement, and match highlighting while `ChatComposerHistory` owns traversal, async fetch continuation, boundary clamping, and unique-result caching. Unique-result caching stores cloned `HistoryEntry` values so known matches can be revisited without re-scanning history; this is simple and robust for interactive search sizes, but it is not a global history index.

  ## Architecture

  `ChatComposer` detects `Ctrl+R`, snapshots the current draft, switches the footer to `FooterMode::HistorySearch`, and routes search-mode keys before normal editing. Query edits call `ChatComposerHistory::search` with `restart = true`, which starts from the newest combined-history offset. Repeated `Ctrl+R` or Up searches older; Down searches newer through already discovered unique matches or continues the scan. Persistent history entries still arrive asynchronously through `on_entry_response`, where a pending search either accepts the response, skips a duplicate, or requests the next offset.

  The composer-facing pieces now live in `codex-rs/tui/src/bottom_pane/chat_composer/history_search.rs`, leaving `chat_composer.rs` responsible for routing and rendering integration instead of owning every search helper inline. `codex-rs/tui/src/bottom_pane/chat_composer_history.rs` remains the owner of stored history, combined offsets, async fetch state, boundary semantics, and duplicate suppression. Match highlighting is computed from the current composer text while search is active and disappears when the match is accepted.

  ## Observability

  There are no new logs or telemetry.
  The practical debug path is state inspection: `ChatComposer.history_search` tells whether the footer query is idle, searching, matched, or unmatched; `ChatComposerHistory.search` tracks selected raw offsets, pending persistent fetches, exhausted directions, and unique match cache state. If a user reports skipped or repeated results, first inspect the exact stored prompt text, the selected offset, whether an async persistent response is still pending, and whether a query edit restarted the search session.

  ## Tests

  The change is covered by focused `codex-tui` unit tests for opening search without previewing the latest entry, accepting and canceling search, no-match restoration, boundary clamping, footer hints, case-insensitive highlighting, local duplicate skipping, and persistent duplicate skipping through async responses. Snapshot coverage captures the footer-mode visual changes. Local verification used `just fmt`, `cargo test -p codex-tui history_search`, `cargo test -p codex-tui`, and `just fix -p codex-tui`.

* Remove context status-line meter (openai#17420)

  Addresses openai#17313

  Problem: The visual context meter in the status line was confusing and continued to draw negative feedback, and context reporting should remain an explicit opt-in rather than part of the default footer.

  Solution: Remove the visual meter, restore opt-in context remaining/used percentage items that explicitly say "Context", keep existing context-usage configs working as a hidden alias, and update the setup text and snapshots.

* Expose instruction sources (AGENTS.md) via app server (openai#17506)

  Addresses openai#17498

  Problem: The TUI derived /status instruction source paths from the local client environment, which could show stale `<none>` output or incorrect paths when connected to a remote app server.
  Solution: Add an app-server v2 instructionSources snapshot to thread start/resume/fork responses, default it to an empty list when older servers omit it, and render TUI /status from that server-provided session data.

  Additional context: The app-server field is intentionally named instructionSources rather than AGENTS.md-specific terminology because the loaded instruction sources can include global instructions, project AGENTS.md files, AGENTS.override.md, user-defined instruction files, and future dynamic sources.

* fix(mcp) pause timer for elicitations (openai#17566)

  ## Summary

  Stop counting elicitation time towards MCP tool call time. There are some tradeoffs here, but in general I don't think time spent waiting for elicitations should count towards tool call time, or at least not directly towards timeouts. Elicitations are not exactly like exec_command escalation requests, but I would argue they are roughly equivalent.

  ## Testing

  - [x] Added unit tests
  - [x] Tested locally

* Add MCP tool wall time to model output (openai#17406)

  Include MCP wall time in the output so the model is aware of how long its calls are taking.
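  The elicitation-pause behavior described above reduces to a stopwatch that excludes paused intervals from reported wall time. A minimal sketch of that idea (hypothetical names, not the codex-rs implementation; an injectable clock stands in for real time):

  ```typescript
  // Pausable stopwatch: wall time accumulates only while running, so time
  // spent waiting on an elicitation (pause/resume) is excluded from the
  // tool-call duration reported to the model.
  type Clock = () => number; // milliseconds

  class ToolCallTimer {
    private banked = 0;
    private startedAt: number | null = null;

    constructor(private readonly now: Clock) {}

    start(): void {
      if (this.startedAt === null) this.startedAt = this.now();
    }

    pause(): void {
      // Called when an elicitation begins: bank the running interval.
      if (this.startedAt !== null) {
        this.banked += this.now() - this.startedAt;
        this.startedAt = null;
      }
    }

    resume(): void {
      this.start();
    }

    elapsedMs(): number {
      const running = this.startedAt === null ? 0 : this.now() - this.startedAt;
      return this.banked + running;
    }
  }

  // Simulated timeline: 100ms of tool work, 5000ms elicitation wait, 200ms more work.
  let t = 0;
  const timer = new ToolCallTimer(() => t);
  timer.start();
  t += 100; // tool executes
  timer.pause(); // elicitation shown to the user
  t += 5000; // user thinks
  timer.resume();
  t += 200; // tool finishes
  const reportedMs = timer.elapsedMs(); // excludes the 5000ms wait
  ```

  With this shape, the elicitation wait never feeds the timeout clock, while the surrounding tool work is still measured end to end.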
* Run exec-server fs operations through sandbox helper (openai#17294)

  ## Summary

  - run exec-server filesystem RPCs requiring sandboxing through a `codex-fs` arg0 helper over stdin/stdout
  - keep direct local filesystem execution for `DangerFullAccess` and external sandbox policies
  - remove the standalone exec-server binary path in favor of top-level arg0 dispatch/runtime paths
  - add sandbox escape regression coverage for local and remote filesystem paths

  ## Validation

  - `just fmt`
  - `git diff --check`
  - remote devbox: `cd codex-rs && bazel test --bes_backend= --bes_results_url= //codex-rs/exec-server:all` (6/6 passed)

  ---------

  Co-authored-by: Codex <noreply@openai.com>

* Stabilize exec-server process tests (openai#17605)

  Problem: After openai#17294 switched exec-server tests to launch the top-level `codex exec-server` command, parallel remote exec-process cases can flake while waiting for the child server's listen URL or transport shutdown.

  Solution: Serialize remote exec-server-backed process tests and harden the harness so spawned servers are killed on drop and shutdown waits for the child process to exit.
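  The arg0-dispatch pattern mentioned above (one binary that selects a runtime based on the name it was invoked as) can be sketched as follows; the dispatch table is illustrative, not the real codex-rs code:

  ```typescript
  // One entry point that picks a runtime based on argv[0] ("arg0 dispatch").
  // The names codex / codex-fs mirror the PR description; everything else
  // here is a hypothetical stand-in.
  import { basename } from "node:path";

  type Runtime = (args: string[]) => string;

  const runtimes: Record<string, Runtime> = {
    // Helper personality: serves sandboxed fs RPCs over stdin/stdout.
    "codex-fs": (args) => `fs helper handling ${args.length} request(s) over stdio`,
    // Default personality: the top-level CLI.
    codex: (args) => `top-level CLI, subcommand: ${args[0] ?? "(none)"}`,
  };

  function dispatch(argv0: string, args: string[]): string {
    // Unknown names fall back to the top-level CLI runtime.
    const runtime = runtimes[basename(argv0)] ?? runtimes.codex;
    return runtime(args);
  }

  const fsResult = dispatch("/usr/local/bin/codex-fs", ["read"]);
  const cliResult = dispatch("/usr/local/bin/codex", ["exec-server"]);
  ```

  The design win is a single build artifact: the helper is just the main binary re-invoked (or symlinked) under another name, which is why the standalone exec-server binary path could be removed.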
* feat: ignore keyring on 0.0.0 (openai#17221)

  To prevent this spammy prompt:

  <img width="424" height="172" alt="Screenshot 2026-04-09 at 13 36 16" src="https://github.com/user-attachments/assets/b5ece9e3-c561-422f-87ec-041e7bd6813d" />

* Build remote exec env from exec-server policy (openai#17216)

  ## Summary

  - add an exec-server `envPolicy` field; when present, the server starts from its own process env and applies the shell environment policy there
  - keep `env` as the exact environment for local/embedded starts, but make it an overlay for remote unified-exec starts
  - move the shell-environment-policy builder into `codex-config` so Core and exec-server share the inherit/filter/set/include behavior
  - overlay only runtime/sandbox/network deltas from Core onto the exec-server-derived env

  ## Why

  Remote unified exec was materializing the shell env inside Core and forwarding the whole map to exec-server, so remote processes could inherit the orchestrator machine's `HOME`, `PATH`, etc. This keeps the base env on the executor while preserving Core-owned runtime additions like `CODEX_THREAD_ID`, unified-exec defaults, network proxy env, and sandbox marker env.

  ## Validation

  - `just fmt`
  - `git diff --check`
  - `cargo test -p codex-exec-server --lib`
  - `cargo test -p codex-core --lib unified_exec::process_manager::tests`
  - `cargo test -p codex-core --lib exec_env::tests`
  - `cargo test -p codex-core --lib exec_env_tests` (compile-only; filter matched 0 tests)
  - `cargo test -p codex-config --lib shell_environment` (compile-only; filter matched 0 tests)
  - `just bazel-lock-update`

  ## Known local validation issue

  - `just bazel-lock-check` is not runnable in this checkout: it invokes `./scripts/check-module-bazel-lock.sh`, which is missing.
  ---------

  Co-authored-by: Codex <noreply@openai.com>
  Co-authored-by: pakrym-oai <pakrym@openai.com>

* nit: change consolidation model (openai#17633)

* fix: stability exec server (openai#17640)

* fix: dedup compact (openai#17643)

* Make forked agent spawns keep parent model config (openai#17247)

  ## Summary

  When a `spawn_agent` call does a full-history fork, keep the parent's effective agent type and model configuration instead of applying child role/model overrides. This is the minimal config-inheritance slice of openai#16055. Prompt-cache key inheritance and MCP tool-surface stability are split into follow-up PRs.

  ## Design

  - Reject `agent_type`, `model`, and `reasoning_effort` for v1 `fork_context` spawns.
  - Reject `agent_type`, `model`, and `reasoning_effort` for v2 `fork_turns = "all"` spawns.
  - Keep v2 partial-history forks (`fork_turns = "N"`) configurable; requested model/reasoning overrides and role config still apply there.
  - Keep non-forked spawn behavior unchanged.

  ## Tests

  - `cargo +1.93.1 test -p codex-core spawn_agent_fork_context --lib`
  - `cargo +1.93.1 test -p codex-core multi_agent_v2_spawn_fork_turns --lib`
  - `cargo +1.93.1 test -p codex-core multi_agent_v2_spawn_partial_fork_turns_allows_agent_type_override --lib`

* Fix custom tool output cleanup on stream failure (openai#17470)

  Addresses openai#16255

  Problem: Incomplete Responses streams could leave completed custom tool outputs out of cleanup and retry prompts, making persisted history inconsistent and retries stale.

  Solution: Route stream and output-item errors through shared cleanup, and rebuild retry prompts from fresh session history after the first attempt.

* Emit plan-mode prompt notifications for questionnaires (openai#17417)

  Addresses openai#17252

  Problem: Plan-mode clarification questionnaires used the generic user-input notification type, so configs listening for plan-mode-prompt did not fire when request_user_input waited for an answer.
  Solution: Map request_user_input prompts to the plan-mode-prompt notification and remove the obsolete user-input TUI notification variant.

* Wrap status reset timestamps in narrow layouts (openai#17481)

  Addresses openai#17453

  Problem: /status rate-limit reset timestamps can be truncated in narrow layouts, leaving users with partial times or dates.

  Solution: Let narrow rate-limit rows drop the fixed progress bar to preserve the percent summary, and wrap reset timestamps onto continuation lines instead of truncating them.

* Suppress duplicate compaction and terminal wait events (openai#17601)

  Addresses openai#17514

  Problem: PR openai#16966 made the TUI render the deprecated context-compaction notification, while v2 could also receive legacy unified-exec interaction items alongside terminal-interaction notifications, causing duplicate "Context compacted" and "Waited for background terminal" messages.

  Solution: Suppress deprecated context-compaction notifications and legacy unified-exec interaction command items from the app-server v2 projection, and render canonical context-compaction items through the existing TUI info-event path.

* Fix TUI compaction item replay (openai#17657)

  Problem: PR openai#17601 updated context-compaction replay to call a new ChatWidget handler, but the handler was never implemented, breaking codex-tui compilation on main.

  Solution: Render context-compaction replay through the existing info-message path, preserving the intended `Context compacted` UI marker without adding a one-off handler.

* Do not fail thread start when trust persistence fails (openai#17595)

  Addresses openai#17593

  Problem: A regression introduced in openai#16492 made thread/start fail when Codex could not persist trusted project state, which crashes startup for users with a read-only config.toml.

  Solution: Treat trusted project persistence as best effort and keep the current thread's config trusted in memory when writing config.toml fails.
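  The best-effort pattern in the trust-persistence fix above (keep in-memory state even when the config write fails) looks roughly like this; the `persist` callback is a hypothetical stand-in for the config.toml write:

  ```typescript
  // Best-effort persistence: the in-memory trust decision always succeeds;
  // a failing write (e.g. read-only config.toml) is downgraded to a warning
  // instead of failing thread start.
  type PersistFn = (projectPath: string) => void; // throws on I/O failure

  interface TrustState {
    trustedInMemory: Set<string>;
    warnings: string[];
  }

  function trustProject(state: TrustState, projectPath: string, persist: PersistFn): void {
    state.trustedInMemory.add(projectPath); // current session stays trusted
    try {
      persist(projectPath);
    } catch (err) {
      // Best effort: record the failure, do not propagate it.
      state.warnings.push(`could not persist trust for ${projectPath}: ${String(err)}`);
    }
  }

  const state: TrustState = { trustedInMemory: new Set(), warnings: [] };
  trustProject(state, "/repo/a", () => {
    throw new Error("config.toml is read-only");
  });
  // state.trustedInMemory now contains "/repo/a" despite the failed write.
  ```

  The trade-off is that trust does not survive a restart when the write keeps failing, but startup never crashes on a read-only home directory.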
* Use AbsolutePathBuf in skill loading and codex_home (openai#17407)

  Helps with the FS migration later.

* feat: disable memory endpoint (openai#17626)

* Include legacy deny paths in elevated Windows sandbox setup (openai#17365)

  ## Summary

  This updates the Windows elevated sandbox setup/refresh path to include the legacy `compute_allow_paths(...).deny` protected children in the same deny-write payload pipe added for split filesystem carveouts. Concretely, elevated setup and elevated refresh now both build deny-write payload paths from:

  - explicit split-policy deny-write paths, preserving missing paths so setup can materialize them before applying ACLs
  - legacy `compute_allow_paths(...).deny`, which includes existing `.git`, `.codex`, and `.agents` children under writable roots

  This lets the elevated backend protect `.git` consistently with the unelevated/restricted-token path, and removes the old hard-coded `.codex` / `.agents` elevated setup helpers in favor of the shared payload path.

  ## Root Cause

  The landed split-carveout PR threaded a `deny_write_paths` pipe through elevated setup/refresh, but the legacy workspace-write deny set from `compute_allow_paths(...).deny` was not included in that payload. As a result, elevated workspace-write did not apply the intended deny-write ACLs for existing protected children like `<cwd>/.git`.

  ## Notes

  The legacy protected children still only enter the deny set if they already exist, because `compute_allow_paths` filters `.git`, `.codex`, and `.agents` with `exists()`. Missing explicit split-policy deny paths are preserved separately because setup intentionally materializes those before applying ACLs.
  ## Validation

  - `cargo fmt --check -p codex-windows-sandbox`
  - `cargo test -p codex-windows-sandbox`
  - `cargo build -p codex-cli -p codex-windows-sandbox --bins`
  - Elevated `codex exec` smoke with `windows.sandbox='elevated'`: fresh git repo, attempted append to `.git/config`, observed `Access is denied`, marker not written, Deny ACE present on `.git`
  - Unelevated `codex exec` smoke with `windows.sandbox='unelevated'`: fresh git repo, attempted append to `.git/config`, observed `Access is denied`, marker not written, Deny ACE present on `.git`

* feat: Avoid reloading curated marketplaces for tool-suggest discovera… (openai#17638)

  - stop `list_tool_suggest_discoverable_plugins()` from reloading the curated marketplace for each discoverable plugin
  - reuse a direct plugin-detail loader against the already-resolved marketplace entry

  The trigger was to stop these logs from spamming (the same two warnings repeated continuously):

  ```
  2026-04-13T12:27:30.402Z WARN [019d81cf-6f69-7230-98aa-74294ff2dc5a] codex_core::plugins::manifest - session_loop{thread_id=019d81cf-6f69-7230-98aa-74294ff2dc5a}:submission_dispatch{otel.name="op.dispatch.user_input" submission.id="019d86c8-0a8e-7013-b442-109aabbf75c9" codex.op="user_input"}:turn{otel.name="session_task.turn" thread.id=019d81cf-6f69-7230-98aa-74294ff2dc5a turn.id=019d86c8-0a8e-7013-b442-109aabbf75c9 model=gpt-5.4}: ignoring interface.defaultPrompt: prompt must be at most 128 characters path=/Users/jif/.codex/.tmp/plugins/plugins/life-science-research/.codex-plugin/plugin.json
  2026-04-13T12:27:30.402Z WARN [019d81cf-6f69-7230-98aa-74294ff2dc5a] codex_core::plugins::manifest - session_loop{thread_id=019d81cf-6f69-7230-98aa-74294ff2dc5a}:submission_dispatch{otel.name="op.dispatch.user_input" submission.id="019d86c8-0a8e-7013-b442-109aabbf75c9" codex.op="user_input"}:turn{otel.name="session_task.turn" thread.id=019d81cf-6f69-7230-98aa-74294ff2dc5a turn.id=019d86c8-0a8e-7013-b442-109aabbf75c9 model=gpt-5.4}: ignoring interface.defaultPrompt: prompt must be at most 128 characters path=/Users/jif/.codex/.tmp/plugins/plugins/build-ios-apps/.codex-plugin/plugin.json
  ```

* app-server: Only unload threads which were unused for some time (openai#17398)

  Currently app-server may unload actively running threads once the last connection disconnects, which is not expected. Instead, track when the last active turn ran and when there last were any subscribers, and add a 30-minute idle/no-subscribers timer to reduce the churn.

* only specify remote ports when the rule needs them (openai#17669)

  Windows gives an error when you combine `protocol = ANY` with `SetRemotePorts`. This fixes that.

* Fix tui compilation (openai#17691)

  The recent release broke; Codex suggested this as the fix.

  Source failure: https://github.com/openai/codex/actions/runs/24362949066/job/71147202092

  Probably from openai@ac82443

  For why it got in:

  ```
  The relevant setup:

  .github/workflows/rust-ci.yml (line 1) runs on PRs, but for codex-rs it only does:
    cargo fmt --check
    cargo shear
    argument-comment lint via Bazel
    no cargo check, no cargo clippy over the workspace, no cargo test over codex-tui

  .github/workflows/rust-ci-full.yml (line 1) runs on pushes to main and branches matching **full-ci**.
  That one does compile TUI because:
    codex-rs/Cargo.toml includes "tui" as a workspace member
    lint_build runs cargo clippy --target ... --tests --profile ...
    the matrix includes both dev and release profiles
    tests runs cargo nextest run ..., but only dev-profile tests

  Release CI also compiles it indirectly.
  .github/workflows/rust-release.yml (line 235) builds --bin codex, and cli/Cargo.toml (line 46) depends on codex-tui.
  ```

  Codex tested locally with `cargo check -p codex-tui --release`, was able to repro, and verified that this fixed it.

* Update phase 2 memory model to gpt-5.4 (openai#17384)

  ### Motivation

  - Switch the default model used for memory Phase 2 (consolidation) to the newer `gpt-5.4` model.

  ### Description

  - Change the Phase 2 model constant from `"gpt-5.3-codex"` to `"gpt-5.4"` in `codex-rs/core/src/memories/mod.rs`.

  ### Testing

  - Ran `just fmt`, which completed successfully.
  - Attempted `cargo test -p codex-core`, but the build failed in this environment because the `codex-linux-sandbox` crate requires the system `libcap` pkg-config entry and the required system packages could not be installed, so the test run was blocked.

  ------

  [Codex Task](https://chatgpt.com/codex/cloud/tasks/task_i_69d977693b48832a967e78d73c66dc8e)

* Remove unnecessary tests (openai#17395)

  # External (non-OpenAI) Pull Request Requirements

  Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed: https://github.com/openai/codex/blob/main/docs/contributing.md

  If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes. Include a link to a bug report or enhancement request.

* Cap realtime mirrored user turns (openai#17685)

  Cap mirrored user text sent to realtime with the existing 300-token turn budget while preserving the full model turn. Adds integration coverage for capped realtime mirror payloads.
  ---------

  Co-authored-by: Codex <noreply@openai.com>

* change realtime tool description (openai#17699)

  # External (non-OpenAI) Pull Request Requirements

  Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed: https://github.com/openai/codex/blob/main/docs/contributing.md

  If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes. Include a link to a bug report or enhancement request.

* Add `supports_parallel_tool_calls` flag to included mcps (openai#17667)

  ## Why

  For more advanced MCP usage, we want the model to be able to emit parallel MCP tool calls and have Codex execute eligible ones concurrently, instead of forcing all MCP calls through the serial block.

  The main design choice was where to thread the config. I made this server-level because parallel safety depends on the MCP server implementation. Codex reads the flag from `mcp_servers`, threads the opted-in server names into `ToolRouter`, and checks the parsed `ToolPayload::Mcp { server, .. }` at execution time. That avoids relying on model-visible tool names, which can be incomplete in deferred/search-tool paths or ambiguous for similarly named servers/tools.

  ## What was added

  Added `supports_parallel_tool_calls` for MCP servers.

  Before:

  ```toml
  [mcp_servers.docs]
  command = "docs-server"
  ```

  After:

  ```toml
  [mcp_servers.docs]
  command = "docs-server"
  supports_parallel_tool_calls = true
  ```

  MCP calls remain serial by default. Only tools from opted-in servers are eligible to run in parallel. Docs also now warn to enable this only when the server's tools are safe to run concurrently, especially around shared state or read/write races.

  ## Testing

  Tested with a local stdio MCP server exposing real delay tools. The model/Responses side was mocked only to deterministically emit two MCP calls in the same turn.
  Each test called `query_with_delay` and `query_with_delay_2` with `{ "seconds": 25 }`.

  | Build/config | Observed | Wall time |
  | --- | --- | --- |
  | main with flag enabled | serial | `58.79s` |
  | PR with flag enabled | parallel | `31.73s` |
  | PR without flag | serial | `56.70s` |

  PR with flag enabled showed both tools start before either completed; main and PR-without-flag completed the first delay before starting the second. Also added an integration test.

  Additional checks:

  - `cargo test -p codex-tools` passed
  - `cargo test -p codex-core mcp_parallel_support_uses_exact_payload_server` passed
  - `git diff --check` passed

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
Co-authored-by: Felipe Coury <felipe.coury@openai.com>
Co-authored-by: Eric Traut <etraut@openai.com>
Co-authored-by: Dylan Hurd <dylan.hurd@openai.com>
Co-authored-by: pakrym-oai <pakrym@openai.com>
Co-authored-by: starr-openai <starr@openai.com>
Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: jif-oai <jif@openai.com>
Co-authored-by: friel-openai <friel@openai.com>
Co-authored-by: iceweasel-oai <iceweasel@openai.com>
Co-authored-by: Ruslan Nigmatullin <ruslan@openai.com>
Co-authored-by: David Z Hao <david.hao@openai.com>
Co-authored-by: Kevin Liu <kevin@kliu.io>
Co-authored-by: josiah-openai <josiah@openai.com>
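The server-level opt-in described for `supports_parallel_tool_calls` reduces to a membership check on the parsed tool payload's server name, not on the model-visible tool name. A sketch under that reading (types simplified from the description; not the real `ToolRouter`):

```typescript
// Eligibility check for parallel MCP execution: only calls whose parsed
// payload resolves to an opted-in server may leave the serial block.
type ToolPayload =
  | { kind: "mcp"; server: string; tool: string }
  | { kind: "builtin"; tool: string };

function mayRunInParallel(payload: ToolPayload, optedIn: ReadonlySet<string>): boolean {
  // Non-MCP tools and non-opted-in servers stay serial by default.
  return payload.kind === "mcp" && optedIn.has(payload.server);
}

// From config: [mcp_servers.docs] supports_parallel_tool_calls = true
const optedIn = new Set(["docs"]);

const eligible = mayRunInParallel({ kind: "mcp", server: "docs", tool: "query_with_delay" }, optedIn);
const serialOther = mayRunInParallel({ kind: "mcp", server: "untrusted", tool: "query" }, optedIn);
const serialBuiltin = mayRunInParallel({ kind: "builtin", tool: "shell" }, optedIn);
```

Keying the check on the payload's server field is what makes it robust against incomplete or ambiguous model-visible tool names in deferred/search-tool paths.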
1 parent b3ef1ee commit 99353e0

144 files changed

Lines changed: 3412 additions & 1164 deletions


codex-rs/Cargo.lock

Lines changed: 2 additions & 0 deletions

codex-rs/app-server-protocol/schema/json/ClientRequest.json

Lines changed: 7 additions & 0 deletions
```diff
@@ -2809,6 +2809,13 @@
      },
      "type": "object"
    },
+    "ThreadMemoryMode": {
+      "enum": [
+        "enabled",
+        "disabled"
+      ],
+      "type": "string"
+    },
    "ThreadMetadataGitInfoUpdateParams": {
      "properties": {
        "branch": {
```

codex-rs/app-server-protocol/schema/json/codex_app_server_protocol.schemas.json

Lines changed: 9 additions & 2 deletions
```diff
@@ -12099,7 +12099,7 @@
        "type": "string"
      },
      "path": {
-        "type": "string"
+        "$ref": "#/definitions/v2/AbsolutePathBuf"
      },
      "scope": {
        "$ref": "#/definitions/v2/SkillScope"
@@ -12152,7 +12152,7 @@
        "type": "string"
      },
      "path": {
-        "type": "string"
+        "$ref": "#/definitions/v2/AbsolutePathBuf"
      },
      "shortDescription": {
        "type": [
@@ -13660,6 +13660,13 @@
      "title": "ThreadLoadedListResponse",
      "type": "object"
    },
+    "ThreadMemoryMode": {
+      "enum": [
+        "enabled",
+        "disabled"
+      ],
+      "type": "string"
+    },
    "ThreadMetadataGitInfoUpdateParams": {
      "properties": {
        "branch": {
```

codex-rs/app-server-protocol/schema/json/codex_app_server_protocol.v2.schemas.json

Lines changed: 9 additions & 2 deletions
```diff
@@ -9947,7 +9947,7 @@
        "type": "string"
      },
      "path": {
-        "type": "string"
+        "$ref": "#/definitions/AbsolutePathBuf"
      },
      "scope": {
        "$ref": "#/definitions/SkillScope"
@@ -10000,7 +10000,7 @@
        "type": "string"
      },
      "path": {
-        "type": "string"
+        "$ref": "#/definitions/AbsolutePathBuf"
      },
      "shortDescription": {
        "type": [
@@ -11508,6 +11508,13 @@
      "title": "ThreadLoadedListResponse",
      "type": "object"
    },
+    "ThreadMemoryMode": {
+      "enum": [
+        "enabled",
+        "disabled"
+      ],
+      "type": "string"
+    },
    "ThreadMetadataGitInfoUpdateParams": {
      "properties": {
        "branch": {
```

codex-rs/app-server-protocol/schema/json/v2/PluginReadResponse.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -335,7 +335,7 @@
        "type": "string"
      },
      "path": {
-        "type": "string"
+        "$ref": "#/definitions/AbsolutePathBuf"
      },
      "shortDescription": {
        "type": [
```

codex-rs/app-server-protocol/schema/json/v2/SkillsListResponse.json

Lines changed: 5 additions & 1 deletion
```diff
@@ -1,6 +1,10 @@
 {
   "$schema": "http://json-schema.org/draft-07/schema#",
   "definitions": {
+    "AbsolutePathBuf": {
+      "description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
+      "type": "string"
+    },
     "SkillDependencies": {
       "properties": {
         "tools": {
@@ -103,7 +107,7 @@
        "type": "string"
      },
      "path": {
-        "type": "string"
+        "$ref": "#/definitions/AbsolutePathBuf"
      },
      "scope": {
        "$ref": "#/definitions/SkillScope"
```
Lines changed: 5 additions & 0 deletions
```diff
@@ -0,0 +1,5 @@
+// GENERATED CODE! DO NOT MODIFY BY HAND!
+
+// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
+
+export type ThreadMemoryMode = "enabled" | "disabled";
```

codex-rs/app-server-protocol/schema/typescript/index.ts

Lines changed: 1 addition & 0 deletions
```diff
@@ -68,6 +68,7 @@ export type { SessionSource } from "./SessionSource";
 export type { Settings } from "./Settings";
 export type { SubAgentSource } from "./SubAgentSource";
 export type { ThreadId } from "./ThreadId";
+export type { ThreadMemoryMode } from "./ThreadMemoryMode";
 export type { Tool } from "./Tool";
 export type { Verbosity } from "./Verbosity";
 export type { WebSearchAction } from "./WebSearchAction";
```
Lines changed: 2 additions & 1 deletion
```diff
@@ -1,6 +1,7 @@
 // GENERATED CODE! DO NOT MODIFY BY HAND!
 
 // This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
+import type { AbsolutePathBuf } from "../AbsolutePathBuf";
 import type { SkillDependencies } from "./SkillDependencies";
 import type { SkillInterface } from "./SkillInterface";
 import type { SkillScope } from "./SkillScope";
@@ -9,4 +10,4 @@ export type SkillMetadata = { name: string, description: string,
 /**
  * Legacy short_description from SKILL.md. Prefer SKILL.json interface.short_description.
  */
-shortDescription?: string, interface?: SkillInterface, dependencies?: SkillDependencies, path: string, scope: SkillScope, enabled: boolean, };
+shortDescription?: string, interface?: SkillInterface, dependencies?: SkillDependencies, path: AbsolutePathBuf, scope: SkillScope, enabled: boolean, };
```
Lines changed: 2 additions & 1 deletion
```diff
@@ -1,6 +1,7 @@
 // GENERATED CODE! DO NOT MODIFY BY HAND!
 
 // This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
+import type { AbsolutePathBuf } from "../AbsolutePathBuf";
 import type { SkillInterface } from "./SkillInterface";
 
-export type SkillSummary = { name: string, description: string, shortDescription: string | null, interface: SkillInterface | null, path: string, enabled: boolean, };
+export type SkillSummary = { name: string, description: string, shortDescription: string | null, interface: SkillInterface | null, path: AbsolutePathBuf, enabled: boolean, };
```
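Taken together, the regenerated bindings above can be consumed as plain TypeScript: `ThreadMemoryMode` is a closed string union, and `path` fields are now typed `AbsolutePathBuf` (still a string on the wire per the schema). A sketch of client-side usage, with made-up values and `SkillInterface` reduced to a stub:

```typescript
// Consuming the regenerated ts-rs bindings. AbsolutePathBuf and the stubbed
// SkillInterface are inlined here so the sketch is self-contained; a real
// client would import them from the generated index.ts.
type AbsolutePathBuf = string; // wire representation per the JSON schema
type SkillInterface = Record<string, unknown>; // stub for this sketch

type ThreadMemoryMode = "enabled" | "disabled";

type SkillSummary = {
  name: string;
  description: string;
  shortDescription: string | null;
  interface: SkillInterface | null;
  path: AbsolutePathBuf;
  enabled: boolean;
};

function describe(skill: SkillSummary, memory: ThreadMemoryMode): string {
  // The closed union lets exhaustive checks catch new modes at compile time.
  const mem = memory === "enabled" ? "with memory" : "without memory";
  return `${skill.name} @ ${skill.path} (${mem})`;
}

const summary = describe(
  {
    name: "release-notes",
    description: "Drafts release notes",
    shortDescription: null,
    interface: null,
    path: "/home/user/.codex/skills/release-notes",
    enabled: true,
  },
  "enabled",
);
```

Because `AbsolutePathBuf` deserializes to a guaranteed-absolute path on the server side, clients can treat `path` as display-ready without resolving it against a base directory.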
