Describe the bug
Recently, running multiple instances of the Gemini CLI (Antigravity) causes severe process leaks and complete hangs upon initialization. The CLI fails to terminate previous background processes, leading to massive memory consumption and eventual deadlocks.
To Reproduce
- Open multiple workspace directories.
- Run the CLI (`gemini --yolo`) in each directory simultaneously.
- Close and reopen some terminals.
- Attempt to open a new project. The CLI will hang indefinitely on startup (a scripted approximation of these steps is sketched below).
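For reference, something like the following approximates these steps from a script. The directory paths are placeholders, and it assumes the CLI can be launched in the background; in practice I hit this with interactive terminal tabs:

```bash
# Launch an instance per workspace (placeholder paths; adjust as needed).
for dir in ~/work/project-a ~/work/project-b ~/work/project-c; do
  (cd "$dir" && gemini --yolo) &
done

# Simulate closing one terminal by killing the oldest CLI instance;
# its node/MCP children should exit with it, but they are left behind.
pkill --oldest -f 'gemini --yolo'

# Starting a fresh instance in a new project now hangs on startup.
cd ~/work/project-d && gemini --yolo
```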
Expected behavior
The CLI should cleanly terminate its Node.js and MCP server sub-processes when the session ends or is closed. Multiple instances should run independently without causing a system-wide deadlock.
System Information:
- OS: Linux (Ubuntu via WSL2 on Windows)
- Node/V8: Processes show up as `node --no-warnings=DEP0040`.
- Related MCPs: `mcp-server-memory`, `mcp-server-filesystem`, and `mcp-server-github` are also left hanging as orphan processes.
Additional context (Diagnostic Output):
Running `ps aux | grep gemini` shows 13+ orphaned `node` processes, each consuming roughly 200-400 MB of RAM. The issue seems to have worsened in the most recent updates, indicating a regression in the teardown/cleanup logic of the CLI's core lifecycle or MCP process management.
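For completeness, these are the commands I have been using to inspect and manually clean up the leaked processes. The `pkill` patterns are a blunt workaround, not a fix, and will also terminate any still-active sessions:

```bash
# List the leaked CLI and MCP server processes.
ps aux | grep -E 'gemini|mcp-server' | grep -v grep

# Manual cleanup to reclaim memory; this kills ALL matching processes,
# including live sessions, so only run it when no CLI instance is in use.
pkill -f 'mcp-server-(memory|filesystem|github)'
pkill -f 'node --no-warnings=DEP0040'
```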