An autonomous financial research agent built on AgentField. Argus uses a five-agent Investment Committee to produce two parallel research reports: one for a short-term (1-6 months) and one for a long-term (1-5 years) investment horizon.
```
User Query
   |
[1] Manager         - Decomposes query -> ResearchPlan (sequential)
   |
[2] yfinance        - 9 data fetches in parallel (asyncio.gather):
   |                  annual + quarterly income/cashflow, balance sheet,
   |                  company facts, analyst price targets, insider
   |                  transactions, news (20 articles)
   |
[3] Analyst     --+ - LLM calls dispatched concurrently (asyncio.gather).
[3] Contrarian  --+   Both see: financials, targets, insiders, news
   |
[4] EditorShort --+ - Also parallel (asyncio.gather). Model: gpt-oss-20b
[4] EditorLong  --+   Short: quarterly trends + near-term signals
   |                  Long: annual data + moat + valuation
DualResearchReport -> tabbed UI (Short Term | Long Term)
```
Parallel note: the yfinance data fetches (9), the Analyst + Contrarian LLM calls, and the two Editor LLM calls each use `asyncio.gather` - three separate parallelism stages. The Editors use `gpt-oss-20b` (better confidence calibration); all other agents use `gpt-oss-120b`.
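The three stages can be sketched as follows. This is an illustrative outline only: the function names and data shapes are placeholders, not the real AgentField reasoner APIs in `src/reasoners.py`.

```python
from __future__ import annotations
import asyncio

# Illustrative stand-ins for the real skills and reasoners.
async def fetch(name: str) -> str:
    return f"{name}-data"          # stands in for one yfinance call

async def run_llm(agent: str, context: list[str]) -> str:
    return f"{agent}-finding"      # stands in for one LLM call

async def pipeline() -> dict:
    # Stage 1: all data fetches run concurrently
    data = await asyncio.gather(*(fetch(n) for n in (
        "income", "cashflow", "balance_sheet", "facts",
        "targets", "insiders", "news")))
    # Stage 2: Analyst and Contrarian see the same data, run concurrently
    analyst, contrarian = await asyncio.gather(
        run_llm("analyst", data), run_llm("contrarian", data))
    # Stage 3: both Editors run concurrently on the combined findings
    short, long_ = await asyncio.gather(
        run_llm("editor_short", [analyst, contrarian]),
        run_llm("editor_long", [analyst, contrarian]))
    return {"short_term": short, "long_term": long_}

result = asyncio.run(pipeline())
```

The key point is that each `asyncio.gather` call is a barrier: stage 2 starts only once all nine fetches return, and stage 3 only once both committee members have reported.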
Each agent writes its step-by-step reasoning into a `reasoning_steps: list[str]` field before producing its conclusion. This is structured chain-of-thought: the model explains what it found, why it weighs the evidence the way it does, and how it arrived at its answer. These steps appear live in the UI as each agent completes.
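The real models live in `src/schemas.py` as Pydantic classes; a minimal dataclass approximation of the pattern (the conclusion fields here are illustrative, only `reasoning_steps` is taken from the text above) looks like:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AnalystFinding:
    # Structured chain-of-thought: populated before the conclusion fields
    reasoning_steps: list[str] = field(default_factory=list)
    # Illustrative conclusion fields (names are assumptions)
    bull_case: str = ""
    confidence: int = 0

finding = AnalystFinding(
    reasoning_steps=[
        "Revenue grew 122% YoY per the quarterly income statement",
        "Analyst mean target implies meaningful upside",
    ],
    bull_case="Accelerating data-center demand",
    confidence=78,
)
```

Because the reasoning is an ordinary list field on the output model, the UI can render it incrementally as each agent's result arrives.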
| Agent | Model | Role | Runs |
|---|---|---|---|
| Manager | gpt-oss-120b | Decomposes query, dispatches committee | Sequential |
| Analyst | gpt-oss-120b | Bull case: revenue, margins, growth, free cash flow, catalysts | Parallel [2] |
| Contrarian | gpt-oss-120b | Bear case: risks, lawsuits, competition, valuation | Parallel [2] |
| EditorShort | gpt-oss-20b | Short-term verdict (1-6 months) - focuses on catalysts & momentum | Parallel [3] |
| EditorLong | gpt-oss-20b | Long-term verdict (1-5 years) - focuses on moat & intrinsic value | Parallel [3] |
| Skills | - | yfinance wrappers: 7 data endpoints, all fetched in parallel | Parallel [1] |
- Python 3.8-3.13
- A Nebius API key
```
cd argus-agentfield

# Create venv + install dependencies (fast)
uv venv && uv pip install -r requirements.txt

cp .env.example .env
```
Edit `.env` and fill in your key:
```
NEBIUS_API_KEY=sk-...
```
Note: no other API keys are needed - `yfinance` (free, no registration) covers all financial data, and AgentField's `Agent()` class requires no API key.
Supported tickers: Argus works best with real, actively traded stocks listed on major exchanges (NYSE, NASDAQ, LSE, etc.) - e.g. `AAPL`, `NVDA`, `TSLA`, `MSFT`, `INTC`. Avoid delisted companies, OTC/penny stocks, crypto tokens, and ETFs - yfinance data for these is often incomplete or missing, which degrades analysis quality.
```
uv run python3 src/main.py
```
The agent starts on http://localhost:8080.
Visit http://localhost:8080 in your browser. Type any query (e.g. "Should I invest in NVDA?") and watch the 5-agent committee work in real time - cards glow, thought drawers type out reasoning live, then the tabbed report appears with separate Short Term and Long Term verdicts.
Argus exposes two ways to run the full 5-agent pipeline. Both use the same agents, the same parallel execution, and produce the same DualResearchReport (short-term + long-term verdict). The difference is how results are delivered.
The streaming API is what the browser UI uses. It sends events in real-time as each agent works, so you can watch the committee think step by step.
Why streaming? The full pipeline takes 30-90 seconds. With streaming you get live updates - agent cards glow, thought drawers type out reasoning, and the report appears the moment the editors finish. Without streaming you'd stare at a blank page for a minute.
How it works (two steps):
```
# Step 1 - Start a session, get a session_id
SESSION=$(curl -s -X POST http://localhost:8080/research/stream/start \
  -H "Content-Type: application/json" \
  -d '{"query": "Should I invest in NVDA?"}' | python3 -c "import sys,json; print(json.load(sys.stdin)['session_id'])")

# Step 2 - Connect and receive live SSE events
curl -s "http://localhost:8080/research/stream/events/$SESSION"
```
SSE event types you'll receive:
| Event | Agent | When |
|---|---|---|
| `agent_start` | manager / analyst / contrarian / editor_short / editor_long | Agent begins working |
| `agent_note` | any | Progress update mid-task |
| `agent_complete` | any | Agent done - includes reasoning_steps, verdict, confidence |
| `complete` | system | All done - full `{ short_term: {...}, long_term: {...} }` payload |
| `error` | system | Something went wrong |
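A minimal Python consumer for this stream might look like the sketch below. It assumes the standard SSE wire format (`event:` / `data:` lines separated by blank lines); the session-start step and endpoint URL are the ones shown above.

```python
import json
import urllib.request

def parse_sse(raw: str) -> list[tuple[str, dict]]:
    """Parse a raw SSE body into (event_type, payload) pairs."""
    events = []
    for block in raw.split("\n\n"):
        etype, data = "message", None
        for line in block.splitlines():
            if line.startswith("event:"):
                etype = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = line[len("data:"):].strip()
        if data:
            events.append((etype, json.loads(data)))
    return events

if __name__ == "__main__":
    # Hypothetical usage against a running Argus instance:
    session_id = "..."  # obtained from POST /research/stream/start
    url = f"http://localhost:8080/research/stream/events/{session_id}"
    with urllib.request.urlopen(url) as resp:
        for etype, payload in parse_sse(resp.read().decode()):
            print(etype, payload)
```

Note that buffering the whole body before parsing defeats the point of streaming; a real client would read line by line as events arrive (e.g. with `httpx` or an SSE client library). The buffered read is only to keep the sketch short.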
The complete event payload:
```
{
  "short_term": {
    "time_horizon": "short_term",
    "ticker": "NVDA",
    "verdict": "BUY",
    "confidence": 78,
    "summary": "Near-term momentum driven by...",
    "bull_case": "...", "bear_case": "...",
    "key_metrics": ["Revenue Growth: 122% YoY", "..."],
    "risks": ["..."],
    "reasoning": "..."
  },
  "long_term": {
    "time_horizon": "long_term",
    "ticker": "NVDA",
    "verdict": "BUY",
    "confidence": 85,
    "summary": "Structural AI infrastructure leader with...",
    ...
  }
}
```
For scripts, integrations, or testing individual agents. Blocks until complete and returns a single JSON response.
```
# Full pipeline - returns DualResearchReport (short + long term)
curl -X POST http://localhost:8080/research \
  -H "Content-Type: application/json" \
  -d '{"query": "Should I invest in AAPL?"}'
```
Response:
```
{
  "short_term": { "ticker": "AAPL", "verdict": "HOLD", "confidence": 65, ... },
  "long_term": { "ticker": "AAPL", "verdict": "BUY", "confidence": 80, ... }
}
```
```
# Analyst only (bull case) - returns AnalystFinding
curl -X POST http://localhost:8080/research/analyst \
  -H "Content-Type: application/json" \
  -d '{"plan": {"ticker": "TSLA", "company_name": "Tesla Inc.", "hypotheses": ["EV dominance"], "data_needs": ["revenue"], "focus_areas": ["growth"], "reasoning_steps": []}}'

# Contrarian only (bear case) - requires plan + analyst_finding
```
Rule of thumb: use the streaming API for anything user-facing; use the direct API for scripts, CI checks, or when you just want a clean JSON result.
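For scripts consuming the direct endpoint, a small helper along these lines (hypothetical, not part of Argus) turns the DualResearchReport JSON into a one-line summary per horizon:

```python
import json

def summarize(report: dict) -> list[str]:
    """One line per horizon, e.g. 'short_term: HOLD (confidence 65)'."""
    return [
        f"{horizon}: {r['verdict']} (confidence {r['confidence']})"
        for horizon, r in report.items()
    ]

# Example using the response shape documented for POST /research:
report = json.loads(
    '{"short_term": {"ticker": "AAPL", "verdict": "HOLD", "confidence": 65},'
    ' "long_term": {"ticker": "AAPL", "verdict": "BUY", "confidence": 80}}'
)
lines = summarize(report)
# lines -> ["short_term: HOLD (confidence 65)", "long_term: BUY (confidence 80)"]
```

In a real integration you would fetch `report` from the endpoint (e.g. with `urllib.request` or `requests`) instead of the inline JSON above.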
All skills are also exposed as REST endpoints and can be called directly:
| Skill | Endpoint | Description |
|---|---|---|
| `get_income_statement` | POST /skills/get_income_statement | Revenue, net income, EBITDA (annual and quarterly) |
| `get_balance_sheet` | POST /skills/get_balance_sheet | Assets, liabilities, equity |
| `get_cash_flow_statement` | POST /skills/get_cash_flow_statement | Operating, investing, financing cash flows |
| `search_market_news` | POST /skills/search_market_news | Recent news articles for a ticker (default 20) |
| `get_company_facts` | POST /skills/get_company_facts | P/E, market cap, sector, margins, 52-week range |
| `get_analyst_targets` | POST /skills/get_analyst_targets | Price targets (low/mean/high), consensus rating, upside % |
| `get_insider_transactions` | POST /skills/get_insider_transactions | Recent insider buys/sells with shares and $ value |
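Calling a skill endpoint directly from Python can be sketched as below. The `{"ticker": ...}` request body is an assumption based on the ticker-oriented skills above, so check `src/skills.py` for the exact parameters each skill accepts.

```python
import json
import urllib.request

def skill_url(base: str, skill: str) -> str:
    """Build the endpoint URL for a named skill."""
    return f"{base}/skills/{skill}"

def call_skill(skill: str, payload: dict,
               base: str = "http://localhost:8080") -> dict:
    # Assumed request shape: JSON body POSTed to /skills/<name>
    req = urllib.request.Request(
        skill_url(base, skill),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Hypothetical usage against a running Argus instance:
    facts = call_skill("get_company_facts", {"ticker": "AAPL"})
    print(facts)
```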
Note: `PARA` (Paramount Global) was delisted after the Skydance merger and returns a 404 from yfinance. Use `WBD` (Warner Bros. Discovery) as an alternative for testing uncertain media stocks.
```
argus-agentfield/
├── proposal.md          # Original design proposal
├── requirements.txt     # Python dependencies
├── .env.example         # Environment variable template
├── README.md            # This file
├── ui/
│   └── index.html       # Single-page live UI (served at GET /)
└── src/
    ├── __init__.py      # Shared Agent singleton + AIConfig (openai/gpt-oss-120b)
    ├── schemas.py       # Pydantic models with reasoning_steps fields
    ├── skills.py        # yfinance data-fetching skills
    ├── reasoners.py     # 5-agent investment committee (AgentField reasoners)
    ├── stream.py        # SSE streaming backend + raw FastAPI routes
    └── main.py          # Entry point
```
Argus is a persistent Python server (FastAPI + uvicorn + SSE streaming). It cannot run on serverless platforms (Cloudflare Workers, Vercel, Lambda). Use any platform that supports long-running Python processes.
| Requirement | Detail |
|---|---|
| Python | 3.10+ |
| `NEBIUS_API_KEY` | Set as an environment variable |
| `PORT` | Platform-injected or set manually (default 8080) |
| Outbound internet | For Nebius API + yfinance data |
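The `PORT` convention in the table above is typically honoured with a couple of lines in the entry point. This is a sketch of the pattern, not a quote from `src/main.py`:

```python
import os

def resolve_port(env: dict, default: int = 8080) -> int:
    """Read PORT from an environment mapping, falling back to the default."""
    return int(env.get("PORT", str(default)))

# Platforms like Railway inject PORT; locally it falls back to 8080.
port = resolve_port(os.environ)

# Hypothetical uvicorn startup using that port:
# import uvicorn
# uvicorn.run(app, host="0.0.0.0", port=port)
```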
```
npm i -g @railway/cli
railway login
railway init   # link or create a project
railway up     # deploys from current directory
```
Set `NEBIUS_API_KEY` in the Railway dashboard -> Variables. Railway auto-injects `PORT`.
- New Web Service -> connect GitHub repo
- Build command: `pip install -r requirements.txt`
- Start command: `python src/main.py`
- Add `NEBIUS_API_KEY` in Environment -> Secret Files
- Use a paid instance (the free tier sleeps after 15 min, which kills SSE connections)
```
fly launch   # generates fly.toml
fly secrets set NEBIUS_API_KEY=sk-...
fly deploy
```
```
git clone <repo>
cd argus-agentfield
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env   # fill in NEBIUS_API_KEY

# Run with a process manager:
PORT=8080 nohup python3 src/main.py &
# Or use systemd / PM2 / supervisor to keep it alive
```
Note: the app prints some AgentField "degraded mode" warnings on startup - these are harmless. All research and UI features work fully without a cloud AgentField hub.
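If you go the systemd route mentioned above, a unit file along these lines works; the paths and the `argus` user are placeholders to adapt to your server:

```
# /etc/systemd/system/argus.service  (paths and user are placeholders)
[Unit]
Description=Argus financial research agent
After=network-online.target

[Service]
User=argus
WorkingDirectory=/opt/argus-agentfield
EnvironmentFile=/opt/argus-agentfield/.env
Environment=PORT=8080
ExecStart=/opt/argus-agentfield/.venv/bin/python src/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now argus` starts the agent and keeps it running across reboots.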