A multi-stage AI-powered research workflow agent that automates comprehensive web research, analysis, and report generation using Agno, Scrapegraph, and Nebius AI.
- Multi-Stage Research Workflow: Automated pipeline for searching, analyzing, and reporting
- Web Scraping: Advanced data extraction with Scrapegraph
- AI-Powered Analysis: Uses Nebius AI for intelligent synthesis
- Streamlit Web UI: Modern, interactive interface
- MCP Server: Model Context Protocol server for integration
- Command-Line Support: Run research tasks directly from the terminal
Agents:
- Searcher: Finds and extracts high-quality, up-to-date information from the web using Scrapegraph and Nebius AI.
- Analyst: Synthesizes, interprets, and organizes the research findings, highlighting key insights and trends.
- Writer: Crafts a clear, structured, and actionable report, including references and recommendations.
Workflow:
- Input a research topic or question
- The agent orchestrates web search, analysis, and report writing in sequence
- Results are presented in a user-friendly format (web or CLI)
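The three-stage pipeline above can be sketched roughly as follows. This is an illustrative outline only, with placeholder functions; it is not the project's actual Agno-based API:

```python
# Hypothetical sketch of the Searcher -> Analyst -> Writer pipeline.
# The stage functions below are stubs, not the project's real code.

def search(topic: str) -> list[str]:
    # Searcher: gather raw findings for the topic (stubbed here).
    return [f"finding about {topic}"]

def analyze(findings: list[str]) -> str:
    # Analyst: synthesize findings into key insights.
    return "; ".join(findings)

def write_report(insights: str) -> str:
    # Writer: turn insights into a structured report.
    return f"# Report\n\n{insights}"

def run_research(topic: str) -> str:
    # Orchestrate the three stages in sequence.
    return write_report(analyze(search(topic)))

print(run_research("AI agents"))
```

The key design point is that each stage consumes only the previous stage's output, so stages can be tested or swapped independently.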
Prerequisites:
- Python 3.10+
- uv for dependency management
- API keys for Nebius AI and Scrapegraph
Follow these steps to set up the Deep Researcher Agent on your machine:
- Install uv (if you don't have it):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Clone the repository:

  ```bash
  git clone https://github.com/Arindam200/awesome-ai-apps.git
  ```

- Navigate to the Deep Researcher Agent directory:

  ```bash
  cd awesome-ai-apps/advance_ai_agents/deep_researcher_agent
  ```

- Install all dependencies:

  ```bash
  uv sync
  ```
- Create a `.env` file in the project root with your API keys:

  ```bash
  NEBIUS_API_KEY=your_nebius_api_key_here
  SGAI_API_KEY=your_scrapegraph_api_key_here
  ```

You can use the Deep Researcher Agent in three ways.
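Before running the agent, it can help to verify that both keys are actually visible in your environment. A minimal check, assuming the `.env` file has already been loaded into the process environment (for example by the app itself or a tool like `python-dotenv`):

```python
import os

def check_keys(env) -> list:
    # Return the names of any required API keys missing from env.
    required = ["NEBIUS_API_KEY", "SGAI_API_KEY"]
    return [name for name in required if not env.get(name)]

missing = check_keys(os.environ)
if missing:
    print(f"Missing required keys: {', '.join(missing)}")
else:
    print("All API keys present.")
```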
Run the Streamlit app:

```bash
uv run streamlit run app.py
```

Then open your browser at http://localhost:8501.
Run research directly from the command line:

```bash
uv run python agents.py
```
Add the following configuration to your `.cursor/mcp.json` or Claude Desktop's `claude_desktop_config.json` file (adjust paths and API keys as needed):

```json
{
  "mcpServers": {
    "deep_researcher_agent": {
      "command": "uv",
      "args": [
        "--directory",
        "/Your/Path/to/directory/awesome-ai-apps/advance_ai_agents/deep_researcher_agent",
        "run",
        "server.py"
      ],
      "env": {
        "NEBIUS_API_KEY": "your_nebius_api_key_here",
        "SGAI_API_KEY": "your_scrapegraph_api_key_here"
      }
    }
  }
}
```

This allows tools like Claude Desktop to manage and launch the MCP server automatically.
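A malformed MCP config file tends to fail silently, so a quick structural sanity check can save debugging time. A minimal sketch (the `validate_mcp_config` helper is illustrative, not part of this project):

```python
import json

def validate_mcp_config(text: str) -> list:
    # Return a list of problems found in an MCP server config string.
    problems = []
    try:
        config = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    servers = config.get("mcpServers", {})
    if not servers:
        problems.append("no mcpServers defined")
    for name, server in servers.items():
        for key in ("command", "args"):
            if key not in server:
                problems.append(f"{name}: missing '{key}'")
    return problems
```

For example, `validate_mcp_config(open("mcp.json").read())` returns an empty list when every server entry defines both `command` and `args`.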
```
deep_researcher_agent/
├── app.py           # Streamlit web interface
├── agents.py        # Core agent workflow
├── server.py        # MCP server
├── assets/          # Static assets (images)
├── pyproject.toml   # Project configuration
└── README.md        # This file
```
```bash
uv run black .
uv run isort .
uv run mypy .
uv run pytest
```

Contributions are welcome! Please feel free to submit a Pull Request or open an issue.
- Agno for agent orchestration
- Scrapegraph for web scraping
- Nebius Token Factory for AI model access
- Streamlit for the web interface
Developed with ❤️ by Arindam Majumder




