# Deep Researcher Agent


A multi-stage AI-powered research workflow agent that automates comprehensive web research, analysis, and report generation using Agno, Scrapegraph, and Nebius AI.

## Features

- **Multi-Stage Research Workflow**: Automated pipeline for searching, analyzing, and reporting
- **Web Scraping**: Advanced data extraction with Scrapegraph
- **AI-Powered Analysis**: Uses Nebius AI for intelligent synthesis
- **Streamlit Web UI**: Modern, interactive interface
- **MCP Server**: Model Context Protocol server for integration
- **Command-Line Support**: Run research tasks directly from the terminal

## How It Works

1. **Searcher**: Finds and extracts high-quality, up-to-date information from the web using Scrapegraph and Nebius AI.
2. **Analyst**: Synthesizes, interprets, and organizes the research findings, highlighting key insights and trends.
3. **Writer**: Crafts a clear, structured, and actionable report, including references and recommendations.

In practice:

- Input a research topic or question
- The agent orchestrates web search, analysis, and report writing in sequence
- Results are presented in a user-friendly format (web or CLI)
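The Searcher → Analyst → Writer hand-off can be pictured as a simple sequential pipeline. The sketch below is a hedged illustration in plain Python, not the project's actual code: the real stages are Agno agents calling Scrapegraph and Nebius AI, and the stage functions here are placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class ResearchContext:
    """Accumulates the output of each stage as the pipeline runs."""
    topic: str
    findings: list[str] = field(default_factory=list)
    analysis: str = ""
    report: str = ""


def searcher(ctx: ResearchContext) -> ResearchContext:
    # Placeholder: the real Searcher queries the web via Scrapegraph.
    ctx.findings = [f"finding about {ctx.topic}"]
    return ctx


def analyst(ctx: ResearchContext) -> ResearchContext:
    # Placeholder: the real Analyst synthesizes findings with Nebius AI.
    ctx.analysis = f"Analysis of {len(ctx.findings)} finding(s)."
    return ctx


def writer(ctx: ResearchContext) -> ResearchContext:
    # Placeholder: the real Writer produces a structured report.
    ctx.report = f"# Report: {ctx.topic}\n\n{ctx.analysis}"
    return ctx


def run_pipeline(topic: str) -> ResearchContext:
    """Run the three stages in strict sequential order, threading the context through."""
    ctx = ResearchContext(topic=topic)
    for stage in (searcher, analyst, writer):
        ctx = stage(ctx)
    return ctx
```

The key design point is that each stage only sees (and enriches) the shared context produced by the stages before it.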

## Prerequisites

- Python installed on your machine
- API keys for Nebius AI and Scrapegraph (see Environment Setup below)

## Installation

Follow these steps to set up the Deep Researcher Agent on your machine:

1. Install uv (if you don't have it):

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Clone the repository:

   ```bash
   git clone https://github.com/Arindam200/awesome-ai-apps.git
   ```

3. Navigate to the Deep Researcher Agent directory:

   ```bash
   cd awesome-ai-apps/advance_ai_agents/deep_researcher_agent
   ```

4. Install all dependencies:

   ```bash
   uv sync
   ```

## Environment Setup

Create a `.env` file in the project root with your API keys:

```env
NEBIUS_API_KEY=your_nebius_api_key_here
SGAI_API_KEY=your_scrapegraph_api_key_here
```
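A startup-time check for these keys can save a confusing failure later. A minimal sketch (this helper is illustrative, not part of the repo; the project may load the `.env` file via python-dotenv before such a check runs):

```python
import os

# The two keys required by this project, per the .env example above.
REQUIRED_KEYS = ("NEBIUS_API_KEY", "SGAI_API_KEY")


def missing_keys(env=os.environ) -> list[str]:
    """Return the names of required API keys that are absent or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]


# Example usage with a hypothetical environment mapping:
# missing_keys({"NEBIUS_API_KEY": "abc", "SGAI_API_KEY": ""})
# would report SGAI_API_KEY as missing.
```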

## Usage

You can use the Deep Researcher Agent in three ways.

### Web Interface

Run the Streamlit app:

```bash
uv run streamlit run app.py
```

Then open your browser at http://localhost:8501.

### Command Line

Run research directly from the command line:

```bash
uv run python agents.py
```

### MCP Server

Add the following configuration to your `.cursor/mcp.json` or Claude's `claude_desktop_config.json` file (adjust paths and API keys as needed):

```json
{
  "mcpServers": {
    "deep_researcher_agent": {
      "command": "uv",
      "args": [
        "--directory",
        "/Your/Path/to/directory/awesome-ai-apps/advance_ai_agents/deep_researcher_agent",
        "run",
        "server.py"
      ],
      "env": {
        "NEBIUS_API_KEY": "your_nebius_api_key_here",
        "SGAI_API_KEY": "your_scrapegraph_api_key_here"
      }
    }
  }
}
```

This allows tools like Claude Desktop to manage and launch the MCP server automatically.
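If the server never appears in your MCP client, a quick sanity check of the config file often finds the problem. The helper below is a hypothetical sketch (not part of this repo) that checks the fields the entry above relies on:

```python
import json


def validate_mcp_config(text: str, server_name: str = "deep_researcher_agent") -> list[str]:
    """Return a list of problems found in an MCP client config snippet."""
    problems: list[str] = []
    cfg = json.loads(text)
    server = cfg.get("mcpServers", {}).get(server_name)
    if server is None:
        return [f"no entry named {server_name!r} under 'mcpServers'"]
    if not server.get("command"):
        problems.append("missing 'command' (should be 'uv')")
    if "run" not in server.get("args", []):
        problems.append("'args' should tell uv to run server.py")
    for key in ("NEBIUS_API_KEY", "SGAI_API_KEY"):
        if not server.get("env", {}).get(key):
            problems.append(f"missing env var {key!r}")
    return problems
```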


## Project Structure

```text
deep_researcher_agent/
├── app.py              # Streamlit web interface
├── agents.py           # Core agent workflow
├── server.py           # MCP server
├── assets/             # Static assets (images)
├── pyproject.toml      # Project configuration
└── README.md           # This file
```

## Development

### Code Formatting

```bash
uv run black .
uv run isort .
```

### Type Checking

```bash
uv run mypy .
```

### Testing

```bash
uv run pytest
```
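Tests follow the usual pytest conventions (files named `test_*.py`, functions named `test_*`). A minimal, self-contained example; the function under test is a toy stand-in, not code from this repository:

```python
# test_report_format.py -- illustrative only; not part of this repo.


def format_report_title(topic: str) -> str:
    """Toy helper standing in for the Writer stage's title formatting."""
    return f"# Research Report: {topic.strip().title()}"


def test_format_report_title():
    assert format_report_title("  quantum computing ") == "# Research Report: Quantum Computing"
```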

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request or open an issue.


## Acknowledgments

Built with Agno, Scrapegraph, and Nebius AI.

## Author

Developed with ❤️ by Arindam Majumder