This project uses AI to analyze Trino release notes and identify breaking changes between versions. The system consists of several services:
- Ollama: Runs the local LLM for text summarization
- Backend: FastAPI service for web scraping
- Agent: Main analysis service using LangGraph
- Gradio UI: Web interface for user interactions
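At the core, the agent assembles a small LangGraph workflow around these services: fetch the release notes through the backend scraper, then summarize the breaking changes with the Ollama-hosted model. The snippet below is only a minimal, hypothetical sketch of such a graph; the node names, state fields, and stubbed functions are assumptions for illustration, not the project's actual code in my_agent/.

```python
# Minimal sketch of a LangGraph pipeline (hypothetical node names and state).
from typing import TypedDict
from langgraph.graph import StateGraph, END


class AnalysisState(TypedDict):
    release_notes: str   # raw notes fetched via the backend scraper
    summary: str         # LLM-generated summary of breaking changes


def fetch_release_notes(state: AnalysisState) -> dict:
    # The real service would call the backend crawler (BACKEND_CRAWLSVC_URL);
    # stubbed out here.
    return {"release_notes": "Trino release notes text goes here"}


def summarize_changes(state: AnalysisState) -> dict:
    # The real service would prompt the Ollama model (OLLAMA_MODEL_NAME);
    # stubbed out here.
    return {"summary": f"Breaking changes found in: {state['release_notes'][:40]}..."}


graph = StateGraph(AnalysisState)
graph.add_node("fetch", fetch_release_notes)
graph.add_node("summarize", summarize_changes)
graph.set_entry_point("fetch")
graph.add_edge("fetch", "summarize")
graph.add_edge("summarize", END)

agent = graph.compile()
print(agent.invoke({"release_notes": "", "summary": ""}))
```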
To run the project you will need:
- Docker and Docker Compose
- Python 3.13
- At least 16GB RAM recommended for running the LLM
- Clone the repository:
git clone https://github.com/snowsky/langgraph-trino-breaking-changes.git
cd langgraph-trino-breaking-changes
- Update environment variables for start and end versions:
cp .env.example .env
# Edit .env (or docker-compose.yaml) to set the start and end versions
- Start all services using Docker Compose and see live logs:
docker-compose up --build
This will start:
- Ollama LLM service on port 11434
- Ollama Web UI on port 8080 (accessible at http://localhost:8080)
- Backend service on port 8000
- Agent service for analysis
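To confirm the containers came up, you can probe the exposed ports from the host. The check below assumes the default ports from docker-compose.yaml and relies on FastAPI's built-in /docs page for the backend (no dedicated health endpoint is assumed).

```python
# Quick smoke test for the locally running services.
import urllib.request

checks = {
    "Ollama API": "http://localhost:11434/api/tags",      # lists pulled models
    "Backend (FastAPI docs)": "http://localhost:8000/docs",
    "Ollama Web UI": "http://localhost:8080",
}

for name, url in checks.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: OK ({resp.status})")
    except Exception as exc:  # connection refused, timeout, etc.
        print(f"{name}: FAILED ({exc})")
```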
If you prefer to run services separately:
- Create and activate virtual environment:
python3 -m venv venv
source venv/bin/activate
- Install dependencies:
pip install -r my_agent/requirements.txt
- Start the backend service:
cd backend
uvicorn app:app --host 0.0.0.0 --port 8000
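For orientation, a scraping backend like this usually boils down to a single FastAPI route that fetches and parses the Trino release pages. The sketch below is a simplified, hypothetical version; the route path, parameter names, and use of httpx and BeautifulSoup are assumptions, not the contents of backend/app.py.

```python
# Hypothetical, simplified sketch of a FastAPI scraping endpoint.
import httpx
from bs4 import BeautifulSoup
from fastapi import FastAPI

app = FastAPI()


@app.get("/crawl")
async def crawl(url: str = "https://trino.io/docs/current/release.html"):
    # Fetch the page and return its visible text for downstream summarization.
    async with httpx.AsyncClient(timeout=30) as client:
        resp = await client.get(url)
        resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return {"url": url, "text": soup.get_text(separator="\n", strip=True)}
```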
- Install dependencies and start the Gradio UI:
pip install -r gradio/requirements.txt
python gradio/gradio_trino_changes.py
Once the Gradio interface is running, open your browser and navigate to http://localhost:7860 to access the UI. After each analysis run, you can copy the results directly from the interface for further use.
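The UI itself is a thin wrapper around the analysis: roughly one Gradio interface that takes a version range and displays the generated summary. The sketch below is a minimal, hypothetical equivalent; the function and field names are assumptions, not the actual gradio/gradio_trino_changes.py.

```python
# Hypothetical, minimal Gradio front end for the analyzer.
import gradio as gr


def analyze(start_version: str, end_version: str) -> str:
    # Placeholder: the real UI triggers the agent and returns its
    # markdown summary of breaking changes.
    return f"Breaking changes between Trino {start_version} and {end_version}: ..."


demo = gr.Interface(
    fn=analyze,
    inputs=[gr.Textbox(label="Start version"), gr.Textbox(label="End version")],
    outputs=gr.Markdown(label="Summary"),
    title="Trino breaking-changes analyzer",
)

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860)
```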
- Run the analysis agent:
python run_agent.py
Example summary and markdown files can be found under the output/ folder.
Key environment variables (configured in docker-compose.yaml):
COLLECTION_NAME=my_collection
BACKEND_CRAWLSVC_URL=http://backend:8000
TRINO_RELEASE_URL=https://trino.io/docs/current/release.html
OLLAMA_MODEL_NAME=llama3.2
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_NUM_CTX=8192
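These values are read by the services at startup. As a rough illustration of how they are typically consumed, the sketch below uses the official `ollama` Python client with the defaults shown above; the agent's actual client code and wiring may differ.

```python
# Sketch of reading the configuration and calling the Ollama model.
import os

import ollama

COLLECTION_NAME = os.getenv("COLLECTION_NAME", "my_collection")  # document collection used by the agent
BACKEND_CRAWLSVC_URL = os.getenv("BACKEND_CRAWLSVC_URL", "http://backend:8000")
TRINO_RELEASE_URL = os.getenv("TRINO_RELEASE_URL", "https://trino.io/docs/current/release.html")

client = ollama.Client(host=os.getenv("OLLAMA_BASE_URL", "http://ollama:11434"))
response = client.chat(
    model=os.getenv("OLLAMA_MODEL_NAME", "llama3.2"),
    messages=[{"role": "user", "content": "Summarize the breaking changes in these release notes: ..."}],
    options={"num_ctx": int(os.getenv("OLLAMA_NUM_CTX", "8192"))},
)
print(response["message"]["content"])
```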
Repository layout:
├── backend/ # FastAPI web scraping service
├── gradio/ # Gradio web interface
├── my_agent/ # Main analysis logic
├── ollama/ # Ollama model configurations
└── docker-compose.yaml # Service orchestration
Contributions are welcome! Please feel free to submit a Pull Request.
This project is Apache licensed.