The openBIS chatbot is a standalone Python package that provides a conversational interface for querying openBIS documentation using Retrieval-Augmented Generation (RAG), with conversation memory powered by LangGraph.
- Conversational Interface: Chat with openBIS documentation using natural language
- Memory Persistence: Maintains conversation context across multiple interactions
- Multiple Interfaces: Both CLI and web interfaces available
- RAG Integration: Uses processed data from RAGbis for accurate responses
- LangGraph Architecture: Advanced conversation flow with state management
- Web Interface: Modern, responsive web UI with markdown rendering
- Python 3.8 or higher
- Ollama installed and running
- The `qwen3` model installed in Ollama (or another compatible model)
- Processed data from RAGbis
- Install Ollama from https://ollama.ai/
- Pull the required chat model:
ollama pull qwen3
The openBIS chatbot requires processed data generated by RAGbis. You have two options:
- Run RAGbis first (recommended):
  # In the RAGbis project directory
  python -m ragbis
  # Then copy the data directory to the openBIS chatbot project
- Point to existing data: use the `--data-dir` option to specify the location of the processed data
- Clone or download this project
- Navigate to the openBIS chatbot_project directory
- Create a virtual environment (recommended):
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
- Or install the package with pip:
pip install chatbis
- Ensure you have processed data in `./data/processed/` (from RAGbis)
- Run the CLI interface:
python -m chatbis
- Or run the web interface:
python -m chatbis --web
Start an interactive chat session:
python -m chatbis
The CLI supports these commands:
- Type your questions naturally
- `clear`: Clear the conversation history
- `quit`, `exit`, or `bye`: Exit the application
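The command handling above can be sketched as a small dispatcher. This is a hypothetical illustration, not the actual chatbis implementation:

```python
def classify_input(line):
    """Map raw CLI input to an action, mirroring the commands listed above.

    Hypothetical sketch; the real package may structure this differently.
    """
    cmd = line.strip().lower()
    if cmd == "clear":
        return "clear_history"
    if cmd in {"quit", "exit", "bye"}:
        return "exit"
    return "ask"  # anything else is treated as a question

print(classify_input("Bye"))               # exit
print(classify_input("clear"))             # clear_history
print(classify_input("What is openBIS?"))  # ask
```

Commands are matched case-insensitively here, so `Quit` and `quit` behave the same.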
Start the web server:
python -m chatbis --web
Then open your browser to http://localhost:5000
python -m chatbis --help
Available options:
- `--data-dir DIR`: Directory containing processed data (default: `./data/processed`)
- `--model MODEL`: Ollama model to use (default: `qwen3`)
- `--web`: Run the web interface instead of the CLI
- `--host HOST`: Host for the web interface (default: `127.0.0.1`)
- `--port PORT`: Port for the web interface (default: `5000`)
- `--debug`: Enable debug mode for the web interface
- `--top-k N`: Number of chunks to retrieve for the CLI (default: `5`)
- `--verbose`: Enable verbose logging
Run with custom data directory:
python -m chatbis --data-dir /path/to/processed/data
Run web interface on all interfaces:
python -m chatbis --web --host 0.0.0.0 --port 8080
Use a different Ollama model:
python -m chatbis --model llama2
Run with verbose logging:
python -m chatbis --verbose
The openBIS chatbot expects the following file structure in the data directory:
data/processed/
├── chunks.json # Required: Main data file with embeddings
└── conversation_memory.db # Created automatically for conversation memory
The `chunks.json` file should contain processed chunks from RAGbis with the following structure:
[
{
"title": "Page Title",
"url": "https://...",
"content": "Chunk content...",
"embedding": [0.1, 0.2, ...],
"chunk_id": "unique_id"
}
]
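Given chunks in this format, retrieval reduces to ranking chunk embeddings by similarity to the query embedding. The sketch below assumes cosine similarity and uses toy two-dimensional embeddings; the actual chatbis retrieval code may differ:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_chunks(chunks, query_embedding, k=5):
    """Return the k chunks whose embeddings are most similar to the query."""
    return sorted(
        chunks,
        key=lambda c: cosine_similarity(c["embedding"], query_embedding),
        reverse=True,
    )[:k]

# Toy data in the chunks.json format shown above
chunks = [
    {"title": "A", "url": "https://example.org/a", "content": "first",
     "embedding": [1.0, 0.0], "chunk_id": "1"},
    {"title": "B", "url": "https://example.org/b", "content": "second",
     "embedding": [0.0, 1.0], "chunk_id": "2"},
]
best = top_k_chunks(chunks, [0.9, 0.1], k=1)
print(best[0]["chunk_id"])  # → 1
```

The retrieved chunks' `content` fields are what gets injected into the model prompt, which is why `embedding` must be precomputed by RAGbis with the same embedding model used at query time.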
The openBIS chatbot maintains conversation context using LangGraph's SQLite checkpointer:
- Remembers previous questions and answers
- Maintains context across multiple interactions
- Supports multiple concurrent sessions (web interface)
- Persistent storage across application restarts
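In the package this persistence is handled by LangGraph's SQLite checkpointer. The snippet below is a simplified, hypothetical illustration of session-keyed SQLite storage — it is not LangGraph's actual checkpoint schema:

```python
import sqlite3

# In-memory for the demo; the chatbot writes conversation_memory.db to disk.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages (session_id TEXT, role TEXT, content TEXT)"
)

def remember(session_id, role, content):
    """Append one message to a session's history."""
    conn.execute("INSERT INTO messages VALUES (?, ?, ?)", (session_id, role, content))
    conn.commit()

def history(session_id):
    """Return all (role, content) pairs stored for a session."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ?", (session_id,)
    )
    return list(rows)

remember("web-abc", "user", "How do I create a sample?")
remember("web-abc", "assistant", "Use the openBIS sample registration form...")
print(history("web-abc"))  # two (role, content) tuples for this session
```

Keying every row by `session_id` is what allows multiple concurrent web sessions to share one database file without mixing their histories.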
- Responsive Design: Works on desktop and mobile devices
- Markdown Rendering: Properly formats responses with headers, lists, code blocks
- Session Management: Maintains conversation state in browser
- Real-time Chat: Instant responses with loading indicators
- Clear History: Option to clear conversation history
- Interactive Chat: Natural conversation flow
- Command Support: Built-in commands for managing conversations
- Session Persistence: Maintains context during the session
- Keyboard Shortcuts: Ctrl+C to exit gracefully
- `OLLAMA_HOST`: Ollama server host (default: `localhost`)
- `OLLAMA_PORT`: Ollama server port (default: `11434`)
- `SECRET_KEY`: Flask secret key for the web interface (auto-generated if not set)
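A hypothetical helper showing how the Ollama variables above resolve to a base URL with their defaults (the actual package may read them differently):

```python
import os

def ollama_base_url():
    """Build the Ollama server URL from environment, falling back to defaults."""
    host = os.environ.get("OLLAMA_HOST", "localhost")
    port = os.environ.get("OLLAMA_PORT", "11434")
    return f"http://{host}:{port}"

print(ollama_base_url())  # http://localhost:11434 unless overridden
```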
You can modify the assistant's behavior by editing the system message in the conversation engine:
- Adjust response style and tone
- Add domain-specific instructions
- Modify conversation guidelines
- "Processed data not found" error
  - Ensure you have run RAGbis first
  - Check that `chunks.json` exists in the data directory
  - Use `--data-dir` to specify the correct path
- Ollama connection error
  - Ensure Ollama is running: `ollama serve`
  - Check that the model is installed: `ollama list`
  - Install the model if missing: `ollama pull qwen3`
- Web interface not loading
  - Check whether the port is already in use
  - Try a different port: `--port 8080`
  - Check firewall settings
- Memory issues
  - Conversation memory is stored in SQLite
  - The database file is created automatically
  - Clear memory by deleting the `.db` file
Enable verbose logging to debug issues:
python -m chatbis --verbose
For web interface debugging:
python -m chatbis --web --debug
- `GET /`: Main chat interface
- `POST /api/chat`: Send a message and get a response
- `GET /api/chat/history/<session_id>`: Get conversation history
- `POST /api/chat/clear/<session_id>`: Clear conversation history
`POST /api/chat` request body:
{
"query": "Your question here",
"session_id": "optional-session-id"
}
Response
{
"success": true,
"answer": "Assistant response",
"session_id": "session-id",
"metadata": {
"token_count": 150,
"rag_chunks_used": 3,
"conversation_length": 4
}
}
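Client-side handling of this exchange can be sketched as below. This is a hypothetical helper that only builds and parses the JSON shown above; no HTTP request is made here:

```python
import json

def build_chat_payload(query, session_id=None):
    """Serialize a /api/chat request body; session_id is optional."""
    payload = {"query": query}
    if session_id:
        payload["session_id"] = session_id
    return json.dumps(payload)

def parse_chat_response(body):
    """Extract answer, session id, and metadata from a /api/chat response."""
    data = json.loads(body)
    if not data.get("success"):
        raise RuntimeError("chat request failed")
    return data["answer"], data["session_id"], data.get("metadata", {})

sample = ('{"success": true, "answer": "Hi!", "session_id": "s1", '
          '"metadata": {"token_count": 3}}')
answer, sid, meta = parse_chat_response(sample)
print(answer, sid, meta["token_count"])  # Hi! s1 3
```

Reusing the returned `session_id` in subsequent payloads is what keeps the server-side conversation memory attached to the same session.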
Run the test suite:
pytest
Format the code:
black src/
Type-check the code:
mypy src/
The openBIS chatbot is designed to work seamlessly with RAGbis:
- Run RAGbis to generate processed data
- Copy or link the data directory to the openBIS chatbot
- Start the openBIS chatbot with the processed data
This project is licensed under the MIT License - see the LICENSE file for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
For issues and questions:
- Create an issue on GitHub
- Check the troubleshooting section above
- Ensure RAGbis data is properly generated
- Verify Ollama is properly configured