
A modern, beautiful web interface for Ollama AI models
Transform your local AI interactions with an elegant, feature-rich chat interface
Demo • Features • Installation • Usage • Contributing • Roadmap
BeautifyOllama is an open-source web interface that enhances your local Ollama AI model interactions with a beautiful, modern design. Built with cutting-edge web technologies, it provides a seamless chat experience with stunning visual effects and enterprise-grade functionality.
## 🎉 Latest Updates (v1.6.9)
- 🔧 Custom Port Support - Configure Ollama to run on any port
- ⚙️ Comprehensive Settings Panel - Complete Ollama service management
- 🔍 Enhanced Web Search - Improved search with source tracking and reliability
- 📱 Cross-Platform Desktop App - Native desktop application with Tauri
- 🧠 Advanced AI Features - Thinking mode, verbose stats, and conversation management
- 🛠️ Developer Experience - Improved error handling and Windows compatibility
## ⚠️ Early Development Notice
This project is in active development. Features and APIs may change. We welcome contributions and feedback from the community.
## Video Demo
beautifyollamavideocompressed.mp4
## Features

- 🔍 Intelligent Web Search - Real-time internet search with SearxNG integration and source tracking
- 🧠 Thinking Mode Control - Toggle AI reasoning traces on/off with clean rendering
- 🌐 Multi-Engine Fallback - Multiple SearxNG instances for reliability and uptime
- 🔧 Custom Port Support - Configure Ollama to run on any port (not just 11434)
- ⚙️ Comprehensive Settings - Complete Ollama management, model downloads, and configuration
- 🎬 Animated Shine Borders - Eye-catching animated message borders with color cycling
- 📱 Responsive Design - Mobile-first approach with seamless cross-device compatibility
- 🌙 Theme System - Dark/light mode with system preference detection
- ⚡ Real-time Streaming - Live response streaming from Ollama models with typing effects
- 🎯 Clean Interface - Simplified message rendering focused on readability
- 🔄 Advanced Model Management - Download, delete, and switch between Ollama models
- 📊 Verbose Statistics - Toggle detailed timing and performance stats for responses
- 💬 Conversation Management - Persistent chat history with sidebar navigation
- 🖥️ Cross-Platform Support - Windows, macOS, and Linux compatibility with platform-specific optimizations
- ⌨️ Smart Input - Keyboard shortcuts (Enter to send, Shift+Enter for newlines)
- 🎨 Modern UI/UX - Glassmorphism effects, smooth micro-animations, and polished design
### Planned Features

- **File Upload Support** - Document and image processing capabilities
- 🌐 Multi-language Support - Internationalization for global users
- 📊 Advanced Usage Analytics - Enhanced token usage tracking and conversation insights
- 🔌 Plugin System - Extensible architecture for third-party integrations
- ☁️ Cloud Sync - Optional cloud backup for conversations and settings
- 🔐 Multi-API Support - Integration with OpenAI, Anthropic, and other AI providers
- 🎯 Advanced Prompt Templates - Pre-built and custom prompt management
- 🔒 Enhanced Security - API key encryption and secure credential storage
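The real-time streaming feature listed above works against Ollama's `/api/chat` endpoint, which returns newline-delimited JSON chunks. A minimal sketch of the idea (function and type names here are illustrative, not BeautifyOllama's actual code):

```typescript
// Ollama streams newline-delimited JSON; each line looks like
// {"message":{"content":"Hel"},"done":false}. accumulateChunks is a
// pure helper so the parsing logic can be tested without a network.
interface OllamaChunk {
  message?: { content?: string };
  done: boolean;
}

function accumulateChunks(lines: string[]): string {
  let text = "";
  for (const line of lines) {
    if (!line.trim()) continue; // skip blank lines between chunks
    const chunk: OllamaChunk = JSON.parse(line);
    text += chunk.message?.content ?? "";
    if (chunk.done) break; // final chunk carries stats, no more text
  }
  return text;
}

// Hypothetical caller against a local Ollama on the default port:
async function streamChat(model: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  // A real UI would read res.body incrementally for the typing effect;
  // buffering the whole response keeps this sketch short.
  const body = await res.text();
  return accumulateChunks(body.split("\n"));
}
```

Reading `res.body` with a `ReadableStream` reader instead of `res.text()` is what makes the live typing effect possible.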
## Installation

Ensure you have the following installed on your system:
- Node.js (v18 or higher)
- npm, yarn, or pnpm
- Ollama (for local AI model serving)
```bash
# macOS
brew install ollama

# Windows
# Download from https://ollama.ai/download

# Start Ollama service
ollama serve

# Pull recommended models
ollama pull llama2
ollama pull codellama
ollama pull mistral

# For web search feature, also pull a small model:
ollama pull qwen2:0.5b

# Verify installation
ollama list
```
For enhanced web search capabilities, BeautifyOllama includes integrated Python-based web search:
```bash
# Install Python dependencies for web search
pip install ollama requests

# The web search feature uses multiple SearxNG instances
# No additional setup required - it's built-in!
```
For detailed web search setup and configuration, see Web Search Integration Guide.
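The multi-engine fallback described above boils down to trying a list of SearxNG base URLs in order until one responds. A sketch of that pattern, with the fetcher signature and URL format as illustrative assumptions (the actual implementation lives in the Python tools under `tools/web-search/`):

```typescript
// Try each SearxNG instance in order; return the first successful result.
// Injecting the fetcher keeps the retry logic testable without a network.
type Fetcher = (url: string) => Promise<string>;

async function searchWithFallback(
  query: string,
  instances: string[],
  fetchJson: Fetcher
): Promise<string> {
  let lastError: unknown = new Error("no instances configured");
  for (const base of instances) {
    try {
      return await fetchJson(
        `${base}/search?q=${encodeURIComponent(query)}&format=json`
      );
    } catch (err) {
      lastError = err; // instance down or rate-limited; try the next one
    }
  }
  throw new Error(`All SearxNG instances failed: ${String(lastError)}`);
}
```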
Install the latest release, then open your app launcher and search for **BeautifyOllama**.
## Usage

- Start a Conversation: Type your message in the input field
- Send Messages: Press `Enter` or click the send button
- New Lines: Use `Shift + Enter` for multi-line messages
- Switch Models: Use the model selector in the header
- Theme Toggle: Click the theme button to switch between light/dark modes
- Enable Features: Use the toggle buttons below the input for:
  - Stats Mode: View detailed response timing and performance
  - Thinking Mode: See AI reasoning process (when supported)
  - Web Search: Include real-time internet search in responses
- Settings Panel: Click the gear icon to access:
  - Connection Settings: Configure custom Ollama ports
  - Model Management: Download new models or delete existing ones
  - Service Control: Start/stop Ollama service
  - Command Logs: View detailed operation logs
- Conversation Management: Navigate between chats using the sidebar
- Response Features: View sources for web search results and detailed statistics
- Access Sidebar: Tap the menu button on mobile devices
- Touch Gestures: Swipe gestures for navigation
- Responsive Layout: Optimized for all screen sizes
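The custom-port support in the connection settings above comes down to building the Ollama base URL from configuration instead of hard-coding `11434`. A minimal sketch under that assumption (names are illustrative, not the app's actual service code):

```typescript
// Ollama's standard port; overridable via the settings panel.
const DEFAULT_PORT = 11434;

function ollamaBaseUrl(port?: number, host = "localhost"): string {
  const p = port ?? DEFAULT_PORT;
  if (!Number.isInteger(p) || p < 1 || p > 65535) {
    throw new Error(`Invalid port: ${p}`);
  }
  return `http://${host}:${p}`;
}

// Connectivity check against /api/tags, the same endpoint the
// troubleshooting section below probes with curl.
async function canReachOllama(port?: number): Promise<boolean> {
  try {
    const res = await fetch(`${ollamaBaseUrl(port)}/api/tags`);
    return res.ok;
  } catch {
    return false; // service not running or port blocked
  }
}
```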
## Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Next.js 15 + React 19 | Modern React framework with App Router |
| Backend | Tauri + Rust | Native desktop integration and system calls |
| Styling | TailwindCSS 4 | Utility-first CSS framework |
| Animation | Framer Motion | Smooth animations and transitions |
| Language | TypeScript + Rust | Type safety and high-performance backend |
| State Management | React Hooks | Local state management |
| Theme | next-themes | Dark/light mode functionality |
| Search | Python + SearxNG | Integrated web search capabilities |
## Project Structure

```
beautifyollama/
├── src/
│   ├── app/                  # Next.js App Router
│   │   ├── globals.css       # Global styles
│   │   ├── layout.tsx        # Root layout
│   │   ├── page.tsx          # Home page
│   │   └── services/         # Service layer
│   │       └── ollamaService.ts
│   ├── components/           # React components
│   │   ├── Chat.tsx          # Main chat interface
│   │   ├── Settings.tsx      # Settings modal
│   │   ├── MarkdownRenderer.tsx
│   │   ├── ThinkingRenderer.tsx
│   │   └── ui/               # Reusable UI components
│   ├── config/               # Configuration files
│   ├── hooks/                # Custom React hooks
│   ├── lib/                  # Utility functions
│   └── types/                # TypeScript type definitions
├── src-tauri/                # Tauri Rust backend
│   ├── src/
│   │   ├── main.rs           # Main Tauri entry
│   │   └── lib.rs            # Core backend logic
│   └── Cargo.toml            # Rust dependencies
├── tools/                    # External tools
│   ├── web-search/           # Python web search integration
│   └── README.md             # Tools documentation
├── public/                   # Static assets
├── docs/                     # Documentation
└── tests/                    # Test files
```
## Contributing

We welcome contributions from the community! BeautifyOllama is an early-stage project with lots of opportunities to make an impact.
- 🐛 Bug Reports - Help us identify and fix issues
- 💡 Feature Requests - Suggest new functionality
- 📝 Code Contributions - Submit pull requests
- 📚 Documentation - Improve README, guides, and code comments
- 🎨 Design - UI/UX improvements and suggestions
- 🧪 Testing - Help test new features and edge cases
- Fork the repository on GitHub
- Clone your fork locally:

  ```bash
  git clone https://github.com/your-username/BeautifyOllama.git
  cd BeautifyOllama
  ```

- Create a branch for your feature:

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start development server:

  ```bash
  npm run dev
  ```
- Code Style: Follow the existing code style and use TypeScript
- Commits: Use conventional commit messages (`feat:`, `fix:`, `docs:`, etc.)
- Testing: Add tests for new features when applicable
- Documentation: Update README and inline comments for new features
- Pull Requests: Provide clear descriptions and link related issues
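The conventional-commit rule above can be checked mechanically. A small illustrative validator (the accepted type list here is an assumption, not a set the project enforces):

```typescript
// Common conventional-commit types; adjust to taste.
const COMMIT_TYPES = ["feat", "fix", "docs", "style", "refactor", "test", "chore"];

function isConventionalCommit(message: string): boolean {
  // Matches "type: subject" or "type(scope): subject", with optional "!"
  // for breaking changes, as in the Conventional Commits convention.
  const match = /^([a-z]+)(\([^)]*\))?!?: .+/.exec(message);
  return match !== null && COMMIT_TYPES.includes(match[1]);
}
```

A hook like husky's `commit-msg` could call this before accepting a commit.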
```bash
npm run dev          # Start development server
npm run build        # Build for production
npm run start        # Start production server
npm run lint         # Run ESLint
npm run type-check   # Run TypeScript compiler
npm run tauri dev    # Start Tauri development mode
npm run tauri build  # Build Tauri desktop application
npm test             # Run tests (when available)
```
## Roadmap

- Basic chat interface with real-time streaming
- Ollama integration with custom port support
- Theme system (dark/light mode)
- Responsive design for all devices
- Web search integration with SearxNG
- Comprehensive settings and model management
- Conversation history and management
- Advanced thinking and verbose modes
- Cross-platform support (Windows, macOS, Linux)
- Enhanced error handling and user feedback
- Performance optimizations for large conversations
- File upload and document processing
- Advanced prompt templates and management
- Export/import conversations (JSON, Markdown)
- Custom model parameter configuration
- Plugin architecture foundation
- Enhanced search within conversation history
- Multi-user support
- Cloud synchronization
- Plugin architecture
- Usage analytics
- Advanced security features
- Mobile applications
- Desktop applications
- API for third-party integrations
- Marketplace for extensions
| Feature | Status | Priority |
|---|---|---|
| Core Chat | ✅ Complete | High |
| Web Search | ✅ Complete | High |
| Settings Panel | ✅ Complete | High |
| Model Management | ✅ Complete | High |
| Theme System | ✅ Complete | High |
| Mobile Support | ✅ Complete | High |
| Custom Ports | ✅ Complete | Medium |
| File Upload | 📋 Planned | Medium |
| Multi-API Support | 📋 Planned | Medium |
| Cloud Sync | 📋 Planned | Low |
## Troubleshooting

**Ollama Connection Failed**
```bash
# Check if Ollama is running
ollama serve

# Verify models are available
ollama list

# Test API endpoint
curl http://localhost:11434/api/tags

# For custom ports, test the specific port
curl http://localhost:YOUR_PORT/api/tags
```
**Model Loading Issues (Windows)**
```bash
# Force refresh models in the app or try:
ollama list
ollama pull llama2

# Then refresh the model list in BeautifyOllama settings
```
**Web Search Not Working**
```bash
# Ensure Python dependencies are installed
pip install ollama requests

# Check if SearxNG instances are accessible
# The app will automatically try multiple instances
```
**Build Errors**
```bash
# Clear Next.js cache
rm -rf .next

# Reinstall dependencies
rm -rf node_modules package-lock.json
npm install

# For Tauri build issues
cd src-tauri && cargo clean
```
**Hydration Errors**
- Clear browser cache and localStorage
- Restart development server
- Check for theme provider issues
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Ollama Team - For the excellent local AI runtime
- Next.js Team - For the amazing React framework
- Vercel - For seamless deployment platform
- TailwindCSS - For the utility-first CSS framework
- Framer Motion - For beautiful animations
- All Contributors - For making this project better
Made with ❤️ by the BeautifyOllama team