Our Text Evaluator Tool is a web application built on Open WebUI with Ollama for text evaluation. The project aims to support writers in analyzing their work through different modes of LLM-based evaluation. To evaluate a text, users simply input a short story, select an evaluation mode, and choose a model. By simplifying this process behind a user-friendly UI, users can easily have their text evaluated, with explanations accompanying each result.
Evaluates the reliability of the narrator in a first-person short story. The analysis proceeds through three lenses:
- Intra-narrational: examines inconsistencies within the narrator's own account (e.g., contradictions, gaps in memory, confused sequencing).
- Inter-narrational: examines discrepancies between the narrator's version of events and the perspectives of other characters or objective facts within the story.
- Inter-textual: examines how the narrator fits within or subverts established literary archetypes (e.g., the trickster, antihero, or picaro).
Evaluates whether the short story predominantly shows or tells, and identifies the passages where the author is telling rather than showing.
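Conceptually, an evaluation mode amounts to a prompt template wrapped around the user's story before it is sent to the selected Ollama model. The sketch below illustrates the idea for the show-vs-tell mode; the template wording, model name, and commented-out endpoint are assumptions for illustration, not this project's actual implementation:

```shell
# Hypothetical sketch: assembling a show-vs-tell evaluation prompt.
# The wording and the commented-out endpoint/model are assumptions,
# not the project's actual template.
STORY='She was angry. She slammed the door and stormed out.'

PROMPT="Evaluate whether the following short story is predominantly \
showing or telling, and point out the passages where the author tells \
rather than shows:

${STORY}"

# The assembled prompt could then be sent to a locally running Ollama
# instance, e.g.:
# curl -s http://localhost:11434/api/generate \
#   -d "{\"model\": \"llama3\", \"prompt\": \"${PROMPT}\"}"

printf '%s\n' "$PROMPT"
```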
Before you begin, ensure you have the following software installed and available in your system's PATH:
- Git: (https://git-scm.com/downloads)
- Conda (Miniconda recommended): (https://docs.conda.io/projects/miniconda/en/latest/)
- Node.js & npm: (LTS version recommended) (https://nodejs.org/)
- Ollama: (https://ollama.com/) - Must be installed and running separately.
- ngrok: (https://ngrok.com/download) - Required for helper scripts.
- jq: (e.g., `brew install jq` or `apt-get install jq`) - Required for helper scripts.
- curl: (Usually pre-installed on Linux/macOS) - Required for helper scripts.
You can verify if the command-line tools are detected by running:
```sh
sh scripts/check_prereqs.sh
```
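If you prefer a manual check, a minimal equivalent looks like the following (a sketch only; the real `scripts/check_prereqs.sh` may perform additional checks):

```shell
# Minimal sketch of a prerequisite check: reports whether each required
# command-line tool is on the PATH. The project's check_prereqs.sh may
# check more than this.
for cmd in git conda node npm ollama ngrok jq curl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok:      $cmd"
  else
    echo "MISSING: $cmd"
  fi
done
```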
- Clone the Repository (if you haven't already):

  ```sh
  git clone https://github.com/YummyYohan/Open-Webui
  cd Open-Webui  # Or your repository directory name
  ```

- Run the Setup Script: This script primarily installs the required frontend Node.js dependencies.

  ```sh
  sh scripts/setup.sh
  ```

- Set up the Backend Python Environment: The backend requires a specific Python environment. Create and activate it using Conda:

  ```sh
  # Create the environment (only needs to be done once)
  conda create -n open-webui python=3.11 -y

  # Activate the environment (needs to be done in the terminal session
  # where you run the backend)
  conda activate open-webui
  ```

- Install Backend Dependencies: While the conda environment is active, install the required Python packages:

  ```sh
  # Make sure you are in the project root directory and that
  # 'conda activate open-webui' was run in this terminal
  pip install -r backend/requirements.txt
  ```
You'll need three separate terminal windows/tabs: one for Ollama, one for the backend, and one for the frontend.
- Start Ollama: Ensure the Ollama service or desktop application is running. You might start it with:

  ```sh
  ollama serve
  ```

  Keep this running.

- Start the Backend:
  - Navigate to the backend directory:

    ```sh
    cd backend
    ```

  - Activate the Conda environment (make sure you are in the correct environment before running the backend):

    ```sh
    conda activate open-webui
    ```

  - Run the backend development script:

    ```sh
    sh dev.sh
    ```

  - Keep this terminal open. The backend typically runs on port 8080.

- Start the Frontend:
  - Navigate back to the project root directory (the one containing this README):

    ```sh
    cd ..  # Or `cd <project_root>` if you are not in the backend directory
    ```

  - Run the frontend development script:

    ```sh
    npm run dev
    ```

  - Keep this terminal open. The frontend typically runs on port 5173.

- Access the WebUI: Open your web browser and navigate to `http://localhost:5173` (or the specific port shown in the `npm run dev` output).
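Once all three terminals are up, a quick way to confirm both dev servers are reachable is a small curl loop (a sketch assuming the default ports; adjust if `npm run dev` picked a different one):

```shell
# Quick reachability check for the two dev servers (default ports assumed).
BACKEND_URL="http://localhost:8080"
FRONTEND_URL="http://localhost:5173"

for url in "$BACKEND_URL" "$FRONTEND_URL"; do
  if curl -s -o /dev/null --max-time 2 "$url"; then
    echo "reachable:   $url"
  else
    echo "unreachable: $url"
  fi
done
```

Remember that the browser should use the frontend URL; the backend URL serves only the API.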
First, set up a Conda environment with Python 3.11:

```sh
conda create -n open-webui python=3.11
conda activate open-webui
```

Next, navigate to the backend directory and install the required Python packages:

```sh
cd backend
pip install -r requirements.txt
```

Then, launch the backend server:

```sh
sh dev.sh
```

This will start the backend API server, usually on `http://localhost:8080`.

Navigate back to the project root directory (Open-Webui) and install the frontend dependencies:

```sh
cd ..
npm install
```

Now, start the frontend development server:

```sh
npm run dev
```

This will typically make the web UI available at `http://localhost:5173` (check the output of the command for the exact URL).
Accessing the UI: Open your web browser and navigate to the frontend URL (e.g., `http://localhost:5173`), not the backend URL (`http://localhost:8080`).
Troubleshooting:
- Vite Errors (`_metadata.json` not found, `Outdated Optimize Dep`, etc.): If you encounter errors related to Vite or `node_modules/.vite` during `npm run dev` or `npm install`, or if the frontend UI gets stuck on the loading logo, try removing the Vite cache and reinstalling dependencies:

  ```sh
  # In the Open-Webui root directory
  rm -rf node_modules/.vite
  rm -rf node_modules
  npm install
  npm run dev
  ```

- `{"detail":"Not Found"}` Error: This usually means you are trying to access the backend URL (e.g., `http://localhost:8080`) directly in your browser. Make sure you are accessing the frontend URL provided by the `npm run dev` command (e.g., `http://localhost:5173`).

- Prompt Sent, No Response: If the UI loads but sending a prompt results in no response (the loading indicator spins indefinitely):
  - Check Backend Logs: Look at the terminal running `sh dev.sh`. Are there errors after the initial `POST /api/v1/chats/...` log?
  - Check Ollama Logs: Ensure `ollama serve` is running (preferably started manually in a dedicated terminal so you can see logs). Check its output for errors or activity when you send the prompt.
  - Check Browser Network Tab: Open browser DevTools (F12), go to Network, send the prompt, and check the status of the chat request (e.g., `POST /api/v1/chats/...`). Is it pending? Did it error out?

- Ollama Port Conflict (`address already in use`) or Unresponsive Ollama: If you cannot start `ollama serve` because the port is in use, or if the backend/Ollama logs indicate Ollama isn't responding correctly, the existing Ollama process (often started by the desktop app or a system service) may need to be stopped forcefully:
  - Find the process ID (PID) using the port (default 11434): `lsof -ti :11434`
  - Stop the process using its PID: `kill <PID>` (e.g., `kill 12345`)
  - If it restarts or `kill` doesn't work, use `kill -9 <PID>`.
  - Once stopped, run `ollama serve` manually in a dedicated terminal to monitor its logs directly.
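The port-conflict steps can be collected into one small script (a sketch; it assumes `lsof` is available and Ollama's default port 11434, and leaves the kill commands commented out so nothing is stopped until you uncomment them deliberately):

```shell
# Sketch: find whatever process holds Ollama's default port.
# Assumes lsof is available; kill commands are commented out on purpose.
PORT=11434

if ! command -v lsof >/dev/null 2>&1; then
  echo "lsof not found; install it or locate the PID another way"
elif PID=$(lsof -ti :"$PORT") && [ -n "$PID" ]; then
  echo "port $PORT is held by PID(s): $PID"
  # kill $PID        # graceful stop
  # kill -9 $PID     # force, if the graceful kill fails
else
  echo "port $PORT is free; run 'ollama serve' in a dedicated terminal"
fi
```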
Instructions available here.
Additional documentation can be found in the User Manual.
Created by Team N