Ollama integration as a new AI provider #1208
Conversation
✅ Deploy Preview for afmg ready!
Hello. I'm not against Ollama, but this PR looks AI-generated. It has both good and bad parts; at the very least I would expect a human to work on it before presenting it to the public. Also, I'm not sure why we should stick with Ollama and not, e.g., LM Studio. It would make more sense to allow ANY local model by just letting the user populate the endpoint and other required fields. In that case I would also expect a linked tutorial on what it is and how to use it.
Well, I checked the code and it's all right, but I'm new to coding so I may have missed something. Ollama runs about 30% faster than LM Studio, you can check it, that's why I used Ollama, it's better. Also, the Ollama API is not like the LM Studio API, so it would need a different implementation. I think it's explained in the pull request: the user just needs to write the model name instead of the API key and select Ollama, that's all.
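The API difference mentioned above can be illustrated with the shape of the request body each backend expects. This is a hypothetical sketch, not code from the PR: the field names follow the public Ollama API and the OpenAI-compatible API that LM Studio exposes, but the helper function and the provider labels are illustrative assumptions.

```javascript
// Illustrative only: contrast the native Ollama request body with an
// OpenAI-style (LM Studio) one. Endpoints shown are the documented defaults.
function buildRequestBody(provider, model, prompt) {
  if (provider === "ollama") {
    // POST http://localhost:11434/api/generate takes a flat prompt string
    return {model, prompt, stream: true};
  }
  // POST http://localhost:1234/v1/chat/completions (OpenAI-compatible)
  // takes a messages array instead
  return {model, messages: [{role: "user", content: prompt}], stream: true};
}
```

Because the payloads (and the streamed response formats) differ, supporting one does not automatically support the other.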
I won't say so. It adds some mess that we don't need. The changelog is in a separate file, not in the Readme. Some comments are redundant, some vars are renamed for no clear reason, and I don't get why it changes the way modules work with
Okay, okay, I will review it all and get back to you, sorry for making you lose time!
I've removed the Recent Changes section from the README.md to keep the changelog separate.
I've also gone through the modules/ui/ai-generator.js file to delete all the redundant comments and unnecessary stuff.
I've removed the window.generateWithAi = generateWithAi; line, as it was indeed an oversight on my part and unnecessary. The function is now only exposed via modules.generateWithAi, respecting the existing module structure.
Take a look and let me know what you think!
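The module-exposure change described above can be sketched as follows. The `modules.generateWithAi` name comes from the comment; the `modules` namespace object and the function body here are illustrative assumptions about the project's loader, not its actual code.

```javascript
// Illustrative sketch: expose the generator through a shared `modules`
// namespace instead of attaching it to the global `window` object.
const modules = {}; // in the app, this namespace object would already exist

function generateWithAi(provider, prompt) {
  // placeholder body for illustration only
  return `[${provider}] ${prompt}`;
}

// preferred: single namespaced export, respecting the module structure
modules.generateWithAi = generateWithAi;

// avoided (the oversight that was removed):
// window.generateWithAi = generateWithAi;
```

Keeping exports on one namespace avoids polluting the global scope and makes it easier to see what each module contributes.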
Hello, sorry for the delay, I was pretty busy recently. Once I find time, I will try to merge it. It still requires some cleanup to follow the patterns, but it would be easier to just do it than to discuss it.
Hey, no problem about the delay, thanks mate!
I will merge it to a branch and then do some changes before merging to master.
Description
This pull request introduces Ollama integration as a new AI provider for the text generation feature within the Fantasy Map Generator. Users can now leverage locally running Ollama models (e.g., Llama 3, Mistral) to generate descriptive text for their map notes.
Motivation and Context:
The primary motivation was to offer users more flexibility and control over the AI models used for text generation, particularly by enabling the use of local models which can be beneficial for privacy, cost, and offline access. This also serves as an alternative to cloud-based AI providers.
NOTE: this will only work for users who run the generator locally and have Ollama running on the same machine.
Summary of Changes:
Ollama Provider Implementation (modules/ui/ai-generator.js):
- Added Ollama to the PROVIDERS and MODELS constants.
- The integration targets the local Ollama endpoint at http://localhost:11434/api/generate.
- A new function, generateWithOllama, was created to construct and send the request to the Ollama API, including the model name, prompt, system message, and temperature.
- The handleStream function was updated to correctly parse the newline-delimited JSON objects streamed by the Ollama API.

AI Generator Dialog Enhancements (modules/ui/ai-generator.js):
- The dialog update (updateDialogElements) and the setup of its internal event listeners (for the help button and the model selection dropdown) are now performed within the jQuery dialog's open event. This ensures that the DOM elements are available and ready before any manipulation attempts.

Notes Editor Integration (modules/ui/notes-editor.js):
- The openAiGenerator function, triggered by the "generate text for notes" button, was verified to correctly call the main generateWithAi function, ensuring the dialog opens as expected.
- The prompt was adjusted to constrain the output format (plain <p> tags, no headings, no markdown). (Note: this change was present in the development process; the user may have reverted this specific prompt modification in their local version. The core functionality for Ollama integration remains.)

How it Works:
When a user selects "Ollama (enter model in key field)" from the AI generator dropdown:
- The generateWithOllama function sends a POST request to http://localhost:11434/api/generate with the specified model, prompt, and other parameters.
- The Ollama API streams the response back as newline-delimited JSON objects.
- The handleStream function processes this stream, extracting the content from each JSON object and appending it to the result text area in real time.

This integration allows for a seamless experience using local LLMs for content generation directly within the Fantasy Map Generator.
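The flow described above can be sketched as follows. The endpoint and the newline-delimited JSON stream format follow the public Ollama API; the function name generateWithOllama mirrors the PR description, but its exact signature, the onText callback, and the parseOllamaChunk helper (standing in for the handleStream update) are assumptions for illustration.

```javascript
// Minimal sketch of the Ollama streaming flow, assuming a browser-style
// fetch with a readable body. In practice a network chunk can split a JSON
// line in half, so a production parser would need to buffer partial lines.
function parseOllamaChunk(chunkText) {
  return chunkText
    .split("\n")
    .filter(line => line.trim() !== "")
    .map(line => JSON.parse(line).response || "")
    .join("");
}

async function generateWithOllama(model, prompt, system, temperature, onText) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({model, prompt, system, options: {temperature}, stream: true})
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const {done, value} = await reader.read();
    if (done) break;
    // append each parsed fragment to the result area in real time
    onText(parseOllamaChunk(decoder.decode(value, {stream: true})));
  }
}
```

Each streamed object looks like `{"model":"llama3","response":"some text","done":false}`, with a final object carrying `"done":true`.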
Type of change
Versioning